Reflections on LLMs and their impact on curiosity
A personal reflection on how the rise of LLMs is changing our habits as learners and developers — and what we lose when curiosity gives way to convenience.

Published on: October 26, 2025

"Hmm, I'm not too sure... let's ~~Google~~ ChatGPT it!"
I remember always thinking that nothing could replace the authority that Google's search engine held over the web. It was the universal gateway to information — if we ever needed to find something out, "Googling it" would be the first port of call. Of course, there were other search engines available, but for me, it was synonymous with the web — most likely because growing up, it was the default homepage I'd see when opening up a browser in school, and this habit eventually became mine at home.
Fast forward to work life, and search engines remained the go-to when looking for solutions to the errors we'd encounter. As devs, we'd often find ourselves redirected to Stack Overflow, scouring a post in search of some code snippet to patch in a fix.
With the surge in popularity of LLMs and their rapid adoption, they have quickly become a ubiquitous part of a developer's day-to-day work. Why spend time crafting a search term that ensures you get the best possible results, and then more time combing through those results hoping that someone else has encountered the same problem and provided a solution, when you could ask a question in a human-friendly way to an LLM chatbot and get an accurate response?
Even if searching is still your preference, it's impossible to avoid LLMs now when using Google, as you'll automatically get Gemini answering your question at the top of the search results page.
So we know it's here to stay, but how should we use it?
Over-reliance during the "learning phase"
As humans, I think it's in our nature to always go for the easy option, so it's no surprise that we're seeing a surge in AI usage to automate what's deemed mundane. However, that doesn't mean that whenever someone reaches for AI it's because they're lazy. There are plenty of cases where it can genuinely unblock someone: a person who lacks the phrasing polish for a marketing post, a designer who wants to translate their work into code, or even the other way round — a developer who needs designs for their side project.
As developers, a lot of us will have used LLMs at work for quick code generation when writing tests, or for assistance implementing a new feature. I don't think there's anything wrong with either scenario. What matters, I think, is what stage of the "learning phase" the person using the tool is at in their domain, and how they're applying the LLM to reach their desired outcome.
In my eyes, when we get stuck on something and reach for an LLM for help, it should first be used as a tool that enhances learning, or deepens our understanding of a problem, not one that bypasses it. I know, nothing groundbreaking here, but what I want to home in on is that we shouldn't automate away and, fundamentally, skip the learning part of the process that we'd normally go through when tackling a difficult problem. It's actually in that discomfort of trying to figure something out where your "brain muscles" grow the most. This is not to say that you shouldn't use LLMs to help unpack that discomfort — you definitely should; it can still be the fastest way to clarify and demystify what you're stuck on. But taking that shortcut too early, or letting it replace the learning process entirely, may mean that yes, you solve the issue faster, but if you were to encounter the same issue in the future, it's quite possible you'd need to reach for an LLM again rather than having some internal knowledge to draw on.
And of course, it's perfectly okay to work backwards here too if that's easier, e.g. get a solution from an LLM first, then deconstruct it and work your way back to understand what's been proposed; the key is not to skip the learning phase entirely.
Humans in the driving seat
I think the point I'm trying to get to with all of this is that we should stay in the driving seat, and be the orchestrator in this "relationship" with AI.
AI in education and at work should be used to enhance our productivity and make us more efficient, but the assistance we ask of it should complement our learning process rather than replace it.
As mentioned earlier, how you pair with an LLM should depend on your level of expertise in the domain. If you're a developer who already knows the ins and outs of, say, web accessibility — best practices for which HTML tags to use or which ARIA attributes to apply — then sure, use an LLM to quickly generate the code and save yourself time that could be spent elsewhere. But if you have no idea what ARIA attributes do, it's more beneficial to pause on automating a solution to get things done quicker, and instead shift into a pairing mindset where you use the LLM to understand what those attributes do and why they matter, as in the sketch below.
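To make that concrete, here's a minimal sketch in TypeScript of the kind of disclosure widget an LLM might hand you (the #faq-toggle and #faq-panel ids are hypothetical, purely for illustration). Knowing what aria-expanded actually communicates to assistive technology is what lets you judge whether the generated code is right:

```ts
// Hypothetical markup: a <button id="faq-toggle"> that reveals a <div id="faq-panel">.
const button = document.querySelector<HTMLButtonElement>("#faq-toggle");
const panel = document.querySelector<HTMLElement>("#faq-panel");

if (button && panel) {
  // aria-controls links the button to the panel it drives;
  // aria-expanded tells assistive technology whether that panel is currently open.
  button.setAttribute("aria-controls", panel.id);
  button.setAttribute("aria-expanded", "false");
  panel.hidden = true;

  button.addEventListener("click", () => {
    const expanded = button.getAttribute("aria-expanded") === "true";
    button.setAttribute("aria-expanded", String(!expanded));
    panel.hidden = expanded; // collapsing hides the panel, expanding reveals it
  });
}
```

If you already understand that aria-expanded announces the open/closed state to screen readers, you can verify output like this at a glance; if you don't, that's exactly the moment to ask the LLM why the attribute is there rather than just shipping it.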
Closing statement
LLMs have definitely had a positive impact on my personal work habits, but it is slightly unsettling to see them begin to replace some of the more manual and human processes we've been accustomed to. Those moments where we have to stop, think, and make mistakes are crucial parts of the growth process — that's where learning, creativity, and intuition take shape. Now we're often handed an answer that we gladly accept without actually comparing approaches or questioning it.
LLMs are fast becoming ubiquitous, much like the hold Google's search engine used to have on us that I mentioned earlier. I'm still adjusting to that shift — maybe it's just another form of growth. But if it is, I hope we still leave room for the slow, imperfect kind that makes us human, and that we don't lose it in the pursuit of convenience.