Who is finding LLMs useful and who is not? And why is this the case?

For about two years now, I’ve been floored by the fact that I can get a model to write software and complete tasks for me, with increasing adherence to my instructions, given a good description of what I want.

This technology feels world-changing to me. I feel this way because it has changed my world and what I am capable of accomplishing in a fixed amount of time with the skills I possess today.

I’ve talked to anyone who will listen about why I think this technology is a step-function change in how we will accomplish many knowledge tasks in the future, and how it is already changing the way I accomplish them in the present.

The utility of language models is beyond apparent to me personally. Yet, I frequently read articles by folks I respect who seem to feel the opposite way. These folks do not believe language models make them faster or more effective at accomplishing their work or goals. Some seem to believe the models make them dumber and lazier. Some seem to believe the models are a poor facsimile of intelligence. Some find they don’t live up to the hype and promise.

Increasingly, I’ve sought to understand the perspective of these folks. What about how they think, how they work, or how they operate makes an LLM not useful for them?

I recently shadowed a coworker while they used Claude Code to write some software. Early on, I noticed differences in how we engaged with and prompted the model. But more interestingly, I realized that prompting an LLM (particularly a coding agent) is a distinctly different skill from writing software.

I noticed that my coworker was prompting for specific technical implementations, and Claude was struggling, pulling in too much context and taking an unfocused approach. I would have started much more vague and general, then refined my prompts as I saw whether Claude was on the right track.

The difference mostly amounted to prompting what vs. how. I typically prompt for what I want done. They were prompting for how they wanted something done.

None of this is to say that either approach is better or worse, but rather that I think I have drastically underappreciated how many different ways there are to prompt an LLM and how different the outcomes can be as a result.

So who is not finding models useful, and why? One archetype I imagined is someone for whom writing software is the clearest way they can communicate an idea.

If it’s harder for you to describe your ideas in natural language than in code, I can see why writing software with a model wouldn’t seem useful.

Personally, I am far better at describing my ideas in natural language than in code.

Another archetype I imagined is someone who expects a perfect result from the start. They might prompt the model to “launch a Flappy Bird clone in the App Store” and end up disappointed when the model doesn’t do it.

I know that I could personally use an LLM coding agent to perform that task, but it would still take me several hours. Far less than in a pre-LLM world, but not a trivial amount of effort.

It seems that models yield some of their best results if you are willing to be patient, probe, and explore. They won’t write the perfect code on the first attempt, but they will let you sculpt your ideal implementation, trying several different approaches along the way.

So why do I find models useful?

I find models useful because I can describe what I want and the details I care about, and the model will follow my instructions, then fill in the rest of the details that needed to be decided but weren’t initially my focus.

I find models useful because I can follow up on the details the model filled in for me and easily change those too.

I find models useful because they can help me from wherever I currently am, rather than my having to reshape my work to fit a different solution I might find in documentation or example resources.

I find models useful because I feel a certain amount of confidence knowing that I have a tool that can reliably get my mental wheels turning when I get stuck on a task and need to figure out a path forward.

I find models useful because I can prompt them to take a first pass at something that I think could be interesting to explore further, but I won’t really be sure until I see it.

I find models useful because they generally seem to understand, and are able to implement, the prompts I give them in support of my vision, even when other people sometimes do not.

I find models useful because they allow me to learn more about exactly what I am interested in at the exact moment I am interested in it.

I find models useful because they speed up my feedback loop of creation just enough to keep me engaged and working in a flow state.

I find models useful because they allow me to make my ideas real and tangible in more ways.

I find models useful because they help me communicate more clearly with others.

I find models useful because they help me pick reasonable starting points when I am stuck or experiencing decision paralysis.

I find models useful because they help me build tools that I can use to accomplish my goals and make my life easier.

I find models useful because they help me build fun and useful things for friends and family.

I find models useful because I can prompt them to challenge my thinking to give me another perspective.

Much of my use of language models is code-focused, but I’ve had a lot of success using them to help me get started as a beginning designer, to understand legal documents and their implications, and to learn more about cloud network architecture, security, and the protocols that make the internet work.

I don’t rely on models as gospel, but I do involve them in a substantial part of my work.

Is using LLMs making me dumber? I’m not entirely sure. I may write less code by hand than ever, but I get more done now than I ever have before.

I may never learn to write Haskell or D3 at a high level of proficiency without the help of a model. Given we have limited time and resources, models let me use more of my time working on the things that interest me and that I enjoy. And they give me the ability to work with more tools in service of those interests and goals.

To me, models seem to reward the curious and the persistent, but they do require a healthy amount of wariness given their flaws, limitations, and record of being incorrect at times. I can’t say I always know what that wariness looks like or that I practice it effectively, but I do believe working with LLMs has had a strongly positive impact on me.