If anyone is telling you that AI can code what you need coded and build what you need built, they are lying to you.
This is not speculation. This is not bombast. This is not a threat.
We know enough now about how AI works, and especially GenAI, to be able to say this with confidence. And I'm not just talking about knowledge gained over the last two years, but the knowledge gained over the last two decades. I was there at the beginning. I know.
For a lot of you, I'm telling you something you already know as well. But your work here is far from over. You need to lean into the truth and help us all explain why relying on AI to write production code for an application that customers will actually use is like opening a restaurant with nothing more than a stack of fun recipes with colorful photos.
They look great on paper, but paper doesn't taste very good.
The Boring Structural Work Matters
To put this into a perspective that everyone can understand, let me ask you a question:
Q: How would you know if this article was written by AI?
A: Because it would suck.
Yeah, maybe the bots could imitate my vibe, adopt my writing tics, and lean into the rule of threes as I often do, but even then, the jury is still out on how closely it can replicate my style beyond a sentence or two.
Banana.
Screw you, AI.
The thing I'm 100 percent sure AI can't do is take my decades of experience in the topics I choose, topics that need to be timely across an ever-changing technical and entrepreneurial landscape, and use my snarky words and questionable turns of phrase to put insightful, actionable thoughts into the heads of the maximum number of people who would appreciate those thoughts.
That's structure. It's foundational. It's boring. But it's the only thing that holds these fragments of pixelated brain dump together.
Look, if you want to write about a technical or entrepreneurial topic, you either need to a) spend a lifetime doggedly nerding down those paths with real-world, real-life stakes and consequences, or b) read a bunch of articles written by people who have done just that and then summarize those articles as best you can without understanding half of what those people are actually talking about.
Which one sounds more like AI, a) or b)?
Now let's talk about how that relates to code, because hopefully you can already see the connection.
AI Is Not an Existential Threat
Real coders know.
The threat that AI presents to your average software developer is not new.
Raise your hand if you've ever used GitHub or Stack Overflow or any other kind of example code or library or whatever to help you get started on the foundational solution to the business problem that your code needs to solve.
Now, put your hand down if you've never once had to spend hours, sometimes days, tweaking and modifying that sample code a million times over to make it work like you need it to work to solve your unique problem.
OK. All of you who put your hands down. Get out of the room. Seriously. Go. We can't have a serious discussion about this.
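The hand-raising exercise above has a concrete shape every developer will recognize. Here's a minimal, hypothetical sketch (the function names and business rules are invented for illustration): the snippet you copy solves the generic problem, and the code you actually ship solves yours.

```python
# Step 1: the kind of snippet you find online. Generic, tidy,
# and it "works" (for hashable rows, with no rules attached).
def dedupe(rows):
    return list(set(rows))

# Step 2: what the business actually needs, after hours of tweaking.
# Hypothetical rules: keep the most recent record per customer ID,
# and never drop rows that are missing an ID.
def dedupe_orders(rows):
    latest = {}        # newest row seen per customer_id
    keep_as_is = []    # rows with no ID: business says never drop them
    for row in rows:
        customer_id = row.get("customer_id")
        if customer_id is None:
            keep_as_is.append(row)
        elif (customer_id not in latest
              or row["updated_at"] > latest[customer_id]["updated_at"]):
            latest[customer_id] = row  # newest wins
    return keep_as_is + list(latest.values())
```

The generic version isn't wrong; it's just not your problem. The distance between the two functions is exactly the work being discussed here.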
Cheap, flawed, technical-debt-inducing, easily breakable code has been a threat to software developers since they first started letting us kids bang on BASIC, let alone the threat of any technology solution that ends with the word "-shoring."
The AI threat just seems existential because of the constant repetition of a few exaggerated truths: that it's "free," that it's "original," and that it "works."
Here's why that's going to be a race to failure. Position yourself.
"AI" "Can" "Code"
That's the most judgy, snarky, douchey section header I've ever written. But in my defense, there's a reason why every word is in quotes. Because this is how the lie propagates.
Yes, what we're calling AI today makes an admirable attempt at slapping syntax together in a way that compiles and runs. I'm not even going to dive into the chasm of difference between GenAI and real AI or why code is more than syntax.
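To make "code is more than syntax" concrete, here's a small hypothetical sketch (both functions and the billing rule are invented for illustration). The first version is exactly what pattern-matched code tends to produce: valid syntax, runs without error, and wrong in a way no compiler will ever flag.

```python
def split_bill_naive(total, people):
    # Compiles and runs: divide and round to cents.
    # But 100.00 split 3 ways gives three shares of 33.33,
    # which sum to 99.99. A cent just vanished.
    return [round(total / people, 2)] * people

def split_bill(total_cents, people):
    # The actual business requirement: shares must sum to the
    # exact total. Work in integer cents and hand the remainder
    # out one cent at a time.
    base, remainder = divmod(total_cents, people)
    return [base + 1] * remainder + [base] * (people - remainder)
```

Both pass a glance. Only one satisfies the requirement, and the difference lives entirely outside the syntax.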
But I will point to the fact that, even beyond those quibbles, we're not at anything I'd call viable yet. Damning words from an IEEE study follow:
[ChatGPT has] a success rate ranging from anywhere as poor as 0.66 percent and as good as 89 percent, depending on the difficulty of the task, the programming language, and a number of other factors.
I'll let you determine how "difficulty," "programming language," and "other factors" impact the success rate. Quotes again. Sorry. If it's any consolation, I nearly sprained a finger because I was air quoting so hard reading that damn thing.
A conclusion of the study (italics are mine): "ChatGPT has not been exposed yet to new problems and solutions. It lacks the critical thinking skills of a human and can only address problems it has previously encountered."
So, much like my example of why AI-generated articles suck, if you're trying to solve new problems by inventing new solutions, AI has zero experience with this.
OK, all you "ChatGPT-4o-is-Neo" bros can come at me now. But it isn't just the syntax where AI has problems.
Aw, AI Came Up With This All by Itself
Code in a vacuum is worthless.
Every software developer reading this just went, "Yup."
Beyond all the limitations that AI exposes when it creates syntax out of "thin air" (or, to use the technical term, "other people's code"), deeper problems start to expose themselves when we try to get the results of that code into a customer's hands.
Code without design, UI, UX, functional requirements, and business requirements is a classroom exercise in futility. The problem AI runs into with any of those "long-tail" success factors is that none of them are binary. Zero.
So, for example, Figma had to temporarily pull back on its AI design feature when it was alleged that its AI was just copying someone else's design.
"Just describe what you need, and the feature will provide you with a first draft," is how the company explained it when the feature launched.
I can do that without AI. I can do that with cut and paste. Figma blamed poor QA. Which one sounds more true?
AI Is Great at a Lot of Things
But not elegance.
If your code is not infused with a chain of elegance that connects the boring structural-solution work to the customer-facing design and UX, you can still call it "code" if you want to, but it will have all the value of an AI-generated avatar reading AI-generated content aloud over AI-generated images.
Have you ever seen that? It'll stab you in the soul.
There's a right way to do things and there's a way to do things well, and I'm not naive enough to rail against the notion that sometimes you just can't do both. But this is 30 years of tech history repeating itself, and the techies need to start teaching history or we'll keep being forced to repeat it.
So I'd ask my software developer friends to raise your hand if you've ever had to come in and fix someone's poorly structured, often broken, debt-laden, and thoroughly inelegant code.
OK. Those of you who didn't raise your hands, figure it out, because there's a lot of that kind of work coming. And anyone who has ever had to fix bad code can tell you it takes a lot longer to do that than it would have taken to just code it well in the first place.
I'm going to be talking more about humans properly using AI to solve problems for other humans. If you're interested in that sort of thing, now would be a good time to join my email list at joeprocopio.com.