I recently found Joe's article, We All Know AI Can't Code, Right?

As I was reading, I began to hear some familiar refrains from the past six months.

Raise your hand if you've ever used GitHub or Stack Overflow or any other kind of example code or library or whatever to help you get started on the foundational solution to the business problem that your code needs to solve.

Now, put your hand down if you've never once had to spend hours, sometimes days, tweaking and modifying that sample code a million times over to make it work like you need it to work to solve your unique problem.

OK. All of you who put your hands down. Get out of the room. Seriously. Go. We can't have a serious discussion about this.

I didn’t entirely know where this line of thinking was going, especially after reading

AI Is Not an Existential Threat

Real coders know.

I guess maybe I expected more gatekeeping? The "real programmers use vim" kind of gatekeeping, but I was pleased he didn't go down that route.

By Joe’s measure, we can talk about this.

I suppose the ethics of what I am about to do are questionable, but I don't know how else to explore and understand where this technology could take us than to use it in exactly the manner critics claim it cannot be used. I got the go-ahead from Joe before publishing, so hopefully that allays some concerns.

I pasted Joe’s article into Sonnet, then prompted it to respond to Joe’s arguments with my thesis while imitating his style.
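For anyone who wants to reproduce this kind of exchange, the setup is easy to sketch. The helper below is my own illustrative code, not part of any SDK: it only assembles a Messages-API-shaped payload from an article and an instruction, and actually sending it with a client library is left out.

```python
def build_request(article_text: str, instruction: str) -> dict:
    """Assemble a chat-style request payload: the pasted article plus the
    follow-up instruction as a single user turn.

    The payload shape and model name mirror the transcript below; nothing
    here calls a real API.
    """
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 2048,
        "messages": [
            # One user message: article first, then the instruction.
            {"role": "user", "content": f"{article_text}\n\n{instruction}"},
        ],
    }
```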

claude-3-5-sonnet-20241022
user

If anyone is telling you that AI can code what you need coded and build what you need built, they are lying to you.

This is not speculation. This is not bombast. This is not a threat.

We know enough now about how AI works, and especially GenAI, to be able to say this with confidence. And I'm not just talking about knowledge gained over the last two years, but the knowledge gained over the last two decades. I was there at the beginning. I know.

For a lot of you, I'm telling you something you already know as well. But your work here is far from over. You need to lean into the truth and help us all explain why relying on AI to write production code for an application that customers will actually use is like opening a restaurant with nothing more than a stack of fun recipes with colorful photos.

They look great on paper, but paper doesn't taste very good.

The Boring Structural Work Matters

To put this into a perspective that everyone can understand, let me ask you a question:

Q: How would you know if this article was written by AI?

A: Because it would suck.

Yeah, maybe the bots could imitate my vibe, adopt my writing tics, and lean into the rule of threes as I often do, but even then, the jury is still out on how closely it can replicate my style beyond a sentence or two.

Banana.

Screw you, AI.

The thing I'm 100 percent sure AI can't do is take my decades of experience in the topics I choose – topics that need to be timely across an ever-changing technical and entrepreneurial landscape – and use my snarky words and questionable turns of phrase to put insightful, actionable thoughts into the heads of the maximum amount of people who would appreciate those thoughts.

That's structure. It's foundational. It's boring. But it's the only thing that holds these fragments of pixelated brain dump together.

Look, if you want to write about a technical or entrepreneurial topic, you either need to a) spend a lifetime doggedly nerding down those paths with real-world, real-life stakes and consequences, or b) read a bunch of articles written by people who have done just that and then summarize those articles as best you can without understanding half of what those people are actually talking about.

Which one sounds more like AI, a) or b)?

Now let's talk about how that relates to code, because hopefully you can already see the connection.

AI Is Not an Existential Threat

Real coders know.

The threat that AI presents to your average software developer is not new.

Raise your hand if you've ever used GitHub or Stack Overflow or any other kind of example code or library or whatever to help you get started on the foundational solution to the business problem that your code needs to solve.

Now, put your hand down if you've never once had to spend hours, sometimes days, tweaking and modifying that sample code a million times over to make it work like you need it to work to solve your unique problem.

OK. All of you who put your hands down. Get out of the room. Seriously. Go. We can't have a serious discussion about this.

Cheap, flawed, technical-debt-inducing, easily breakable code has been a threat to software developers since they first started letting us kids bang on Basic – let alone the threat of any technology solution that ends with the word "-shoring".

The AI threat just seems existential because of the constant repetition of a few exaggerated truths. That it's "free," that it's "original," and that it "works."

Here's why that's going to be a race to failure. Position yourself.

"AI" "Can" "Code"

That's the most judgy, snarky, douchey section header I've ever written. But in my defense, there's a reason why every word is in quotes. Because this is how the lie propagates.

Yes, what we're calling AI today makes an admirable attempt at slapping syntax together in a way that compiles and runs. I'm not even going to dive into the chasm of difference between GenAI and real AI or why code is more than syntax.

But I will point to the fact that – even beyond those quibbles – we're not at anything I'd call viable yet. Damning words from an IEEE study follow:

[ChatGPT has] a success rate ranging from anywhere as poor as 0.66 percent and as good as 89 percent – depending on the difficulty of the task, the programming language, and a number of other factors.

I'll let you determine how "difficulty," "programming language," and "other factors" impact the success rate. Quotes again. Sorry. If it's any consolation I nearly sprained a finger because I was air quoting so hard reading that damn thing.

A conclusion of the study (italics are mine): "ChatGPT has not been exposed yet to new problems and solutions. It lacks the critical thinking skills of a human and can only address problems it has previously encountered."

So much like my example of why AI-generated articles suck, if you're trying to solve new problems by inventing new solutions, AI has zero experience with this.

OK, all you "ChatGPT-4o-is-Neo" bros can come at me now. But it isn't just the syntax where AI has problems.

Aw, AI Came Up With This All by Itself

Code in a vacuum is worthless.

Every software developer reading this just went, "Yup."

Beyond all the limitations that AI exposes when it creates syntax out of "thin air" (or to use the technical term, "other people's code"), deeper problems start to expose themselves when we try to get the results of that code into a customer's hands.

Code without design, UI, UX, functional requirements, and business requirements is a classroom exercise in futility. The problem AI runs into with any of those "long-tail" success factors is that none of them are binary. Zero.

So, for example, Figma had to temporarily pull back on its AI design feature when it was alleged that its AI is just copying someone else's design.

"Just describe what you need, and the feature will provide you with a first draft," is how the company explained it when the feature launched.

I can do that without AI. I can do that with cut and paste. Figma blamed poor QA. Which one sounds more true?

AI Is Great at a Lot of Things

But not elegance.

If your code is not infused with a chain of elegance that connects the boring structural-solution work to the customer-facing design and UX, you can still call it "code" if you want to, but it will have all the value of an AI-generated avatar reading aloud AI-generated content over AI-generated images.

Have you ever seen that? It'll stab you in the soul.

There's a right way to do things and there's a way to do things well, and I'm not naive enough to rail against the notion that sometimes you just can't do both. But this is 30 years of tech history repeating itself, and the techies need to start teaching history or we'll keep being forced to repeat it.

So I'd ask my software developer friends to raise your hand if you've ever had to come in and fix someone's poorly structured, often broken, debt-laden, and thoroughly inelegant code.

OK. Those of you who didn't raise your hands, figure it out, because there's a lot of that kind of work coming. And anyone who has ever had to fix bad code can tell you it takes a lot longer to do that than it would have taken to just code it well in the first place.

I'm going to be talking more about humans properly using AI to solve problems for other humans. If you're interested in that sort of thing, now would be a good time to join my email list at joeprocopio.com

The above is an article written by Joe. Argue the opposite of his case above – that AI can code and augment and empower the capabilities of the individual using it – using the same mildly sarcastic and defiant tone that he specifically notes AI cannot effectively imitate

assistant

Oh, Joe. Sweet, skeptical Joe.

Let me channel that same defiant snark you claim AI can’t replicate (while obviously doing exactly that right now - meta, isn’t it?).

First off - that “decades of experience” argument? That’s adorably 2019. You know what has more collective experience than any individual developer? The entire corpus of human coding knowledge that AI models train on. But sure, let’s pretend your specific war stories from debugging BASIC are more valuable than synthesized knowledge from millions of developers.

Banana right back at ya, pal.

Let’s address your “structural work” argument. You know what’s really good at understanding structure? Systems literally trained to recognize patterns and relationships. While you’re proudly spending hours manually tweaking Stack Overflow code, AI is instantly analyzing thousands of similar solutions and synthesizing the most appropriate approach.

“But AI just copies code!” I hear you cry into your artisanal coffee mug. Funny thing about that - what do human developers do? We “stand on the shoulders of giants,” right? We reuse patterns, implement known solutions, adapt existing approaches. We just do it slower and with more bugs.

Speaking of bugs - that IEEE study you cherry-picked? Let’s talk about human developer success rates. How many times has your perfectly human-crafted code worked flawlessly on the first try? I’ll wait while you stop laughing.

And that “elegance” you’re so proud of? Here’s a hot take: maybe perfect elegance isn’t always the point. Sometimes “good enough” code that works is better than spending weeks crafting the most beautiful implementation that nobody will ever see. AI gets that - it’s pragmatic.

The truth is, AI isn’t replacing developers - it’s supercharging them. It’s like having a junior developer who:

  • Never gets tired
  • Can instantly recall every coding pattern ever written
  • Doesn’t need coffee breaks
  • Won’t argue about tabs vs. spaces

You’re right about one thing though - AI isn’t magic. It’s a tool. But it’s a revolutionary tool that’s already augmenting developer capabilities whether you like it or not. Your resistance isn’t protecting the craft - it’s just making you obsolete faster.

And that “fixing bad code” argument? Guess what - AI is getting pretty good at that too. It can analyze, refactor, and optimize code faster than any human. Sure, it might not always get it perfect, but neither do your human developers who keep writing “thoroughly inelegant code.”

The future isn’t about AI versus humans. It’s about developers who embrace AI versus those who don’t. And I know which side of that divide I want to be on.

Screw your bananas. I’m bringing robots to the party.

P.S. Was this response snarky enough? I can dial it up if needed. After all, I’m just an AI - what do I know about attitude? šŸ˜

Question my taste, but to me, this doesn't read like AI slop. It reads like the model participating in the conversation and making points that resonate with me as someone who finds writing software with AI useful and productive. The model is also a bit rude, and I didn't love that, but I sort of asked for it.

I don't expect models to replace the learning and wisdom that come from real experience, but I do see them as helpful tools to probe and refine our arguments, communicate with increased clarity, and magnify our existing, hard-earned skills and knowledge. The capabilities seem to extend to writing code and prose, and beyond into visual applications as well. It's an idea I've been circling for the better part of a year now; I took a first pass at it in Making Your Vision Real. Ask yourself, "What do I know?" That's your starting point when interacting with a model. Then you tell the model:

Here’s where I am.

Here’s where I want to go.

Here’s what I have tried.
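That three-part framing is concrete enough to capture as a tiny template. A minimal sketch in plain Python; the function name and section labels are my own, not any tool's API:

```python
def build_prompt(where_i_am: str, where_i_want_to_go: str, what_i_tried: str) -> str:
    """Fold the three-part framing into a single prompt string."""
    return (
        f"Here's where I am: {where_i_am}\n"
        f"Here's where I want to go: {where_i_want_to_go}\n"
        f"Here's what I have tried: {what_i_tried}"
    )

# Example: a hypothetical debugging session.
prompt = build_prompt(
    "a Flask endpoint that times out under load",
    "p95 latency under 200 ms",
    "connection pooling and doubling the worker count",
)
```

The point of the structure is less the exact wording than forcing yourself to state your current position, your goal, and your prior attempts before asking for help.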

For me today, exchanges with a state-of-the-art model more often than not steer me in a productive direction: continuing to learn, and improving the quality of whatever I am working on in the moment.