I wrote a few paragraphs disagreeing with Paul’s take, asserting that, like Simon suggests, we should think of language models like ChatGPT as a “calculator for words”.

I argued things like “language models are another tool in the technology toolbox” and “language models will meaningfully change what day-to-day life looks like, but transformations like this have happened throughout history”. Ironically, in trying to write a coherent opinion in disagreement with Paul’s, I realized I agreed with him.

Up until recently, it was difficult to produce coherent writing without thinking. If you sought to prove a point, you needed to enumerate your arguments, support them with evidence, then craft sentences to communicate what you were trying to say. With LLMs, you can generate written prose that masquerades as your argument (and it may even be a good one), but unless you do some amount of writing, it’s not really your argument. You probably couldn’t defend it, and even if you could, you’d have had to study it deeply enough that you’d either find flaws in it or could have written it yourself.

When you write your own argument, you’re forced to think, because if you care about what you’re writing, you have to consider whether what you’re saying holds up to scrutiny. Writing is unique in that you can take time to collect your thoughts and interrogate your own arguments to make sure they accurately reflect what you believe or what you’re trying to say. Written words are the bridge between the abstract ideas in your mind and other people. At least they are for me.