Remember Microsoft Songsmith? Don’t worry if you have forgotten; I suspect Microsoft has too. Launched over a decade ago, it promised to make songwriting available to anyone: just sing a few lines or hum a few bars and the AI would do the rest. It worked! And it pulled off the strange trick of being genuinely impressive and entirely laughable at the same time.
Songsmith’s most popular use was taking isolated vocal tracks from well-known songs and letting the algorithm overlay cheesy elevator music on top. It went viral for a short while, and was then forgotten.
My experience of ChatGPT has so far reminded me of Songsmith. I asked it to draft this article for me.
A piece of advice I picked up a few years ago (and have shamelessly repeated as my own) is not to agonise over intros, even though the intro is possibly the most important part of any article. Just get the words down. Then, when you’ve finished, ask yourself: can I delete this first paragraph? There’s a good chance you can. Cut it, get straight to the point, and leave out all the scene setting your audience already knows.
ChatGPT’s intro is fine. It’s coherent, it makes logical sense, and I’d delete it with barely a moment’s consideration. It’s the cheesy synths and beats of Songsmith, but as prose.
This isn’t surprising. ChatGPT is a large language model (LLM), trained on massive amounts of text to predict which word comes next in a sentence. It’s effective at producing something that looks good but doesn’t have much thought behind it. If you want copy that’s pretty much the same as everything else, ChatGPT will provide it.
That’s not to say ChatGPT isn’t an impressive achievement. But so was ELIZA, the first chatbot, even though all it did was spit out open-ended, therapist-like questions, no matter what the user typed. Those who tried it were so taken in that they spilled their innermost secrets to their new virtual confidante. Indeed, our tendency to anthropomorphise computers and to believe that software thinks and understands is now known as ‘the ELIZA effect’.
The ELIZA effect may be half a century old, but it’s still real. Last year a Google employee was fired after claiming that an AI, the Language Model for Dialogue Applications, or LaMDA, was sentient, self-aware, and even feared death. If an expert can be fooled, it’s no wonder lay people look at the likes of ChatGPT and see an inevitable future where creative industries from fine art to filmmaking and, yes, even B2B copywriting, are all dead, made redundant by AI. But on the evidence so far, all we really have is a very sophisticated photocopier: impressive, fun, perhaps even able to spark creativity, but not in itself creative.
I’m biased, of course. Anyone who thinks their job is at risk from automation is going to be suspicious of, and lash out at, the machine that threatens to take over. Some might say I’m a Luddite. But it’s worth remembering that the Luddites weren’t actually afraid of progress; they just wanted a fair deal for machine workers: pensions, a minimum wage, and labour standards. The Luddites were right.
Today’s AIs aren’t nearly so threatening, promising not the industrial revolution and all of its hardships but instead a future of derivative artworks and lukewarm word soup. Could this change? As ChatGPT might end a blog post: What will the future hold? Only time will tell.