Why Bother Writing Anything Anymore?

 

“Just throw it into ChatGPT, give it a few tweaks, and upload it. We’re in a hurry. This needs to go up on the site yesterday.”

Dear reader, you appear to have stumbled upon my worst nightmare. 

Ever since generative AI (GenAI) burst onto the scene like a young Micah Richards, I’ve been inundated with a tidal wave of AI slop – as, I’m sure, have many of you. My LinkedIn feed these days is a never-ending parade of people from all across the planet giving suspiciously similar takes about all manner of topics.

This is – I believe – a symptom of a deeper problem.

Lots of people don’t like writing.

It’s like public speaking. It’s hard. It’s uncomfortable. Sometimes it requires you to go out in front of an audience and say something completely absurd.

That’s not the problem, though. The problem is that – as soon as these people found that they could outsource the cognitive labour of writing (and I use that term loosely) to a chatbot – they did so, liberating themselves from the admittedly painful task of working out what it is they want to say and how to say it.

Now, we’re drowning in AI slop. It’s all over LinkedIn, it’s littering companies’ blog pages, and it’s even infiltrated your grandmother’s Facebook page. In that context, it’s no wonder some people throw their hands up and ask: “Why should I bother writing anything anymore? Surely the chatbots have it covered?”

But that question leads me to another. If the chatbots are writing more words than we could ever hope to read, who’s on the other side? 

Who’s reading them?

 

A Deluge of Synthetic Slop

The internet is oversaturated with synthetic material. Platforms like Facebook, Twitter, and Instagram have been riddled with AI slop like Shrimp Jesus and a lightsaber-wielding Donald Trump – and it hasn’t stopped at people’s social media feeds. In February 2024, Sam Altman said OpenAI was generating 100 billion words a day, much of that from corporate users. Unfortunately, all those tokens have to go somewhere.

Originality AI, an AI detection startup, revealed to WIRED that over 54% of longer-form English language posts on LinkedIn were likely to be AI-generated. With such a flood of low-effort material being pumped into the public square, how does that come across to readers – the people who get their news and industry insights from LinkedIn and other community platforms?

If you write for a living, you’ll know the tell-tale signs: generic treatment of the subject matter; a love of “it’s not just X—it’s Y” phrasing (otherwise known as contrastive parallelism, or antithesis); and far too many adjectives. The fact is, though, that you can’t always tell. In ‘Human heuristics for AI-generated language are flawed’, researchers from Cornell and Stanford Universities found that, when shown human and AI-generated self-presentations, participants could only tell the difference in 50–52% of cases.

In a similar vein, Bowdoin College ran a January 2025 study that offered 650 people the chance to read and assess an AI-generated story, paying them $3.50 to read a piece presented as the work of the author Jason Brown. Crucially, only half of them were told that the story was actually synthetic. During the study, researchers asked participants how much money and time they’d give up to reach the end of the story. Both groups sacrificed the same amount, whether they knew it was AI-generated or not.

What’s most interesting is how the readers felt about the piece. Those who knew they were reading a synthetic story rated it much more harshly. They thought it was predictable, inauthentic, and dull. 

You might ask why these participants felt compelled to read to the end anyway. Maybe they were just prejudiced; maybe they’re actually fine with synthetic stories (if they don’t know they’re synthetic). But being paid to start reading a story makes you far more likely to finish it. It’s an artificial leg-up that most AI writing won’t get.

Regardless, what we’ve learned is that, when somebody knows they’re reading AI-generated content, it breaks the spell. The lustre disappears, and the Wizard of Oz turns out to be just an old man behind a curtain.

You may remember The Last Screenwriter, a film set to premiere at the Prince Charles Cinema in June 2024. Its script was “written” by OpenAI’s GPT-4o. It was – as you can imagine – not a glittering success. The cinema received over 200 complaints, and the showing was cancelled.

So, why are people so attached to the idea of human writing – and upset by the concept of it being replaced by synthetic material?

 


 

Castles on Sand

First and foremost, large language models are incapable of original thought. A neural network learns how to predict the most likely next word (or token) in a sentence based on its training data – but that’s it. While it might find new ways of remixing the same ideas, it won’t take you anywhere different. 

The linguist Emily M. Bender and her co-authors coined the phrase “stochastic parrot” to describe a system that “haphazardly [stitches] together sequences of linguistic forms it has observed… according to probabilistic information about how they combine, but without any reference to meaning”. 

Good communication relies on theory of mind: “the ability to attribute mental states to oneself and others, understanding that others have beliefs, desires, intentions, and perspectives that are different from one’s own.” A human writer chooses words and phrases which will resonate with their reader, which sometimes involves taking risks; a chatbot will go straight down the middle, every time.

Another phrase that’s come into vogue is “potemkin understanding”, as proposed by researchers at MIT, Harvard, and the University of Chicago. The name is a reference to the fake, portable villages built by Grigory Potemkin, Russian field marshal and former lover of Empress Catherine II. A “potemkin” refers to an instance when a large language model defines a concept perfectly well in a benchmark test, but falls flat when asked to apply it in practice.

Potemkins are distinct from hallucinations – when an AI model’s ability to predict the next word in a sentence goes wrong and it simply makes up an answer out of thin air. Hallucinations are perhaps an LLM’s most damning flaw. You can’t trust anything it tells you. 

You will recall Google’s AI Overviews telling people to put glue on their pizza and eat rocks. In the past few years, we’ve also seen Air Canada forced to provide an imaginary discount invented by its customer service chatbot and a lawyer forced to apologise after submitting AI-generated fake quotes in a murder case. If people can’t trust LLMs to not invent information (which, I might add, is exactly what they’re designed to do), how can they gain traction in highly regulated sectors like law or finance?

The Bowdoin College study showed us that, when people know they’re reading AI-generated writing, they judge it much more harshly. Its synthetic status takes away from its perceived value. There’s a great quote from Piers Tomlinson at an FT Longitude event: “Why would you bother to read something from a company that hasn’t bothered to write it properly, or even think about it properly?” 

Fundamentally, if your audience works out that they’re reading something that’s come from a chatbot – why wouldn’t they cut out the middleman and ask the chatbot themselves next time?

 


Content vs. Writing

Good writing exists to communicate, not just to fill space. There are two main reasons why people read anything: to be entertained, or to be informed. If a piece isn’t doing either, you’d be well-justified in asking why it exists in the first place.

For years, I’ve been seeing people get increasingly annoyed with the amount of shallow thought leadership and engagement bait posted online – especially on LinkedIn. This material is often created for the sake of exposure and SEO, treated as a commodity, and pumped out as fast as possible. 

This was true even before we had LLMs to do it for us. Now that we do, it’s become very clear that the game of low-value content production is over. As Susie Alegre at WIRED put it: “Ironically, the advent of AI-generated search, stalling traffic to original websites, will kill off the need for pointless ‘content’ to game the system and will push people to demand better.”

Nobody can write faster or more cheaply than AI, much as you’d struggle to weave fabric faster than a power loom. What we can do is write better. The audience doesn’t care if a company is producing content more efficiently. They care about the output – is it informative? Funny? Surprising? That’s the difference between ‘content’ and writing. The reader wants to see a human on the other side.

 

Growing an Audience

For most brands, the goal of producing content is to build an audience. You need to attract and then – a perhaps more difficult step – retain a loyal set of readers. There’s no shortage of online information these days, and that’s compounded by more and more people using GenAI to “keep up.” That presents two key problems. Once everybody starts to sound the same, what makes your target readers choose you? And if everyone’s pumping out AI slop like there’s no tomorrow, who’ll be reading it?

I’m far from the first person to notice this. And, thankfully, there’s a growing audience for personalised, human-authored writing – and many people willing to pay for it. In venture capital, for example, many investors who might once have penned long LinkedIn posts about the market and their observations are now using Substack to amass a following.

In Steph Bailey’s excellent piece for Sifted on this trend (The substackification of VC), she discusses the case of Rubén Dominguez Ibar, whose Substack, The VC Corner, has thousands of paying subscribers and is now directly funding his future investments in startups.

Meanwhile, as our counterparts in tech journalism have endured round after round of devastating layoffs, we’ve also seen specialist publications like The Stack and UKTN enjoy some success via the launch of new paid media offerings. The Stack’s paid community membership tier costs £250/year, and it has promised to hire a new technology reporter for every 300 annual subscribers. While this is, no doubt, a reflection of the challenges involved in running a purely ad-driven news outlet, I’m entirely behind good writing being paid for – especially when it generates new opportunities for more good, original writing. A virtuous circle, indeed.

For brands, too, there is huge value to be gained by producing your own high-quality, original thought leadership. This is partly because it’s so rare. Not many organisations can afford to invest the time and effort it takes to work up something new and interesting – even fewer will do so regularly. But that’s exactly what makes it so valuable. Journalists and potential customers want to speak with real experts in their field – but, first, they need to figure out who those experts are.

We’ve seen this first-hand in our work with data centre infrastructure specialists Vespertec. The world of data centre hardware gets notoriously dense the further you go down the rabbit hole – full of acronyms like PUE, PDU, and PCIe – and it takes serious effort to create thought leadership and breakdowns that are accessible, accurate, and able to be taken seriously by CTOs and heads of infrastructure. When we helped the company create its own microsite breaking down NVIDIA’s vast product portfolio, it took exhaustive research, interviews with experts, and good, old-fashioned writing and rewriting to make sure what we created was both accurate and of the highest possible quality.

 


 

Do We Work With AI, or Work Against AI?

With all that said, genies don’t tend to go back in bottles. And Pandora never managed to close that box. AI will be used for writing, whether we like it or not. Our job, then, as marketing and communications professionals, is to take responsibility for the outputs.

AI will make things up. It will talk in circles (John R. Gallagher, professor at the University of Illinois, in his excellent newsletter, terms this ‘orbital argumentation’). And, if we’re not careful, it will rob us of our ability to think. 

The last point, to me, is the most damning. An MIT study published in June 2025 scanned the brains of people using ChatGPT to write essays, finding less activity in those people’s cognitive processing centres, and warning that, “[o]ver four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.” 

I’m not saying we should never use AI. Partly because that’s a losing battle, and partly because I use it myself. Generative AI, as we all know, has many valid uses: filling a section with placeholder text when you’re short on time, summarising survey findings (as long as you fact-check the output), or brainstorming ideas for headlines and metaphors.

What I’m trying to say is that AI-generated writing cannot be good writing. Good writing can only come from a person: somebody doing the work, diving into mounds of research, thinking deeply about the subject, working out what they think, trying to express that to a specific target reader, and sounding out the rhythm and voice of the prose. Maybe that person has built on top of an AI-generated first draft, or maybe (and this is my preference) they’ve started from scratch. 

The problem comes when people see AI as a way to avoid doing the work. To cognitively offload the mental labour of thinking about something, and content themselves with the approximation of writing. The Potemkin village: a convincing-enough imitation that falls apart when you get too close.

 

So, Where Do We Go From Here?

At the end of the day, you need to get your news, analysis, and insights from somewhere. If you notice that a source you used to like is serving you nothing but AI-generated slop, why not cut out the middleman and go straight to the chatbot yourself? What can an (entirely) AI-generated blog or LinkedIn post offer you that ChatGPT or Claude can’t?

If I had to give one prediction for 2026, it’s that companies who continue to dump wholly synthetic material on the internet will either lose the audience they’ve worked so hard to build – or lose out on the opportunity to capture new market share. 

If I had to give one piece of advice, it would be this. Trust your readers. There is an audience out there for original, well-written material, and people will be loyal to companies who can consistently provide it.

We may not queue up on the street corner for newspapers anymore. And audiences may hunger for new digital formats: short-form video, podcasts, UGC, and content structured for AI search engine crawlers. But – in a world where clarity is all too often left by the wayside – there will always be space for the written word.

 
