Why ChatGPT Will NOT Replace Human Writing

With everyone abuzz over ChatGPT’s writing, many are questioning whether it will soon replace human journalists, bloggers and authors.

ChatGPT’s capabilities are certainly impressive, and I’ve used it to help me think through this post. However, I believe ChatGPT cannot fully replace human writing just yet. In this post, I’ll explain why and outline which types of writing are likely to be the safest — and which types are most at risk.

One caveat: this post focuses on current versions of large language models like ChatGPT. I have no idea what the world will look like if and when we get to Artificial General Intelligence (AGI), which could be a complete game-changer.

[Image: ChatGPT writing a book]

The quality of ChatGPT’s writing depends on the quality of its prompts

The quality and depth of the content ChatGPT produces are highly dependent on its prompts. As they say: garbage in, garbage out. And human input is still crucial in crafting those prompts.

For instance, when crafting an article about how COVID vaccines work, the prompts given by a doctor or health expert would likely result in a much more informative and engaging article than the prompts provided by, say, me. The human element of thinking and expertise is still required to guide ChatGPT into writing an accurate, relevant, and compelling article.

Different types of writing

Too often when people talk about ChatGPT replacing “writing”, they don’t break down what kind of “writing” they’re talking about. Not all types of writing are equal. Writing that relates to the real world or trades off a personal brand is less likely to be replaced by ChatGPT. On the other hand, writing that is derivative or generic is more likely to be replaced.

Writing that relates to the real world

A lot of writing conveys information about things that happen in the real world. AI language models do not currently threaten this realm of writing, as they don’t operate in the “real world”. They have a lot of training data, but so far that data has mostly consisted of text from the Internet. In the near future, they will probably be able to learn from images and video as well. Still, that does not make up the whole spectrum of human experience.

Examples

  • Biographies. People read biographies because they tell a real story about a real human. The Diary of Anne Frank wouldn’t have had the same emotional impact if written by ChatGPT about a fictional girl. People also often prefer stories told by the people who lived them, so autobiographies are probably safer than ordinary biographies.
  • Reports of original research and experiments. Take Daniel Kahneman and Amos Tversky’s work on human behaviour and decision-making, for example. ChatGPT may be proficient at making up plausible-sounding studies, but it does not conduct any original research. Once its training data includes a published study, ChatGPT can write about it as well as most people can, but human researchers still have to do the initial legwork.
  • Consumer reviews. We often search the Internet for other people’s opinions to help us decide what to buy or where to eat. ChatGPT can generate consumer reviews that look, on the surface, about as legitimate as any human-generated one. When I asked ChatGPT to write reviews for a fictitious restaurant in my hometown and for the (not yet released) Samsung S24, it happily did so. Those reviews may look like real reviews, but they’re absolutely useless.
[Image: ChatGPT can write a review for my fake restaurant, and MidJourney can create an image for it]

It’s easy to forget that ChatGPT doesn’t physically experience the world. It can tell us much about real-life events, simply because its training data includes such events. For example, if its training data includes reviews of a particular Vietnamese restaurant, ChatGPT’s review of that restaurant may be indistinguishable from, and just as useful as, a real review. But it can make serious errors when extrapolating from that data, such as using common phrases from Vietnamese restaurant reviews to write a glowing review praising the pho at a new, untested restaurant.

Writing that trades off a personal brand

Some writing is popular because of who the author is. One example is Michelle Obama’s The Light We Carry, one of the bestselling books of 2022. Even if ChatGPT could have somehow written the exact same book, it wouldn’t have been nearly as popular. People bought and read that book because Michelle Obama wrote it.

Established fiction writers also trade off their brand. If the last Stephen King novel hadn’t borne his name, it wouldn’t have sold nearly as well.

Perhaps AIs will later branch out into different versions and develop their own unique personalities and brands. A ChatGPT-Meg may act and write like a smart, sarcastic 20-something woman. ChatGPT-Mike could come off like a sweet old grandfather figure. Like humans, most AI personalities will be anonymous nobodies, but a few could strike a chord and become famous. If this seems implausible, consider that Bart Simpson has already “written” a book and Hatsune Miku, a virtual character, has held real-life concerts using holographic projections. It may become much harder for human writers to get “discovered” if AI personalities swamp the field and outnumber them.

Alternatively, people can maintain their own personal brands and simply use AI to enhance them. For example, say Joe can triple the number of blog posts he publishes with the help of ChatGPT. Even if ChatGPT actually writes more than 50% of each post, Joe can remain the “face” of the blog. It’s like how successful YouTubers hire employees to help create videos while keeping themselves as the “face” of their channels.

Writing that is derivative or generic

ChatGPT is more likely to replace derivative or generic writing. Summaries are a good example. Not only can ChatGPT summarise things much faster than I can, it can likely do so more objectively. (Note however that ChatGPT is not perfectly impartial, as its training data contains biases too.) Generic and formulaic fiction also falls into this category, and there are already reports of ChatGPT writing trashy novels.

However, derivative writing may still contain some originality. James Clear’s Atomic Habits is a good example. While it didn’t contain any original research, it was a massive hit because of Clear’s personal brand and the accessible way he repackaged existing ideas. Clear put thought into which topics to cover and which studies to include. Even if you could learn just as much by asking ChatGPT to summarise key habit-related findings, you may still prefer to read the book. Plenty of people (myself included) buy books or courses that simply repackage information that’s freely available online or in a library, because it saves time that would otherwise be spent searching.

Conclusion

Although ChatGPT’s writing is certainly impressive, we shouldn’t pack up our pens just yet. ChatGPT’s writing still relies on human-created prompts, and writing that relates to the real world can’t be fully replaced by a language model. It also seems unlikely that writing that trades off a personal brand will disappear anytime soon.

The writing that is most at risk is derivative or generic writing that merely summarises or repackages others’ work. Even that will usually contain some originality, but writers in this space should be thinking of how they can move up the value chain — I certainly am.

Ultimately, writing is just one way by which we convey thoughts and ideas. No matter how proficiently AI writes, it will never remove our need to communicate with one another as human beings.

Do you think there’s a future for human writers in light of ChatGPT? Would you like to read more posts like this? Share your thoughts in the comments below!
