Surviving the Textpocalypse

Liza Long

Why Human Writing Matters More than Ever in the Age of Artificial Intelligence

Imagine a world where every story, every song, every piece of writing is algorithmically generated. It would be a world devoid of the human touch, the spark of imagination, and the richness of diversity. While AI and large language models can generate text that is technically proficient, they lack the capacity to truly understand the intricacies of human emotions. They lack the life experiences that shape our perspectives and fuel our creativity. They lack the ability to empathize, to relate, and to genuinely connect on a profound level.

I didn’t write that first paragraph. AI did (OpenAI, 2023).

Welcome to our brave new world. Spoiler alert: as with prior technological advances we have seen in our lifetimes, profits, not ethics or even a basic concern for humanity, are dominating the discussion and development of generative artificial intelligence.

What does the rise of artificial intelligence mean for jobs? For creativity? For humanity? How will we survive as we face what may turn out to be one of the greatest technological changes since the Industrial Revolution?

Before we get into the ones and zeroes, I want to share a new kind of acknowledgment with you, inspired by the work of Lawrie Phipps and Donna Lanclos (2023). I acknowledge that ChatGPT does not respect the individual rights of authors and artists and ignores concerns over copyright and intellectual property in its training; additionally, I acknowledge that the system was trained in part through the exploitation of precarious workers in the Global South. In this work I specifically used ChatGPT to write the opening lines of this essay and to research examples of prompts that I will share later. I also used Adobe Firefly to generate two images based on my own text prompts.

I’m assuming that you’ve heard about ChatGPT and other large language models. The students in my college English classrooms certainly have. To understand the potential impacts, both positive and negative, I’d like to share the United Nations Educational, Scientific and Cultural Organization’s statement on artificial intelligence:

The world is set to change at a pace not seen since the deployment of the printing press six centuries ago. AI technology brings major benefits in many areas, but without the ethical guardrails, it risks reproducing real world biases and discrimination, fueling divisions and threatening fundamental human rights and freedoms. AI business models are highly concentrated in just a few countries and a handful of firms — usually developed in male-dominated teams, without the cultural diversity that characterizes our world. Contrast this with the fact that half of the world’s population still can’t count on a stable internet connection. (UNESCO, n.d.)

A Brief History of Writing

To understand how we got here, we need a quick tour of writing as a technology.

Language—the ability to communicate both concrete and abstract thought to other humans—is not new. Most anthropologists think that it’s been a defining characteristic of humanity at least since we became Homo sapiens (Pagel, 2017).

But writing is a relatively new technology, appearing on the cultural scene less than 5,500 years ago in Mesopotamia (Brown, 2021). For thousands of years, writing remained the province of the elites—the scholarly upper class. The invention of the printing press in Germany around 1440 C.E. was a game changer, creating the possibility for mass publication and, with it, mass literacy (Briggs & Burke, 2002). One of my first jobs in college was cataloging the marginalia from some of the first printed books—pocket-sized editions of Roman and Greek texts printed by Aldus Manutius in the late 1400s (Naiditch et al., 2001).

By the 1900s, educators around the world were embracing the goal of universal literacy. Public schools promoted literacy as an unqualified good, a rising tide that would lift all boats. Free libraries brought texts to everyone who wanted to read them (Kober & Rentner, 2020).

And in the 1990s, the advent of the World Wide Web meant that suddenly any person could communicate by text with any other person literally anywhere, in real time. In 1995, only 3% of Americans had ever signed on to the web (Pew Research Center, 1995). By 2000, that number had climbed to 52%. By 2015, 96% of adults ages 18-29 were using the Internet (Perrin & Duggan, 2015).

Not that all the words posted to the World Wide Web were worth reading. But it turns out they were good for something else: a treasure trove of readily available text for training large language models (LLMs) (Schaul et al., 2023).

A few interesting things happened to our brains along our path from oral language to ubiquitous text-based Internet. Writing removes the need for robust memory, allowing us to externalize knowledge (DeSilva et al., 2021). We no longer need to memorize all 24 books of Homer’s Iliad—or even a single line. And we no longer need to remember anything, really, when the advanced computers we all carry in our pockets can look up any knowledge, no matter how esoteric, in an instant.

The British author E.M. Forster is often credited with this famous quote: “How do I know what I think until I see what I say?” If we outsource our writing to large language models, will we still be able to think for ourselves? Will we still want to?

The Internet: Then and Now

Maybe you remember life before the Internet. We wrote letters and sent them by mail. My siblings and I fought over who would get to use the telephone after school. We needed a thick book called a Thomas Guide to navigate the streets of Los Angeles.

Things changed fast with the early Internet. I still remember the thrill of my first Mosaic web search in 1994. Not knowing what else to ask, I queried the surfing conditions in Australia—and was delighted by the instant response!

In the early days of the Internet, anything seemed possible. My friends and I wrote long, earnest email missives to each other. I booked a hotel online and looked up train schedules in Barcelona. We explored new identities in anonymous chat rooms. We could order any book we wanted from this magical new website: Amazon.com.

Then the towers fell, and America went to war, and everything fell to pieces.

A few years later a new technology appeared, one that promised to connect us with people around the world. I shared tunes on MySpace and posted photos—so many photos!—of my adorable children to Facebook, never once considering their privacy, or my own. I started an anonymous mommy blog, The Anarchist Soccer Mom. Andy Warhol’s predicted 15 minutes of world fame for everyone, everywhere, became a reality (Jones, 2017).

We all know how that ended. Facebook destroyed democracy. Today our young people show devastatingly high levels of anxiety and depression (Abrams, 2023). We are polarized and siloed in our social media echo chambers.

Large Language Models and Generative AI

Now, with little warning, large language models have been unleashed on the world, with all the optimism that accompanied the last three technological revolutions. And of course, none of the ethical considerations.

I typed my first prompt into ChatGPT in late November 2022. Wanting to assess its potential impact on my school’s philosophy program, I asked it to summarize Kant’s deontology, apply it to the ethics of eating meat, and contrast that with utilitarianism. It got utilitarianism right, but it was wrong on Kant. Three weeks later, I tried the same prompts again. This time, it got Kant right too.

Aside: I am not able to share this chat history because ChatGPT only provides my conversation history beginning in January 2023, and when I first used it, I did not think about the critical importance of citing this new tool as a source. For this reason, if you’re a student who has used ChatGPT in a similarly cavalier fashion, I completely empathize—but it’s important that we both now know that we can and should cite any AI assistant we use, including OpenAI’s ChatGPT, QuillBot, and Google’s writing assistant.

Since January 2023, I’ve used ChatGPT daily. As an instructor, I teach students prompts to generate ideas, focus research questions, clarify difficult concepts like deconstruction, outline papers, and check for grammar. I’ve asked it to design yoga routines, plan meals, create lesson plans, write drafts of emails and recommendation letters, and even compose a collaborative rhyming poem. It’s a simultaneously strange and delightful experience, chatting with this bot. When it lies (which is often, and with great confidence), it always apologizes. How many humans can do that?

I don’t think this tool is intelligent. If you’re familiar with predictive text on your smartphone, ChatGPT is basically that on steroids. Emily Bender, a computational linguist at the University of Washington, has called large language models “stochastic parrots” (Weil, 2023). They function by analyzing probabilities and predicting what you want them to say in response to your prompt. Of course, there are worse things to ask for in a conversation partner.

Ethical Concerns with Generative Artificial Intelligence

And yet, every time I use ChatGPT or any other generative AI program, I am mindful of the problems intrinsic to a training set based on the Internet—gender and racial bias, for starters. This tool acts as a mirror to 30 years of Internet toxicity. No one has meaningfully addressed how this technology will impact knowledge workers, who previously thought their jobs were safe; for example, LLMs are really good at writing code (Meyer, 2023). Time Magazine reported that tech companies exploited Kenyan workers to make ChatGPT’s responses less toxic (Perrigo, 2023).

Then there’s climate change. The energy required to run these large language models is predicted to reach 8-21% of the world’s total electricity demand by 2030, in a climate that is already warming far too fast (Magubane, 2023). This massive energy consumption will disproportionately impact the Global South and those who do not have the financial means to protect themselves from climate harms.

And of course, let’s not forget the general existential threat to humanity. How forward thinking of the tech bros who unleashed this on us to sign a letter calling for regulation and warning about the potential “risk of extinction” (Roose, 2023).

But AI is just one of many existential threats facing humanity. Matthew Kirschenbaum (2023), a professor of English and digital studies at the University of Maryland, predicts a far more mundane, and I think, more likely future: the textpocalypse. He writes,

What if, in the end, we are done in not by intercontinental ballistic missiles or climate change, not by microscopic pathogens or a mountain-size meteor, but by … a tsunami of text? Think of it as an ongoing planetary spam event, but unlike spam, there may prove to be no reliable way of flagging and filtering the next generation of machine-made text. “Don’t believe everything you read” may become “Don’t believe anything you read” when it’s online. (para. 1; para. 5)

Is this what our life will become—computers doing all the writing and the reading? Even as I admit to experimenting with training ChatGPT to provide formative feedback on student essays, I think about the 1985 Val Kilmer movie Real Genius, in which a tape recorder lectures to a classroom of tape recorders. Is this what education is destined to become?

Not everyone is as gloomy as Kirschenbaum about the future of AI, though.

Adrienne LaFrance (2023) sees another possibility for this new technology. She writes,

Just as the Industrial Revolution sparked transcendentalism in the U.S. and romanticism in Europe—both movements that challenged conformity and prioritized truth, nature, and individualism—today we need a cultural and philosophical revolution of our own…. Artificial intelligence will, unquestionably, help us make miraculous, lifesaving discoveries. The danger lies in outsourcing our humanity to this technology without discipline…. We need a human renaissance in the age of intelligent machines (para. 15).

What might that human renaissance look like? What does it mean to be human in the age of AI? Who is the author of my story?

AI researcher Kartik Chandra (2023) has an answer:

[T]he question to ask about writing is not Will AI make it worthless? but rather What could possibly be more important? In a world flooded with the monotonous slurry AI excels at producing, power lies with those who can—and do—speak for themselves. Never have the skills of independent, critical thought and expression been more vital than in the AI era (para. 8).

In the end, I’m left—and I’m leaving you—with more questions than answers. But I agree with Chandra—and with ChatGPT’s assessment at the beginning of this essay: artificial intelligence cannot and should not replace us. And I believe that if writing, thinking, and creating continue to be humanity’s essential work, poets must lead the way. In the words of twentieth-century poet Archibald MacLeish, our task as writers in the age of artificial intelligence is clear:

Poets, deserted by the world before,
Turn round into the actual air:
Invent the age! Invent the metaphor! (“Hypocrite Auteur,” lines 65-67).

References

Abrams, Z. (2023, January 1). Kids’ mental health is in crisis. Here’s what psychologists are doing to help. APA Monitor, 54(1). https://www.apa.org/monitor/2023/01/trends-improving-youth-mental-health

Briggs, A., & Burke, P. (2002). A social history of the media: From Gutenberg to the Internet. Polity.

Brown, S. (2021, April 27). Where did writing come from? The rise, fall, and rediscovery of cuneiform. Getty Museum. https://www.getty.edu/news/where-did-writing-come-from/

Chandra, K. (2023, June 8). What’s a word worth in the AI era? Inside Higher Ed. https://www.insidehighered.com/opinion/views/2023/06/08/what-word-worth-ai-era-opinion

DeSilva, J. M., Traniello, J. F., Claxton, A. G., & Fannin, L. D. (2021). When and why did human brains decrease in size? A new change-point analysis and insights from brain evolution in ants. Frontiers in Ecology and Evolution, 9, Article 742639. https://doi.org/10.3389/fevo.2021.742639

Jones, J. (2017, December 27). Andy Warhol’s 15 Minutes: Discover the postmodern MTV variety show that made Warhol a star in the television age (1985-87). Open Culture. https://www.openculture.com/2017/12/andy-warhols-15-minutes.html

Kirschenbaum, M. (2023, March 8). Prepare for the textpocalypse. The Atlantic. https://www.theatlantic.com/technology/archive/2023/03/ai-chatgpt-writing-language-models/673318/

Kober, N., & Rentner, D. S. (2020). History and evolution of public education in the U.S. Center on Education Policy. https://files.eric.ed.gov/fulltext/ED606970.pdf

LaFrance, A. (2023, June 5). The coming humanist renaissance. The Atlantic. https://www.theatlantic.com/magazine/archive/2023/07/generative-ai-human-culture-philosophy/674165/

Magubane, N. (2023, March 8). The hidden costs of AI: Impending energy and resource strain. Penn Today. https://penntoday.upenn.edu/news/hidden-costs-ai-impending-energy-and-resource-strain

Meyer, P. (2023, May 31). Harnessing the power of LLMs: Code generation unleashed. Medium. https://pub.towardsai.net/harnessing-the-power-of-llms-code-generation-unleashed-cdacb6c827de

Naiditch, P., Barker, N., & Kaplan, S. A. (2001). The Aldine Press: Catalogue of the Ahmanson-Murphy collection of books by or relating to the press in the Library of the University of California, Los Angeles: Incorporating works recorded elsewhere. University of California Press.

OpenAI. (2023, June 9). Humanity vs. AI [Chat transcript]. https://chat.openai.com/share/7c1aa595-e240-451f-95e8-74a116f80973

Pagel, M. (2017). Q&A: What is human language, when did it evolve and why should we care? BMC Biology, 15, 1–6. https://doi.org/10.1186/s12915-017-0405-3

Perrigo, B. (2023, January 18). Exclusive: OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. Time Magazine. https://time.com/6247678/openai-chatgpt-kenya-workers/

Perrin, A., & Duggan, M. (2015, June 26). Americans’ Internet access: 2000-2015. Pew Research Center. https://www.pewresearch.org/internet/2015/06/26/americans-internet-access-2000-2015/

Pew Research Center. (1995, October 16). Americans going online…Explosive growth, uncertain destinations. https://www.pewresearch.org/politics/1995/10/16/americans-going-online-explosive-growth-uncertain-destinations/

Phipps, L., & Lanclos, D. (2023, January 22). An offering. Digital Is People [Blog]. https://digitalispeople.org/an-offering/

Roose, K. (2023, May 30). A.I. poses “risk of extinction,” industry leaders warn. New York Times. https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html

Schaul, K., Chen, S. Y., & Tiku, N. (2023, April 19). Inside the secret list of websites that make AI like ChatGPT sound smart. Washington Post. https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/

UNESCO. (n.d.). Artificial intelligence. https://www.unesco.org/en/artificial-intelligence

Weil, E. (2023, March 1). You are not a parrot. New York Magazine. https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html

 

License


Surviving the Textpocalypse Copyright © 2020 by Liza Long is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
