These articles and videos may be suitable for classroom discussions or assignments centered on generative artificial intelligence. As this annotated bibliography grows, I will separate this into sections by topic. For now, it’s organized alphabetically by author’s last name. Note: Some of these articles may be behind paywalls.
Atwood, M. (2023, August 26). Murdered by my replica? The Atlantic. https://www.theatlantic.com/books/archive/2023/08/ai-chatbot-training-books-margaret-atwood/675151/
In this essay, Canadian author Margaret Atwood reacts to learning that 33 of her books were pirated to train AI chatbots. She imagines what that might mean for the future of fiction.
Biever, C. (2023, July 25). ChatGPT broke the Turing test — the race is on for new ways to assess AI. Nature. https://www.nature.com/articles/d41586-023-02361-7?utm_source=pocket-newtab
This article explores the potential, and the limits, of large language models. Though they can sound convincingly human, they still struggle with simple visual logic tests.
Blackman, R., & Ammanath, B. (2022, March 21). Ethics and AI: 3 conversations companies need to have. Harvard Business Review. https://hbr.org/2022/03/ethics-and-ai-3-conversations-companies-need-to-be-having
While concerns about AI and ethical violations have become common in companies, turning these anxieties into actionable conversations can be tough. Given the complexities of machine learning, of ethics, and of their points of intersection, there are no quick fixes, and conversations around these issues can feel nebulous and abstract.
Bostrom, N. (2003). Ethical issues in advanced artificial intelligence. In Science fiction and philosophy: From time travel to superintelligence (pp. 277–284). https://philpapers.org/archive/BOSEII.pdf
This seminal paper from an Oxford philosophy professor outlines some ethical concerns around the development of artificial intelligence, including the famous “paperclip maximizer” problem where AI might destroy humanity in pursuit of a mundane goal.
Brooks, R. (2017, October 6). The seven deadly sins of AI predictions. MIT Technology Review. https://www.technologyreview.com/2017/10/06/241837/the-seven-deadly-sins-of-ai-predictions/
“Mistaken predictions lead to fears of things that are not going to happen, whether it’s the wide-scale destruction of jobs, the Singularity, or the advent of AI that has values different from ours and might try to destroy us. We need to push back on these mistakes.”
Chandra, K. (2023, June 8). What’s a word worth in the AI era? Inside Higher Ed. https://www.insidehighered.com/opinion/views/2023/06/08/what-word-worth-ai-era-opinion
Kartik Chandra offers a message to the Class of 2023: your words matter, now more than ever.
Chen, S. Y., Tenjarla, R., Oremus, W., & Harris, T. (2023, August 31). How to talk to an AI chatbot: An ordinary human’s guide to getting extraordinary results from a chatbot. Washington Post. https://www.washingtonpost.com/technology/interactive/2023/how-to-talk-ai-chatbot-chatgpt/
This guide shares strategies for asking a chatbot to help with explaining, writing, and brainstorming, illustrated with interactive examples.
Dede, C. (2023, August 6). What is academic integrity in the age of artificial intelligence? [Blog]. Silver Lining for Learning. https://silverliningforlearning.org/what-is-academic-integrity-in-the-era-of-generative-artificial-intelligence/
This article from Chris Dede, Harvard education professor and co-principal investigator of the NSF-funded National Artificial Intelligence Institute in Adult Learning and Online Education, explores the concept of academic integrity while also providing foundational knowledge about LLMs and how they work. Dede argues for intelligence augmentation, not intelligence replacement, when we work with AI.
Future of Life Institute. (2023, March 22). Pause giant AI experiments: An open letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/
This open letter highlights the risks and unknowns of generative AI development and calls for an immediate six-month pause.
Gent, E. (2023, May 10). What is the AI alignment problem and how can it be solved? New Scientist. https://www.newscientist.com/article/mg25834382-000-what-is-the-ai-alignment-problem-and-how-can-it-be-solved/
Artificial intelligence systems will do what you ask but not necessarily what you meant. The challenge is to make sure they act in line with humans’ complex, nuanced values.
Kirschenbaum, M. (2023, March 8). Prepare for the textpocalypse. The Atlantic. https://www.theatlantic.com/technology/archive/2023/03/ai-chatgpt-writing-language-models/673318/
Our relationship to writing is about to change forever; it may not end well.
Knight, S. (n.d.). Chatting with AI. https://casls.uoregon.edu/wp-content/uploads/sites/7/2023/05/Chatting-with-AI-Vol.-1-Iss.-2.pdf
This 10-minute classroom activity familiarizes students with ChatGPT and prompt writing.
LaFrance, A. (2023, June 5). The coming humanist renaissance. The Atlantic. https://www.theatlantic.com/magazine/archive/2023/07/generative-ai-human-culture-philosophy/674165/
We need a cultural and philosophical movement to meet the rise of artificial superintelligence.
Metz, C. (2019, March 1). Is ethical A.I. even possible? New York Times. https://www.nytimes.com/2019/03/01/business/ethics-artificial-intelligence.html?smid=nytcore-ios-share&referringSource=articleShare
“Building ethical artificial intelligence is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder.”
Nkonde, M. (2023, February 22). ChatGPT: New technology, same old misogynoir. Ms. Magazine. https://msmagazine.com/2023/02/22/chatgpt-technology-black-women-history-fact-check/
The contributions to human history made by women, children and people who speak nonstandard English will be underrepresented by chatbots like ChatGPT.
Perrigo, B. (2023, January 28). Exclusive: OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/
This article discusses the ethical implications of using low-paid workers to train ChatGPT (CW: sexual abuse).
Sterling, B. (2023, June 28). AI is the scariest beast ever created, says sci-fi writer Bruce Sterling. Newsweek. https://www.newsweek.com/2023/07/21/ai-scariest-beast-ever-created-says-sci-fi-writer-bruce-sterling-1809439.html
This essay discusses three “monster” AI models: Roko’s Basilisk, the Shoggoth, and the Paperclip AI, using the metaphor of Mardi Gras and Lent to consider what our lives may look like once the AI gold rush cools.
Trucano, M. (2023, July 10). AI and the next digital divide in education. Brookings Institution. https://www.brookings.edu/articles/ai-and-the-next-digital-divide-in-education/
Trucano explores the potential for AI to worsen the existing digital divide, in which some children, families, teachers, and schools have access to information and communications technologies that support learning while others do not.
Williamson, B. (2023, June 30). Degenerative AI in education. Code Acts in Education [Blog]. https://codeactsineducation.wordpress.com/2023/06/30/degenerative-ai-in-education/
“What if, instead of being generative of educational transformations, AI in education proves to be degenerative—deteriorating rather than improving classroom practices, educational relations and wider systems of schooling?”
Wolchover, N. (2020, January 20). Artificial intelligence will do what we ask. That’s a problem. Quanta Magazine. https://www.quantamagazine.org/artificial-intelligence-will-do-what-we-ask-thats-a-problem-20200130/
By teaching machines to understand our true desires, one scientist hopes to avoid the potentially disastrous consequences of having them do what we command.
Yudkowsky, E. (2023, March 29). Pausing AI developments isn’t enough. We need to shut it all down. Time. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/?mibextid=Zxz2cZ&fbclid=IwAR39K9_VsmVLYfzCWqlPaRFBqGoXGviRgqp77h1FWGhXqAIR7R2lTm5RjNo_aem_AQs-MSEiCT4fQkccotVSZBlqI_8aLpcBJXsAV9wQi33e_ZFBPrdvwhWaJNdWI50hVWc
Yudkowsky, a decision theorist from the U.S. who leads research at the Machine Intelligence Research Institute, paints an urgent and imminent worst-case scenario where generative AI destroys humanity.
Bostrom, N. (2015). What happens when our computers get smarter than we are? [Video]. TED. https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are
Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as “smart” as a human being. And then, says Nick Bostrom, it will overtake us: “Machine intelligence is the last invention that humanity will ever need to make.” A philosopher and technologist, Bostrom asks us to think hard about the world we’re building right now, driven by thinking machines.
Khan, S. (2023). How AI could save (not destroy) education [Video]. TED. https://www.ted.com/talks/sal_khan_how_ai_could_save_not_destroy_education/c
Sal Khan, the founder and CEO of Khan Academy, thinks artificial intelligence could spark the greatest positive transformation education has ever seen. He shares the opportunities he sees for students and educators to collaborate with AI tools — including the potential of a personal AI tutor for every student and an AI teaching assistant for every teacher — and demos some exciting new features for their educational chatbot, Khanmigo.