Peer-Reviewed/Professional Readings and Research on AI

Liza Long

This annotated bibliography includes peer-reviewed research, pre-print research, and professional blogs for educators. The annotations are still a work in progress, so thank you for your patience! Where abstracts or author summaries are available, I have included them.

Anderson, J., & Rainie, L. (2023). As AI Spreads, Experts Predict the Best and Worst Changes in Digital Life by 2035. Pew Research Center. https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2023/06/PI_2023.06.21_Best-Worst-Digital-Life_2035_FINAL.pdf

This report covers results from the 16th “Future of the Internet” canvassing that Pew Research Center and Elon University’s Imagining the Internet Center have conducted together to gather expert views about important digital issues. This is a nonscientific canvassing based on a nonrandom sample; the broad array of opinions about where current trends may lead society between 2023 and 2035 represents only the points of view of the individuals who responded to the queries. Pew Research Center and Elon’s Imagining the Internet Center drew on a database of experts across a wide range of fields, inviting entrepreneurs, professionals and policy people based in government bodies, nonprofits and foundations, technology businesses and think tanks, as well as interested academics and technology innovators. The predictions reported here came in response to a set of questions in an online canvassing conducted between Dec. 27, 2022, and Feb. 21, 2023. In all, 305 technology innovators and developers, business and policy leaders, researchers and activists responded in some way to the question covered in this report.

D’Agostino, S. (2023, June 5). How AI tools both help and hinder equity. Inside Higher Ed. https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/06/05/how-ai-tools-both-help-and-hinder-equity

The technology promises to assist disadvantaged students in developing the skills they need for success, but AI also threatens to widen the education gap like no technology before it.

Dede, C., Etemadi, A., & Forshaw, T. (2021). Intelligence Augmentation: Upskilling Humans to Complement AI. Project Zero. https://pz.harvard.edu/resources/intelligence-augmentation-upskilling-humans-to-complement-ai

While many forecasts chart an evolution of artificial intelligence (AI) in taking human jobs, more likely is a future where AI changes the division of labor in most jobs, driving a need for workforce development to shift towards uniquely human skills. Specifically, AI is becoming increasingly proficient at calculation, computation, and prediction (“reckoning”) skills. As such, we will see increased demand for human judgment skills such as decision-making under conditions of uncertainty, deliberation, ethics, and practical knowing. Developing human judgment skills follows well from the broadened conception of learners presented in the earlier briefs in this series. This brief focuses on the important topic of how workforce development can help humans prepare to collaborate with artificial intelligence to do work that neither is capable of in isolation.

Eaton, S. E., Dawson, P., McDermott, B., Brennan, R., Wiens, J., Moya, B., Dahal, B., Hamilton, M., Kumar, R., Mindzak, M., Miller, A., & Milne, N. (2023). Understanding the Impact of Artificial Intelligence on Higher Education. Calgary, Canada. https://hdl.handle.net/1880/116624

Eaton, S. (2023). Academic Integrity and Artificial Intelligence: Research Project Update. Learning, Teaching, and Leadership [Blog]. https://drsaraheaton.wordpress.com/2023/06/22/academic-integrity-and-artificial-intelligence-research-project-update/

This blog post contains numerous links to peer reviewed research, presentations, and other resources for college educators on the use of AI in the classroom.

Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv preprint arXiv:2303.10130. https://arxiv.org/abs/2303.10130

From section 4.3: “Our findings indicate that the importance of science and critical thinking skills are strongly negatively associated with exposure, suggesting that occupations requiring these skills are less likely to be impacted by current language models. Conversely, programming and writing skills show a strong positive association with exposure, implying that occupations involving these skills are more susceptible to being influenced by language models.”

Gadiraju, V., Kane, S., Dev, S., Taylor, A., Wang, D., Denton, E., & Brewer, R. (2023, June). “I wouldn’t say offensive but…”: Disability-Centered Perspectives on Large Language Models. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (pp. 205-216). https://research.google/pubs/pub52358/

Large language models (LLMs) trained on real-world data can inadvertently reflect harmful societal biases, particularly toward historically marginalized communities. While previous work has primarily focused on harms related to age and race, emerging research has shown that biases toward disabled communities exist. This study extends prior work exploring the existence of harms by identifying categories of LLM-perpetuated harms toward the disability community. We conducted 19 focus groups, during which 56 participants with disabilities probed a dialog model about disability and discussed and annotated its responses. Participants rarely characterized model outputs as blatantly offensive or toxic. Instead, participants used nuanced language to detail how the dialog model mirrored subtle yet harmful stereotypes they encountered in their lives and dominant media, e.g., inspiration porn and able-bodied saviors. Participants often implicated training data as a cause for these stereotypes and recommended training the model on diverse identities from disability-positive resources. Our discussion further explores representative data strategies to mitigate harm related to different communities through annotation co-design with ML researchers and developers.

LaQuintano, T., Schnitzler, C., & Vee, A. (2023). TextGenEd: An introduction to teaching with text generation technologies. WAC Clearinghouse. https://wac.colostate.edu/repository/collections/textgened/

At the cusp of this moment defined by generative AI, TextGenEd collects early experiments in pedagogy with generative text technology, including but not limited to AI. The resources in this collection will help writing teachers to integrate computational writing technologies into their assignments. Many of the assignments ask teachers and students to critically probe the affordances and limits of computational writing tools. Some assignments ask students to generate Markov chains (statistically sequenced language blocks) or design simple neural networks and others ask students to use AI platforms in order to critique or gain fluency with them. A few assignments require teachers to have significant familiarity with text generation technologies in order to lead students, but most are set up to allow teachers and students to explore together. Regardless of their approach, all of these assignments now speak to the contemporary writing landscape that is currently being shaped by generative AI. Put another way, the assignments in this collection offer initial answers to urgent calls for AI literacy.
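For readers unfamiliar with the Markov-chain assignments mentioned above, here is a minimal sketch of the kind of word-level Markov text generator such an assignment might ask students to build. It is written in Python and is not drawn from the collection itself; the sample text is a placeholder.

```python
# Minimal word-level Markov chain text generator: the "statistically
# sequenced language blocks" some TextGenEd assignments describe.
# Illustrative only; not taken from any assignment in the collection.
import random
from collections import defaultdict

def build_chain(text: str) -> dict[str, list[str]]:
    """Map each word to the list of words that follow it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain: dict[str, list[str]], start: str, length: int = 20) -> str:
    """Walk the chain from a start word, picking each next word at random."""
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

sample = "the cat sat on the mat and the cat ran to the hat"
print(generate(build_chain(sample), "the"))
```

Even a toy like this makes the pedagogical point: the output is driven entirely by the statistics of the training text, which students can then contrast with the far larger statistical models behind tools like ChatGPT.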

Lorentzen, A., & Bonner, E. (2023, February 12). Customizable ChatGPT AI Chatbots for Conversation Practice. The FLT Mag. https://fltmag.com/customizable-chatgpt-ai-chatbots-for-conversation-practice/

Since OpenAI’s models can also be accessed and interacted with programmatically through an API, we decided to go the extra step and develop a simple application using Unity, some 3D characters, and some C# coding that would give the AI a customizable voice, accent, and appearance. This way our students could not only create their own personalized AI conversation partners, but also choose how they look and sound. Finally, using Google Speech-to-Text for the students and Text-to-Speech for the AI, our students spoke with their AI directly and heard and saw the AI respond in kind.
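As an illustration of the underlying pattern, here is a minimal, text-only sketch in Python of a customizable conversation partner built on the OpenAI chat API. The authors’ actual application uses Unity, C#, and Google speech services; the model name and persona below are placeholders of mine, not theirs.

```python
# Minimal sketch of a customizable conversation partner on the OpenAI
# chat API (text only; the article's version adds speech and a 3D body).
# Requires the openai package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The "customizable" part is just a system prompt the student edits.
persona = (
    "You are a friendly conversation partner helping a student practice "
    "everyday English. Keep replies short and end with a question."
)

history = [{"role": "system", "content": persona}]

while True:
    user_turn = input("You: ")
    if not user_turn:
        break
    history.append({"role": "user", "content": user_turn})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("AI:", answer)
```

Keeping the full message history in the request is what gives the partner conversational memory; the speech layer the authors describe simply wraps this loop with speech-to-text on the way in and text-to-speech on the way out.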

Mollick, E. R. (2023, February 17). My class required AI. Here’s what I’ve learned so far. One Useful Thing. https://www.oneusefulthing.org/p/my-class-required-ai-heres-what-ive?subscribe_prompt=free

I fully embraced AI for my classes this semester, requiring students to use AI tools in a number of ways. This policy attracted a lot of interest, and I thought it worthwhile to reflect on how it is going so far. The short answer is: great! But I have learned some early lessons that I think are worth passing on.

Mollick, E. R., & Mollick, L. (2023). Using AI to implement effective teaching strategies in classrooms: Five strategies, including prompts. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4391243

This paper provides guidance for using AI to quickly and easily implement evidence-based teaching strategies that instructors can integrate into their teaching. We discuss five teaching strategies that have proven value but are hard to implement in practice due to time and effort constraints. We show how AI can help instructors create material that supports these strategies and improve student learning. The strategies include providing multiple examples and explanations; uncovering and addressing student misconceptions; frequent low-stakes testing; assessing student learning; and distributed practice. The paper provides guidelines for how AI can support each strategy, and discusses both the promises and perils of this approach, arguing that AI may act as a “force multiplier” for instructors if implemented cautiously and thoughtfully in service of evidence-based teaching practices.

Mollick, E. R., & Mollick, L. (2022). New modes of learning enabled by AI chatbots: Three methods and assignments. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4300783

Chatbots are able to produce high-quality, sophisticated text in natural language. The authors of this paper believe that AI can be used to overcome three barriers to learning in the classroom: improving transfer, breaking the illusion of explanatory depth, and training students to critically evaluate explanations. The paper provides background information and techniques on how AI can be used to overcome these barriers and includes prompts and assignments that teachers can incorporate into their teaching. The goal is to help teachers use the capabilities and drawbacks of AI to improve learning.

Sayers, D. (2023, May 25). A simple hack to ChatGPT-proof assignments using Google Drive. Times Higher Education. https://www.timeshighereducation.com/campus/simple-hack-chatgptproof-assignments-using-google-drive

Sayers explains how he uses Google Docs version history to track potential misuse of generative AI in student assignments (note from Liza: this may prove more difficult now that a generative AI writing assistant is embedded in Google Docs, but the approach may still be worth trying; a sketch of querying this revision history programmatically follows below).
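For instructors comfortable with a little scripting, the revision metadata Sayers inspects by hand is also exposed by the Google Drive API. Below is a minimal Python sketch using the google-api-python-client library; it assumes OAuth credentials have already been obtained (setup omitted) and that you know the document’s file ID.

```python
# Minimal sketch: print the revision timeline of a Google Doc via the
# Drive API v3 -- the same version history Sayers inspects manually.
# Assumes an authorized `creds` object obtained through Google's OAuth
# flow (not shown) and that the caller can view the file's revisions.
from googleapiclient.discovery import build

def list_doc_revisions(creds, file_id: str) -> None:
    """Print each revision's timestamp and last-modifying user."""
    drive = build("drive", "v3", credentials=creds)
    resp = drive.revisions().list(
        fileId=file_id,
        fields="revisions(id,modifiedTime,lastModifyingUser(displayName))",
    ).execute()
    for rev in resp.get("revisions", []):
        user = rev.get("lastModifyingUser", {}).get("displayName", "unknown")
        print(rev["modifiedTime"], user)
```

A document drafted over days shows many small revisions; a large essay that appears in a single revision is the pattern Sayers’s hack is designed to surface. As with his manual method, this is a conversation starter, not proof of misconduct.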

Zamfirescu-Pereira, J. D., Wong, R. Y., Hartmann, B., & Yang, Q. (2023, April). Why Johnny can’t prompt: how non-AI experts try (and fail) to design LLM prompts. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-21). https://dl.acm.org/doi/pdf/10.1145/3544548.3581388

Pre-trained large language models (“LLMs”) like GPT-3 can engage in fluent, multi-turn instruction-taking out-of-the-box, making them attractive materials for designing natural language interactions. Using natural language to steer LLM outputs (“prompting”) has emerged as an important design technique potentially accessible to non-AI-experts. Crafting effective prompts can be challenging, however, and prompt-based interactions are brittle. Here, we explore whether non-AI-experts can successfully engage in “end-user prompt engineering” using a design probe—a prototype LLM-based chatbot design tool supporting development and systematic evaluation of prompting strategies. Ultimately, our probe participants explored prompt designs opportunistically, not systematically, and struggled in ways echoing end-user programming systems and interactive machine learning systems. Expectations stemming from human-to-human instructional experiences, and a tendency to overgeneralize, were barriers to effective prompt design. These findings have implications for non-AI-expert-facing LLM-based tool design and for improving LLM-and-prompt literacy among programmers and the public, and present opportunities for further research.
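The “systematic evaluation” that the study’s participants tended to skip can be as simple as running one prompt template over a fixed set of test inputs instead of eyeballing single outputs. Here is a minimal sketch of that habit; the ask_model callable is a placeholder for any LLM call (such as the OpenAI client shown earlier), not part of the paper’s design probe.

```python
# Minimal sketch of systematic prompt evaluation: one template, a fixed
# test set, outputs reviewed side by side. `ask_model` is a placeholder
# for any function that sends a prompt string to an LLM and returns text.
TEMPLATE = "Rewrite the following sentence in plain language: {text}"

test_cases = [
    "Utilize the aforementioned methodology forthwith.",
    "The committee reached a consensus of opinion.",
    "It is incumbent upon students to submit punctually.",
]

def evaluate(ask_model, template: str, cases: list[str]) -> None:
    """Print each input next to the model's output for review."""
    for case in cases:
        output = ask_model(template.format(text=case))
        print(f"IN:  {case}\nOUT: {output}\n")

# Dry run with a stub model, to show the harness itself works:
evaluate(lambda prompt: "(model output here)", TEMPLATE, test_cases)
```

Holding the test set constant while varying the template is what turns opportunistic prompt fiddling into the kind of comparison the paper argues non-experts need support to do.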

License

Write What Matters Copyright © 2020 by Liza Long is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
