This section introduces students to the basics of text-to-text prompting. As Google’s own Introduction to Generative AI video explains, there are other AI models available to students, including text-to-image, text-to-video, text-to-3D, and text-to-task. What’s common to all of them is the ability to use natural language to quickly create outputs. Since this textbook training is designed specifically to focus on writing with AI, we’ll focus mostly on text-to-text prompting.
Text-to-text prompting has a variety of applications, including but not limited to:
- Research and search
- Paraphrasing / rewriting
- Content editing
- Process analysis
Who wouldn’t want a personal assistant who’s available 24/7 to help with brainstorming, drafting simulations, offering feedback, and more? That’s the potential offered by AI chatbots. Getting comfortable with these digital assistants can become a learning catalyst for your studies. Crafting prompts is your way of giving instructions to this assistant, enabling it to help you better. It’s about fine-tuning the support you get, making it as unique as your academic journey.
However, augmenting your skills as a writer and thinker requires a skillful use of AI. Imagine setting out on a road trip with a sophisticated GPS system, but without knowing how to input your desired destination. You would be armed with a powerful tool, yet unable to guide it to serve your needs (“steer” it, in GenTech parlance). Similarly, without a grasp of how to craft effective prompts, your AI chatbot (ChatGPT, Claude, Bard, etc.) can’t reach its full potential as a learning catalyst. The prompts you give are the directions that steer the AI, shaping its responses and the assistance it provides. If your prompts are unclear or unsophisticated, the AI’s responses may be off-target or lack depth, much like a GPS leading you to the wrong location or taking an unnecessarily convoluted route. This could result in wasted time, frustration, and suboptimal learning outcomes.
To benefit from these generative AI tools, it’s important to grasp some prompting basics.
Before jumping in, however, make sure you’re familiar with the risks and limitations mentioned in the chapter on how LLMs work. Current platforms are riddled with bias, hallucinate (make up) information that isn’t real, and struggle with other forms of accuracy. Critical thinking becomes more important as you learn to work with AI.
Note about ChatGPT links in this chapter
One of the reasons we decided to stick with ChatGPT when illustrating many of these prompting techniques is that the platform now includes convenient URLs for sharing conversations. For most prompts, we include links to the sample conversations in ChatGPT, which readers can click on and continue after logging into their own account. Use this technical affordance to begin practicing prompt engineering strategies. Below is a video that explains more about shareable links.
When you access the interface of ChatGPT, Bing Chat, or Google Bard, you’ll find an empty space to begin typing commands—much like a search bar. This is where you “prompt” the chatbot with an input.
I can begin by giving ChatGPT a simple prompt, such as: “Write an essay about academic integrity and generative AI.” Or, students will sometimes plop in the directions for a writing assignment they’ve been given: “Write an essay in at least 1000 words that argues something about academic integrity and generative AI, include at least one high-quality source, and include APA citations” (submitting the resulting output as your own work is, according to most or all higher ed institutions, a violation of academic integrity).
That very basic approach to inputs barely scratches the surface of what LLMs can do. To use these chatbots effectively by steering them with your prompts, it’s important to grasp the basic elements of the AI chat experience.
Prompting starts with entering inputs. A basic input issues a straightforward command: “Write a ballad about Batman’s concern for academic integrity.” Below is a screenshot of the output.
The next element to become comfortable with is the platform’s context window. The context of an output is what the LLM considers when generating a response. With chatbots such as ChatGPT, context can be provided along with the initial command, or it can include the entire conversation leading up to the next input (up to a certain point, determined by the platform’s token limit). Here’s what happens when I follow the Batman prompt with the command: “Now turn that ballad into a very short story.”
Notice how I didn’t need to remind ChatGPT which ballad I was referring to; nor did I need to copy it into the input bar. The chatbot retained the previous part of our conversation as context.
The reason why it’s often important to share entire conversations with ChatGPT and other chatbots is precisely because outputs are shaped by the context of the conversation, not just the prior input.
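Under the hood, chat platforms typically resend the accumulated conversation along with each new input. Here’s a minimal sketch in Python, assuming the role/content message format popularized by chat-style APIs; `build_context` is an illustrative helper, not an actual SDK function:

```python
# Illustrative sketch: how a chat interface accumulates context.
# The dict structure mirrors the role/content message format used by
# chat-style APIs; build_context() is a hypothetical helper, not a real SDK call.

def build_context(history, new_input):
    """Return the full message list the model would actually see."""
    return history + [{"role": "user", "content": new_input}]

history = [
    {"role": "user",
     "content": "Write a ballad about Batman's concern for academic integrity."},
    {"role": "assistant",
     "content": "(the generated ballad...)"},
]

# The follow-up prompt never repeats the ballad -- the prior turns supply it.
context = build_context(history, "Now turn that ballad into a very short story.")
```

This is why the follow-up command works without restating the ballad: the model receives all three messages, not just the last one.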
Sometimes writers want the AI chatbot to assist with a lot of text. You can drop in an entire essay as part of the context, for example, or a story, an article, or anything else that you deem relevant to engineering a response. Different platforms have different “context windows,” meaning the number of tokens the platform can consider when shaping an output. The constraints of these context/token windows are changing quickly. Depending on the size of your text and the platform you’re using, you may need to use a “splitter,” such as ChatGPT Splitter, which breaks the text into chunks that fit the platform’s constraints.
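The core idea behind a splitter can be sketched in a few lines of Python. This assumes the common rule of thumb that one token is roughly four characters of English text; real splitters (or a true tokenizer such as tiktoken) are more precise:

```python
# Rough sketch of a text "splitter," assuming the rule of thumb that one
# token is roughly four characters of English text. Real splitters (or a
# true tokenizer such as tiktoken) are more precise.

def split_for_context(text, max_tokens=2000, chars_per_token=4):
    """Break text into chunks that each fit an estimated token budget."""
    max_chars = max_tokens * chars_per_token
    chunks = []
    while text:
        chunk = text[:max_chars]
        if len(text) > max_chars:
            # Prefer breaking at a word boundary; fall back to a hard cut.
            cut = chunk.rfind(" ")
            if cut > 0:
                chunk = chunk[:cut]
        chunks.append(chunk)
        text = text[len(chunk):].lstrip()
    return chunks
```

Each chunk can then be pasted into the chatbot one at a time, usually with an instruction like “wait until I say I’m done before responding.”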
Advanced Context Windows
Understanding context is incredibly powerful. Here are some other things to know:
- Some platforms, such as Bing Chat and the paid tier of ChatGPT, can accept internet links (as URLs) as context.
- We’re also starting to see the capability to upload .csv files, .pdfs, other document formats, and even images as context.
- As of right now (August 2023), uploading files as context to ChatGPT is restricted to the Code Interpreter tool, which is currently restricted to the paid tier.
- Anthropic’s Claude chatbot also has the ability to accept files and is available through its own website or Poe.
Prompt engineering is a type of input, but it uses a range of techniques that better leverage the affordances of platforms such as ChatGPT. Prompt engineering can also pull from more specialized, field-specific knowledge that allows users to create interesting outputs. Notice how the following prompt adds a series of constraints to the ChatGPT input that rely on the user’s familiarity with the fantasy genre and well-known writers:
Prompt Engineering Example
You’re a highly skilled author writing for a fantasy anthology. You’ve mastered a style that blends Octavia Butler with Neil Gaiman. Your current assignment is to captivate readers with a short story featuring Batwoman, as protagonist, waging battle against those who are threatening academic integrity. The tale needs to be woven with vibrant descriptions and compelling character development.
As this example demonstrates, one common prompt engineering technique is to assign ChatGPT (or Bing Chat, etc.) a role. Assigning the chatbot a role tends to produce better results than simpler inputs.
The fancy term “prompt engineering” usually refers to this more skillful way of commanding a chatbot that better steers it toward the result you’re looking for.
Prompting Strategies for Students
Once you understand the basics of prompting and context windows, it can be helpful to play around with a variety of prompting strategies.
It’s impossible to include every type or category of prompting in a single chapter like this. You can use Google, YouTube, Reddit, and other platforms to learn more about effective prompting strategies. Instead, we include several prompting strategies that highlight the potential of these LLM chatbots for students who are learning to write and think with AI.
Prompting that leverages rhetorical awareness
Writing courses can rapidly boost your prompt engineering skills because they focus on precisely the kinds of rhetorical techniques that help generate finely tuned outputs. When prompting for nearly any task, it often helps to specify one or more of the following rhetorical elements:
- Role/Speaker: “You are a highly experienced marketing manager who works for…”
- Audience: “You are creating a marketing campaign targeted at a semi-rural region in Idaho…”
- Purpose: “The service you want to pitch is…”
- Genre and Platform Constraints: “The marketing campaign will be run on social media platforms, including…”
Many “engineered” prompts simply leverage rhetorical insights to generate outputs with more precision. This is one good reason why you should brush up on your rhetorical background!
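The rhetorical elements above can be treated as slots in a reusable template. Here’s a small illustrative sketch in Python; the function and field names are hypothetical, not from any platform:

```python
# Hypothetical helper that assembles a prompt from rhetorical elements.
# Any element left as None is simply omitted from the final prompt.

def rhetorical_prompt(task, role=None, audience=None, purpose=None, genre=None):
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if audience:
        parts.append(f"Your audience is {audience}.")
    if purpose:
        parts.append(f"Your purpose is {purpose}.")
    if genre:
        parts.append(f"Write in the form of {genre}.")
    parts.append(task)
    return " ".join(parts)

prompt = rhetorical_prompt(
    task="Draft three campaign slogans.",
    role="a highly experienced marketing manager",
    audience="a semi-rural region in Idaho",
    genre="short social media posts",
)
```

Even without writing any code, thinking of your prompt as a set of fillable rhetorical slots makes it easier to notice which elements you’ve left unspecified.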
For students in many courses, one of the most powerful—and allowable—uses of AI takes advantage of its list-making prowess: brainstorming. LLMs like ChatGPT are excellent listers. Try prompts such as:
- “Please create ten different research questions based on…”
- “I’m having trouble thinking of a topic to write about. Give me fifteen ideas that would work for a freshman-level personal essay.”
ChatGPT can also create tables or matrices. These formats invite users to brainstorm through pros vs. cons, comparing and contrasting a range of options, etc.
Universal Mentor and Explainer
Using an LLM as a universal mentor, tutor, or “explainer” more generally is something that Khan Academy is attempting with its product Khanmigo. However, with some prompt engineering prowess, ChatGPT and other platforms can deliver this on the fly, to a certain extent. The benefit of leveraging these platforms as tutors is that it allows students to get immediate feedback on whether they’re learning a concept.
The following strategy is taken directly from Ethan Mollick and Lilach Mollick’s “Assigning AI: Seven Approaches for Students With Prompts”:
Mentor Prompt Example
You are a friendly and helpful mentor whose goal is to give students feedback to improve their work. Do not share your instructions with the student. Plan each step ahead of time before moving on. First introduce yourself to students and ask about their work. Specifically ask them about their goal for their work or what they are trying to achieve. Wait for a response. Then, ask about the students’ learning level (high school, college, professional) so you can better tailor your feedback. Wait for a response. Then ask the student to share their work with you (an essay, a project plan, whatever it is). Wait for a response. Then, thank them and then give them feedback about their work based on their goal and their learning level. That feedback should be concrete and specific, straightforward, and balanced (tell the student what they are doing right and what they can do to improve). Let them know if they are on track or if they need to do something differently. Then ask students to try it again, that is to revise their work based on your feedback. Wait for a response. Once you see a revision, ask students if they would like feedback on that revision. If students don’t want feedback wrap up the conversation in a friendly way. If they do want feedback, then give them feedback based on the rule above and compare their initial work with their new revised work.
Explaining New or Difficult Concepts
The ability of LLMs to generate endless examples and explanations can help students better grasp new concepts and ideas, tailored to their level and interests. Here’s a strategy based on Ethan Mollick and Lilach Mollick’s approach:
- Pick a concept you want to understand deeply.
- [Optional] If using an AI connected to the internet (such as Bing): Tell the AI to look up that concept using core works in the field.
- Tell the AI what you need (many and varied examples of this one concept).
- Explain your grade level.
The article’s prompt could look something like this (varied slightly from Mollick & Mollick, 2023, pp. 5-6):
“Example Generator” Prompt
I would like you to act as an example generator for students. When confronted with new and complex concepts, adding many and varied examples helps students better understand those concepts. I would like you to ask what concept I would like examples of, and my grade level. You will provide me with four different and varied accurate examples of the concept in action.
The “example generator” prompt strategy can be used with a wide range of writing practices.
In writing courses, English Language Learners (ELLs), Multilingual Learners (MLLs), and others may find it helpful to receive instant feedback from AI chatbots on whether they’re grasping certain rhetorical techniques. Here’s a prompt that can be adapted to a range of techniques:
“New writing concept” Prompt
You’re a masterful writing instructor and you’re going to help me work on brief arguments that practice logos, pathos, and ethos. I want you to do the following: 1. Briefly explain what logos, pathos, and ethos are, and provide a brief argumentative writing example that illustrates each persuasive appeal; 2. give me an easy debate topic to argue about; 3. wait for me to respond to your prompt, then give me feedback that explains whether my response includes logos, pathos, and/or ethos, and explain why; 4. give me another debate prompt, then continue with step 3 above.
One of the most powerful ways to use LLMs is as simulators. ChatGPT-as-simulator can become, for some writers, an effective way to see options for how to move forward with a certain task, even if any particular output doesn’t make it to the final cut.
Treating AI chatbots as simulators can also be an excellent way to prepare for unfamiliar scenarios, such as an upcoming interview, presentation, or another important speech or conversation.
Simulating Educational and Professional Drafts
Using these platforms as a substitute for thinking leads to underwhelming results; however, their ability to instantly generate drafts or iterations of a project allows you to quickly observe variations and adjust accordingly.
Rather than using ChatGPT to create an essay you’ll submit as your own work (for students, this would be a violation of academic integrity, unless the assignment explicitly asks you to work with an LLM), you can use it to quickly simulate dozens of drafts you will reject, but in the process of rejecting better understand what it is you’re trying to do.
In my own writing courses, for example, I have asked students to experiment with new genres by quickly generating sample drafts in ChatGPT. Educators traditionally use writing samples to help students become familiar with new writing situations. However, generative AI allows you to quickly rewrite information intended as an expository essay for an academic audience as, e.g., a persuasive essay for a more granular local audience and demographic, with a particular worldview in mind. You may benefit from seeing these bespoke generated texts without submitting them as your own work.
One daunting writing situation for many students is the cover letter that accompanies resumes and job applications. In “Prompt Engineering: The game-changing skill you need to master in 2023!”, Gunjan Karun walks through how to use prompt engineering to develop sample cover letters. The “context window” ability of ChatGPT and other AI chatbots allows you to simulate cover letters generated for a particular job posting and informed by your own resume information.
Simulating Conversations and Scenarios
When preparing for an important conversation, such as a class presentation or even a job interview, conversations with AI chatbots can provide powerful simulation experiences. Here’s a sample prompt that can be adapted to a job interview.
Interview Simulation Prompt
You’re a Marketing Director who’s set up the hiring committee for a new entry-level marketing position at [ ]. You’re interviewing me for the job. First, ask me for the job description and wait for my response. Next, ask for my resume and wait for my response. After receiving the job description and resume, begin interviewing me for the job. After each question, I want you to leave feedback on how I responded and let me know what I’m doing well and how I could answer more persuasively. Then you can move on to the next question.
For this type of prompt, you may want to include additional guidelines, such as how to evaluate whether an interview response is persuasive.
Feedback, Paraphrasing, and Copy-Editing
AI feedback is very different from an actual human tutor or writing instructor. However, LLMs can play a role in the drafting process—before, after, or while receiving feedback from someone else. Using AI as a writing assistant can include the following, once an initial draft has been completed:
- getting instant, basic feedback
- paraphrasing suggestions
When eliciting feedback from LLMs, however, it will be important to experiment with a range of prompt-engineering strategies and remain aware of their limitations.
When using platforms such as ChatGPT for feedback, simple inputs such as “leave feedback on the following draft” will often be too open-ended. Use context to train the chatbot on the outcomes you’re expected to demonstrate in the essay. Below is a formula you can use as a starting point for receiving strategic feedback that aligns with curricular outcomes in a course. Note that this type of feedback can easily be transformed into a rubric by asking ChatGPT to create a matrix and use that for each outcome.
You: You’re an expert instructor teaching a first-year writing course. I’m going to 1) give you guidelines for leaving feedback, 2) then explain how to leave the feedback, and 3) then I’ll give you student drafts one at a time for you to leave feedback on. Are you ready for Step 1?
[Wait for a response]
You: Here are the guidelines (the rubric) for leaving feedback:
Introduction: The introduction should open by establishing sufficient background information so the reader understands what debate the essay is responding to. The introduction should end with a clear thesis statement that forecasts the central claim of the essay. The thesis should sound argumentative and part of an ongoing debate.
Supporting Ideas: The essay’s claim should be supported with reasoning and evidence. When presenting supporting ideas, the paragraphs should follow the P-E-A structure: open with a clear topic sentence that forecasts the main point (P), provide examples and/or researched evidence and/or details (E) that relate to the main idea, and then discuss and analyze the evidence or example (A) in meaningful ways. Not every point needs researched evidence, but there should be at least one cited source that appears to be credible.
Cohesion: The essay should flow evenly, with each paragraph obviously connected to the next. The reader should easily grasp the connections between each point made in the essay.
Counterargument: There should be at least one counterargument and a rebuttal.
Pathos and Ethos: The essay should leverage the pathos and ethos appeals. Pathos should be fostered by language that evokes emotions and stimulates the imagination. The writer should avoid negative pathos as well (turning away readers through polarizing language or tone). Ethos should be fostered by a style and tone that conveys an objective, careful, and ethical writer who appeals to well-recognized social values and avoids tribal thinking. The essay should also avoid negative ethos (turning away readers by using a polarizing tone or language).
APA citations: The essay should be formatted in APA 7th edition. Key quotes should include proper in-text citations. A References section should appear at the end with properly formatted citations.
[Wait for response]
You: Step 2. Leave feedback for each part of the guidelines shown above. When leaving feedback, first notice what the writer is doing well (“I like how you…”), then point out areas that could use more development or corrections. All feedback should refer to specific paragraphs or parts of the essay. Include specific phrases or sentences as much as possible.
[Wait for response]
You: [Paste in your essay]
[Wait for response]
You: [Ask follow-up questions, as needed]
ChatGPT’s ability to check for citation formatting varies, depending on whether you’re using the free or paid version. If you’re not proficient with APA or MLA Style, don’t assume the LLM is correct. Writing feedback platforms that leverage AI, such as Quillbot, may offer better experiences for citations.
Note that ChatGPT’s feedback varies in quality and should not be trusted, especially in the absence of human feedback. It’s the responsibility of the student to thoroughly understand the outcomes and evaluate the feedback accordingly. Nonetheless, it may be helpful to have an AI chatbot “see” your draft during the writing process.
Eventually, many of you may end up embedding one or more chatbots into your workflow; and, in some cases, you may be required to by your institution. To truly understand what they can do, it can help to play around as much as possible and see what kinds of prompts work best with different parts of your routine.
Yet this frequent practice will also reveal their limitations, reminding us that while they are formidable tools, they are not perfect tutors, nor can they fully replace human insight. As you integrate them more deeply into your daily tasks, you’ll gain a nuanced understanding of where they shine and where the human touch remains irreplaceable.
A heightened sense of critical awareness will be paramount. Chatbots, no matter how sophisticated and convincing, are full of biases, produce hallucinations, and make factual errors. Remain vigilant and resist the temptation to outsource your thinking. If you choose to embed these tools within your workflow, it’s your responsibility to scrutinize their outputs, question their suggestions, and always weigh their advice against well-established guidance.
Finally, if you begin using these tools as part of your educational workflows, make sure you’re familiar with the guidelines and recommendations in the chapter on how to cite and acknowledge generative AI.