Principles for Using AI in the Workplace and Classroom

Joel Gladd

Why do I need to understand generative AI?

College courses aim to provide students with durable skills—meaning those strategies and critical thinking skills that translate most obviously into workplace environments. Today we’re seeing a transformation in professional workflows because of how generative AI and other forms of machine learning can augment what professionals do.

In May 2024, Microsoft reported that generative AI (GenAI) usage had nearly doubled over the previous six months, “with 75% of global knowledge workers using it,” and those who use it say it helps them save time, focus on their most important work, be more creative, and enjoy their work more.

In August 2024, another report showed that 86.5% of employees used GenAI at work. Here’s how some departments are using it, across a range of fields from marketing and business to STEM areas such as computer programming:

  • social media content
  • planning and building marketing strategies
  • search engine optimization (SEO)
  • content ideation (brainstorming, etc.)
  • writing content
  • content research
  • proofreading
  • bug-fixing and debugging software
  • testing code snippets
  • code generation and research
  • drafting messages to customers
  • analyzing customer feedback

Those in healthcare may think this is all about writing and coding, but AI is transforming the healthcare industry as well. In addition to the above, AI models are now:

  • automating documentation;
  • helping with data entry and extraction;
  • managing communication;
  • monitoring regulatory compliance;
  • helping with administrative workflows and task prioritization;
  • facilitating patient outreach;
  • enhancing images for better diagnosis;
  • augmenting data;
  • reducing noise and predicting pathology;
  • creating personalized treatment plans;
  • providing clinical support;
  • supporting research and development of new drugs.

Even more hands-on patient care, such as the tasks normally undertaken by nurses and other practitioners, is being transformed by GenAI, including:

  • patient data analysis;
  • prediction of potential complications;
  • real-time suggestions for interventions and medication dosages;
  • documentation assistance, including clinical notes from voice recordings and summaries of patient interactions;
  • simulation of patient scenarios for training;
  • patient communication, including real-time translation services;
  • generation of personalized patient education materials.

What about those in Career and Technical Education? GenAI will be disruptive there as well, though in very different ways. In the automotive repair industry, for example, technicians will begin seeing GenAI tools that can:

  • analyze vehicle data, symptoms, and repair history to suggest potential issues and solutions more accurately;
  • interpret complex diagnostic codes and sensor readings;
  • analyze images or descriptions of parts to identify them accurately;
  • anticipate parts needs based on common repair patterns, improving inventory management;
  • create realistic simulations and training scenarios to practice complex repairs;
  • help mechanics explain technical issues to customers in simpler terms;
  • provide estimated repair times and costs more accurately;
  • analyze repair shop data to optimize workflow.

This technology will have some similarities across all industries, especially when producing and analyzing content (helping with customer communication and outreach, for example), but we are also seeing a wide variety of applications as these tools are adapted to individual professions. Each of you will need to research and better understand how AI is affecting your field of interest.

Principles for Using AI in the Workplace

No matter what your career interests are, GenAI and machine learning are becoming everyday tools. Understanding the basics of how they work is an important first step. But it’s also important to foster a mindset and adopt certain principles that you find empowering and productive.

In his book Co-Intelligence, Ethan Mollick presents guiding principles that can help you navigate AI in your work life effectively. Those who wish to mix these tools into their workflows may find them useful.

Principle 1: Invite AI to the Table

Using AI is increasingly a valuable skill that complements many others. One power move is to approach any situation by asking, “How can I use AI here?” By learning to use these tools now, you’re setting yourself up to adjust seamlessly as they evolve and become more powerful. Embracing AI in your workflow today means it will feel more familiar as new capabilities emerge.

Principle 2: Be the Human in the Loop

In Co-Intelligence, Mollick emphasizes the importance of “being the human in the loop.” This means actively checking AI’s outputs for accuracy, maintaining ethical standards, and applying your own judgment. Unlike working with, say, a simple calculator, collaborating with AI requires you to bring oversight, critical thinking, and responsibility to the process.

For students, remaining the human in the loop means you also need to build a foundation in your area of study that gives you enough insight to evaluate AI outputs. You will often need to build that confidence without AI’s assistance in order to critique its outputs competently. For this reason, when faced with a problem or challenge, consider using AI, but remain the thoughtful human guiding the process.

Centaurs, Cyborgs, and Resisters: Understanding Your AI Style

How you use AI may depend on your comfort level. Some people blend AI seamlessly into their work, others prefer a clear boundary between human-created and AI-generated content, and still others take a more antagonistic stance toward these tools. Mollick uses the metaphors of centaurs and cyborgs to describe the first two approaches. We’re adding a third category: resisters.

Centaurs

A centaur’s approach draws clear lines between human and machine tasks—like the mythical centaur with its distinct human upper body and horse lower body. Centaurs divide tasks strategically: the person handles what they’re best at, while AI manages other parts. Here’s Mollick’s example: you might use your expertise in statistics to choose the best model for analyzing your data but then ask AI to generate a variety of interactive graphs. For a centaur, AI is a tool for specific tasks within their workflow.

Cyborgs

Cyborgs deeply integrate their work with AI. They don’t just delegate tasks—they blend their efforts with AI, constantly moving back and forth between human and machine. A cyborg approach might look like writing part of a paragraph, asking AI to suggest how to complete it, and then revising the AI’s suggestion to match your style. Cyborgs may be more likely to violate a course’s AI policy, so be aware of your instructor’s preferences.

Resisters: The Diogenes Approach

Mollick does not suggest this third option, but we find it important to recognize that some students and professionals feel deeply uncomfortable with even the centaur approach; our institution and faculty will support this preference as well. Not everyone will embrace AI. Some may prefer to actively resist its influence, raising critical awareness about its limitations and risks. Like the ancient Greek philosopher Diogenes, who made challenging cultural norms his life’s work, you might focus on warning others about AI’s potential downsides and advocating for caution in its use. Of course, those taking this stance should understand the tool as well as centaurs and cyborgs do. In fact, resisters may need to study AI tools even more deliberately.

It’s probably not practical to always identify as a cyborg, centaur, or resister. These are styles of interacting with an emerging technology, not identities. The most sophisticated cyborgs will occasionally become centaurs, and sometimes even resisters, when the situation calls for it. Likewise, someone who feels more drawn to the resister mode will have to “grok” what it means to be a cyborg or centaur if they intend to offer critical guidance to others.

Principles for Using AI in the Classroom

Educational environments foster durable skills that prepare you for workplace and lifelong success. However, there is a key difference between the classroom and the workplace: as you’re learning certain skills, instructors need to be able to assess the choices you’re making, often under challenging circumstances, so they can offer guidance about how to succeed.

Unlike most workplace scenarios, classroom environments are designed to assess student learning so that students are prepared for the future. This means instructors must be able to see the labor—i.e., the choices a student made in order to figure out how to respond to a challenge. This usually requires effort, what some like to call “friction,” and it’s often uncomfortable at first. It also takes time. GenAI can often reduce that friction.

But what’s becoming clear is that using AI effectively requires human input on many different levels (remain the human in the loop). If you want to be successful in the future, the difference between you and someone you’re competing with will be how much base knowledge plus AI savviness you bring to problem-solving. The base knowledge requires a deep familiarity with the models and concepts relevant to your domain, and this is what courses aim to help you build. It’s true, to a certain extent, that computer programmers, for example, can now “program with words,” relying on higher-order thinking rather than typing out routine functions again and again. But accessing those higher-order ways of thinking (prompting with models and concepts in mind) is what you need to become proficient in first. Without those tools, you’ll be as replaceable as any other worker who can type things into a chatbot. With those tools, and the comfort of working with them in challenging environments, you will be far better positioned to unlock the potential of AI.

The difference between a course’s core models or concepts and mere busy work will sometimes be obvious, but at other times it won’t. In a math course, understanding how basic algorithms and matrices work, for example, is incredibly important for understanding how machine learning works; it provides insight into a completely different way of processing information, which in turn will allow you to prompt AI in powerful ways. In a writing course, knowing that there are such things as rhetorical appeals and different genres is a massive unlock for working with these tools. In a philosophy course, you become more aware of the ethical frameworks companies use to align AI, and you learn which ideas and concepts are relevant when asking whether an output is ethical.
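To make the matrix point a little more concrete, here is one small illustration (the numbers are invented for the example): at its core, a neural network layer multiplies a list of input numbers by a table of learned weights and adds up the results. With inputs (2, 3) and weights (0.5, −1), one unit in a layer computes 2 × 0.5 + 3 × (−1) = −2, and that result feeds into the next layer. Stack millions of these weighted sums into matrices and you have the basic arithmetic behind the models described in this chapter. Knowing that the machinery is, at bottom, matrix multiplication helps demystify what these models are, and are not, doing when you prompt them.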

Higher education is beginning to adjust to this new world in which chatbots can help students at any moment. This may reduce the amount of busy work you encounter in the classroom. At the same time, you will be expected to demonstrate the choices you’ve made to solve certain challenges; that will take work and struggle, and course policies and sanctions are there as guardrails to ensure it happens.

Understanding AI Syllabus Policies

This final section offers guidance on how to understand AI course policies at CWI. There are three options that your instructors choose from: 1) most restrictive, 2) moderately restrictive, and 3) least restrictive. In each part below, you will find the official language followed by some guidance on how to interpret what’s allowed and what’s prohibited. Note that any use of GenAI that impacts a submission must be accompanied by an acknowledgement statement.
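For example, an acknowledgement statement might look something like this (the exact format is up to your instructor, so treat this as a rough template rather than required wording): “I used ChatGPT (accessed [date]) to brainstorm counterarguments for the second section of this essay. My prompt was: ‘What are common objections to a four-day work week?’ I drafted and revised the essay myself.”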

Most Restrictive Language

Aligned with my commitment to academic integrity and my teaching focus on creating original, independent work, the use of generative artificial intelligence (AI) tools, including but not limited to ChatGPT, DALL-E, and similar platforms, to develop and submit work as your own is prohibited in this course. Using AI for assignments constitutes academic dishonesty, equivalent to cheating and plagiarism, and will be met with sanctions consistent with any other Academic Integrity violation.

What this allows:

  • Since the language focuses on generative AI such as ChatGPT and other Large Language Models (LLMs), this does not restrict using other forms of machine learning, such as transcription tools that help with accessibility, or basic tools such as grammar and spell-check. Note that some grammar tools, such as Grammarly, have generative AI options (Grammarly Pro), and the GenAI options to paraphrase or revise writing would not be allowed under this policy.

What this prohibits at the course level:

  • For longer writing tasks, outlining may be prohibited, but ask your instructor for clarification.
  • Using AI to draft responses (written, math-based, programming, etc.) to assignments is prohibited.
  • Using AI to revise or alter responses is likely prohibited.

What may be allowed (but you should ask):

  • It depends on what your instructor means by “develop.” Some forms of brainstorming may or may not be allowed, but it would also be impossible to enforce a policy that prohibits any brainstorming with generative AI.

Moderately Restrictive Language

Aligned with my commitment to academic integrity and the ethical use of technology, this course allows AI tools like ChatGPT, DALL-E, and similar platforms for specific tasks such as brainstorming, idea refinement, and grammar checks. Using AI to write drafts or complete assignments is not permitted, and any use of AI must be cited, including the tool used, access date, and query.  It is the expectation that in all uses of AI, students critically evaluate the information for accuracy and bias while respecting privacy and copyright laws.

What this allows:

  • Anything from the category above (Most Restrictive) is allowed.
  • Brainstorming and outlining are allowed, even encouraged.

What this prohibits at the course level:

  • Drafts you intend to submit to the course (written, verbal, math-based, etc.) cannot be generated by AI.
  • Any other use of generative AI to help with submitted coursework must be acknowledged and explained.

What may be allowed (but you should ask):

  • Your instructor may allow generative AI for improving certain aspects of a completed draft, such as revising topic sentences, etc. Ask before doing this and acknowledge AI use.
  • Since this policy applies at the course level, your instructor may allow or even ask you to use AI for certain tasks.

Least Restrictive Language

Aligned with my commitment to academic integrity, creativity, and the ethical use of technology, the use of AI tools like ChatGPT, DALL-E, and similar platforms to enhance learning is encouraged as a supplementary resource, not a replacement for personal insight or analysis. Any use of AI must be cited, including the tool used, access date, and query. I expect that in all uses of AI, students critically evaluate the information for accuracy and bias while respecting privacy and copyright laws.

What this allows:

  • Anything from the allowed categories above (Most Restrictive and Moderately Restrictive).

What this prohibits at the course level:

  • You cannot use generative AI to assist with submitted coursework unless it is acknowledged and explained.
  • Since this policy applies at the course level, your instructor may ask you not to use AI for certain tasks.

License


Write What Matters Copyright © 2020 by Liza Long; Amy Minervini; and Joel Gladd is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
