14 Ethical Concerns with Generative AI Tools

Liza Long

You may have heard about a group of concerned scientists who signed a letter calling for an immediate six-month pause on further development of large language models (LLMs) like OpenAI’s ChatGPT, Google’s Bard, and Meta’s LLaMA (Roose, 2023). Their main concern is nothing less than the possible destruction of humanity.

That’s a pretty big ethical concern. But to be fair, it’s just one of many existential threats our species is currently facing. And it’s just one of numerous concerns about the large-scale unregulated implementation of generative AI in our daily lives.

When I asked ChatGPT about the main ethical concerns for generative artificial intelligence, it reported the following:

Generative AI technologies, like ChatGPT, raise significant ethical concerns. Firstly, they can spread misinformation and fake content, eroding trust in online information sources. Secondly, biased training data may lead to AI systems inadvertently perpetuating societal biases and discrimination. Privacy and data security are also at risk, as these systems process vast amounts of user data. The lack of accountability for harmful AI-generated content is a pressing issue. Malicious actors can use these technologies for manipulation and social engineering, endangering individuals’ well-being and trust. Consent and user awareness problems arise when users are unaware they are interacting with AI. Economically, job displacement could occur due to AI automating content creation. Additionally, the weaponization of generative AI raises concerns about its misuse for harmful purposes. Mental health and well-being could be negatively affected by prolonged interactions with AI chatbots. Overreliance on AI may reduce human skills and agency over time. Addressing these concerns requires collaboration between developers, policymakers, researchers, and the public to implement transparency, fairness, accountability measures, and ethical guidelines and regulations for AI development and deployment (OpenAI, 2023).

I’ll set aside my ethical quibbles with the use of “firstly” as a transitional expression. But based on this response, ChatGPT doesn’t seem to “think” (or algorithmically predict) that generative artificial intelligence is an existential threat to humanity. According to ChatGPT, the eight main issues are:

  1. Misinformation and fake content.
  2. Biased training data.
  3. Privacy and data security.
  4. Lack of accountability for harm caused by generative AI.
  5. Consent and user awareness.
  6. Job displacement (especially for formerly “safe” knowledge workers).
  7. Mental health concerns from prolonged chats.
  8. Reduction of human skills (I imagine writing will be one of them).

What do you think is missing from this list? What concerns do you personally have about using generative AI tools in your own life?

I’ll tell you one thing that stands out to me as I write this chapter during the hottest summer on record: climate change. When I followed up with ChatGPT to ask about the effects of generative artificial intelligence on Earth’s rapidly warming climate, it led with the positives, optimistically suggesting that energy efficiency and climate modeling would improve with this new technology.

But according to one estimate, generative artificial intelligence could account for 8–21% of the world’s total energy demand by 2030 (Magubane, 2023).

And contrast ChatGPT’s response with this statement on artificial intelligence from the United Nations Educational, Scientific and Cultural Organization (UNESCO):

The world is set to change at a pace not seen since the deployment of the printing press six centuries ago. AI technology brings major benefits in many areas, but without the ethical guardrails, it risks reproducing real world biases and discrimination, fueling divisions and threatening fundamental human rights and freedoms. AI business models are highly concentrated in just a few countries and a handful of firms — usually developed in male-dominated teams, without the cultural diversity that characterizes our world. Contrast this with the fact that half of the world’s population still can’t count on a stable internet connection. (UNESCO, n.d.)

What do you think? Do the benefits of this new technology outweigh the potential costs? How can we mitigate the potential harm that generative AI may cause?

As an ethical starting point in my own use of generative artificial intelligence, I now include the following acknowledgment, inspired by the work of Lawrie Phipps and Donna Lanclos (2023), whenever I use these tools in my work.

I acknowledge that ChatGPT does not respect the individual rights of authors and artists and ignores concerns over copyright and intellectual property in its training; additionally, I acknowledge that the system was trained in part through the exploitation of precarious workers in the Global South. In this work I specifically used ChatGPT to explore ethical concerns with generative artificial intelligence.

However you use (or don’t use) generative AI, I encourage you to be thoughtful in your approach. These are interesting times. We need to remember now more than ever what makes us human.

References

Magubane, N. (2023, March 8). The hidden costs of AI: Impending energy and resource strain. Penn Today. https://penntoday.upenn.edu/news/hidden-costs-ai-impending-energy-and-resource-strain

OpenAI. (2023, July 20). AI ethics concerns [Chat transcript]. https://chat.openai.com/share/7c1aa595-e240-451f-95e8-74a116f80973

Perrigo, B. (2023, January 18). Exclusive: OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/

Phipps, L., & Lanclos, D. (2023, January 22). An offering. Digital Is People. https://digitalispeople.org/an-offering/

Roose, K. (2023, May 30). A.I. poses “risk of extinction,” industry leaders warn. The New York Times. https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html

UNESCO. (n.d.). Artificial intelligence. https://www.unesco.org/en/artificial-intelligence

License


Ethical Concerns with Generative AI Tools Copyright © 2020 by Liza Long is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.