Collection

What Are Ethical Considerations for AI?

What should you, your colleagues, and your students be mindful of as you engage with generative AI like ChatGPT? This collection provides an overview of a variety of ethical considerations for AI.

Updated December 2024
Jess Taggart
Assistant Director & Assistant Professor
Office of the Executive Vice President and Provost
01

Ethical and Privacy Concerns

Brandeis University

Brandeis University has compiled a clear, easy-to-read list of ethical and privacy concerns that serves well as a quick primer on this topic.

Jess Taggart

The list includes a few concrete tips for navigating these concerns in the classroom. I appreciate how you could easily adapt this list to share with your students and support discussions around AI.

02

Generative AI Cautions and Considerations

UVA Library

This page of the University of Virginia Library Guide on Generative AI provides links and information about evaluating AI tools and content, including a collection of readings related to social justice, equity, environmental costs, bias, and accuracy.

Jess Taggart

I find this Library Guide to be an excellent way to access readings related to a range of crucial considerations for AI use.

Excerpt

The myriad uses of generative AI can often seem to offset the potential pitfalls. However, AI content cannot be used uncritically; a thoughtful interrogation of the source material is essential. There are a number of variables to evaluate, including knowledge gaps, currency, and the specific prompt used to generate the content. In addition to the risks of plagiarism and perpetuating misinformation, complex concepts of bias, privacy, and equity should be considered. 

Given the abundance of generative AI tools available to explore and use, determining the appropriateness of their use, how to successfully attain a useful response, and whether that response is accurate and appropriate for your needs can be challenging. While we can employ various strategies to evaluate the output provided by a given tool, it's essential to understand where the information is coming from and to have sufficient proficiency in the subject matter to be able to assess its accuracy. Among other things, you should consider whether the information the AI is producing is accurate, whether the tool is drawing from a diverse range of data, and monitor the information returned by the tool for bias. Sarah Lebovitz, Hila Lifshitz-Assaf, and Natalia Levina write in the MIT Sloan Management Review that it is critical to find "the ground truth on which the AI has been trained and validated" (Lebovitz et al., 2023). Digging in further, you can consider who the owner of the AI tool is and determine whether that ownership reflects bias in the results. Consider reviewing the resources maintained by the DAIR (Distributed AI Research) Institute. DAIR examines AI tools and issues through a community-rooted lens; maintains a list of publications related to social justice, privacy, and bias; and conducts research projects free from the influence of Big Tech.

03

Wrestling with AI

Catherine J. Denial

In this essay, Cate Denial encourages instructors to thoughtfully consider how to support students in grappling with ethical considerations of AI. She provides concrete, scalable examples of how to do so.

Jess Taggart

I admire the foundations of Cate's approaches to engaging students in discussions of AI: trust and transparency. The examples she provides are well designed, applicable across classroom contexts, and perfectly packaged for instructors.

Excerpt

I am sure I am not the only educator who wilted a little as they learned about ChatGPT in early 2023. After three years of pandemic instruction, in a variety of modalities, with our institutions demonstrating varying degrees of respect for public health, it felt (at best) exhausting to have circumstances demand we rethink our pedagogies once again to factor in generative AI. It was also tempting to rush to one stark choice or another–to ban ChatGPT and its ilk, or to permit it in every instance–if for no other reason than to feel some sense of clarity amid another period of rapid change. But in giving myself time to wrestle with the nuances of ed tech over the summer, I realized that I needed to give my students the same opportunity I had given myself: time. So much about generative AI has been sold to us at speed, promising quick resolutions to writing problems for students and demanding urgent responses from faculty. I wanted to slow things down, and to offer students the opportunity to weigh the pros and cons of AI use so that they could make critical, informed decisions about how it would shape their educational experience.

04

Perspectives on Generative AI Ethics

Markkula Center for Applied Ethics

The Markkula Center for Applied Ethics at Santa Clara University examines some of the ethical questions raised by generative AI in this collection of essays representing a variety of perspectives.

Jess Taggart

I appreciate these easy-to-read essays and how they raise important questions about the ethical implications of generative AI. They stimulate further thinking and consideration about its use across domains, from education to mental health care.

Excerpt

Throughout history, new technologies have disrupted society in different ways–some positively and some negatively–from steam-powered engines and electricity to the Internet, and now artificial intelligence (AI), generative AI in particular. The creation of art, journalism, education, and truth itself have all been tested by the use of ChatGPT and other generative AIs. Markkula Center staff and scholars unpack some of the many related ethical dilemmas in this Ethics Spotlight.
05

How Should AI Systems Behave, and Who Should Decide?

OpenAI

Hear from OpenAI itself—the creator of ChatGPT—how ChatGPT’s behavior is shaped, and what OpenAI is doing to improve that behavior.

Jess Taggart

I like how this article provides an accessible view into the process behind the creation and ongoing refinement of a leading generative AI tool and how the creators are thinking about its future.

Excerpt
OpenAI’s mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. We therefore think a lot about the behavior of AI systems we build in the run-up to AGI, and the way in which that behavior is determined. Since our launch of ChatGPT, users have shared outputs that they consider politically biased, offensive, or otherwise objectionable. In many cases, we think that the concerns raised have been valid and have uncovered real limitations of our systems which we want to address. We’ve also seen a few misconceptions about how our systems and policies work together to shape the outputs you get from ChatGPT. Below, we summarize: how ChatGPT’s behavior is shaped; how we plan to improve ChatGPT’s default behavior; our intent to allow more system customization; and our efforts to get more public input on our decision-making.

Want to recommend a resource to add to this collection? Send us an email.