
Designing Assignments for Academic Integrity in the AI Age

Generative artificial intelligence has made longstanding questions about academic integrity more urgent and complex. How does genAI change our understanding of plagiarism? What can or should instructors do to motivate students to complete graded assignments without unauthorized assistance?

Updated August 2025
Michelle Hock, Ed.D. Cohort Manager, Curriculum Instruction & Special Education
Rip Verkerke, T. Munford Boyd Professor of Law, School of Law

Catch Them Learning: A Pathway to Academic Integrity in the Age of AI

Cult of Pedagogy

This interview with educator Tony Frontier provides clear, practical strategies for helping students use AI responsibly and maintain academic integrity.

Michelle Hock, Rip Verkerke

This accessible and engaging podcast interview gives instructors (a) helpful perspectives on how promoting integrity can deter cheating and (b) takeaways they can immediately adopt in their own practice.

Excerpt

Many teachers have attempted to deter cheating with AI by threatening greater consequences. But increased consequences can have a paradoxical effect: The greater the consequences for the cheater, the greater the burden of proof for the accuser. This happened in Boston; minutes after Ruiz was crowned the winner, race officials doubted her surprising results. But the stakes of disqualifying the winner put officials in a state of paralysis; a formal investigation would be required before acknowledging any suspicious behavior.


How GenAI Affects Our Writing and Thinking

The New York Times

Meghan O'Rourke, who edits The Yale Review, gives us her wickedly funny and deeply perceptive take on ChatGPT's writing style. She also reflects on how genAI affects students' thinking.

Michelle Hock, Rip Verkerke

This short essay is full of insight into how we should think about student (and our own) AI use. O'Rourke's insights should help to inform strategies for motivating students to complete assignments on their own.

Excerpt

I came to feel that large language models like ChatGPT are intellectual Soylent Green — the fictional foodstuff from the 1973 dystopian film of the same name, marketed as plankton but secretly made of people. After all, what are GPTs if not built from the bodies of the very thing they replace, trained by mining copyrighted language and scraping the internet? And yet they are sold to us not as Soylent Green but as Soylent, the 2013 “science-backed” meal replacement dreamed up by techno-optimists who preferred not to think about their bodies. Now, it seems, they’d prefer us not to think about our minds, either. Or so I joked to friends....

When I write, the process is full of risk, error and painstaking self-correction. It arrives somewhere surprising only when I’ve stayed in uncertainty long enough to find out what I had initially failed to understand. This attention to the world is worth trying to preserve: The act of care that makes meaning — or insight — possible. To do so will require thought and work. We can’t just trust that everything will be fine. L.L.M.s are undoubtedly useful tools. They are getting better at mirroring us, every day, every week. The pressure on unique human expression will only continue to mount. The other day, I asked ChatGPT again to write an Elizabeth Bishop-inspired sestina. This time the result was accurate, and beautiful, in its way. It wrote of “landlocked dreams” and the pressure of living within a “thought-closed window.”

Let’s hope that is not a vision of our future.


How To Make Humanities Assignments AI-Proof

The New York Times

In this short article, Jessica Grose offers instructors real hope for finding a path to create humanities assignments that don't invite students to offload their work to an AI chatbot.

Michelle Hock, Rip Verkerke

This thoughtful review of evolving teaching practices offers useful ideas for humanities instructors who want to adapt their assignments to the age of the AI chatbot.

Excerpt

Banning A.I. and calling it a day wouldn’t work; they had to A.I.-proof many assignments by making them happen in real time and without computers, and they had to come up with a workable policy around the technology in other situations. I don’t remember being particularly inspired by the essays I was writing as an English major back in the early aughts, and listening to the way these professors are adapting to an A.I.-powered world made me wonder if this ingenuity is overdue.

Through a combination of oral examinations, one-on-one discussions, community engagement and in-class projects, the professors I spoke with are revitalizing the experience of humanities for 21st-century students.


Want to recommend a resource to add to this collection? Send us an email.