
Setting Boundaries for AI Use Through Syllabus Statements

Students appreciate clear guidance on which uses of generative AI are permitted in coursework, if any, and which are prohibited. The syllabus statements in this collection are examples that instructors can use as they set and communicate boundaries for student AI use that align with their course goals and teaching values.

Updated May 2026
Derek Bruff, Associate Director, Center for Teaching Excellence

UVA Provost AI Guidance for Faculty

University of Virginia Office of the Provost

The UVA Provost's Office provides guidance for both faculty and students about the roles that generative AI can (and cannot) play in teaching and learning at the university.


UVA instructors are allowed and encouraged to set their own course-level AI policies. This FAQ provides some guidance for faculty in doing so, addressing privacy issues, incorporating AI in one's teaching, restricting students' use of AI in coursework, and AI detection tools.

Excerpt:

It is important for instructors to make explicit in their syllabi and assignment descriptions which uses of Gen-AI are permitted in coursework, if any, and which are prohibited. Using Gen-AI for completing coursework in ways that are prohibited by the course instructor may be a violation of the Honor Code.


AI Policy for a First-Year Writing Course on Attention and Distraction

Ethan King is an assistant professor of English at UVA. He teaches a first-year writing course titled "Writing about Attention and Distraction," and in this course he prohibits the use of generative AI by students on both formal and informal assignments.


I appreciate how Ethan has shaped his course AI policy around the learning goals and objectives of the course. He lays out a clear boundary for students, provides a rationale for that boundary that points to his learning goals, and invites his students into further discussion about learning to write. Note also the reference to Ethan's alternative grading policy and to supporting student well-being.

Excerpt:

That said, I am not naïve. Some of you may already incorporate AI regularly into your academic workflow, and there may be moments in this course when you are tired, behind, or staring at a blank page and the idea of asking an AI to help you get started feels very appealing. That feeling makes complete sense. But reaching for AI in those moments means handing off the most generative part of the writing process: the struggle to figure out what you actually think. And this course is specifically designed to support you through that struggle.


A Red-Light AI Policy for an Engineering & Society Course

Joshua Earle is an assistant professor of engineering and society at UVA. He takes a firm red-light stance toward AI in his courses, as shown through this syllabus statement from his capstone course.


Joshua also lays out a clear boundary here for his engineering students, including examples of the kinds of learning tasks where students aren't allowed to use AI. He provides rationales for his policy, arguing that AI use undermines student learning and pointing to larger ethical concerns he has with generative AI. He considers any use of generative AI output to be plagiarism and thus an Honor Code violation. That's not universally true for courses at UVA, but he has made it the policy for his courses.

Excerpt:

Generative AI and Large Language Models are both technically impressive, and seemingly quite helpful in completing large writing assignments. That said, the use of generative AI is not allowed for any tasks in this course, including, but not limited to: summarizing texts, transcribing notes, brainstorming topic or QFE ideas, or producing assignment text (even if only for a preliminary draft).


AI Transparency Statements for a Nursing Course

Kimberly D. Acquaviva

Kim Acquaviva, UVA nursing faculty member and Faculty AI Guide, asks her students to complete this transparency statement with every major assignment as a way to hear directly from students how they're using generative AI.


Kim doesn't mind if her students use generative AI in her courses; she just wants to know how. Her transparency statement, which students complete with every assignment, asks students to identify the tools they used and the parts of the assignment process where they used those tools. This creates opportunities for Kim to talk with her students about effective and ethical use of AI in their learning and future nursing work.


AI Consultation Forms for an Engineering & Society Course

Bryn Seabrook is an assistant professor of engineering and society at UVA. She allows students to use AI in her writing-heavy courses, but requires them to disclose and reflect on their AI use through what she calls an "AI consultation form."


I appreciate Bryn's effort to engage her students in conversations about generative AI and its roles in their learning. Her consultation form includes a disclosure element, which allows students to cite the tools they've used and gives Bryn insight into how her students are using AI. The form also prompts students to check the output they receive from their AI tools and, importantly, take a moment for metacognitive reflection, a key step in learning to use AI tools effectively and ethically.


An AI Syllabus Statement for a Legal Research Course

Daniel Radthorne is a research librarian at the UVA School of Law. He teaches an advanced legal research course, and his AI syllabus statement reflects the ways that AI is changing the field of legal research.


AI is changing how legal research is conducted, and Daniel's course necessarily has to respond to those changes to be effective in preparing his students for their legal careers. His students will have the chance to learn the use of AI-powered legal research tools, but they will also need the opportunity to build the conceptual understanding to use those tools well. As a result, Daniel sets AI guidelines on a per-assignment basis.

Excerpt:

However, please remember that the capabilities of these resources remain in flux, and their responses may be incorrect in ways that are not immediately obvious. Even when generative AI functionality becomes better established, it will still be your ethical obligation as a licensed practitioner to ensure that the material generated by these tools is correct. Doing so will require a broad understanding of the infrastructure of American law and jurisprudence – knowledge you must build organically through our coursework. As such, our use of GenAI tools will be limited and specific.


AI Guidelines for a Course on Emerging Topics in Technology and Operations

Tim Laseter is a professor of the practice at the UVA Darden School of Business. He teaches a course on emerging topics in technology and operations, and generative AI is one of those emerging topics. He embeds AI guidelines in a variety of places in his first-day-of-class onboarding for students.


Note that Tim assumes some use of generative AI by his students, which means he has raised his expectations for low-pass and standard-pass grades. He asks students to use AI to thoughtfully prepare for guest speakers, and he asks students to write their own exam questions that go beyond the tasks that AI can automate.


Want to recommend a resource to add to this collection? Send us an email.