Example AI Syllabus Statements from UVA Faculty AI Guides

Summary:

The UVA Faculty AI Guides are a group of UVA faculty who are helping their colleagues explore the role of generative AI in their teaching. The 2024-2025 Guides have taken a variety of thoughtful approaches to AI in their own courses, as seen here through a selection of syllabus statements.

Rip Verkerke (Law)

Rip Verkerke, professor of law, used the following AI policy in his law courses in the fall 2024 semester. This course policy statement is shared with his permission.

AI Policy and Guidance on Use

Generative artificial intelligence tools (such as ChatGPT, Claude, Copilot, and Bard) may soon revolutionize legal research and writing just as online legal research tools (Westlaw and Lexis) did several decades ago. These AI tools promise to answer our questions creatively and insightfully and in lucid prose. In practice, their writing is often pedestrian (or even stilted), and their answers may be incomplete or utterly fabricated. Despite these shortcomings, AI deserves our attention and may help you with your work in this course. The tools are changing rapidly, so I will do my best to identify ways you can use AI for tasks it does relatively well and caution you about situations where it more often fails.

For the moment, I suggest you focus on finding ways that AI may help you (1) master the various topics we will study and (2) prepare for our exam. Thus, you might consider using an AI tool to summarize the cases we read, to quiz you about key doctrinal principles, or to create a helpful diagram or flowchart. As you begin to study for the final exam, you could use AI to generate short practice problems, to provide feedback about your practice essay responses, or to extract organizing themes from your class notes.

Remember, however, that AI sometimes gets things wrong. You should never rely uncritically on AI-generated content. Be certain to check AI results against your own understanding of the subject and confirm any information that you don't remember from our readings or class discussions. Among the many available resources for learning about AI tools, I highly recommend Ethan Mollick's Substack newsletter "One Useful Thing" which includes both thoughtful commentary and helpful examples of AI prompts you may find useful.

At least in theory, AI also could help you write essay answers for exams. Although AI use is allowed on our final exam, I strongly encourage you to proceed with the utmost caution. You've likely already heard about the tendency of AI models to "hallucinate"—that is, to confidently assert facts, principles, or even case citations that do not exist. The currently available AI tools also tend to produce generic prose and struggle with the sort of inductive and analogical reasoning that law school exam questions require. Generative AI models are trained on vast databases of publicly available text. These models can perform well on multi-state bar exam questions because their training databases probably include many example questions and model answers. But they do not have access to the intricate web of information and ideas specific to each law school course. As a result, in my experience, even the most advanced models (ChatGPT-4, Copilot, and Bard/Gemini) frequently generate unsuccessful exam answers that contain glaring errors and omissions.

For these reasons, I urge you to refrain from using AI tools to produce your answers to our final exam questions. You may wish to use AI to offer editorial suggestions or to provide feedback about the readability of your answer. But please be aware that you alone are responsible for the accuracy and completeness of your answer. It would be extremely unwise to rely heavily on AI assistance at this stage of the tools' development.

Here is a link to UVA's page on generative AI tools that are currently or will soon be available to students. Please just let me know if you have any questions about using these resources.

Joey Meyer (Psychology)

Joey Meyer, assistant professor of psychology, used the following AI syllabus statement in his research methods and programming courses in the fall 2024 semester. This course policy statement is shared with his permission.

AI Help (Such as GPT-4)

Using AI for assistance has never been easier, with new barriers falling every day. AI can increasingly help solve problems in academia, research, and the workplace, and these fields will likely improve as AI becomes more integrated into society.

For now, AI can be helpful, but it can also give answers that are misleading or completely incorrect, while sounding confident regardless. AI also tends to retain most of its input in some form, so using AI to directly analyze sensitive data may violate the privacy and IP rights of participants, clients, faculty, or students. It may be tempting to use AI to help you code, write, perform statistical analyses, and/or interpret results; however, you may find that it is easier and faster (and sometimes necessary) to do these yourself, rather than spend time tracking down any hidden mistakes the AI makes. In addition, you're enrolled in the course, not the AI, so it's important that you're the one answering any questions or performing any tasks given in the course.

Thus, for the purposes of this course, AI help is considered "live help outside the classroom," and therefore must not be used for the final project. For all other assignments, any use of AI must be cited in your answer.

Dan Player (Public Policy)

Dan Player, associate professor of public policy, has shared two examples of syllabus statements regarding generative AI. These statements are shared with his permission.

Example 1: APP Generative Artificial Intelligence Policy

Generative artificial intelligence is a powerful and readily accessible tool in the professional world. As APP is a pivot class to the professional world, AI use is allowed in all settings and for all assignments in APP, with the following caveats and warnings. These policies apply to all AI-enabled tools, from generative AI chatbots like ChatGPT to platforms built on top of generative AI like Elicit.

1) Document your use of AI. AI is a new tool, and we are all learning to use it, for better and for worse. If you use AI in an assignment, you are required to submit an addendum paragraph narrating how you used it (ideation, editing, text generation, etc.) and what was effective/ineffective. You will be asked to discuss your experience in class so we can all learn from it.

2) You are responsible for high-quality and accurate work. AI is prone to hallucinate, that is, to make things up (see this policy example). AI quality has serious shortcomings, especially as applied to particular places and to the present (most AI is trained on data more than a year old). For example, if you ask it for sources, even the best (i.e., paid) models produce made-up sources 20% of the time; the publicly available models make up sources 80% of the time. In the working world, your reputation for accuracy and trustworthiness is the coin of the realm. We will hold high standards for accuracy, as lives and resources will be at stake in your analysis. Therefore, if you submit made-up sources, unverifiable facts, or other hallucinated material, you will automatically fail the submission, no matter the quality of the rest of it.

3) Standards have been raised. Because AI can often produce B or B- quality work with a few keystrokes, work that in past years might have been taken as demonstrating engagement with the material no longer passes muster. Instead, I expect consistently high-quality work, especially with respect to applying general principles to the specifics of your problem and your client's world. I am particularly interested in seeing you produce value from what you have learned through a close reading of materials from your client and interviews with people in their organization and on the ground.

Example 2: Class AI Policy

My goal is to prepare you to be an effective interpreter and communicator of causal evidence in a future that will almost certainly have AI tools available. For this course, I will encourage you to use AI most of the time as a tool to help you understand and more clearly communicate these complex ideas. Unless otherwise stated, you should feel free to use sources like ChatGPT or Claude. However, much of the course will require you to synthesize and communicate orally in real time, so you must make sure you deeply understand everything the AI is teaching you so that you can be the expert when required. We will have more discussions in class about how to use these tools effectively and when their use should be avoided, as they might inadvertently short-circuit our learning.

Garrick Louis (Engineering and Society)

Garrick Louis, associate professor of engineering and society and of systems and information engineering, took a different approach in his Technology and Policy course. He polled his students to ask their view of the use of AI in the course. His poll and results are shared with permission.

STS 2760 CLASS AI POLICY POLL

This poll is to request your opinion on the use of Generative AI in class. UVA offers "Copilot Chat" free to students and faculty for use in university-related work.

I propose three AI policy options for this course. Please indicate your preference for one of the options and give a brief statement why you prefer that option. You may also suggest a different option if desired.

We will discuss the poll results with the intent of choosing the option selected by most of the class.

  1. RED OPTION - Strictly prohibit any use of Generative AI technology in this course. Use of the technology would be considered an honor violation.
  2. YELLOW OPTION - Allow the use of Generative AI in some parts of the course. Those parts would be decided beforehand by the instructor and students. Students would acknowledge all use of generative AI in their relevant submitted work. Other, undisclosed use of AI would be an honor violation.
  3. GREEN OPTION - Allow use of Generative AI in all aspects of the course with the option of acknowledgment by students. The instructor would assume the use of Generative AI in all student work and would grade their work accordingly. Use of AI would not be an honor violation.

I choose the _____________ policy option because:

POLL RESULTS

  1. RED - 0 votes
  2. YELLOW - 4 votes
  3. GREEN - 8 votes

We will adopt a green policy for this course.

Student comments:

• AI can be helpful with citation.
• AI use represents the real world.
• AI use will help, not hinder.
• AI can't reproduce policy analysis.
• AI can improve efficiency.
• AI use should not require citation.
• AI a useful tool for outlining & researching.
• AI use beneficial, not detrimental.
• Grade assignments assuming AI use.
• Students need to be familiar with AI.
• AI use should be cited.
• Use AI with citation.
• Useful tool that is widely available.
• Students should learn how to use it.
• AI is a useful tool.
• Use AI with citation.
• Use AI with citation.
• AI should not be used to cheat on assignments.
• If AI use gets out of control, prohibit its use.