Combating Academic Dishonesty
There are few current issues in education that have provoked more interest – or sounded more alarms – than artificial intelligence (AI) technology. While the issue has simmered for some time, it burst to the forefront of debate following OpenAI’s public release of ChatGPT. When given a prompt – e.g. “What were the causes of World War I?” or “How does the Krebs cycle work?” – ChatGPT (the “GPT” stands for “Generative Pre-trained Transformer”) can generate text that reads, at least on superficial examination, as though it were written by a human – the basis of the famed Turing Test for machine intelligence.
Once the tool’s capabilities became known, it did not take long for fears to be voiced that students would turn to ChatGPT to write their assignments for them. Eye-grabbing headlines began to appear, not only in sensationalist newspapers like the New York Post (which dubbed the tool “CheatGPT”) but also in more sober publications like the Atlantic, where an opinion piece bluntly declared that “the college essay is dead”. Those who foresee the worst-case scenario imagine a future in which human-generated and computer-generated text are indistinguishable, essay assignments are meaningless, and the very skill of academic writing is lost.
One need not accept this doomsday proposition to recognize that ChatGPT raises legitimate concerns for academic integrity. But if we are to address those concerns, we must first answer several key questions: What is ChatGPT, exactly? What are its affordances and limitations? And, assuming that ChatGPT and tools like it are here to stay (as seems overwhelmingly likely), how should we rethink pedagogy to address this new reality?