
LongStories is constantly evolving as it finds its product-market fit. Features, pricing, and offerings are continuously being refined and updated. The information in this blog post reflects our understanding at the time of writing. Please always check LongStories.ai for the latest information about our products, features, and pricing, or contact us directly for the most current details.
Schools Redefine Cheating in Light of Expanding AI Tools
The rapid rise of artificial intelligence tools like ChatGPT is forcing schools to reevaluate what constitutes cheating as traditional methods of teaching and assessment struggle to keep pace. For many educators, take-home essays and assignments are becoming outdated, replaced by in-class work and alternative assessment approaches.
"The cheating is off the charts. It's the worst I've seen in my entire career", said Casey Cuny, who has taught English for 23 years. Cuny, a 2024 recipient of California's Teacher of the Year award, now assumes that any work assigned to students outside of class is likely to be "AI'ed."
Shifting Teaching Methods in Response to AI
Cuny, who teaches at Valencia High School in Southern California, has adjusted his teaching methods to reflect the challenges posed by AI. His students now complete most of their writing assignments in class, where he can monitor their progress using software that locks down their devices or blocks access to certain websites. He has also embraced AI as a teaching tool, helping students use it as a study aid rather than as a means to cheat.
"We have to ask ourselves, what is cheating?" Cuny said. "Because I think the lines are getting blurred."
Similar changes are being made in other classrooms. Kelly Gibson, a high school teacher in rural Oregon, no longer assigns traditional essays as homework, instead opting for in-class writing and verbal assessments. "I used to give a writing prompt and say, ‘In two weeks, I want a five-paragraph essay,’" Gibson said. "These days, I can't do that. That's almost begging teenagers to cheat."
Students have found ways to integrate AI into their learning processes. For instance, ChatGPT can provide brainstorming ideas and fully drafted outlines for essays, along with suggestions for quotes and examples. While this can be useful for studying, it also presents a temptation to rely on AI for more than just support.
Gray Areas for Students and Teachers
For many students, the line between legitimate AI use and cheating is not always clear. Lily Brown, a psychology major and college sophomore, uses AI to help with essay outlines and to summarize difficult readings. While she tries to maintain academic integrity, she admits uncertainty. "Sometimes I feel bad using ChatGPT to summarize reading, because I wonder, is this cheating?" she said. "If I write an essay in my own words and ask how to improve it, or when it starts to edit my essay, is that cheating?"
This uncertainty is compounded by inconsistent school policies. Some educators allow tools like Grammarly for grammar checking, while others prohibit them, citing their ability to rewrite entire sentences. At Valencia High School, 11th grader Jolie Lahey described the confusion caused by varying policies. "Whether you can use AI or not depends on each classroom. That can get confusing," she said.
Lahey credited Cuny with teaching her class how to use AI tools effectively, such as uploading study guides to ChatGPT to generate practice quizzes. However, she expressed frustration with rigid "no AI" policies. "It’s such a helpful tool. And if we’re not allowed to use it, that just doesn’t make sense," she said. "It feels outdated."
New Guidelines Address Emerging Challenges
Initially, many schools banned the use of AI tools like ChatGPT when they emerged in late 2022. Since then, the educational landscape has shifted, with the concept of "AI literacy" taking center stage. Over the summer, colleges and universities formed task forces to draft new guidelines and help educators balance the advantages and risks of AI in their classrooms.
The University of California, Berkeley, advised faculty to include clear statements in their syllabi regarding AI use, offering sample policies that range from requiring AI to banning it to permitting limited use. The university emphasized that unclear guidelines could lead students to use AI inappropriately.
At Carnegie Mellon University, incidents of academic responsibility violations involving AI have risen sharply. Rebekah Fitzsimmons, chair of the AI faculty advising committee at Carnegie Mellon’s Heinz College, noted that many students don’t realize they’ve violated policies. In one case, a student used DeepL, an AI translation tool, to translate their work into English, only to have the tool’s alterations flagged by detection software. Fitzsimmons explained that enforcing integrity policies is becoming increasingly difficult as AI use is often undetectable and hard to prove.
To address these challenges, Fitzsimmons worked on drafting new guidelines over the summer. Faculty were advised that a blanket ban on AI "is not a viable policy" unless significant changes are made to teaching and assessment methods. Some educators now favor in-class quizzes using lockdown browsers or have shifted to "flipped classrooms," where traditional homework is done in class.
Emily DeJeu, who teaches communication courses at Carnegie Mellon’s business school, has eliminated writing assignments as homework entirely, relying instead on in-class quizzes. "To expect an 18-year-old to exercise great discipline is unreasonable," DeJeu said. "That's why it's up to instructors to put up guardrails."
Redefining Academic Integrity in the AI Era
The introduction of AI tools has forced educators to confront deeper questions about the nature of academic integrity. While some students and teachers view AI as a valuable resource for learning, others see its potential to undermine traditional education models. As AI continues to evolve, schools and universities are tasked with striking a balance between fostering innovation and maintaining the principles of honest academic work.
For Cuny, the solution lies in teaching students to work with AI responsibly rather than banning it outright. As he put it, the goal is "to get kids learning with AI instead of cheating with AI."
