Accused of cheating in the age of ChatGPT


Wednesday, 20 September, 2023


A recent study by Drexel University has examined the reactions of students who have been accused of using ChatGPT to cheat.

The study, published in the journal Learning: Research and Practice as part of a series on generative AI, analysed 49 Reddit posts and their related discussions from college students who had been accused of using ChatGPT on an assignment.

Tim Gorichanaz, the study author and an assistant teaching professor in Drexel’s College of Computing & Informatics, identified a number of themes in these conversations. Most notably, these included frustration from wrongly accused students, anxiety about the possibility of being wrongly accused and how to avoid it, and creeping doubt and cynicism about the need for higher education in the age of generative artificial intelligence.

“As the world of higher ed collectively scrambles to understand and develop best practices and policies around the use of tools like ChatGPT, it’s vital for us to understand how the fascination, anxiety and fear that comes with adopting any new educational technology also affects the students who are going through their own process of figuring out how to use it,” Gorichanaz said.

Refuting incorrect accusations

Of the 49 students who posted, 38 of them said they did not use ChatGPT, but detection programs like Turnitin or GPTZero had nonetheless flagged their assignment as being AI-generated. As a result, many of the discussions took on the tenor of a legal argument. Students asked how they could present evidence to prove that they hadn’t cheated, and some commenters advised continuing to deny that they had used the program because the detectors are unreliable.

“Many of the students expressed concern over the possibility of being wrongly accused by an AI detector,” Gorichanaz said.

“Some discussions went into great detail about how students could collect evidence to prove that they had written an essay without AI, including tracking draft versions and using screen recording software. Others suggested running a detector on their own writing until it came back without being incorrectly flagged.”

Another theme that emerged in the discussions was the perceived role of colleges and universities as ‘gatekeepers’ to success and, as a result, the high stakes associated with being wrongly accused of cheating. This led to questions about the institutions’ preparedness for the new technology and concerns that professors would be too dependent on AI detectors — whose accuracy remains in doubt.

“The conversations happening online evolved from specific doubts about the accuracy of AI detection and universities’ policies around the use of generative AI, to broadly questioning the role of higher education in society and suggesting that the technology will render institutions of higher education irrelevant in the near future,” Gorichanaz said.

The degradation of trust in education

The study also highlighted an erosion of trust among students — and between students and their professors — stemming from the students’ perception that they are persistently under suspicion of cheating. A range of comments illustrated the degradation of these relationships:

  • “I never would have expected to get accused by him, out of all my professors.”
  • “Of course she trusts that AI detector more than she trusts us.”
  • “I know I sure as hell didn't plagiarize, but unfortunately you can’t always trust others.”

Generative AI technology has forced institutions of higher education to reconsider their educational assessment practices and policies about cheating. According to the study, students are asking many of the same questions.

“There were comments about policy inconsistencies where students were punished for using some AI tools such as ChatGPT but encouraged to use other AI tools like Grammarly. Other students suggested that using generative AI to write a paper should not be considered plagiarism because it is original work,” Gorichanaz said.

“Many students reached the same conclusion that universities have been grappling with: the need to responsibly integrate the technology and move beyond essays for learning assessment.”

Changing methods of assessment in the future

The study could play an important role in helping colleges and universities communicate to their students about the use of generative AI technology, Gorichanaz suggested.

“While this is a relatively small sample, these findings are still useful for understanding what students are going through right now,” he said.

“Being wrongly accused, or constantly under suspicion, of using AI to cheat can be a harrowing experience for students. It can damage the trust that’s so important to a quality educational experience. So, institutions must develop consistent policies, clearly communicate them to students and understand the limitations of detection technology.”

Gorichanaz noted that even the best AI detectors could produce enough false positives for professors to wrongly accuse dozens of students — which is clearly unacceptable, considering the stakes.
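
To make the scale of that risk concrete, the back-of-the-envelope sketch below (in Python) estimates how many honest submissions a detector could wrongly flag. It is not drawn from the study itself: the enrolment figures, essays per student and the 1% false-positive rate are illustrative assumptions, not numbers reported by Gorichanaz or any detector vendor.

    # Back-of-the-envelope estimate of wrongful flags from an AI detector.
    # All figures below are illustrative assumptions, not values from the study.

    def expected_false_flags(honest_submissions: int, false_positive_rate: float) -> float:
        """Expected number of honest, human-written submissions incorrectly flagged as AI-generated."""
        return honest_submissions * false_positive_rate

    # Assumption: 1000 honest students each submit 5 essays in a term,
    # and the detector wrongly flags 1% of human-written work.
    submissions = 1000 * 5
    flags = expected_false_flags(submissions, 0.01)
    print(f"Expected wrongful flags per term: {flags:.0f}")  # prints 50

Under those assumed numbers, even a seemingly small 1% error rate produces dozens of wrongful flags per term, which illustrates the scale of the problem Gorichanaz describes.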

“Rather than attempting to use AI detectors to evaluate whether these assessments are genuine, instructors may be better off designing different kinds of assessments: those that emphasise process over product or more frequent, lower-stakes assessments,” he wrote. He also suggested that instructors could add modules on the appropriate use of generative AI technology rather than prohibiting it outright.

While the study offers a thematic analysis, Gorichanaz said that future research could expand the sample to a statistically relevant size and draw it from sources beyond English-language conversations on Reddit.

Image credit: iStock.com/Valeriy_G
