Many BU Students Study with ChatGPT. A Few Admit Cheating with It
Student research project urges BU and other schools to adopt AI policies
To fete her father, long-serving BU executive Joseph Mercurio (now retired), on his 75th birthday last year, Andrea Mercurio fed his bio into ChatGPT to generate some toasts. A senior lecturer in psychological and brain sciences at the College of Arts & Sciences, she recalls the results.
“One of them said something like, ‘Joe, with his infectious laugh and calm demeanor.’ The entire room is filled with BU people, who have known my dad for years, and they burst out laughing.” ChatGPT, artificial intelligence’s supposed wunderkind, also “talked about his ‘keen culinary talents in the kitchen.’ I mean, he does not cook and make anything. Ever.”
So nobody’s perfect. But ChatGPT “continues to improve itself,” says Mercurio (CAS’00)—so much so that three-quarters of the 38 BU students sampled in recently completed research by her student Dima Ghalili (CAS’24) had used the tool for schoolwork, and, in a handful of cases, to cheat.
Ghalili conducted the project under the auspices of BU’s Undergraduate Research Opportunities Program (UROP), which sponsors faculty-mentored student inquiry. The sampled students used other AI platforms, from Grammarly to translation software. But ChatGPT was the most popular, with 75 percent of respondents saying they used it for academics, most commonly “to understand articles/passages (35 percent), to check grammar/sentence structure (32 percent), to debug code (29 percent), and to generate ideas for academic work (29 percent),” Ghalili wrote in his UROP report.
Yet he found that BU academic units haven’t finalized AI policies, save for the Faculty of Computing & Data Sciences. Among its requirements, CDS says students must “give credit to AI tools whenever used, even if only to generate ideas rather than usable text or illustrations.” (Admissions also has a policy governing applicants’ college essays.) Meanwhile, a University AI task force is expected to report soon on suggested best practices and guardrails for the technology in education and research.
Moreover, Mercurio, who sits on the CAS academic misconduct committee, says professors’ syllabi typically warn students that classwork must be their own, independent work—neither someone else’s nor AI’s.
Most students Ghalili surveyed crave guidance as to responsible ChatGPT use, he wrote, adding that educators must “confront the challenges posed by emerging, sophisticated technologies that present new avenues for academic misconduct to occur.”
Kenneth Lutchen, interim provost and chief academic officer, says once the task force report is submitted, “we will take a thorough look at the findings and recommendations before deciding if a high-level, University-wide policy makes sense. Without getting too far ahead of the work the task force has done, I do suspect we are moving in a coordinated direction on a number of practices—for example, requiring attribution if output is derived from ChatGPT, among other issues.
“The University community, their feedback, and their ideas have been vital to this process and we look forward to reporting back shortly on what we believe will be sensible pathways for the use of this technology at BU.”
Summarizing his UROP survey of students, Ghalili wrote that “8 percent admitted to generating text via ChatGPT that was incorporated verbatim into academic work without proper source credit.” Meanwhile, “36 percent reported unsanctioned collaboration on an exam or assignment through peer interaction and/or online sources.”
Among all forms of cheating Ghalili examined, “A small portion of the sample indicated that either they did not see these behaviors as forms of academic misconduct (between 8 and 17 percent, depending on the specified behavior) or were unsure (between 8 and 21 percent).”
Other investigators have corroborated the platform’s popularity. Last year, an anonymous BU student told WBZ, Boston’s CBS affiliate, “I’ve been using ChatGPT for most of my assignments and it works really well.”
“You’re given all this new technology [as a student], but not necessarily the guidance to go with it,” Ghalili said in an interview. “A new tool can be used for something great, but it can also be misused.”
Mercurio knows that misuse from her misconduct committee work. Her last four or five cases “have all been ChatGPT-related,” she notes, involving professors discovering that “students used ChatGPT to write code for their homework or to write a paper,” she says. One BU professor caught a cheating student only because the class had to write drafts in a Google document, which preserves earlier versions of a piece of writing. The professor saw that the student’s draft had been erased and replaced with a copied-and-pasted, completed response, “and it wasn’t consistent with any of the work he had done previously.”
Such detective work by faculty is necessary, Mercurio and Ghalili say, because software for detecting AI use is often unreliable. ChatGPT’s output can give itself away as AI-generated—for example, through inappropriately flowery prose, Mercurio says—though she notes that such language might pass muster in writing classes.
“I think we have to rethink [student] assessments,” she adds. “The way in which we grade people or evaluate them is going to have to shift.” She and Ghalili say options include more oral examinations and in-class proctored tests.
ChatGPT’s speed-of-light evolution has amazed others in the academy: the Washington Post reports that a University of Mississippi AI expert starts faculty meetings on the topic by saying, “Remember what I told you last week? Forget it.”
The UROP research project grew out of Mercurio’s interest in possible bias in academic misconduct cases—specifically, whether international students were more likely to be suspected and found guilty of misconduct. She discussed the topic with Ghalili, a student in one of her classes, who grew up in China. “When ChatGPT gained traction,” she says, “we saw it as an opportunity to learn more about the potential ways new technology might be misused in academic work.”
Given the technology’s accelerating development and the UROP project’s small sample size, Mercurio says, she plans a larger study, hoping for up to 400 respondents from BU and other colleges and universities.