Boston Globe: Months after ChatGPT’s noisy debut, colleges take differing approaches to dealing with AI
Originally published in the Boston Globe | 4/30/23
By Jeremy C. Fox, Globe Correspondent

When ChatGPT was released in late November, the artificial intelligence engine created an immediate sensation. Computer scientists, journalists, and curious people everywhere began plugging in prompts and marveling at what this advanced chatbot produced in response: a recipe, an original song, even a novel.
ChatGPT’s ability to generate persuasive-sounding text on virtually any subject also quickly raised red flags, especially among educators worried about its potential use for cheating. But five months after the program’s debut, college professors are taking varied approaches to the digital elephant in the classroom.
At Boston University, students in professor Wesley J. Wildman’s Data, Society, and Ethics class developed a policy for using artificial intelligence, including ChatGPT, this semester that has since been modified and adopted by the university’s Faculty of Computing and Data Sciences. The policy states that students must disclose any use of AI, include detailed information on how it was used, and not use AI tools during exams unless explicitly permitted.
“We are trying to embrace it and teach people how to learn, teach people how to use it, how to think with it,” Wildman said.
Proponents say that, with some further tweaks, the policy could be adopted at other schools and more widely across BU, where most academic departments are still sorting out their protocols for using AI.
Thomas Mennella, an associate professor of biology at Western New England University, a private college in Springfield, tells students to treat ChatGPT like “a well-intentioned neighbor.”
“Imagine you’ve got a neighbor … and they’ll tell you about anything from getting your grass greener to how to fix your car,” he said. “But you’re not really sure where their expertise lies, so you’re going to take everything they say with a grain of salt.”
At Harvard earlier this year, professors reportedly were asked to tell undergraduates that the use of ChatGPT would violate the school’s honor code, which forbids “plagiarizing or misrepresenting the ideas or language of someone else as one’s own.”
UMass Amherst is seeking the middle ground. The college recently amended its academic conduct policy to say that “unless the instructor expressly allows the usage of one of these AIs, it’s prohibited,” according to Mohit Iyyer, an assistant professor of computer science. “But it’s not possible to enforce this.”
“From my colleagues’ perspective, it’s very ad hoc,” Iyyer said. “Some people are just going about their classes as normal. Others are expressly banning it, or allowing it for certain assignments and not for others, or allowing it for everything.”
Some academics want to block further advances in the technology, at least temporarily. A conclave of computer scientists, educators, and ethicists led by an MIT physics professor recently called for a six-month pause on further development of AI tools like ChatGPT, which was built by OpenAI, a nonprofit research lab.
In an open letter, the Future of Life Institute warned that untrammeled development of AI could lead to widespread misinformation, unwanted job automation, and the development of “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us.”
Students will use the growing array of AI technology whether it’s allowed or not, several professors said, and it is likely to permanently change the ways people study and work.
“It’s going to be at least as disruptive as the Internet, probably as disruptive as the Industrial Revolution,” said Wildman, the BU professor. “It’s going to cause a lot of problems but create a lot of new things … and it’s going to be fast — really, really fast.”
Soon students will rely on artificial intelligence to write papers and be graded on their ability to correct the AI’s mistakes, professors suggested. They may work with project partners who only exist digitally. Some will major in the emerging field of “prompt engineering” — writing text designed to elicit the most relevant responses from an AI interface.
Chris Martens, a computer scientist and associate professor at Northeastern University who uses they/them pronouns, compared the new generation of AI to the gasoline engine in its potential for changing society for both good and ill, adding that they are concerned ChatGPT’s abilities have been “dramatically overhyped and oversold.”
“When we see language that looks like words that a human would have written, it’s very easy for us to fill in the gaps and make assumptions about this program having some kind of mind,” Martens said. “When we make those assumptions, we’re going to draw incorrect conclusions, and that can lead to all kinds of problems.”
Makers of plagiarism-detection software have responded to educators’ worries about cheating with new products designed to detect AI-generated prose, but some educators say the programs aren’t very good at finding plagiarism, let alone text from a bot.
Yagev Levi, a senior computer science major at Boston University, said he knows students who are using ChatGPT to help with homework.
“Even though their professors tell them not to, there’s really no way to check it,” said Levi, 23, of Newton. “None of them have gotten caught.”
Lesley University senior Sabrina Forman said her classmates mostly use ChatGPT as a memory aid, but she thinks there is a greater chance of abuse by middle- and high-school students, who have little say in whether their coursework interests them.
“In college, you have this opportunity to kind of delve into what you want to learn — that’s what you’re paying for,” said Forman, 22, an animation and motion media major from Westwood.

The current generation of advanced chatbots works, in part, like the predictive texting on a cellphone, which suggests the most likely next word based on what you’ve already typed. By relying on such predictions, ChatGPT can wind up delivering information that sounds convincing but is spectacularly inaccurate.
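To make the analogy concrete, here is a minimal sketch in Python of next-word prediction built from simple word-pair counts. It is an illustration of the idea only: ChatGPT uses a vastly larger neural network trained on far more text, and the toy sentences and function names below are invented for this example.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the massive data sets real chatbots are trained on.
corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
)

# Count how often each word follows each other word (a simple bigram model).
follower_counts = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, like predictive texting."""
    followers = follower_counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Chain predictions to "generate" text one most-likely word at a time.
generated = ["the"]
for _ in range(5):
    nxt = predict_next(generated[-1])
    if nxt is None:
        break
    generated.append(nxt)

print(" ".join(generated))  # prints "the cat sat on the cat"
```

Run on this tiny corpus, the sketch produces “the cat sat on the cat”: each word is a statistically plausible continuation, but the whole sentence is nonsense, which is the same failure mode, at a much smaller scale, that leads large chatbots to produce convincing-sounding errors.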
Martens, of Northeastern, said students using ChatGPT for research have found that it sometimes cites studies that were never published, or even conducted.
“What this tool is designed to do is to produce text that looks like plausible correct information,” Martens said. “It’s like when you’re talking to a parrot: It can mimic human speech, but it does not understand the language that it’s producing.”
There are more reasons to worry about ChatGPT getting its facts wrong. OpenAI hasn’t released the sources of the massive data set powering the interface, and misinformation can be added by users, intentionally or otherwise.
Some educators and students have begun exploring the darker side of the technology.
In associate professor Gillian Margaret Smith’s course on the ethics of creative AI at the Worcester Polytechnic Institute, one student is exploring how text-generating software like ChatGPT and programs that create images, such as OpenAI’s Dall-E, can be used to produce propaganda.
“It takes a very small amount of time to be able to create very believable-looking propaganda posters,” Smith said. “That student is exploring … how could this be used in the hands of someone who is more skilled in this area or has deliberate malintent.”
Jeremy C. Fox can be reached at jeremy.fox@globe.com. Follow him on Twitter @jeremycfox.