Why ChatGPT Is Both Exciting and Unsettling for Students, Faculty

BU data science class takes a first step toward crafting a strategy for dealing with the artificial intelligence model in the classroom

Professor Wesley Wildman assigned students in his Data, Society and Ethics class—the first ever—to craft a blueprint for academic use of ChatGPT and similar artificial intelligence models.
Ask it a quick question, get a quick answer. Ask it to write a complete essay, and it will do that, too.
The capabilities of ChatGPT offer both opportunity and temptation—driving students, faculty, and administrators across Boston University to talk about the artificial intelligence chatbot’s potential as both an enabler of plagiarism and an exciting research tool.
And one BU Computing & Data Sciences class is going a step further than talking.
The 47 undergraduates in Wesley Wildman’s Data, Society and Ethics (CDS DS 380) class—the course’s first-ever offering—spent the last few weeks writing a blueprint for academic use of ChatGPT and similar artificial intelligence models, called the Generative AI Assistance (GAIA) Policy. They intend to follow it and hope it will be a starting point as the University moves to deal with ChatGPT in the classroom.
“I was really proud of them,” says Wildman, a professor of philosophy, theology, and ethics and of computing and data sciences, who splits his time between the School of Theology and the Faculty of Computing & Data Sciences. “They were articulate and strong in their beliefs.”
Wildman discarded his original lesson plan two days before their second class in January and gave his students the ChatGPT assignment instead. “I was especially pleased with their ability to identify the principles that mattered,” he says. “Things like fairness, things like ‘we don’t want cheating,’ things like ‘I don’t want to have my ability to learn how to write and how to think crippled by this.’”
GPT stands for “generative pretrained transformer,” which means, in the words of the GeekCulture website, it was “trained on a massive amount of text data to generate human-like responses to a given input.”
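(A note for the technically minded: models like this are typically reached through a programming interface. Below is a minimal sketch of posing a question to a GPT-style chat model, assuming the official OpenAI Python library and an API key stored in the OPENAI_API_KEY environment variable; the model name is illustrative, and this is not a tool used in Wildman’s class.)

    from openai import OpenAI

    # The client reads the OPENAI_API_KEY environment variable by default.
    client = OpenAI()

    # Send a single user message to a GPT-style chat model and print its reply.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{
            "role": "user",
            "content": "What are the key issues surrounding use of "
                       "ChatGPT in higher education?",
        }],
    )
    print(response.choices[0].message.content)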
The 550-word policy, hashed out over several of Wildman’s class sessions, sometimes in small groups, “adopts a few commonsense limitations on an otherwise embracing approach to LLMs” (large language models, the technology underlying chatbots like ChatGPT). The policy addresses grading, the disparity between students who use ChatGPT and those who don’t, and possible positive uses of the technology.
ChatGPT “is walking a weird line, because it’s the first one that’s powerful enough that people are scared of it and that we’re talking about it as a university,” says George Trammell (CDS’24), a student in Wildman’s class. “But it’s not powerful enough that it doesn’t have skeptics, and a lot of skeptics at that.
“One of the first things I asked it was to read the first book of the Bible in Trump’s rhetoric, and it did a ridiculously good job. It was hilarious,” says Trammell. “And then after that, I asked it to explain an answer to some simple physics question to me. And it sounded very convincing and was totally wrong.
“So this is a really good conversation to have right now, because we’re on that line,” he says. “And people need to understand this software isn’t going anywhere. It’s going to get better, and it’s going to get better really fast.”
Revolutionary potential
The New York Times calls ChatGPT, created by the company OpenAI, “quite simply, the best artificial intelligence chatbot ever released to the general public.” No less an authority than Bill Gates says ChatGPT “will change our world.” It’s considered so revolutionary that Google rushed to release its own AI chatbot, called Bard, which did not go well. And a New York Times reporter testing an early version of another chatbot, attached to Microsoft’s revamped Bing search engine, found that a conversation with it got seriously weird.
In short, it appears that this is not a passing fad.
“I think what we talk about is [it’s] more like the printing press,” says Wildman. “It’s transforming the way people use objects to extend their cognitive powers beyond their own minds. We’ve become very good at doing that with all sorts of tools—calculators and so on. But the printing press changed the way we think, changed the way we taught each other. It changed everything about education. This is similar in scale.”
Since it burst onto the scene in December, ChatGPT has stirred concern about its possible use in cheating—students handing in papers that ChatGPT wrote for them. In higher education, that’s generally called plagiarism and is punished with a failing grade, suspension, or even expulsion.
“What are the key issues surrounding use of ChatGPT in higher education?” we asked the chatbot.
About 15 seconds later, it responded with a short essay: “What are the key issues surrounding use of ChatGPT in higher education? The use of AI language models like ChatGPT in higher education raises several important ethical, legal, and technical issues.” It went on to summarize—in ways that were at least technically accurate—half a dozen areas of concern, including privacy and data security, accuracy and bias, intellectual property, accessibility, and even job loss for teachers and other educators.
Curiously, in a paragraph on its role in the classroom, the word “plagiarism” went unmentioned.
It concluded: “These are just a few of the key issues surrounding the use of AI language models like ChatGPT in higher education, and it is important for institutions to carefully consider these issues and to put in place appropriate safeguards to protect students, teachers, and the wider educational community.”
Enter Wildman’s CDS DS 380 students.
Everybody’s talkin’ about it
“When I was a kid, I had never heard of an Ace Hardware store until someone thrust me in to buy something in it, and then suddenly, I heard like 10 advertisements a day for it,” says Rafael Perron (CAS’23). “I’d never heard of ChatGPT, or only very vaguely, and then suddenly, every single class and every single professor is talking about it.”
Trammell says the class split into independent groups, each drafting its own policy, and then put them all on a board to compare. “We talked about what wouldn’t work—we went through banning, and you know, obviously, that doesn’t work,” he says. “And we went through more restrictive and less restrictive policies to see what would and wouldn’t work.”
The policy they hammered out says students should credit ChatGPT whenever it’s used and add an appendix to papers and other take-home assignments to explain how and why it was used. “We should not use LLMs to help with in-class examinations, tests, or assignments, unless they are explicitly organized around an LLM,” the policy states.
“It needed to be usable for us in the class,” Wildman says. “It needed to win the consensus of the people in the class, so that we all felt we had buy-in. And it had to be responsive to the whole bunch of stakeholders, from parents to universities to employers to the students themselves.”
What the policy couldn’t do, he says, is simply ban ChatGPT and products like it, even if that were feasible. From the student point of view, this is their future. “They need to figure out how to master these tools and integrate it into our toolkit,” Wildman says.

“I plan to enter data science,” says Natalia Clark (CAS’23), “so I’m glad that the University is attempting to introduce the idea in a way where I can learn and make mistakes with it before it becomes more serious to make mistakes with it. It’s more important when I’m creating an AI model that affects real people.
“I will hold myself to the standard that I will be critical in analyzing it and understanding what it is. And hopefully professors help guide that conversation too.”
The students’ approach also includes guidelines for faculty, such as: “Treat work submitted by students who declare no use of LLMs as the baseline for grading” and “use a lower baseline for students who declare use of LLMs, depending on how extensive the usage,” while still rewarding creativity. Simply reproducing ChatGPT text would be worth zero points.
Reaction elsewhere around campus
No sooner had ChatGPT been released than anti-ChatGPT programs started emerging, intended to make it easy to detect when text has been generated by a chatbot. “That arms race is ratcheting up so fast,” Wildman says.
Discussions of ChatGPT’s impacts and potential University responses have begun around campus, including in central administration and the Council of Deans as well as among faculty. Many at BU seem to see the potential as well as the problems in the technology.
“Some Writing Program instructors have started to experiment with how to teach with ChatGPT, and some have invited students to help form class policies,” says Sarah Madsen Hardy, a master lecturer and director of the College of Arts & Sciences Writing Program. “The Writing Program has also formed a committee that will create resources and make policy recommendations for the program.
“My own take on ChatGPT is fascinated and cautiously optimistic,” Hardy says. “Writing and technology have always been intertwined, and writers, and writing teachers, will keep adapting, as we always have. It’s early days.”
Some feel the very nature of ChatGPT contradicts what they’re trying to teach.
“Our school is about human experiences, and so machine learning should not impact what we do to advocate for true ‘people interactions’ and ‘human hospitality,’” says Leora Lanz (COM’87), an associate professor of the practice and assistant dean for academic affairs at the School of Hospitality Administration. “We are drafting copy to incorporate into our syllabi—to encourage ethical use of machine learning and not risk plagiarism, which is, of course, academic misconduct.”
“I have told all three of the undergrad classes that I’m teaching this semester that if they use ChatGPT, they will likely be caught, as I can do the same general searches they can,” says Gregory Stoller, a senior lecturer in strategy and innovation at the Questrom School of Business.
Instead of worrying, however, he envisions a more positive outcome.
“I think it could be used as an alternative research tool. If you were doing company research, for example, for a case competition, I can’t think of a better way of efficiently scouring the internet to make sure you’re leaving no stone unturned.”
It’s important to look at the social and cultural context of how society responds to new technology, says Louis Chude-Sokei, George and Joyce Wein Chair in African American and Black Diaspora Studies and director of the CAS African American & Black Diaspora Studies Program.
“All new technologies generate significant cultural fears,” says Chude-Sokei, whose research is often associated with Afrofuturism. “What I do find fascinating here is the class politics of this fear. We’ve thought and been promised for generations that the real threat from intelligent machines, algorithms, or automation would be to blue collar labor and perhaps to the service industry. This fear is different and likely to be taken more seriously because it’s now seen as a threat to white collar labor and intellectual, so-called ‘higher’ cultural production. AI is the ‘immigrant’ coming for our elite jobs!”
A focal point at CDS
“ChatGPT is only the tip of the iceberg when it comes to how generative AI will impact our ways of thinking and ways of doing,” says Azer Bestavros, BU’s inaugural associate provost for computing and data sciences, a William Fairfield Warren Distinguished Professor, and a CAS computer science professor.
“The question is how are we going to ‘up our game’ in response to the increased use of these tools,” Bestavros says. “This goes to all the things we do in academia, including what we teach, how we teach it, and how we assess learning outcomes.”
Embedding ethics in the data science curriculum has been a priority for CDS from the get-go, Bestavros says. Wildman shared the class’s policy and discussed it with a small group of faculty members, he says, “all of whom indicated that they believed it was a good one for other courses to either adopt or use as a starting point.” He says the policy was also shared with members of the CDS Academic Program Committee.
“I am glad to see CDS at the forefront of this, which is only proper,” Bestavros says. “By doing this, students—the future developers of ChatGPT-like solutions—are experiencing how to engage those affected.”
CAS data science student Clark says she’s proud to be part of the solution. “I guess responsibility comes with that. And maybe it feels a little ethically burdened, to think about these issues that nobody really has answers to. But taking the first step was the most important part.”
Wesley Wildman and two other University faculty members will hold a panel discussion, Learning to Think after ChatGPT, on Thursday, February 23, at 4:30 pm at the Center for Computing & Data Sciences, Room 1750, 665 Comm Ave. Wildman and Najoung Kim, a CAS assistant professor of linguistics and of computer science, will address such questions as: What is ChatGPT? What will the future of AI text generation be like? And how can a university formulate ethical policies to incorporate the reality of ChatGPT into teaching? Mark Crovella, a CAS professor of computer science and of computing and data sciences, will moderate.
In addition, Wildman will hold a Reddit AMA on February 27 from noon to 2 pm. In this open live Q&A, he will discuss the ethics and pedagogy of AI text generation in the educational process.