Posted April 2023

If the concept of AI still seems foreign to you, you’re not alone. Despite its recent surge in use, AI remains a complicated and polarizing topic. While some welcome its introduction into daily life for ease and convenience, others fear its ramifications for jobs and internet security. So, what do we know about AI, and can it be trusted?

Behavioral scientist Chiara Longoni, an assistant professor of marketing at Boston University’s Questrom School of Business, has studied human trust in AI sources and found that people are generally skeptical of them. Across demographics, “people tend to believe generative AI less compared to the same content written by a human reporter,” Longoni told The Brink. People are also less forgiving of mistakes made by AI and remain resistant to its use in high-stakes settings such as medicine and government. Given these reservations, Longoni believes AI would be most useful as secondary support for human judgment and opinion.

So, if people maintain an overall attitude of skepticism towards AI, why has it become so popular? While AI platforms like ChatGPT are perceived as intriguing and revolutionary, most people have yet to try them, and that novelty only builds curiosity. After all, who wouldn’t be interested in a tool that could “magically” perform tasks that would otherwise take much longer?

Of course, the “magic” of AI isn’t really “magic” at all; it’s the culmination of years of human effort spent training models and writing code. Unlike traditional software, which simply follows explicit instructions, artificial intelligence imitates human learning, drawing on large amounts of data to generate complex outputs. This process allows AI to produce original essays, conduct research, and answer tough questions. Some creative users have prompted platforms like ChatGPT to answer philosophical questions, produce “original” art, and even give life advice. While these platforms routinely output wrong answers and misinformation, or outright refuse to answer, they are still used and even relied on for schoolwork and job tasks.
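For readers curious about what “prompting” looks like behind the chat window, here is a minimal sketch of sending a question to a generative model in Python. It assumes the official `openai` package (version 1 or later) and an API key stored in the environment; the model name and prompt are placeholders for illustration, not a statement of how any particular platform is built.

```python
# A minimal sketch of prompting a generative AI model from code.
# Assumes the `openai` Python package (v1+) is installed and the
# OPENAI_API_KEY environment variable is set; the model name and
# prompt below are placeholders chosen only for illustration.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model; any available chat model works
    messages=[
        {"role": "user", "content": "In two sentences, what makes a life meaningful?"}
    ],
)

# The service returns one or more candidate replies; print the first.
print(response.choices[0].message.content)
```

Under the hood, the model is simply predicting likely text given the prompt, which is part of why its answers can sound confident even when they are wrong.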

AI’s current capabilities and future promise have made it big business. In 2021, the International Trade Administration reported that a record 65 AI companies reached valuations of over $1 billion (up 442% from the previous year).

While many see AI as an exciting technological frontier, it has already led universities to worry about its impact on student work. Mark Crovella, chair of the Faculty of Computing & Data Sciences (CDS) Academic Policy Committee, told BU Today that while AI has increased the potential for plagiarism, “the world is never going to go back to pre-ChatGPT, and we have to understand how to productively coexist with these kinds of tools.”

On the frontier of this new technology, BU has accepted AI as part of its curriculum. This semester, the Faculty of Computing & Data Sciences adopted a student-developed policy on generative AI use in the classroom as its official stance. While the policy is expected to evolve alongside AI’s capabilities, as of now, students are not allowed to use AI on assignments without permission and attribution. But BU doesn’t outright ban its use either: students are still encouraged to learn about AI platforms and nurture their curiosity.

CDS isn’t the only school responding to the growth of AI. Naomi Caselli, an assistant professor at Boston University’s Wheelock College of Education & Human Development, is using AI in her research to track ASL signs across hundreds of videos and study how language develops over time. Research at this scale once seemed impossible, but unlike humans, AI can quickly and systematically analyze massive amounts of data.

Despite its impressive capabilities, AI still has flaws. Boston University Professor of Law Ngozi Okidegbe explains that the algorithms we usually think of as impartial can actually uphold prejudiced beliefs. Since AI is human-made, its outputs can reflect the unconscious biases of the people who build it and the data it is trained on. To counteract this, Okidegbe hopes more diverse perspectives will be included in creating future algorithms and code. With that change, Okidegbe predicts, AI could bring productive change to politics and even the justice system!

Whether AI scares or excites you, it is undoubtedly paving the way for how we will work, learn, and collaborate in the future.