Will robots take our jobs?

Three CAS experts share their ideas for what it means to be human in a world of AI

| in The Big Question

By Ana Rico (COM’25)

As artificial intelligence (AI) systems become more advanced and powerful, processing vast amounts of data, performing complex tasks, and generating new ideas, designs, and even language, new questions have emerged: What does it mean for privacy? Fairness? Transparency? Accountability? What does it mean for humanity? What does it mean for our jobs?

When we asked ChatGPT, the artificial intelligence-infused chatbot said: “Being human in the age of AI means navigating a world where technology is increasingly integrated into our lives and has the potential to reshape our society, economy, and even our understanding of what it means to be human.” 

AI systems can complement human abilities, but they cannot replace human creativity, imagination, or emotional intelligence. Being human in the age of AI means grappling with these questions, adapting to new ways of working and living alongside AI, leveraging its strengths while recognizing its limitations, and celebrating and cultivating our ability to dream, create, and imagine new possibilities.

To help us understand all of this, and to learn where artificial intelligence falls short, we asked three CAS faculty members in three different fields — economics, philosophy, and psychological and brain sciences — one question: Are robots taking over our jobs?

Pascual Restrepo is an associate professor of economics whose research focuses on the impact of technology on labor markets, employment, wages, inequality, and growth. His most recent publication, “Demographics and Automation,” studies the impact of industrial robots on US labor markets.

Juliet Floyd is the Borden Parker Bowne Professor of Philosophy and director of the Boston University Center for the Humanities. A philosopher of logic, language, mathematics, and science, she has written more than 80 articles, including one on opacity in AI and the concepts of “rigor” and the “everyday.”

Rachel Denison is an assistant professor of psychological and brain sciences who studies visual perception, attention, and decision making. Her research focuses on how the brain integrates visual information in real time to produce a coherent perceptual experience.

Pascual Restrepo, associate professor of economics

The question of whether robots and AI will take all our jobs isn’t the right one to ask. The crux of the issue isn’t merely whether we will or won’t work, but rather the circumstances that could lead to either scenario. For example, we might opt not to work because jobs are so well paying that we can afford to reduce our hours. Conversely, we might find ourselves unable to work because our skills and abilities no longer align with the demands of the future economy, leaving us without a job where we can make a significant contribution. Even worse, we could be left stuck working long hours at low-paying jobs of little economic importance.

Focusing on the sheer number of jobs or total hours worked is misguided. The real concern is whether technological advancements are paving the way toward a future where large segments of society find their skills and abilities undervalued, leaving them without access to high-paying jobs where they can provide value. In such a scenario, a small group of individuals would hold a significant stake in the economy, while the majority would have minimal input and no access to meaningful work.

This scenario is eerily reminiscent of Kurt Vonnegut’s novel Player Piano, where automation and machines have replaced most human labor, leading to a stark division between the few who maintain the machines and the majority who are left without access to meaningful work.

Are we heading toward a future with a scarcity of valuable jobs for a significant segment of society? It is hard to tell. Technology will undoubtedly eliminate some roles for humans, but it is also likely to create new ones. The nature of jobs will inevitably change, as it has throughout history. The critical question is whether these new roles will generate enough demand for workers with diverse skills, or whether they will only benefit a select few with highly sought-after skills, much like Vonnegut’s engineers.

During the initial stages of the Industrial Revolution, the transition wasn’t smooth, and it took some time for technological progress to raise everyone’s wages and create broad-based access to valuable employment opportunities. In the last 40 years, we have seen a similar trend: technological progress has automated or devalued some jobs and skills more than it has created new employment opportunities, especially for workers without a college degree. This is evidenced by the stagnant wages and decreasing employment opportunities of non-college-educated Americans since 1980.

We can’t definitively say that robots and AI will leave many without access to good, high-paying jobs. But we should not rule out this possibility either. Our best course of action requires acknowledging the potential for significant shifts in the job market and preparing accordingly, lest we find ourselves in a reality akin to Vonnegut’s Player Piano.

Juliet Floyd, Borden Parker Bowne Professor of Philosophy and director of the Boston University Center for the Humanities

Some jobs will be eliminated by robots, but it is difficult to predict exactly which ones, not simply because the rise of technology is difficult to predict, but also because we are talking about human forms of life, which evolve in a variety of ways, both socio-culturally and biologically, in response to how we feel and speak. Not everything can be automated, even if large quantities of data can be organized very efficiently (though not always in a truthful or explainable way) by AI. As Hannah Arendt held in her book The Human Condition, we need to distinguish between “work,” which may be self-developmental and growth-inducing, and “labor,” which is repetitive and boring.

Step-by-step, rule-determined tasks can be automated. ChatGPT can generate pretty good, sometimes accurate, web-scraped B-level reporting on facts, often with better grammar than some humans, because most of what we say in everyday life is predictable. However, human philosophical and ethical experience — reflection, discussion, and personal growth — cannot be automated. As Arendt put it, “vitality and liveliness can be conserved only to the extent that [humans] are willing to take the burden, the toil and trouble of life, upon themselves.” Maybe more of us should, and will, pursue more forms of work, but not as a job.

In the history of capitalism, jobs have frequently been created by technological shifts, but disruptions have made life brutally difficult for those whose expertise was displaced. Today we are all dependent upon AI and are undergoing a major shift in forms of vulnerability. Startups seemed romantic until many young workers didn’t get paid. When flights are canceled, the remarkably dense efficiency of our air transportation system crashes, saddling us with huge backups and supply chain snafus. Keeping one’s phone close at hand is now almost always a must. Inequality is a major problem, as are climate degradation and the danger of AI-designed superbugs and crowd-sourced mass shooting manuals for incels. Supermarkets, which were called “self-serve” when they first appeared, were far more efficient than old-time, everything-behind-the-counter stores. Then COVID hit, and some people again began asking someone else to pack their bags; at Amazon that someone was a robot, while the delivery person was a person.

We will need AI to save us from AI, whether we like it or not, and we will have to discuss and interpret and include ethics in its uses. There will be plenty to do. Human-to-human culture, including intergenerational differences, is crucial. GrubHub became popular during COVID: remote work made it romantic to order a meal in — and remote work has the potential to disrupt offices. But people seem to be drifting back toward the idea of going out and being served by a real person, just as some are drifting away from dating apps to human matchmakers.

Will this last? BU Emerging Media Studies PhD Kate Mays, now a postdoc at Syracuse, showed in her dissertation on emotions and robots that, across cultures, humans have certain preferences in the way robots appear: gender-neutral is the general favorite, female-looking next, and male-looking robots are the least liked. But will this be the same in the next generation? Will it be true for sex robots? How many people will prefer sex robots to humans anyway? Note that while in robotic fictosexuality a human fantasizes about a partner who will never let them down, when the software of a holographic companion is discontinued, one may be worse off.

Rachel Denison, assistant professor of psychological and brain sciences


For robots to take human jobs, they have to be able to do things that humans do. So which human tasks are easier and harder for robots, and why?

Today’s computer-controlled systems can do two kinds of tasks well — which, interestingly, lie on opposite ends of a spectrum. At one end, industrial machines excel at automation, churning out everything from cars to computer chips. Automation tasks involve repetitive, inflexible behavior in highly controlled physical environments. Factories can be built to precise specifications for robots to operate according to a fixed program. At the other end, generative AI systems excel at producing infinitely flexible abstract content in a virtual realm free of physical constraints.

In between these two extremes of current robot prowess is a large space of tasks that require flexible behavior in uncontrolled physical environments. A rundown of major industries—food, housing, healthcare, retail, transportation, tourism—reminds us how much of our lives still takes place in the messy world. A fundamental challenge of behaving effectively under such conditions is dealing with uncertainty. 

Even just figuring out what is happening in the world at any given moment requires resolving innumerable ambiguities in sensory data. Our eyes and ears give us partial information about what’s out there; our brains fill in the rest. Even though perception feels effortless to us, if you’ve ever looked at an ambiguous image like the duck-rabbit or old woman-young woman, you’ll know that our brains are doing a lot of interpretive work under the hood. Robots will have to do the same to understand what is going on in novel, changing environments. Current computer vision systems in self-driving cars still make mistakes humans never would. 

Making a guess about what’s happening is one thing; deciding what to do about what’s happening is quite another. But the challenge of handling uncertainty is at the heart of decision making, too. In the course of our jobs, we often have to make quick decisions using incomplete information. How should I handle this customer who just snapped at me? Should I perform emergency surgery on this patient? We often simply cannot get all the information we wish we had in order to decide the best course of action, and robots won’t be able to either, despite their vast access to stores of human knowledge. In many real-world decision scenarios, the most critical information is specific, contextual, and unavailable. 

Robots will likely get better and better at handling uncertainty in perception, decision making, and action. But at the end of the day — barring a wholesale robot takeover — the jobs robots do will be the jobs we let them do. For this reason, humans will have to trust robots to make good decisions in the face of all this uncertainty. One interesting possibility is that robots may be able to tell us about their own levels of confidence in their judgment calls — a process that requires metacognition, or thinking about one’s own thoughts. The better their metacognition, the more we’ll trust them. And the sooner we’ll be able to step in when they’re out of their depth.
____________________________________________________________________________________________________________

Interested in learning more? Join Arts & Sciences for the 2023 Gitner Family Lecture, “What does it mean to be human in the world of AI?” with Arts & Sciences faculty members Margarita Guillory, associate professor of religion; Russell Powell, professor of philosophy; Rachel Denison, assistant professor of psychological and brain sciences; and Pascual Restrepo, associate professor of economics.