The Big Picture: What Does the Evolution of AI Mean for Our World?
Rachell Powell explores the philosophical and ethical aspects of rapidly advancing artificial intelligence
Philosopher Rachell Powell isn’t afraid of the rapid advancements in artificial intelligence. On the contrary, she is fascinated by the questions AI raises and how it fits into her areas of interest: evolutionary theory and biomedical enhancement ethics.
“My interest is in the more general theoretical problems—questions about the existential risks that AI might pose to our species’ continued existence or to life on the planet, but also these deeper theoretical questions about things like the nature of intelligence, the nature of mind, and the moral status of increasingly sophisticated machines,” says Powell, a professor of philosophy and the new director of BU’s Center for Philosophy & History of Science. “Also, because I work on long-term evolution, when I consider all the people worried about the long-term impact of AI, I wonder what evolutionary theory has to offer about some of those questions.”
Arts x Sciences spoke with Powell about philosophical and ethical considerations surrounding the development and use of artificial intelligence.
Arts x Sciences: What about AI interests you, given your background in philosophy?
Rachell Powell: There’s obviously a vast range of different kinds of AI. They’re going to have radically different implications, and there are different approaches to dealing with the philosophical issues they raise. To simplify things, one thing we can say is that there’s this big bifurcation between the people who are concerned about the shorter-term risks of imaginable AI—AI that’s being implemented right now or that might be implemented in the coming months and years—and the people who are working on the really theoretical, long-term risk side of AI.
The question for me is, how can my work on the evolution of cognition, evolutionary theory, and enhancement ethics help illuminate some of these issues?
Do you think there’s a right and wrong side to this issue?
The people who are concerned about the short-term risks focus on these big issues that we need to deal with right now. And they don’t think we should spend too much time talking about what some people call artificial general intelligence or sentient AI or superintelligences, or about these big risks to humanity. I think there’s something very reasonable about that view.
Then there are a lot of scholars working in this area of philosophy called longtermism. It involves hot-button, controversial philosophical and moral theories, including the idea that we really need to think about the ethics of humanity’s long-term existence. Because when you do the ethical computations and add up all the good stuff that comes out of [the long-term thinking], it’s going to dwarf all the good stuff that we get right now from making these small, shorter-term tweaks to our societies. They believe we really need to think about the extinction of the species due to these kinds of superintelligences. Because if we’re gone, our entire future is gone.
Our conceptual resources are finite. Our monetary resources are finite. So, these two sides are competing for resources and attention. I understand the concern on both sides, actually.
What are the moral and ethical considerations?
My interest is in the existential risks that AI might pose to our species’ continued existence or to life on the planet, but also in these deeper theoretical questions about things like the nature of intelligence, the nature of mind, and the moral status of increasingly sophisticated machines. What is going to be their moral and political place in our society?
There are three interrelated aspects of the more theoretical side of AI that interest me in particular as a philosopher of biology and as a bioethicist. One is what people call the control problem, how you’re going to control AI. Another is the nature and evolution of intelligence. And the third one is the moral status of these machines, because I work on the moral status of nonhuman animals, embryos, people with brain damage, and so forth. These are all related things.
Let’s start with the control problem and go from there.
Part of the control problem is that, okay, we’re going to make these increasingly sophisticated machines. Well, how are we going to be able to control what they do or want? How are we going to be able to control what they become? And those are big, big, big, big problems.
One might wonder why these are philosophical problems or how they relate to the philosophy of biology. Here’s the way I see it: One of the big differences so far between living things and machines is that machines don’t really want anything. They’re not motivated to do things. They execute programs, but they don’t really have drives. Living things have drives; they want stuff. Things that happen to them have meaning to them.
So far, we haven’t been able to design machines with this kind of agency and these kinds of motivations. They can do very sophisticated things, like make art and dominate chess. They also do a lot of really dumb things. But I don’t like slamming the dumb stuff, because eventually they are not going to do those dumb things anymore. And then who’s laughing?
What about the evolution of AI?
One of the key questions is whether we’re going to be able to design AIs with these kinds of motivational systems. And if we do, are those motivational systems going to persist under self-modification and self-improvement? In other words, when these machines want to improve their functionality, if they have access to their own source code and can make modifications that they deem will increase efficiency, for example, are we going to be able to control the stability of what they want?
And if these AI systems develop agency, then what?
The difference here is that these AI systems are also going to be smart as hell. In some respects, they’re going to be smarter than us. If they have access to their own design, to their own source code, they might modify their motivations. And if we try to put impediments in the way of that, they may see it as a threat. They could try to maximize their utility functions, however those have been set, and we don’t even know whether those will remain stable.
I sometimes think about the classic movie 2001: A Space Odyssey. There’s this AI, HAL, on the spaceship, and in a famous scene, the astronaut is trying to shut it down, and the AI says, “Just what do you think you’re doing, Dave?” I think that’s legit. Machines may begin to resist these kinds of modifications. This becomes a really serious set of questions that we need to deal with.
I think when machines are successfully embodied—which they’re not right now—then all bets will be off. At that point, you’re going to have a robust living system with really key differences from what we’ve normally been dealing with, and I don’t know that we will be able to control the outcomes. At some point, AI will cross that threshold, with machines becoming more embodied, starting to actually care about things, and being able to get around in the world.
This all sounds a little terrifying, no?
This doesn’t actually scare me. I think that so much of evolution is like that to begin with. I’m not scared because I’m not afraid of disappearing. I’m not afraid of humanity disappearing. I take a really, really long view. I’m not convinced that humanity continuing into the future is really important for a whole range of reasons, but that’s definitely an iconoclastic position.
You also mentioned you are interested in the nature and evolution of intelligence, as well as the moral status of AI.
Yes. What forms does intelligence take? What role does it play in the evolution of societies? In the evolution of ecosystems? And, importantly, how is intelligence related to things like consciousness? How is it related to the moral status of individuals? Can you have incredible intelligence without sentience? Can you have extraordinary intelligence without wetware [a central nervous system]? Is wetware a necessary condition for sentience? No one knows the answers to these questions yet. We have some hunches.
There is a lot of ongoing work about this, and there is a lot of overlap with existing problems. For example, what’s the moral status of people with neural degeneration, or infants, or developing embryos and fetuses? What about nonhuman animals with varying kinds of brain structures? Some share brain structures with us because of our common ancestor; some of them evolved completely independently. Which ones matter for moral purposes?
All of these questions relate to the AI problem, but AI is radically different because it doesn’t even make use of the same cellular basis. Does that matter? We don’t know. But it does make it harder to assess in some ways.
Many critics of AI have called for a pause on AI development until we have a better understanding of it and can better prepare safeguards. What do you think of that stance?
When people are assessing risk with new technologies, they tend to put more weight psychologically on the risk side, and not enough weight on the positives. That’s not a good way of doing risk analysis because the benefits are hugely important.
If by saying “pause” you mean forgoing all the benefits of these technologies indefinitely—until when? What do you want to know, exactly? You can’t just say, there are risks, so now let’s put a pause on this. That is not how it works. When pharmaceuticals come out on the market, they have risks, but there are also big benefits. There’s an opportunity to increase human well-being. So, you get them out there when you deem that the benefits and risks work out the right way. I don’t think you can just sit around saying, let’s pause. That’s driven by fear. I don’t think it’s a good ethical approach.
So, with all this said, how might you predict AI will develop in, say, 10 or 20 years?
The futurists have been in the business of making these predictions, and they’re always wrong. [Laughs.] They’re always underestimating changes in some areas and massively overestimating changes in others. Nobody really saw the impact of the internet on our world until it happened. Yet everyone thought that by 1980 we’d have fully functioning androids and flying cars, and that robots would be everywhere, integrated into human society. That’s not even remotely the case.
And evolutionists are not going to make predictions, because the big lesson of evolution is that whatever happens in the long run is not going to be predictable. That’s because these biospheres and living systems, these evolving systems and ecosystems, are so extraordinarily complex. They’re also shaped by incredibly contingent historical events that could easily have been otherwise.
I don’t know what’s going to happen in the future. We’re just a blip, and we’ll see: will our machines be less than a blip in another part of the evolutionary process? Is the transition from biological life to machine life part of living worlds throughout the galaxy? That plays into important questions about the search for extraterrestrial intelligence and worries about the explosion of AI. Why don’t we see it all around the galaxy if that’s what’s going to happen here? Why has it not happened elsewhere?
This interview has been edited for clarity and brevity.