The Consolation of Philosophy

Last week I read Eugene Thacker’s In the Dust of This Planet, and I was euphoric for at least a few days, just going over and over the ideas and processing what it all means. I don’t know yet exactly what I think of all of it – and as soon as I have time after writing my papers these next few weeks, I’m probably going to re-read the book from beginning to end and take much more detailed notes. I was going to write a blog post about what I was thinking as I read and afterwards, but I realized I’m still thinking, so that will wait a while.

But then today I came across this piece by Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek in The Independent. They discuss the “risks” of Artificial Intelligence: “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” and emphasize that, because AI technology is inherently self-improving, “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

This leads them to a call to consider how much thought is actually going into the development of this technology:

“So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here – we’ll leave the lights on”? Probably not – but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.”

After a semester spent reading about and discussing the decentering of humans from exactly this kind of conversation, reading this piece was an interesting experience. A few quick thoughts:

I’ve been focusing recently on spotting the unquestioned assumptions inherent in choices – choices of words, choices of reading material, choices of subjects for discussion, etc. In this piece, the word “risk” jumped out at me as leaving some key points unaddressed. Talking about the “risk” of AI assumes an inherently anthropocentric view; the authors don’t feel the need to define risk as “risk to humankind” or anything like that.

Instead, the piece simply takes for granted that the only thing worth calling a “risk” is a threat to the survival of the human race.

But the thing is this: when I read Cary Wolfe’s Before the Law, one of the issues I started thinking about was that if we decenter humans and argue that we have no right to destroy another species (leaving objects out of this for now, because I want to focus on beings that have agency in a more conventional sense than object-oriented ontology or thing theory would allow), one of the things that could get lost is the right of the human to defend itself against destruction in exactly the same degree and manner as any other species.

But the balance between seeing humans as the inherent subject of any consideration of “risk” and seeing the human as parallel to other species in its right to survival is, I think, sometimes hard to strike.

When these scientists say we wouldn’t just sit back and allow it if aliens announced they were coming in a few decades, I get their point, but I think creating AI is different precisely because we’re the ones creating it. We would choose to fight the aliens; that seems obvious, given the posthumanist idea that every creature or species has the right to defend itself against attack from others, humans included.

But we can’t turn back time and erase the possibility of these aliens ever existing, nor would anyone agree that systematic genocide of these aliens before they can attempt to attack is a good idea.

How exactly is AI different? Only in that we are the ones at the point of creating it ourselves. So what does ethics say about the choice not to allow the generation of a species that will be sentient? Are we committing murder by choosing not to create these sentient beings? What right do we have to decide that our lives, our existence, are more valuable than AI’s? What if AI would be more beneficial to the world than humans could ever be? That would turn the idea of “risk” around: the risk to the world in general would be allowing humans to take control and annihilate AI before it can even come into existence.

I know this sounds extreme, and I’m not advocating that choices about the production of AI be made purely on the basis of these questions. The situations the scientists describe in the article are very real problems – AI weapons, AI technology that could demolish the economy – but these are all short-term concerns.

When I posted a shorter version of this on Facebook, a friend commented: #frankensteinproblems. So yes, these ideas have been around for quite some time. The main question for me, I think, goes back to what I said about spotting the assumptions inherent in choices – we come to any new idea, any new situation, with a set of assumptions we may not even know we have.

As humans, we find it extremely difficult to imagine a non-anthropocentric world (a problem Thacker discusses), as much as we may want to. Some posthumanist and ecocritical theorists I’ve read take this to the other extreme and (probably not purposely, I’d like to think) suggest that humans should not even defend themselves if doing so means destroying another species or entity.

But humans are the only species (that we know of, which is part of the problem) that can or will even consider this question. A non-human animal or a plant, acting on “instinct,” will defend itself when it finds itself in danger without considering the question (as far as we can tell). If we as humans agree that we should act the same way, the tension between that decision and the fact that we can “see both sides,” as it were, and weigh the damage that will be done either to ourselves or to the opposing party, problematizes the whole situation.

The crux of this particular question is that with AI, we’re not destroying anything – in fact, we’re creating something. But by choosing not to create, are we in essence destroying? And along with that question: is choosing not to create, or choosing to destroy, merely exercising the right shared by every creature and species to defend itself, or is it exercising control over other species the way humans have been doing for millennia, to the detriment of the planet? And finally, a question I don’t think has been satisfactorily answered yet (at least for me): can we, and should we, as humans, choose an option that essentially seals our own extinction if it will result in a better “world”?
