What science fiction tells us about our trouble with artificial intelligence
Given that the reality of AI may be fast approaching, it’s of the utmost importance that we work out what a future with artificial intelligence might look like. Last year, an open letter with signatories including Stephen Hawking and Nick Bostrom called for AI to be of demonstrable benefit to humanity, or risk producing something that exceeds our ability to control it.
AI, as conceived of in popular culture, does not yet exist, even if autonomous and expert systems do. Smartphones might not be supercomputers, but they are called “smartphones” for good reason, given how their operating systems function. Equally, we are happy to talk about a computer game’s “AI”, but gamers quickly learn to take advantage of its limitations and its inability to “think” creatively. There is an important difference between these systems and what is termed Artificial General Intelligence (AGI) or “strong AI”: an AI with the general intelligence and aptitudes of a human.
Both the US and British governments’ exploration of the significance and implications of AI research has focused on potential economic and social impacts. But politicians would do well to consider what science fiction can tell them about public attitudes – arguably one of the biggest issues concerning AI.
Culturally, our understanding of AI is informed by the ways in which it is represented in science fiction, and there is an assumption that AI always means AGI, which it does not. Fictional representations of AI reveal far more about our attitudes to the technology than they do about its reality (even if we sometimes seem to forget this). Science fiction can therefore be a valuable resource from which the public view of AI can be assessed – and, if need be, corrected.
I, Robot: the greater good
In Alex Proyas’s adaptation of Isaac Asimov’s stories, I, Robot (2004), there is a heart-to-heart scene in which we learn the reason for a detective’s mistrust of robots. He recounts a car crash in which two cars end up in a river, and a robot determined that it was better to save the detective than the child because the detective had a higher percentage chance of survival. The scene serves to demonstrate the inhumanity of AI and the humanity of the detective, who would have opted to save the child. This scene, for all its Hollywood gloss, is indicative of the core ethical issues concerned with AI research: it denigrates AI as not being “moral” but merely a pattern of encoded behaviours.
But is the robot in this situation actually wrong? Isn’t it better to save one life than to lose two? In emergency triage, such calculations are seen not as “inhuman” but as necessary. “Greater good” arguments have been going on for centuries and, in this situation, which is the “greater” good – saving the detective or the child – is debatable, especially as the detective later saves humanity from the ravages of VIKI, an AI gone rogue.
The context in which this decision is made – the parameters through which the robot reached its percentage conclusion – could also factor in any number of concerns, albeit limited to those programmed into it. And if saving the child is a fundamentally emotional approach, is the emotional response the correct one?
One of the problems we face as a society engaging with an AI-future is that machine intelligences might actually demonstrate the contingency of our own moral codes, when we want to believe them to be universally applicable. Is the problem not that the robot was wrong, but that in fact it might be right?
Interacting with AI
The ways in which AI has been represented lead to pretty much the same conclusion: any AI is inhuman(e) and therefore dangerous. Just as VIKI in I, Robot turns against humanity when she finds another “logical” interpretation of Asimov’s three laws (designed to protect humans), there is a plethora of stories and films in which AIs take over the world (Daniel H Wilson’s Robopocalypse and Robogenesis, the Matrix and Terminator franchises). There are many more about how they are insidious and will directly control humanity, or enable factions to take more complete control of society (Daniel Suarez’s Kill Decision, Neal Asher’s Polity stories, the TV series Person of Interest).
But there are relatively few about how they might cooperate with humanity (Asimov got here early and remains one of the few, although Ann Leckie’s Ancillary trilogy is also of interest). The hypocrisy is that this trend suggests it’s fine for governments to monitor their citizens and for corporations to analyse social media feeds (even using software bots), but an AI shouldn’t. It’s like saying that you’re happy being screwed over, but only by a political system or another mammal, not a computer.
One solution, therefore, is to consider how to limit AIs and teach them human ethics. But if we “train” AIs to behave ethically, who do we trust to train them, and to whose ethical standards? Given the recent issues Microsoft had with Tay (members of the public tried to “trick” the AI into making potentially offensive statements), it is clear that if an AI learns from humanity, what it learns might be precisely that we’re not worth the time it takes to tweet back to us. We don’t trust robots to think for themselves, we don’t trust ourselves to program or use them ethically, and we can’t trust ourselves to teach them. What’s an AI to do?
Public perceptions of AI will be governed by just this sort of mistrust and suspicion, fostered by such public debacles and by the broadly negative view evident in much science fiction. But what such examples perhaps reveal is that the problem with AI is not that it is “artificial”, nor that it is immoral, nor even its economic or social impact. Perhaps the problem is us.