In Pacific Rim, pilots drive giant robots using the power of their minds. How far away from science reality is the science fiction?
What is it like to be a bat? That was the question posed in 1974 by the noted American philosopher Thomas Nagel. Nagel argued that if you were somehow able to transport yourself into the mind of a bat, you still wouldn’t really know what it’s like to be a bat; you would only have the experience of being a person inside the mind of one. Essentially, Nagel’s arguments rest on the fact that we don’t yet fully understand what consciousness is, and perhaps we never will.
But when it comes to technological developments, does that matter? Growing up with Transformers toys in the 1980s means that I have a certain affinity for over-the-top movies involving giant robots, so when Pacific Rim came out a few weeks ago, I couldn’t have been happier. Yet one thing about it got me thinking back to Nagel’s question: the robots in it were controlled by two pilots whose brains were linked together. So, given that the movie is set pretty much in the immediate future (around 10 years from now), how far off are we from that technology being a possibility?
Brain-Computer Interfaces, or BCIs, do exactly what the name suggests: they create a direct link between the human brain and a machine to be controlled. The connection doesn’t have to be complex – you can buy a rudimentary BCI and pretend to be a Jedi for about £90. Research into BCIs started in the 1970s, when Darpa commissioned work to look into the potential for neural interfaces in jet fighters. However, subsequent work very quickly turned to whether BCIs could help patients who had been paralysed, or had motor disabilities. For example, Darpa’s Reliable Neural-Interface Technology (RE-NET) programme uses a concept called Targeted Muscle Reinnervation, which allows amputees to control prosthetic limbs via signals from their brain. It works by reattaching nerve endings to the prosthetic limb, and then getting a computer to learn to translate motor commands from the brain into limb movements.
There’s also the possibility that prosthetic limbs can go even further and provide feedback. For example, a Nature paper from 2011 described a brain-machine-brain interface in which monkeys controlled a virtual-reality arm in a task that required the animal to search for a visual object. Whenever the arm touched a virtual object, a signal was sent back to the monkey’s sensory cortex, which produced virtual touch feedback about the object.
Most research into BCIs uses a technique called electroencephalography (or EEG) to record and monitor electrical activity from the brain, via electrodes placed on the scalp. The problem with this technique is that it’s not very precise; EEG can’t determine the depth in the brain that electrical activity is coming from, and the spatial resolution is poor (a single electrode will pick up activity from millions of neurons).
However, that doesn’t make EEG a useless technique – recently, researchers at the University of California, San Diego attempted to get around these problems by developing a collaborative BCI system. They recorded brain activity from 20 people who had to imagine a reaching movement, which was then translated into an actual movement by an artificial limb. However, instead of getting participants to do the task individually, the researchers instead combined the decisions from the entire group in order to come to a more accurate conclusion. They found that different ways of combining information from people worked to varying degrees of effectiveness, with a voting method (rather than simply averaging information) coming out on top.
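The intuition behind that result is easy to demonstrate. The toy simulation below – a sketch, not the UCSD researchers’ actual method, and with made-up numbers throughout (the per-person decoding accuracy, the true direction, and the trial count are all assumptions for illustration) – shows why pooling many noisy individual decisions by majority vote tends to beat any single decoder:

```python
import random

random.seed(42)

TRUE_DIRECTION = 1   # hypothetical "correct" reach direction: +1 or -1
ACCURACY = 0.65      # assumed single-person, single-trial decoding accuracy
GROUP_SIZE = 20      # the UCSD study pooled activity from 20 people

def single_decision():
    """One person's noisy decoded decision: right (+1) or wrong (-1)."""
    return TRUE_DIRECTION if random.random() < ACCURACY else -TRUE_DIRECTION

def majority_vote(decisions):
    """Group decision by voting: the sign of the summed individual votes."""
    return 1 if sum(decisions) > 0 else -1

def trial_correct():
    decisions = [single_decision() for _ in range(GROUP_SIZE)]
    return majority_vote(decisions) == TRUE_DIRECTION

trials = 2000
group_accuracy = sum(trial_correct() for _ in range(trials)) / trials
print(f"individual accuracy: {ACCURACY:.0%}, group accuracy: {group_accuracy:.0%}")
```

Because each person errs more or less independently, the mistakes tend to cancel out in the vote, so the group's accuracy climbs well above any individual's – the same "wisdom of the crowd" effect the collaborative BCI exploits.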
Other researchers have taken these ideas a step further. Riccardo Poli and colleagues from the University of Essex recently looked at whether it would be possible to control a simplified spaceship simulation using a BCI. In order to control the ship, the “pilot” would have to concentrate on an onscreen pointer to indicate their desired direction of travel. Electrical activity from the brain was picked up by EEG, which in turn was used to move the ship accordingly. As with the research from the UCSD group, Poli found that if two pilots were used instead of one, a clearer EEG signal could be picked up, and the movement of the spaceship became more accurate – so maybe two heads really are better than one.
It’s one thing to combine information from two brains, but a very different thing to have one brain controlling another. A recent paper in PLOS ONE attempted to do just that – by developing a brain-computer-brain interface to link a human brain to a rat’s brain. The idea was to allow the human to control the rat’s actions; by looking at a strobe light flickering on a computer display, participants signalled their intention to generate a movement in the rat. This signal in the human brain was translated into a motor command that was delivered to the rat’s brain, causing its tail to move.
All of these developments are really exciting, and hint at a world of extraordinary future possibilities for connecting minds. But sadly for the mega-robots of Pacific Rim, they are all still in their infancy, and 10 years of research is probably not going to change that. So, if a Kaiju does spring up out of an interdimensional portal underneath the Pacific Ocean in the next few years, maybe we’ll have to look at more drastic countermeasures instead.