Brain scanners allow scientists to ‘read minds’ – could they now enable a ‘Big Brother’ future?


Are you lying? Do you have a racial bias? Is your moral compass intact?
To find out what you think or feel, we usually have to take your word for it. But questionnaires and other explicit measures to reveal what’s on your mind are imperfect: you may choose to hide your true beliefs or you may not even be aware of them.

But now there is a technology that enables us to “read the mind” with growing accuracy: functional magnetic resonance imaging (fMRI). It measures brain activity indirectly by tracking changes in blood flow – making it possible for neuroscientists to observe the brain in action. Because the technology is safe and effective, fMRI has revolutionised our understanding of the human brain. It has shed light on areas important for speech, movement, memory and many other processes.

More recently, researchers have used fMRI for more elaborate purposes. One of the most remarkable studies comes from Jack Gallant’s lab at the University of California, Berkeley. His team showed movie trailers to volunteers and managed to reconstruct the clips from the subjects’ brain activity, using a machine learning algorithm.

In this approach, the computer developed a model from the subject’s brain activity rather than being fed a pre-programmed solution by the researchers. The model improved with practice and, once it had seen enough data, it could decode new brain activity. The reconstructed clips were blurry and the experiment required extended training periods. But for the first time, brain activity was decoded well enough to reconstruct such complex stimuli in impressive detail.
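The logic of this kind of decoding can be sketched with a toy example. Below, each simulated “voxel” is assumed (purely for illustration) to respond linearly to a one-dimensional stimulus feature; the model is fitted from paired stimulus–response training data, then inverted to decode the stimulus behind new activity. This is a drastic simplification of the encoding-model approach used in real studies, and every name and number here is invented:

```python
import random

random.seed(1)

N_VOXELS, N_TRAIN, N_TEST = 8, 200, 50
# Hypothetical ground truth: each voxel responds linearly to the stimulus.
true_gain = [random.uniform(0.5, 2.0) for _ in range(N_VOXELS)]

def respond(stimulus):
    """Simulate one noisy pattern of voxel activity for a stimulus value."""
    return [g * stimulus + random.gauss(0, 0.3) for g in true_gain]

# "Training": the model sees many stimulus/activity pairs.
train_stim = [random.uniform(-1, 1) for _ in range(N_TRAIN)]
train_resp = [respond(s) for s in train_stim]

# Fit each voxel's gain by simple least squares (slope = cov / var).
mean_s = sum(train_stim) / N_TRAIN
var_s = sum((s - mean_s) ** 2 for s in train_stim)
gain = []
for i in range(N_VOXELS):
    mean_v = sum(r[i] for r in train_resp) / N_TRAIN
    cov = sum((s - mean_s) * (r[i] - mean_v)
              for s, r in zip(train_stim, train_resp))
    gain.append(cov / var_s)

def decode(response):
    """Invert each voxel's fitted model and average the estimates."""
    return sum(v / g for v, g in zip(response, gain)) / N_VOXELS

# Decode brand-new activity patterns the model never saw.
test_stim = [random.uniform(-1, 1) for _ in range(N_TEST)]
errors = [abs(decode(respond(s)) - s) for s in test_stim]
print(f"mean decoding error: {sum(errors) / N_TEST:.3f}")
```

With more training data the fitted gains approach the true ones and the decoding error shrinks, which mirrors why the real experiments needed such long scanning sessions.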

Enormous potential

So what could fMRI do in the future? This is a topic we explore in our new book, Sex, Lies, and Brain Scans: How fMRI Reveals What Really Goes on in our Minds. One exciting area is lie detection. While early studies mostly sought to identify the brain areas involved in telling a lie, more recent research has tried to use the technology as an actual lie detector.

As a subject in these studies, you would typically have to answer a series of questions. Some of your answers would be truthful, some would be lies. The computer model is told which ones are which in the beginning so it gets to know your “brain signature of lying” – the specific areas in your brain that light up when you lie, but not when you are telling the truth.

Afterwards, the model has to classify new answers as truth or lies. The typical accuracy reported in the literature is around 90%, meaning that nine out of ten times, the computer correctly classified answers as lies or truths. This is far better than traditional measures such as the polygraph, which is thought to be only about 70% accurate. Some companies have now licensed the lie detection algorithms. Their next big goal: getting fMRI-based lie detection admitted as evidence in court.
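The train-then-classify procedure described above can be illustrated with a minimal sketch. Here a nearest-centroid rule stands in for the classifier (real studies typically use more sophisticated machine learning models), and the “brain signature of lying” is simulated as a handful of voxels that, hypothetically, show raised activity during lies. All data are synthetic:

```python
import random

random.seed(0)

def simulate_trial(is_lie, n_voxels=20):
    # Hypothetical signature: lying raises activity in the first 5 voxels.
    activity = [random.gauss(0.0, 1.0) for _ in range(n_voxels)]
    if is_lie:
        for i in range(5):
            activity[i] += 2.0
    return activity

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# "Training": labelled answers teach the model each class's signature.
lie_centroid = centroid([simulate_trial(True) for _ in range(30)])
truth_centroid = centroid([simulate_trial(False) for _ in range(30)])

def classify(trial):
    # Assign the new answer to whichever learned signature it sits closer to.
    if distance(trial, lie_centroid) < distance(trial, truth_centroid):
        return "lie"
    return "truth"

# New, unlabelled answers are then classified against the signatures.
test = [(simulate_trial(True), "lie") for _ in range(20)] + \
       [(simulate_trial(False), "truth") for _ in range(20)]
accuracy = sum(classify(x) == label for x, label in test) / len(test)
print(f"accuracy: {accuracy:.2f}")
```

The toy model scores well only because the simulated signature is clean and consistent; real brain data are far noisier, which is one reason reported accuracies plateau around 90% rather than 100%.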

They have tried several times now, but judges have so far ruled that the technology is not ready for the courtroom. And 90% accuracy sounds impressive, but would we want to send somebody to prison when there is a one-in-ten chance of error? Even if the technology becomes more accurate, fMRI will never be error-proof. One particularly problematic issue is that of false memories. The scans can only reflect your beliefs, not necessarily reality. If you falsely believe that you have committed a crime, fMRI can only confirm this belief. We might be tempted to treat brain scans as hard evidence, but they are only as good as your own memories: ultimately flawed.

An fMRI scanner. Wikipedia

Still, this raises chilling questions about the possibility of a “Big Brother” future in which our innermost thoughts are routinely monitored. For now, though, fMRI cannot be used covertly. You cannot walk through an airport scanner and then be pulled into an interrogation room because your thoughts alarmed the security personnel.

Undergoing fMRI involves lying still in a big, noisy tube for long periods of time. The computer model needs to learn your characteristic brain activity before it can make any deductions. In many studies, this means subjects are scanned for hours, sometimes over several sessions. There is obviously no chance of doing this without your knowledge, let alone against your will. And if you did not want your brain activity to be read, you could simply move in the scanner: even the slightest movements can render fMRI scans useless.

Although there is no immediate danger of undercover scans, fMRI can still be used unethically, particularly in commercial settings that lack appropriate guidelines. If academic researchers want to run an fMRI study, they must go through a thorough review, explaining the potential risks and benefits to an ethics committee. No such safeguards exist in commercial settings. Companies are free to buy fMRI scanners and conduct experiments with any design they like. They could show you traumatising scenes, or uncover thoughts you wanted to keep to yourself. And if your scan reveals a medical abnormality, they are not obliged to tell you about it.

Mapping the brain in great detail enables us to observe sophisticated processes. Researchers are beginning to unravel the brain circuits involved in self-control and morality. Some may want to use this knowledge to screen for criminals or detect racial biases. But we must keep in mind that fMRI has many limitations. It is not a crystal ball. We might be able to detect an implicit racial bias in you, but that cannot predict your behaviour in the real world.

fMRI has a long way to go before we could use it to fire or incarcerate somebody. But neuroscience is a rapidly evolving field. With clever technological and analytical advances such as machine learning, fMRI might be ready for these futuristic applications sooner than we think. That is why we need a public discussion about these technologies now. Should we screen for terrorists at the airport, or hire only teachers and judges who show no evidence of racial bias? Which applications are useful and beneficial for society, and which are a step too far? It is time to make up our minds.

The Conversation

Julia Gottwald, PhD candidate in Psychiatry, University of Cambridge and Barbara Sahakian, Professor of Clinical Neuropsychology, University of Cambridge


This article was originally published on The Conversation. Read the original article.

