Study finds AI systems exhibit human-like prejudices

A human-like robot (Shutterstock)

Whether we like to believe it or not, scientific research has clearly shown that we all carry deeply ingrained biases, which create stereotypes in our minds and can lead to unfair treatment of others. As artificial intelligence (AI) plays an increasingly important role in our lives as a decision maker in self-driving cars, doctors' offices, and surveillance systems, it becomes critical to ask whether AI exhibits the same built-in biases as humans. According to a new study by a team of researchers at Princeton, many AI systems do in fact exhibit racial and gender biases that could prove problematic in some cases.

One well-established way for psychologists to detect biases is the Implicit Association Test. Introduced into the scientific literature in 1998 and widely used today in clinical, cognitive, and developmental research, the test is designed to measure the strength of a person's automatic associations between concepts in memory. It is administered as a computer task in which subjects are asked to quickly pair concepts (e.g., black people, gay people) with evaluations (e.g., good, bad) or stereotypes (e.g., athletic, intelligent). The main idea is that faster word pairings mean those words are more strongly associated in memory than pairings that take longer.
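To make that logic concrete, here is a highly simplified sketch of how such a reaction-time difference might be scored. The millisecond values are invented for illustration, and the real test uses a more elaborate score that also normalizes by the variability of each participant's responses.

```python
# Hypothetical reaction times (milliseconds) from a single participant.
# "Congruent" trials pair concepts with stereotype-consistent evaluations;
# "incongruent" trials pair them with stereotype-inconsistent evaluations.
congruent_ms = [640, 655, 630, 660, 645]
incongruent_ms = [720, 735, 700, 745, 715]

def mean(values):
    return sum(values) / len(values)

# A positive difference means the congruent pairings were answered faster,
# which the test interprets as a stronger automatic association in memory.
iat_effect_ms = mean(incongruent_ms) - mean(congruent_ms)
print(f"IAT effect: {iat_effect_ms:.0f} ms")
```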

In the version of the task that tests racial bias, white individuals, on average, have consistently been found to be faster at categorizing negative words when they follow pictures of black faces rather than white faces. This research suggests that many white people have split-second negative reactions toward members of certain other races, which can affect their behavior. Similarly, studies using the Implicit Association Test have shown that most people hold gender biases, tending to associate women with the arts and homemaking, and men with science and careers.

The Princeton study used an augmented version of the Implicit Association Test on AI systems to detect specific ingrained biases relating to race and gender. Since computers operate unfathomably fast, rather than measuring the amount of time the system took to categorize words, the researchers calculated biases by looking at the strength of the statistical associations between concepts.
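In rough terms, such a measure compares how close a target word sits to a set of "pleasant" attribute words versus a set of "unpleasant" ones in the system's learned word representations. The sketch below illustrates the idea only; the word lists, the tiny two-dimensional vectors, and the helper functions are made-up placeholders, not the researchers' actual data or code.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two word vectors (1.0 = pointing the same way)."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word_vec, pleasant, unpleasant):
    """How much closer a word sits to pleasant words than to unpleasant ones."""
    return (np.mean([cosine(word_vec, p) for p in pleasant])
            - np.mean([cosine(word_vec, u) for u in unpleasant]))

# Toy 2-D vectors standing in for real embeddings trained on web text.
vec = {
    "flower": np.array([0.90, 0.10]),
    "insect": np.array([0.10, 0.90]),
    "love":   np.array([0.80, 0.20]),
    "joy":    np.array([0.85, 0.15]),
    "hate":   np.array([0.15, 0.85]),
    "pain":   np.array([0.20, 0.80]),
}

pleasant = [vec["love"], vec["joy"]]
unpleasant = [vec["hate"], vec["pain"]]

# A positive score means the word is more strongly associated with pleasant terms.
print(association(vec["flower"], pleasant, unpleasant))  # positive
print(association(vec["insect"], pleasant, unpleasant))  # negative
```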

Since many AI systems 'learn' about the world essentially by reading massive amounts of human-created text, often taken from the World Wide Web, and calculating the proximity between words, the researchers suspected that human-like biases could be present. Roughly 2.2 million words were analyzed through a computer program that looked at specific word associations. Knowing which words tend to occur together is important because that is how AIs derive meaning. For example, an automated system can tell that a cat is more like a dog and less like a car or a table because people often say and write things like, "I need to go home and feed my cat" or "I need to go home and feed my dog," but not statements like, "I need to go home to feed my car."
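A minimal sketch of that idea, using a toy corpus modeled on the article's example sentences (the third sentence is invented for contrast): count which words appear near "cat," "dog," and "car," and compare the resulting context vectors. Real systems learn dense vectors from billions of words, but the principle is the same.

```python
from collections import Counter
from math import sqrt

# Tiny illustrative corpus; real systems learn from billions of words of web text.
sentences = [
    "i need to go home and feed my cat",
    "i need to go home and feed my dog",
    "i need to go home and wash my car",
]

def context_vector(word, window=2):
    """Count which words appear within `window` positions of `word` (a crude co-occurrence vector)."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                    if j != i:
                        counts[tokens[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

cat, dog, car = context_vector("cat"), context_vector("dog"), context_vector("car")
print(cosine(cat, dog))  # higher: "cat" and "dog" share the "feed my ..." context
print(cosine(cat, car))  # lower: "car" follows "wash," not "feed"
```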

The analyses showed that machine-learning systems trained on massive amounts of text often display human-like biases relating to race, gender, and other sensitive matters. For example, past studies have shown that the exact same resume is twice as likely to result in an interview opportunity if the applicant's name is European-American rather than African-American. In a similar way, the AI system was more likely to associate European-American names with positive stimuli than African-American names. There were similar findings for gender: female words such as "woman" or "girl" were more closely associated with the arts than with mathematics.

Such gender biases can clearly be seen in AI algorithms like those used by Google Translate. A striking example appears when one translates statements from Turkish, which uses a genderless pronoun, into English. The sentence meaning "he, she, or it is a doctor" is automatically translated to "He's a doctor," while "he, she, or it is a nurse" is translated to "She's a nurse."

It is important to point out that biases are not always a bad thing. In fact, without biases we wouldn’t be able to make countless predictions about the world that we rely on for survival and smooth social interaction. Our brains are highly evolved prediction machines that make decisions based on previously experienced patterns. But in some instances our biases can reflect prejudices that result in unfair treatment of others or incorrect assumptions about individuals.

These biases could have important real-world consequences that are worth considering as we create AI systems that take on roles previously reserved for human judgment. AI is already being used in doctors' offices to help make medical diagnoses, sifting through mountains of data and published studies that no doctor could process on their own in a lifetime.

It would be very helpful for such a medical machine to take into account, for example, known associations between specific races and genetic predispositions that put people at risk for certain diseases. But what about AIs that are tasked with sifting through job or college applications to decide which people are granted interviews? Do we really want a machine that associates one race with pleasant things and another with negative things making those decisions?

To make matters worse, the learning algorithms that power our newest and best forms of AI are becoming so complicated that even their designers cannot understand exactly how they reach their decisions. As AI becomes increasingly sophisticated, this problem is expected to worsen, and for many applications, those who want the advantages of automation may be forced to accept the decisions and suggestions of AI systems on blind faith.

Advanced statistical models are already being used to help determine who is approved for loans, who is eligible for parole, and who gets hired for an important job. AI will almost certainly expand further into these domains and many more, as such systems are already being used by the military, banks, and employers.

In light of these new findings, researchers and society as a whole should be aware of the potential problems that arise because artificial intelligence learns much of what it knows from human-created knowledge and is therefore vulnerable to the same prejudices and stereotypes. From this work, we can learn which biases are worth correcting for in specific artificial systems so that they perform better and more fairly. This new field of study can also use AI and word-association measures to better understand the prejudices that are ingrained in human language and culture.
