That means it’s time to take a look back at one of the primary criticisms of this police practice: racial profiling.
The American Civil Liberties Union defines racial profiling as “the discriminatory practice by law enforcement officials of targeting individuals for suspicion of crime based on the individual’s race, ethnicity, religion or national origin.” This includes police using race to determine which drivers to stop for routine traffic violations or which pedestrians to search for illegal contraband.
The inevitable question is what percentage of police stops, statistically, should involve minorities. But the default methods for deciding whether a department is engaged in racial profiling are not statistically sound. We are working with the Bureau of Research and Analysis at the St. Louis County Police Department to create a stronger metric.
In general, there are two types of tests used to identify patterns of racial profiling.
The first, “benchmarking,” simply involves comparing the percentage of stops for people of a specific race with the percentage of that minority in that geographic area.
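The benchmarking calculation itself is simple arithmetic. A minimal sketch, using entirely hypothetical population and stop counts:

```python
# Benchmarking: compare each group's share of police stops with
# that group's share of the local census population.
# All numbers below are hypothetical, for illustration only.

census_population = {"white": 8_780, "black": 1_220}  # residents
stops = {"white": 950, "black": 310}                  # traffic stops

total_pop = sum(census_population.values())
total_stops = sum(stops.values())

for group in census_population:
    pop_share = census_population[group] / total_pop
    stop_share = stops[group] / total_stops
    # A ratio above 1 means the group is stopped more often
    # than its share of the census population would predict.
    disparity = stop_share / pop_share
    print(f"{group}: {pop_share:.1%} of population, "
          f"{stop_share:.1%} of stops, ratio {disparity:.2f}")
```

The ratio is the entire test: benchmarking flags a disparity whenever stop share meaningfully exceeds population share, which is exactly why the choice of population baseline matters so much.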
Benchmarking was used in an often-cited 1999 report by the New York attorney general on the New York City Police Department’s stop-and-frisk practices. Officers were patrolling in and around private residential buildings and stopping individuals they believed were trespassing. In 1999, blacks made up 25.6 percent of the city’s population, yet accounted for 50.6 percent of all persons stopped. In a 2013 federal court case, the judge ruled that stop and frisk had been used in an unconstitutional manner.
However, in benchmarking, the numbers are based on census data, which can give a highly misleading view. For example, take Town and Country, Missouri, a city with only a 12.2 percent nonwhite population. More than 20 percent of last year’s traffic stops involved minorities. However, Town and Country has two major interstates running through it. How are the tens of thousands of motorists driving on those interstates captured in the benchmark?
Census data doesn’t account for nonresidents. Across all of the St. Louis County Police Department patrol areas, only 44.6 percent of drivers stopped by police actually lived in St. Louis County. This alone suggests that census data is not a viable baseline for detecting racial profiling.
What’s more, officers are often ordered to patrol “high crime” areas. Statistically speaking, these are predominantly minority areas. So, inevitably, there will be more stops in those designated high-crime areas. Because stop data is usually aggregated at the city, county or precinct level, the demographics of these high-crime areas are obscured.
Another type of test looks at stop-and-frisk’s “hit rate” – that is, the percentage of searches that actually lead to the discovery of weapons, drugs or other contraband.
In some states, such as North Carolina, a higher percentage of one minority group was searched, yet officers were actually less likely to find illegal contraband in those searches. This disparity was presented as evidence of racial profiling.
An issue here is that most hit-rate calculations include all searches, regardless of type, such as searches conducted after an arrest on an outstanding warrant. Because these routine-processing searches are counted, the final hit rate may be misleading.
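A hit-rate comparison can be sketched in a few lines. The records and the “discretionary” label below are hypothetical; the point is that restricting the calculation to discretionary searches keeps routine incident-to-arrest searches from distorting the rate:

```python
# Hit rate: the fraction of searches that turn up contraband.
# Filtering to discretionary searches excludes routine searches
# done after an arrest, e.g. on an outstanding warrant.
# All records below are hypothetical.

searches = [
    {"race": "white", "type": "discretionary", "contraband": True},
    {"race": "white", "type": "discretionary", "contraband": False},
    {"race": "black", "type": "discretionary", "contraband": True},
    {"race": "black", "type": "discretionary", "contraband": False},
    {"race": "black", "type": "discretionary", "contraband": False},
    {"race": "black", "type": "incident_to_arrest", "contraband": True},
]

def hit_rate(records, race):
    """Share of discretionary searches of this group that found contraband."""
    relevant = [r for r in records
                if r["race"] == race and r["type"] == "discretionary"]
    if not relevant:
        return float("nan")
    return sum(r["contraband"] for r in relevant) / len(relevant)
```

With these made-up records, the white hit rate is higher than the black hit rate even though more black drivers were searched, which is the pattern hit-rate tests treat as a red flag.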
In 2016, researchers at Stanford published a new type of test that analyzes four variables: the race of the driver, the department of the officer making the stop, whether the stop resulted in a search and whether illegal contraband was found. This metric is designed to give a “snapshot of the officer’s threshold of suspicion before searching a person of a given race.”
However, as the authors themselves note, there is no way to conclude definitively that the disparities revealed by this metric stem from racial bias. What’s more, Stanford’s metric is too complicated for every precinct in the U.S. to use, given the detailed data and complex analysis it requires.
A proposed metric
Given the drawbacks of current methods, the U.S. needs a new way to detect racial profiling among individual police officers. We suggest something simple, understandable and easily applied across the country: a method called intrapopulation comparison.
Say one precinct has 100 police officers. Some officers stop fewer minorities, some stop more, while most officers are somewhere in the middle. Each officer is assigned a score, showing how far he or she individually deviates from the average. If the officer deviates too far, he or she is flagged and that case is looked at more carefully.
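The scoring step above can be sketched as a standard deviation-score calculation. The officer stop counts and the two-standard-deviation flagging threshold are hypothetical assumptions for illustration, not department policy:

```python
import statistics

# Intrapopulation comparison: score each officer by how far their
# minority-stop rate deviates from the precinct average, measured in
# standard deviations, and flag large deviations for closer review.
# Stop counts are hypothetical: officer -> (minority stops, total stops).
officers = {
    "A": (30, 100), "B": (25, 100), "C": (28, 100),
    "D": (70, 100), "E": (27, 100), "F": (26, 100),
    "G": (29, 100), "H": (24, 100),
}

rates = {o: m / t for o, (m, t) in officers.items()}
mean = statistics.mean(rates.values())
stdev = statistics.stdev(rates.values())

THRESHOLD = 2.0  # assumed cutoff: flag beyond 2 standard deviations

# Deviation score (z-score) per officer; keep only flagged officers.
flagged = {o: (r - mean) / stdev for o, r in rates.items()
           if abs(r - mean) / stdev > THRESHOLD}
```

Here officer "D", whose minority-stop rate is far above the peer average, is the only one flagged; a flag is not a verdict, only a signal that the case should be examined more carefully.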
This concept was first introduced in the early 2000s. Why aren’t more precincts using this method? Most likely the same reason most practices stay in place past their prime: habit. We’re currently collecting data and studying how this metric might work for the St. Louis County Police Department.
Intrapopulation comparison allows us to flag individual officers, while addressing the issues that come with benchmarks or hit rates, like commuters and census data. The officers are compared with other officers in similar situations. The basis for identifying an officer in this system is that he or she is statistically different from the peer group.
A glaring issue with this approach is that an entire precinct could be racially biased, in which case comparing officers only with their peers would understate the problem. Even then, however, the most extreme officers will still stand out as major outliers.
Racial profiling is a critical issue for law enforcement and the nation. Police departments have to demonstrate that they serve citizens in an impartial manner. We believe that this metric is simple and understandable, and it serves as an early warning system that will get closer to the root of the problem – individual officers who racially profile.