The Purpose of the Algorithm Is Obscuring Accountability: Facial Recognition and the Police
Politico has a good story about the failures of facial recognition technology in New Orleans. In a panic over the temporary, pandemic-era increase in crime, the New Orleans city council rescinded a rule that prevented the police from using facial recognition technology. Unsurprisingly, the technology has been a flop: it has been described as “wholly ineffective,” and most matches were flat-out wrong. But what is especially telling is how the department's racism is being laundered through this technology.
First, I say this is an unsurprising result because we have seen this story before. Facial recognition technology is terrible, especially when dealing with minorities. I have written before about, for example, the case of the pregnant woman arrested for a crime she clearly had no part in (the victim never mentioned that the perpetrator was eight months pregnant). I want to give some credit to the New Orleans police department here: despite the three incorrect matches, they made no false arrests like the one above. Someone in the New Orleans police department is doing actual investigative work. However, that investigative work is almost entirely directed against Black people.
The NOPD made 19 requests to use the system. Of those, 15 were valid requests. Of those 15, nine could not find a match. Of the remaining six, as mentioned, half were false positives. But all except one of the 15 valid requests were for Black people. The reason these systems perform so badly on minorities is that they are largely trained on pictures of white people. However, the NOPD made a choice to use this system almost entirely on Black people. That is not racism inherent in the flaws of the technology, but racism inherent in the police department. And the presence of the technology can be used as a shield for that same racism.
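For concreteness, here is the arithmetic behind those figures as a minimal Python sketch. The counts come from the article; the rates computed from them are my own derivation, not statistics published by the NOPD:

    # Tally of the figures reported in the Politico story.
    total_requests = 19
    valid_requests = 15
    no_match = 9
    matches = valid_requests - no_match          # 6 requests returned a match
    false_positives = matches // 2               # half of those matches were wrong
    correct_matches = matches - false_positives  # 3 usable identifications
    requests_on_black_people = 14                # all but one of the 15 valid requests

    print(f"Correct matches per valid request: {correct_matches / valid_requests:.0%}")  # 20%
    print(f"False positives among matches: {false_positives / matches:.0%}")             # 50%
    print(f"Valid requests targeting Black people: {requests_on_black_people / valid_requests:.0%}")  # 93%

In other words, the system produced a usable identification for only one in five valid requests, was wrong half the time it claimed a match at all, and was pointed at Black people 93 percent of the time.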
When confronted with these figures, the NOPD merely said, “Race and ethnicity are not a determining factor for which images and crimes are suitable for Facial Recognition review.” But because it is technology, it still has defenders. One council member stated that if it worked without abuse and solved even one crime, it would be worth adopting. This demonstrates the well-known tendency to give technology the benefit of the doubt over people. Even when faced with the ineffectiveness of the system, even when faced with reports of the racist way the system is used, the council still supports its use.
This is perhaps the most subtly dangerous aspect of these kinds of technologies. Right now, people have been trained to trust them. So when a result is racist, or when a system is used in a racist way, most people, unaware of the details, assume it is as reliable as their Google Maps directions. And that in turn insulates people like the NOPD from the consequences of their decisions. By placing a machine in the decision-making spot, real accountability is lost, or at least obfuscated.
And this is a clear benefit to the people using these systems. Clearly, the NOPD has a problem dealing fairly with minorities. People made the decision to use the facial recognition technology almost exclusively on Black people. But because people trust machines, those decisions are given a patina of respectability. It is not enough to focus on how the machines fail. It is just as important to focus on how the machines shield their users from the consequences of their choices. Otherwise, we will continue to live in a world where this kind of reporting generates a debate about whether to continue the system, rather than about the people who misuse it.