This seems like it should have been an obvious shortcoming, and yet it also seems like no one thought about it:
“Risk assessments are pitched as ‘race-neutral,’ replacing human judgment—subjective, fraught with implicit bias—with objective, scientific criteria. Trouble is, the most accurate tools draw from existing criminal justice data: what happened to large numbers of actual people who were arrested in any particular location. And the experience of actual people in the criminal justice [system] is fraught with racial disparities and implicit bias.”
It does highlight one of the core issues with predictive algorithms: for them to learn to estimate likely outcomes, we have to feed them data, but that data may itself be the product of human error and bias, so we end up baking those flaws into the system.
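The feedback loop is easy to demonstrate with a toy simulation. The sketch below is entirely hypothetical (the neighborhoods, rates, and patrol intensities are invented for illustration, not drawn from any real dataset): two groups offend at exactly the same rate, but one was historically policed more heavily, so a "data-driven" risk score trained on arrest records assigns that group roughly double the risk.

```python
import random

random.seed(0)

# Hypothetical setup: two neighborhoods with the SAME true offense rate,
# but neighborhood "A" was historically patrolled twice as heavily, so
# offenses there were twice as likely to result in an arrest record.
TRUE_OFFENSE_RATE = 0.10
PATROL_INTENSITY = {"A": 2.0, "B": 1.0}  # relative chance an offense is observed

def simulate_records(n_per_group=10_000):
    """Generate (group, arrested) records shaped by enforcement, not behavior."""
    records = []
    for group, intensity in PATROL_INTENSITY.items():
        for _ in range(n_per_group):
            offended = random.random() < TRUE_OFFENSE_RATE
            # An arrest requires both an offense and a patrol observing it.
            arrested = offended and random.random() < 0.4 * intensity
            records.append((group, arrested))
    return records

def naive_risk_score(records):
    """A 'race-neutral' tool: risk = historical arrest rate per group.

    It faithfully learns the enforcement disparity and reports it back
    as if it were a difference in underlying behavior.
    """
    totals, arrests = {}, {}
    for group, arrested in records:
        totals[group] = totals.get(group, 0) + 1
        arrests[group] = arrests.get(group, 0) + int(arrested)
    return {g: arrests[g] / totals[g] for g in totals}

scores = naive_risk_score(simulate_records())
# Both groups offend at the same 10% rate, yet group A scores about
# twice as "risky" as group B, purely because of patrol intensity.
print(scores)
```

Nothing in the scoring function mentions the group's identity as a factor; the disparity arrives entirely through the arrest data, which is exactly the point the quoted passage makes.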
How do we correct for that, though, without introducing more bias and error?