And this is just one of the issues in the article below:
Checkr is on the forefront of a new and potentially problematic kind of hiring, one that’s powered by still-emerging technology. Those hoping to quickly get extra work complain that Checkr and others using AI to do background checks aren’t addressing errors and mistakes on their criminal records reports. In these cases, a glitch in the system can cost someone a job.
I do believe that good AI will make a positive contribution to the world, but first the AI has to get to “good”. Right now, most AI that I’ve come into contact with isn’t good. Whether you want to talk about social media algorithms, automatic flagging of content, or public records searches, it’s way too easy to see where things have gone wrong, and where the data going in isn’t good.
Any AI system built on bad data is going to be a problem. And given the examples used in this article (misidentifying people with a common name, using nothing more than crude “keywords” to evaluate someone on social media, or pulling in random, unconnected data from records), what we have in these systems is a whole lot of bad science.
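To see why name-based matching goes wrong so easily, here is a minimal sketch. All of the names, records, and fields below are invented for illustration; this is not Checkr’s actual logic, just the general failure mode the article describes.

```python
# Hypothetical records database; every name, date, and offense is made up.
criminal_records = [
    {"name": "John Smith", "dob": "1975-03-14", "offense": "fraud"},
    {"name": "John Smith", "dob": "1990-11-02", "offense": "theft"},
]

# An applicant who happens to share a common name with both records.
applicant = {"name": "John Smith", "dob": "1988-06-21"}

# Crude approach: match on name alone. Every "John Smith" record in the
# country gets attached to this applicant, same person or not.
crude_matches = [r for r in criminal_records if r["name"] == applicant["name"]]

# More careful approach: require a second identifier, such as date of birth.
careful_matches = [
    r for r in criminal_records
    if r["name"] == applicant["name"] and r["dob"] == applicant["dob"]
]

print(len(crude_matches))    # 2 false hits attached to an innocent applicant
print(len(careful_matches))  # 0
```

The same shape of error applies to keyword scans of social media: a bare string match carries no context, so it sweeps in people and posts that have nothing to do with the person being screened.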
Bad science that could be ruining people’s lives. Yes, we have the right to question the data, but how many of us will ever know what was in the data that caused an employer to look elsewhere?