I think this is really interesting: the more information there is about you, the more reasons someone can find either to rule against you or to dig into you further. Which leads to more information about you being available, and round and round we go.
“That is in part because algorithms are made up of biased data and often don’t consider other relevant factors. Because low-income people have more contact with government agencies (for benefits like Medicaid), a disproportionate amount of their info feeds these systems. Not only can this data fall into corporate hands, but the government itself uses it to surveil.”
For many of us, this bias works the other way: we don't really need to interact with government services, and we don't live in zip codes with high crime rates, so when an algorithm makes a decision about us for employment or housing, everything looks fine.
But if you're poor and need Medicaid or other assistance, suddenly these things all work against you. There's a lot more information about you, and the more there is, the easier it is to find something, anything, to raise a red flag. Oh, you lived in an apartment building in a known high-crime area, or you were a victim of domestic violence? That's a little high-risk. There was an investigation into some healthcare claim? That's suspicious, better flag that too.
And then do some more research, generating more data. Some of which might be misleading, or totally incorrect if you’re really unlucky.
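The loop described above can be sketched as a toy simulation. This is entirely my own illustration, not anything from the quoted article: the 5% per-record "looks suspicious" rate, the flag threshold, and the number of records an investigation generates are all made-up parameters, chosen only to show the shape of the feedback.

```python
# Toy sketch of the data feedback loop (illustrative assumptions only):
# every record is one more chance for a risk model to flag you, and each
# flag triggers an investigation that generates yet more records.

def flag_probability(num_records, per_record_risk=0.05):
    """Chance that at least one record trips a flag, assuming each
    record independently looks 'suspicious' 5% of the time (made up)."""
    return 1 - (1 - per_record_risk) ** num_records

def simulate(initial_records, rounds=5, records_per_investigation=3):
    """Run a few review cycles. If the flag probability crosses an
    arbitrary 0.5 threshold, an investigation adds more records."""
    records = initial_records
    history = []
    for _ in range(rounds):
        p = flag_probability(records)
        history.append((records, round(p, 2)))
        if p > 0.5:  # flagged -> investigation -> more data about you
            records += records_per_investigation
    return history

# Someone with little government contact stays below the threshold;
# someone on assistance, who starts with many records, snowballs.
print(simulate(initial_records=3))
print(simulate(initial_records=20))
```

The point of the sketch is that nobody has to be biased on purpose: with enough records, the probability that *something* looks flaggable approaches certainty, and each investigation feeds the next one.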
But, the algorithm is supposed to eliminate bias, right?