This is some interesting stuff to think about:
The students, part of a university honors class this semester called "When Machines Decide: The Promise and Peril of Living in a Data-Driven Society," were tasked with creating a mobile app that teaches the public how a machine-learning algorithm could develop certain prejudices.
“It was created to show that when you start using algorithms for this purpose there are unintended and surprising consequences,” says Suresh Venkatasubramanian, associate professor in the U’s School of Computing who helped the students develop the app and taught the class with U honors and law professor (lecturer) Randy Dryer. “The algorithm can perceive patterns in human decision-making that are either deeply buried within ourselves or are just false.”
The reason I find it interesting is that whether you're talking about predictive coding in the eDiscovery space, social media and web-based algorithms, or the algorithms government agencies are now adopting to help with sentencing or policing, they are all going to be based on human behavior and interpretation. We've been saying for a long time that technology assisted review in the legal space is only as good as the reviewers teaching the technology what is relevant and what isn't. The same is true of these other algorithms: any kind of bias is going to get passed into them. In fact, the algorithms are probably inheriting biases that we aren't even aware of.
Maybe the best example of this is something you see in iTunes or Amazon. When you're looking at a product, or after you've purchased one, both show you a "customers also bought" section. That's the result of an algorithm, and what you see there may or may not interest you, but it's based on what other people did. For example, people who bought a Clash album may be likely to also like the Sex Pistols. That's a super simple algorithm, but one we can easily understand.
But what happens when a whole bunch of people who love the song Rock the Casbah hit the site, people who aren't Clash fans, just '80s music fans? Suddenly the algorithm gets thrown off a bit, and you may see recommendations for '80s bands that aren't all that similar to the Clash.
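To make that concrete, here's a minimal sketch of a "customers also bought" recommender based on simple co-purchase counts. All the purchase data and band names are made up for illustration; real systems are far more sophisticated, but the skew works the same way:

```python
from collections import Counter

# Hypothetical purchase baskets (each list is one customer's purchases).
purchases = [
    ["The Clash", "Sex Pistols"],
    ["The Clash", "Sex Pistols"],
    ["The Clash", "Ramones"],
]

def also_bought(baskets, item, top_n=2):
    """Rank other items by how often they appear alongside `item`."""
    co_counts = Counter()
    for basket in baskets:
        if item in basket:
            for other in basket:
                if other != item:
                    co_counts[other] += 1
    return [name for name, _ in co_counts.most_common(top_n)]

# With only punk fans in the data, the recommendations look sensible:
print(also_bought(purchases, "The Clash"))
# → ['Sex Pistols', 'Ramones']

# Now a wave of '80s-pop fans buys the album for one hit song:
purchases += [["The Clash", "Duran Duran"]] * 5
print(also_bought(purchases, "The Clash"))
# → ['Duran Duran', 'Sex Pistols']
```

The algorithm itself never changed; only the behavior of the people feeding it data did. Whatever pattern the crowd carries, including its biases, shows up directly in the output.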
Now take that basic algorithm and start thinking about how some of the odd little biases in human behavior could throw off more complicated algorithms. Instead of using technology to replace human bias, we may just wind up embedding it further, in ways that have a huge, hidden impact on how we make decisions.
Like I said, it's interesting to consider where we will end up with artificial intelligence and how many unintended consequences we may find ourselves dealing with. Will AI actually improve our decision making, or could it make it worse?