I have been concerned for a while now about the idea of trusting “crowd-sourced” data to influence how decisions are made with technology. Mostly, my concern has been with social media and “reporting” tools, which seem very easy to manipulate. For example, fake-news reports or harassment reports are an all-too-easy way to simply punish information you don’t like. (If thousands of my followers all report a news story as “fake,” we could, in fact, make it disappear from social media, right?)
This one seems a little too obvious, but I can see how it would work.
Sometimes, a system as sophisticated as Google Maps can be tricked in the simplest of ways.
This was recently proved by artist Simon Weckert, who hauled 99 smartphones around in a handcart, generating a traffic jam on Google Maps, which falsely registered the cluster of phones as a bunch of cars stuck in the same location.
This seems silly, but why wouldn’t it work? If Google Maps uses cell-phone GPS as a proxy measurement for how much traffic is in a given location, then showing up with a whole bunch of phones is going to look like a traffic jam. In the same way, using reports from users as a proxy measurement for fake or inappropriate content will make anything look fake or inappropriate if enough users report it.
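To make the failure mode concrete, here is a toy sketch (not Google’s actual algorithm, and the threshold values are invented for illustration) of a system that infers congestion purely from how many GPS signals report from a road segment. A handcart of 99 phones is indistinguishable from 99 cars:

```python
def congestion_level(phone_count, threshold=30):
    """Classify a road segment by the raw number of phones reporting
    from it. The counts are a proxy for cars; nothing here checks
    whether the phones are actually in separate vehicles."""
    if phone_count >= threshold:
        return "heavy"
    if phone_count >= threshold // 2:
        return "moderate"
    return "light"

# Normal traffic: a handful of cars, each carrying one phone.
print(congestion_level(8))    # light

# One person pulling a handcart with 99 phones looks identical
# to 99 cars crawling along the same street.
print(congestion_level(99))   # heavy
```

The point of the sketch is that the proxy collapses two very different realities (many cars vs. one cart of phones) into the same measurement.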
This is important, though, as we start to build out smart cities. Cities that constantly adjust things like traffic-light patterns and power grids need to be careful about which proxy measurements they rely on, and how those measurements can be influenced. Cell-phone signals are a decent proxy for the number of people in an area, for example, unless people go out of their way to fool them. Protesters who show up without cell phones create a problem. One person walking around with 99 phones creates a problem. A glitch in the Matrix, if you will. If police presence is based on the number of cell phones in a given area, could I gather a bunch of phones in one spot, or draw a crowd into a video-monitored area, to create a diversion away from something going on in a now under-protected area? If I understand how a smart city makes its decisions, manipulation can become relatively easy.
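The diversion scenario can be sketched the same way. Below is a hypothetical dispatcher (the zone names, patrol pool, and proportional rule are all invented for illustration) that splits a fixed number of patrols across zones in proportion to cell-phone counts. Planting phones in one zone drains coverage from the other:

```python
def allocate_patrols(phone_counts, total_patrols=10):
    """Split a fixed pool of patrols across zones in proportion to
    the number of phones detected in each zone."""
    total = sum(phone_counts.values())
    return {zone: round(total_patrols * n / total)
            for zone, n in phone_counts.items()}

# Baseline: two zones with similar activity get similar coverage.
print(allocate_patrols({"north": 100, "south": 100}))
# {'north': 5, 'south': 5}

# An attacker plants 300 extra phones in the north zone. Coverage
# shifts north, leaving the south, where the real activity is
# planned, under-protected.
print(allocate_patrols({"north": 400, "south": 100}))
# {'north': 8, 'south': 2}
```

Any allocation rule driven by an unauthenticated proxy inherits this weakness; the fix is cross-checking the proxy against independent signals, not a cleverer formula.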
Will the AI that runs these cities be smart enough to adjust for attempts at manipulation? Or will this just be another area where we will see a constant cat and mouse game between the AI and bad actors, much like we see with hacking attempts online?
I’d be willing to bet it will be an ongoing battle, but the potential damage will be much higher. This isn’t just online data; these are systems that interact with our real lives every day, systems we will become more and more dependent on going forward. If the AI isn’t up to snuff, there will be tragic consequences.