If you’ve seen references to a recent court ruling redefining the Computer Fraud and Abuse Act, or even if you haven’t, this paragraph from the folks at McGuire Woods boils down the real-life implications pretty well.
Truthfully though, I think this story shows a few really important points:
The importance of recognizing that mistakes happen
The importance of two-factor authentication
The importance of taking action as soon as you realize the mistake
The importance of getting the technical folks involved immediately instead of hiding it
AI is not without bias, and maybe the best thing we can do is know that going in, instead of assuming that technology will solve the problem for us.
Hmm, looks like Customs and Border Protection is taking aim at cars, and all the various information stored within: According to statements by Berla’s own founder, part of the draw of vacuuming data out of cars is that so many drivers are oblivious to the fact that their cars are generating so much data in…
This is a frightening story. How many patients give a second thought to where, and how, their therapist is keeping their notes? Probably not many. But it can make a big difference, because if those notes aren’t secure, your deepest, most private conversations are out there for all the world to see.
As I think about this, it occurs to me that a lot of the things we think would give away a deepfake video are things that happen all the time in Zoom or Teams calls, right? The video being a little slow, or jerky, or not keeping up fluidly with the movement of the people on screen. So it could be harder to tell that the “person” on the call with you isn’t really who you think it is, and then we’re left wondering who it actually was, and what information they got from being there, pretending to be someone else.
Are we ready for that?