But seriously, we need to stop telling people that poor writing or generic text is one of the signs of a phishing email, because that is going away fast. The tools used to create these messages are getting better and better.
At first blush, the idea of scanning images synced to iCloud against a hash list of known child sexual abuse material (CSAM) seems like a good one. As a survivor of childhood sexual abuse myself, I want tech companies to take some initiative to deal with this issue. They also want to scan images on kids’ phones using AI to see if kids are getting into trouble by sending or receiving sexual material. Again, that sounds like a good thing. But, as the EFF points out, all of this requires a backdoor, and backdoors, once created, almost never stay limited to a single purpose.
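To make the mechanism a little more concrete, here is a minimal sketch of what checking files against a known-image hash list can look like, using exact cryptographic hashes for simplicity. Apple's actual proposal relied on a perceptual hash (NeuralHash) designed to match images even after resizing or re-encoding, so this is only an illustration of the general idea; the folder name and hash values below are hypothetical.

```python
import hashlib
from pathlib import Path

# Illustrative "known bad" hash list -- in a real system this would come from
# a clearinghouse database, not hard-coded placeholder values like this one.
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def hash_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_folder(folder: Path) -> list[Path]:
    """Return files whose hashes appear on the known-hash list."""
    return [p for p in folder.rglob("*")
            if p.is_file() and hash_file(p) in KNOWN_HASHES]

if __name__ == "__main__":
    # Hypothetical folder of images queued for syncing.
    for match in scan_folder(Path("photos_to_sync")):
        print(f"Match found: {match}")
```

Even in this toy version, you can see the policy question hiding in the code: whoever controls the hash list controls what gets flagged, and nothing about the matching step itself is limited to CSAM.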
AI is not without bias, and maybe the best thing we can do is acknowledge that going in, instead of assuming technology will solve this problem for us.
As I think about this, it occurs to me that a lot of the things we assume would give away a deepfake video are things that happen all the time on Zoom or Teams calls, right? The video being a little slow, or jerky, or not keeping up fluidly with the movement of the people on screen. So it could be harder to tell that the “person” on the call with you isn’t really who you think it is, and afterward we’re left wondering who it actually was and what information they walked away with by pretending to be someone else.
Are we ready for that?