Linked – Most people worry about deepfakes – and overestimate their ability to spot them
Maybe those easy-to-spot ones are lulling us into a false sense of confidence. What will we do when something is made with more effort and better quality? Heck, plenty of people are being fooled by the poor ones already, because they show something the viewer wants to believe. None of us should assume we’d always know.
-
Linked – ChatGPT in trouble with the EU (again)
This story seems inevitable to me. As the tech world pushes us to ask an AI chatbot for information instead of looking it up ourselves, what happens when the model holds incorrect information about you? Where do you get it corrected? When OpenAI was asked to correct or remove this misinformation, they said it was ‘technically impossible’…
-
Just Once I’d Like To See a Tech Company Not Release New Toys Before Realizing the Obvious Risks
I think we all knew this would happen, right? Thousands scammed by AI voices mimicking loved ones in emergencies. The description of what happened: “Tech advancements seemingly make it easier to prey on people’s worst fears and spook victims who told the Post they felt ‘visceral horror’ hearing what sounded like direct pleas from friends…”
-
Linked: Dutch MPs in video conference with deep fake imitation of Navalny’s Chief of Staff
As I think about this, it occurs to me that a lot of the things we assume would give away a deepfake video happen all the time in ordinary Zoom or Teams calls: video that is a little slow or jerky, or that doesn’t keep up fluidly with the movement of people on screen. That could make it much harder to tell that the “person” on the call with you isn’t who you think it is. And then we’re left wondering who it actually was, and what information they walked away with by being there, pretending to be someone else.
Are we ready for that?
