Just Once I’d Like To See a Tech Company Not Release New Toys Before Realizing the Obvious Risks
I think we all knew this would happen, right?
Thousands scammed by AI voices mimicking loved ones in emergencies.
The article describes what happened:
“Tech advancements seemingly make it easier to prey on people’s worst fears and spook victims who told the Post they felt ‘visceral horror’ hearing what sounded like direct pleas from friends or family members in dire need of help. One couple sent $15,000 through a bitcoin terminal to a scammer after believing they had spoken to their son. The AI-generated voice told them that he needed legal fees after being involved in a car accident that killed a US diplomat.”
This is sad. What is sadder still is that it will be up to us to protect ourselves and our families from this kind of scam, because, like every other article about this kind of misuse, this one reminds us that there is no regulation, and no recognition that any blame might fall on the people who made these tools and released them to the public.
There may be increasing pressure on courts and regulators to rein AI in, though, as many companies seem to be releasing AI products without fully understanding the risks involved.
It shouldn’t be up to us to protect ourselves from tools created by tech companies with zero concern for how bad actors might abuse them. We’ve known about the risks and realities of deepfakes for years. We’ve watched new generative AI tools get announced and released every day for months. These tools exist. They’ve existed in some form or another for years. How can we still not know how to deal with abuse, copyright issues, scammers, and the rest?
How can tech companies continue to release products without knowing the risks? There is something seriously wrong with any industry where this is the norm, yet it is the norm in the tech world.
We deserve better. We should be demanding better.