And so the ratchet keeps turning: as we get better at preventing phishing, the phishers turn to better tools:
At the Black Hat and Defcon security conferences in Las Vegas this week, a team from Singapore’s Government Technology Agency presented a recent experiment in which they sent targeted phishing emails they crafted themselves and others generated by an AI-as-a-service platform to 200 of their colleagues. Both messages contained links that were not actually malicious but simply reported back clickthrough rates to the researchers. They were surprised to find that more people clicked the links in the AI-generated messages than the human-written ones—by a significant margin.
The article goes into some detail on several interesting points, from the fact that some AI-as-a-service providers take great pains to block anyone from misusing their service, to the reality that not every provider will bother, and some may even market themselves to black-hat groups.
At which point, AI will be used to scan email for content generated by AI. It’ll just be policing itself, right? 😉
I feel like I’ve read about this happening somewhere.
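For a sense of what "AI scanning for AI" might look like in its crudest form: one weak statistical signal sometimes cited for machine-generated prose is low "burstiness," i.e. unusually uniform sentence lengths. The function below is a toy sketch of that single heuristic, purely for illustration; it is nothing like a real detector, and the function name and threshold are my own invention.

```python
import statistics

def burstiness(text: str) -> float:
    """Return the spread of sentence lengths (in words) for a piece of text.

    Hypothetical toy metric: lower values mean more uniform sentences,
    which some detectors treat as one weak hint of machine generation.
    """
    # Naive sentence split on periods; a real system would use a proper tokenizer.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Population standard deviation of sentence lengths.
    return statistics.pstdev(lengths)
```

A single heuristic like this is trivially gamed, which is rather the point of the arms-race joke: any signal a detector relies on is a signal the generator can be tuned to erase.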
But seriously, we need to stop telling people that poor writing or generic text is a sign of a phishing email, because that tell is going away fast. The tools used to create these messages are getting better and better.