How many people are sharing confidential data with shadow AI tools?
Do your users understand the risks and how to use AI safely?
Do you know what your agents are doing?
Perhaps before we rush to let an agent book our travel arrangements, we should take a moment to consider what might happen to our payment information if the agent let it fall into the hands of a scammer. Because, apparently, agents are susceptible to the same fakes that we are.
Every employee is already learning about AI because their job demands it: learning feature after feature in the tools they use to do their work, learning new systems that get rolled out every year, and coping with technological change at a ridiculous pace.
Then we make them responsible for learning how to stay secure and for dealing with every hack attempt that may come their way, too.
It’s all too much. Most of your users aren’t going to put in that kind of effort, and a yearly reminder about data security isn’t going to help them keep up with the variety of risks that are out there. It might not be worth the money you spend on it.
There are obvious data privacy risks here. Sometimes I forget that not everyone works in the legal industry, hyper-aware of confidentiality and data privacy the way you are when that is your firm’s business. But this is not one of those times: every business, including Microsoft, would be unhappy if a user synced company data into their consumer OneDrive account.
As I have said before, if your data is stored somewhere outside your control, it’s only a matter of time before it gets hacked. Your AI assistant will have a lot of private information, making it a prime target.