How many people are sharing confidential data with shadow AI tools?
Do your users understand the risks and how to use AI safely?
For more like this, subscribe to the newsletter and get these links and more in your email.
There are obvious data privacy risks here. Sometimes I forget that not everyone works in the legal industry, where you are hyper-aware of confidentiality and data privacy because that is your firm's business. This is not one of those times, though: every business, including Microsoft, would be unhappy if a user synced company data into a consumer OneDrive account.
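If you want to close that particular door on managed Windows machines, the OneDrive sync client honors Microsoft's documented DisablePersonalSync policy value, which blocks sign-in to personal accounts. Here is a minimal Python sketch that sets it directly in the registry; treat it as an illustration, since in practice you would push the equivalent setting through Group Policy or Intune, and the script must run elevated.

```python
# Sketch: block the OneDrive sync client from signing in to personal
# (consumer) accounts on this machine by setting Microsoft's documented
# DisablePersonalSync policy value. Windows-only; requires admin rights.
import winreg

POLICY_KEY = r"SOFTWARE\Policies\Microsoft\OneDrive"

def block_personal_onedrive_sync() -> None:
    # Create (or open) the OneDrive policy key under HKLM, then set the
    # DisablePersonalSync DWORD to 1 so users can't sync consumer OneDrive.
    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
        winreg.SetValueEx(key, "DisablePersonalSync", 0, winreg.REG_DWORD, 1)

if __name__ == "__main__":
    block_personal_onedrive_sync()
    print("Personal OneDrive sync is now blocked by policy on this machine.")
```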
The pressure to become AI experts and to secure the organization's rollout of AI tools only adds to the mental load of already overloaded cybersecurity professionals.
As I have said before, if your data is stored somewhere outside your control, it’s only a matter of time before it gets hacked. Your AI assistant will have a lot of private information, making it a prime target.
Technology, especially M365 technology, changes all the time. It’s vast and complicated, and things get broken when new versions are rolled out. When dealing with eDiscovery, security, privacy, etc., we have to stay on top of those changes to understand new features and ensure the old ones still work the same way.
Don’t assume the old features will keep working the same way. I can tell you from this and plenty of other experiences that they often don’t.
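One cheap way to catch that kind of regression is a scripted smoke test you rerun after each update wave. Here is a rough Python sketch, assuming an app-only Microsoft Graph token in a GRAPH_TOKEN environment variable with the eDiscovery.Read.All permission; it simply confirms the v1.0 eDiscovery cases endpoint still answers and that your cases are still visible.

```python
# Sketch of a post-update smoke test: verify the Microsoft Graph eDiscovery
# endpoint still responds and our cases are still listed. GRAPH_TOKEN is an
# assumed environment variable holding a token with eDiscovery.Read.All.
import os
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/security/cases/ediscoveryCases"

def list_ediscovery_cases(token: str) -> list[dict]:
    resp = requests.get(
        GRAPH_URL,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    # Fail loudly if permissions, the endpoint, or the API shape changed.
    resp.raise_for_status()
    return resp.json().get("value", [])

if __name__ == "__main__":
    cases = list_ediscovery_cases(os.environ["GRAPH_TOKEN"])
    print(f"{len(cases)} eDiscovery case(s) still visible after the update")
```

A check like this won't prove the features still behave identically, but it will tell you within minutes whether the plumbing you depend on has moved.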