Linked – Two ways AI hype is worsening the cybersecurity skills crisis
The pressure to become AI experts and make the organization’s rollout of AI tools secure only adds more mental load to already overloaded cybersecurity professionals.
As I have said before, if your data is stored somewhere outside your control, it’s only a matter of time before it gets hacked. Your AI assistant will have a lot of private information, making it a prime target.
Technology, especially M365 technology, changes all the time. It’s vast and complicated, and things get broken when new versions are rolled out. When dealing with eDiscovery, security, privacy, etc., we have to stay on top of those changes to understand new features and ensure the old ones still work the same way.
Don’t assume the old ones will always work the same way. I can tell you from this and plenty of other experiences that they often don’t.
If I ask AI to do legal research or provide legal insights, should I assume that the output would be privileged? As the article points out, why would I believe that if no lawyers were involved?
It’s a huge challenge. We have no control over updates, and Microsoft is increasingly releasing new features before building out the compliance tools that would let us manage them. As Tony pointed out above, agents may have critical impacts on current security and data privacy work. No matter: it’s coming, and we have to figure it out.
As I mentioned during the ILTA session last month, I don’t see how organizations will function without a dedicated staff to monitor and communicate changes. Consider just how much work is involved in understanding Copilot and all of the compliance issues surrounding the use of AI, then consider how fast Microsoft is rolling out new Copilot features. It’s been a moving target since day one.
And Copilot is only a small part of the M365 environment.