Governance Can’t Keep Up With AI
You should read this:
Why AI adoption keeps outrunning governance — and what to do about it
The challenge is immense. One obvious problem the article raises, and one I agree with, is that compliance is designed to slow things down. AI is not. AI exists to speed everything up, getting you to a finished product much faster than before. That's the point of using AI in the first place.
“Companies still design governance as if decisions moved slowly and centrally,” she said. “But that’s not how AI is being adopted. Businesses are making decisions daily — using vendors, copilots, embedded AI features — while governance assumes someone will stop, fill out a form, and wait for approval.”
That mismatch guarantees bypass. Even teams with good intentions route around governance because it doesn’t appear where work actually happens. AI features go live before anyone assesses training data rights, downstream sharing, or accountability.
There is another side to this, though: you can block access to URLs and programs that haven't been approved, as imperfect as that is. If I can't get to ChatGPT on a work device, the risk of what I might do with it becomes very limited. The issue I'm seeing everywhere is the number of tools already in use that suddenly add AI features. How do you get users to slow down their use of a tool that literally shows up overnight in an update to an existing application?
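For readers unfamiliar with how that kind of blocking works, here is a minimal sketch of the deny-by-default check a proxy or secure web gateway applies. The hostnames are hypothetical examples, not a recommended policy:

```python
# Deny-by-default egress filter: only explicitly approved hosts are reachable.
# The entries below are illustrative placeholders, not real policy.
APPROVED_HOSTS = {"intranet.example.com", "approved-vendor.example.com"}

def is_allowed(hostname: str) -> bool:
    """Return True only if the host is on the approved list (deny by default)."""
    return hostname.lower().strip(".") in APPROVED_HOSTS

# An unapproved AI tool is simply unreachable from a managed device.
```

The key design choice is the allowlist: anything not vetted is blocked automatically, which is exactly why AI features that appear inside an already-approved tool slip past this control.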
The constant rate of change in tech means that even when a tool is vetted based on how it works today, it might work differently tomorrow. The recent Copilot/Claude integration is a case in point. When the news first broke, Microsoft made clear that interactions with Copilot where a user chose to use Claude were no longer covered by the Enterprise Data Protection agreement.
If you weren't paying attention, you might have missed how to disable it. Shortly afterward, Microsoft changed course again and reverted to enabling it by default, leaving it to customers to catch up on the compliance side and determine how safe it would be.
These are the kinds of changes that happen with AI tools daily. Vendors are all racing to keep up and ship the latest shiny toys. In some cases, they don't even govern their own tools very well, and now it's on your compliance team to keep up with every change and new feature. Compliance teams end up vetting vendors on the fly when the tools are already in the environment.
It’s not dissimilar to trying to keep up with new M365 features, teach users, and learn about them yourself at the same time.
This brings me to my last point about that article. This has been a problem from the start.
“No breach is required for harm to occur — secure systems can still hallucinate, discriminate, or drift,” Butt said, emphasizing that inputs, not outputs, are now the most neglected risk surface. This includes prompts, retrieval sources, context, and any tools AI agents can dynamically access.
Most companies do not have the resources for an AI expert on staff who can teach everyone else best practices for using AI. The vast majority of us were trying to learn it at the same time our users were learning it. We didn't know enough about how it worked to understand the risks until we started using it heavily and discovered them.
Oh, and we also had full-time jobs to do on top of this.
I’m not surprised that governance is lagging behind AI, but I also wonder whether we’re thinking about this incorrectly. Common advice seems to be that we have to completely restructure compliance to keep up with the speed of AI, rather than asking why we are moving so fast to use unknown, unvetted tools we don’t understand.
There are a lot of people out there telling us to move fast, not to get left behind, etc., but I also wonder how much they have invested in AI.
It says so very much about our media climate — and our ability to read/think critically — that this bit of weaponized AI hype went so viral this week. One suspects that many credulous readers clicked “share” without actually reading the thing. Had they done so, they might’ve wondered why a man forecasting the AI jobs apocalypse would (a) clearly use AI to write this post, effectively and preemptively replacing himself and (b) recommend his reader purchase, as one possible remedy, a premium ChatGPT subscription.
No matter!!! That thing did numbers. Great for its author, one Matt Schumer, and his AI start-up. (Again: red flags!!)
So I ask again: why are we listening to the people who have the most to gain by getting everyone to buy AI tools, instead of making our own decisions about how quickly we should move forward with AI? Governance exists to slow things down – forcing people to think before they run off and do something disastrous.
Should we design better governance to address rapidly changing technology? Absolutely. Should we let Big Tech determine how we redesign it? I don’t think so.
