a discipline problem.
It is a signal of absent governance.
The invisibility is not accidental. It is structural.
In virtually every organisation we observe, the executive committee has no visibility into the real AI usage of its teams. Not because that usage is intentionally hidden, but because no structure makes it visible.
Teams use ChatGPT, Claude, transcription tools, writing assistants. They don't report it because they have never had reason to. IT doesn't track it because these usages happen through personal accounts or team subscriptions that don't appear in official inventories. Senior leadership doesn't see it because it has never asked to see it.
This is not a compliance failure. It is a decision that was never made. And the absence of a decision is itself a decision, with consequences.
Three risk levels IT cannot manage alone.
What IT manages. What the board must decide.
The IT response to shadow AI is a control response: inventory tools, block unapproved ones, define an acceptable use policy. Necessary, but not sufficient.
The board response is different. It addresses three questions IT alone cannot resolve: which data are we willing to externalise and under what conditions? Which critical processes can rely on third-party tools, and which cannot? What responsibility do we assume toward our regulators and clients for these usages?
At tointelligence, we observe that organisations treating shadow AI as an IT problem produce policies nobody follows. Those treating it as an executive governance decision produce frameworks teams adopt because they align with operational reality. The difference is not in policy severity. It is in the level at which the decision is made.
Shadow AI is saying something you haven't heard.
When 40% of teams use unapproved AI tools, it is not a sign of rebellion. It is a signal that leadership has not yet produced a usable answer to a real need. Teams filled a void that governance left open.
Prohibition doesn't resolve that void. It displaces it. The only durable response is to produce a framework clear and fast enough that the official path is more attractive than workarounds.
Three decisions to make, not control measures to delegate.
Decide to have a real picture of AI usage across the organisation. Not the picture IT builds from official inventories. The picture of real, field-level usage, including the data that flows through those tools. This decision belongs to general management, not IT.
Explicitly decide what can be externalised and what cannot. Which data, which processes, which criticality levels. This decision cannot be delegated to teams that lack the strategic visibility to make the trade-off.
Produce an approved usage framework that works because it starts from operational reality, not from a compliance ideal. A framework teams can follow without compromising their operational efficiency.