AI creates new power dynamics, new dependencies and new risks. These are the concepts every CEO and executive committee must master to decide well: not to understand the technology, but to understand what it changes in the dynamics of power and control.
This is not a technical skill. It is a decisional skill. Most organisations have an AI strategy: a deployment plan for tools. Few develop AI strategic intelligence: the ability to read what those deployments change in power dynamics, who captures value and who loses control.
Organisations that develop this intelligence before their competitors build an advantage that is almost impossible to reverse. Those who wait are subject to decisions made by their vendors, competitors and regulators.
Sovereignty is not the absence of dependencies. A sovereign organisation can depend on an LLM, a cloud provider, an AI vendor, provided that dependency is chosen, steered and reversible.
Three questions test real sovereignty: Can you exit this dependency? In what timeframe? At what cost? If the answers are unclear, sovereignty is already compromised.
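Organisations that track this formally can capture the three questions in a simple dependency register. A minimal sketch in Python follows; all field names and the 12-month threshold are illustrative assumptions, not a prescribed methodology:

```python
from dataclasses import dataclass

@dataclass
class AIDependency:
    """One strategic AI dependency and its exit assessment (illustrative fields)."""
    name: str
    can_exit: bool        # Is a viable alternative identified?
    exit_months: int      # Estimated time to switch
    exit_cost_eur: float  # Estimated one-off switching cost

    def is_sovereign(self, max_months: int = 12) -> bool:
        # A dependency is "chosen, steered and reversible" only if an
        # exit path exists within an acceptable timeframe for the board.
        return self.can_exit and self.exit_months <= max_months

# Hypothetical entry: an LLM embedded in a decision workflow.
llm = AIDependency("LLM in decision workflow",
                   can_exit=True, exit_months=6, exit_cost_eur=250_000)
print(llm.is_sovereign())  # True: an exit path exists within 12 months
```

The point of such a register is not precision but visibility: any dependency where `can_exit` is false, or where no one can fill in the fields at all, is exactly the "unclear answer" that signals compromised sovereignty.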
In France, the national digital sovereignty framework established this logic at the level of critical infrastructure. The same reasoning applies at the level of each organisation.
A strategic AI dependency differs from a functional dependency by its impact on the organisation's ability to act freely. When an AI system becomes the filter between data and executive decisions, or when it is embedded in critical processes without a viable alternative, the dependency is strategic.
The most frequent strategic AI dependencies involve: LLMs integrated into decision workflows, cloud platforms hosting proprietary data, and model vendors who unilaterally modify their terms.
Shadow AI is the modern form of shadow IT, but with more severe consequences. When an employee uses ChatGPT, Claude or a personal AI agent at work without management validation, three types of risk are created simultaneously:
Invisible dependencies: critical processes come to depend on unvalidated tools.
Regulatory exposure: the EU AI Act imposes executive-level accountability for all AI systems used in the organisation, including those never officially deployed.
Strategic data leaks: proprietary data may be used to train third-party models.
Shadow AI is systematically underestimated by executive committees. AI governance must explicitly address it.
Effective AI board governance answers three fundamental questions: Who decides which AI systems are deployed? Who is accountable when an AI system produces an erroneous or discriminatory decision? Who answers to regulators in the event of an EU AI Act audit?
Without clear answers, AI governance is nominal: it exists on paper but does not structure real decisions. The EU AI Act compliance deadline for high-risk systems is August 2026.
Decisional sovereignty is threatened when AI systems become invisible filters between reality and executive decision-making. When an executive committee decides on the basis of AI recommendations without understanding their biases, underlying data or created dependencies, the decision is nominally executive but effectively delegated to the system.
This concept is particularly critical for mid-market company executives who lack the resources to audit the AI systems they deploy, but who bear full accountability for them.
This is the most insidious dependency. Initially, the AI system seems interchangeable: you could, in theory, switch to a competitor. But over time, the system learns from the organisation's data, processes and preferences. It becomes progressively irreplaceable, not because its technology is unique, but because its training on your data is unique.
Learning dependency is rarely modelled in build/buy/partner trade-offs. This is a frequent strategic error.
BYOA (bring your own AI agent) is an evolution of shadow AI, which concerned tools. With autonomous AI agents capable of acting, deciding and interfacing with third-party systems, the risk is an order of magnitude higher.
A personal AI agent used in a professional context can: access confidential data, make decisions on behalf of the employee, create dependencies invisible to IT, and engage the organisation's EU AI Act liability.
BYOA is the most underestimated emerging AI governance risk in 2026.
AI does not create value evenly. It redistributes it, towards actors who control models, training data, interfaces and standards. Organisations that deploy AI without a control strategy create value for their vendors as much as for themselves.
Understanding where economic power concentrates in the AI era is the central strategic question for every executive committee. It is not a technology question, it is a market structure question.
Not all AI decisions are irreversible. But some are: the choice of cloud infrastructure, integrating an LLM into critical processes, transferring proprietary data to a vendor, deploying a high-risk EU AI Act system.
The defining characteristic of an irreversible AI decision is that it structures subsequent decisions: it shrinks the space of future choices. This is why strategic intelligence must intervene before these decisions are made, not after.
These concepts are not abstract. They describe concrete situations that executive committees are facing today.
We intervene at the decision level, while those decisions can still be corrected. If any of these concepts describes your situation, a reading of your position takes 48 hours.
Let's talk.