· glossary · strategic intelligence & AI

The concepts that structure decisions in the AI era.

AI creates new power dynamics, new dependencies, new risks. These are the concepts every CEO and executive committee must master to decide correctly: not to understand the technology, but to understand what it changes in the dynamics of power and control.

France & international · SMEs, mid-market, large companies · CEOs & executive committees
· defined terms
· 01

Strategic Intelligence in the AI era

The ability to correctly read the power, dependency and advantage dynamics created by AI in a given sector, in order to make decisions that preserve an organisation's control and competitive advantage.

This is not a technical skill. It is a decisional skill. Most organisations have an AI strategy: a deployment plan for tools. Few develop AI strategic intelligence: the ability to read what those deployments change in power dynamics, who captures value, and who loses control.

Organisations that develop this intelligence before their competitors build an advantage that is almost impossible to reverse. Those who wait are subject to decisions made by their vendors, competitors and regulators.

· founding concept tointelligence
· 02

AI Sovereignty

AI sovereignty is the mastery of technological dependencies: the ability to choose, steer and reverse dependencies on AI vendors and systems, at a cost and within a timeframe compatible with one's strategy.

Sovereignty is not the absence of dependencies. A sovereign organisation can depend on an LLM, a cloud provider, an AI vendor, provided that dependency is chosen, steered and reversible.

Three questions assess real sovereignty: Can you exit this dependency? In what timeframe? At what cost? If the answers are unclear, sovereignty is compromised.
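The three questions can be operationalised as a simple dependency register. A minimal sketch, assuming an illustrative data structure and thresholds (the field names, the 12-month and 2%-of-revenue limits are hypothetical examples, not part of any standard or framework):

```python
from dataclasses import dataclass

@dataclass
class AIDependency:
    """One entry in an AI dependency register."""
    name: str              # e.g. "LLM in claims-triage workflow"
    exit_possible: bool    # can you exit this dependency at all?
    exit_months: float     # realistic timeframe to exit
    exit_cost_pct: float   # exit cost as % of annual revenue

def is_sovereign(dep: AIDependency,
                 max_months: float = 12.0,
                 max_cost_pct: float = 2.0) -> bool:
    """A dependency counts as 'sovereign' only if it can be exited
    within a timeframe and at a cost compatible with strategy.
    Thresholds are illustrative and organisation-specific."""
    return (dep.exit_possible
            and dep.exit_months <= max_months
            and dep.exit_cost_pct <= max_cost_pct)

register = [
    AIDependency("Cloud-hosted LLM in support workflow", True, 6, 0.5),
    AIDependency("Proprietary data in vendor training loop", False, 0, 0),
]
# Dependencies with unclear or failing answers flag compromised sovereignty.
compromised = [d.name for d in register if not is_sovereign(d)]
```

The point of the exercise is not the scoring itself but forcing explicit answers: any dependency for which the three fields cannot be filled in is, by definition, unmastered.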

In France, the national digital sovereignty framework established this logic at the level of critical infrastructure. The same reasoning applies at the level of each organisation.

· founding thesis tointelligence
· 03

Strategic AI Dependency

A dependency created by integrating an AI system into critical decision-making or operational processes, making it difficult or costly to remove or replace that system.

A strategic AI dependency differs from a functional dependency by its impact on the organisation's ability to act freely. When an AI system becomes the filter between data and executive decisions, or when it is embedded in critical processes without a viable alternative, the dependency is strategic.

The most frequent strategic AI dependencies involve: LLMs integrated into decision workflows, cloud platforms hosting proprietary data, and model vendors who unilaterally modify their terms.

· tointelligence analysis
· 04

Shadow AI

Undeclared and ungoverned use of AI systems by employees within an organisation, without the knowledge of management or the executive committee.

Shadow AI is the modern form of shadow IT, but with more severe consequences. When an employee uses ChatGPT, Claude or a personal AI agent at work without management validation, three types of risk are created simultaneously:

Invisible dependencies: critical processes become dependent on unvalidated tools.
Regulatory exposure: the EU AI Act imposes executive-level accountability for all AI systems used in the organisation, including those not officially deployed.
Strategic data leaks: proprietary data may be used to train third-party models.

Shadow AI is systematically underestimated by executive committees. AI governance must explicitly address it.

· governance risk
· 05

AI Governance at Board Level

The set of accountability structures, supervision mechanisms and decision policies enabling an executive committee to control AI systems deployed in its organisation, in compliance with the EU AI Act.

Effective AI board governance answers three fundamental questions: Who decides which AI systems are deployed? Who is accountable when an AI system produces an erroneous or discriminatory decision? Who answers to regulators in the event of an EU AI Act audit?

Without clear answers, AI governance is nominal: it exists on paper but does not structure real decisions. The EU AI Act compliance deadline for high-risk systems is August 2026.

· EU AI Act
· 06

Decisional Sovereignty

The ability of an executive committee to maintain effective control over strategic decisions, even in a context of mass AI system deployment.

Decisional sovereignty is threatened when AI systems become invisible filters between reality and executive decision-making. When an executive committee decides on the basis of AI recommendations without understanding their biases, underlying data or created dependencies, the decision is nominally executive but effectively delegated to the system.

This concept is particularly critical for mid-market company executives who lack the resources to audit the AI systems they deploy, but who bear full accountability for them.

· mid-market · board
· 07

AI Learning Dependency

A dependency created not by using an AI tool, but by the progressive training of an AI system on an organisation's proprietary data.

This is the most insidious dependency. Initially, the AI system seems interchangeable: you could theoretically switch to a competitor. But over time, the system learns from the organisation's data, processes and preferences. It becomes progressively irreplaceable, not because its technology is unique, but because its training on your data is unique.

Learning dependency is rarely modelled in build/buy/partner trade-offs. This is a frequent strategic error.
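One way to make learning dependency visible in a build/buy/partner trade-off is to model switching cost as a function of accumulated vendor-side learning. A deliberately simplified sketch; the linear growth model and every figure are illustrative assumptions, not data from the source:

```python
def switching_cost(months_in_use: float,
                   base_migration_cost: float,
                   learning_cost_per_month: float) -> float:
    """Total cost to replace the system: a fixed migration cost
    plus a component that grows with the vendor-side learning
    accumulated on your data (illustrative linear model)."""
    return base_migration_cost + learning_cost_per_month * months_in_use

# On day one the tool looks interchangeable: only migration cost remains.
day_one = switching_cost(0, base_migration_cost=100_000,
                         learning_cost_per_month=25_000)

# After three years, the learning component dominates the trade-off.
year_three = switching_cost(36, base_migration_cost=100_000,
                            learning_cost_per_month=25_000)
```

Even this crude model changes the decision: a vendor that looks cheap at contract signature can be the most expensive option once the learning term is priced in.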

· tointelligence analysis
· 08

BYOA, Bring Your Own Agent

The practice by which employees use their own personal AI agents in a professional context, without validation or supervision from management.

BYOA is an evolution of shadow AI, which concerned tools. With autonomous AI agents capable of acting, deciding and interfacing with third-party systems, the risk is an order of magnitude higher.

A personal AI agent used in a professional context can: access confidential data, make decisions on behalf of the employee, create dependencies invisible to IT, and engage the organisation's EU AI Act liability.

BYOA is the most underestimated emerging AI governance risk in 2026.

· emerging risk · 2026
· 09

AI Value Capture

The dynamic by which certain ecosystem actors capture a disproportionate share of AI-created value, typically model providers, cloud vendors or data owners.

AI does not create value evenly. It redistributes it towards the actors who control models, training data, interfaces and standards. Organisations that deploy AI without a control strategy create value for their vendors as much as for themselves.

Understanding where economic power concentrates in the AI era is the central strategic question for every executive committee. It is not a technology question; it is a market structure question.

· economic power
· 10

Irreversible AI Decision

A strategic AI decision whose consequences can no longer be corrected without prohibitive cost or delay, committing an organisation's trajectory for 5 to 10 years.

Not all AI decisions are irreversible. But some are: the choice of cloud infrastructure, integrating an LLM into critical processes, transferring proprietary data to a vendor, deploying a high-risk EU AI Act system.

The characteristic of an irreversible AI decision is that it structures subsequent decisions: it reduces the space of future choices. This is why strategic intelligence must intervene before these decisions, not after.

· founding concept tointelligence
· frequently asked questions
What is strategic intelligence in the AI era?
Strategic intelligence in the AI era is the ability to correctly read the power, dependency and advantage dynamics created by AI in a given sector, in order to make decisions that preserve control and competitive advantage. It is not a technical skill; it is a decisional skill.
What is AI sovereignty for a company?
AI sovereignty for a company is the mastery of its technological dependencies: the ability to choose which dependencies to accept, to steer them, and to exit them at a cost and within a timeframe compatible with its strategy. Sovereignty is not the absence of dependencies; it is their conscious mastery.
What is shadow AI and why is it a risk for executive committees?
Shadow AI refers to the undeclared use of AI systems by employees without management knowledge. It creates invisible dependencies, unanticipated EU AI Act regulatory exposure, and strategic data leak risks, without the executive committee being aware or formally accountable.
What is the difference between AI strategy and AI strategic intelligence?
An AI strategy defines which tools to deploy and how. AI strategic intelligence enables reading where value is created, who gains control, and which dependencies are forming, before deciding. AI strategy without strategic intelligence produces locally rational but systemically costly decisions.
How can an executive committee assess its AI sovereignty?
Three questions assess real AI sovereignty: Can you exit your main AI dependencies? In what timeframe? At what cost? If the answers are unclear, sovereignty is compromised. An AI dependency mapping exercise is the first assessment tool.
· tointelligence

These concepts are not abstract. They describe concrete situations that executive committees are facing today.

We intervene at the decision level for as long as those decisions can still be corrected. If any of these concepts describe your situation, a reading of your position takes 48 hours.

let's talk