The EU AI Act does not only speak to your IT department. It speaks to you.
What the EU AI Act actually imposes on executives.
Most discussion of the EU AI Act focuses on the providers of AI systems: OpenAI, Mistral, software vendors. This is a misreading. The regulation distinguishes, among others, providers (who develop systems) and deployers (who use them under their own authority). If your organisation uses an AI system, it is a deployer, and deployer obligations are substantial.
For high-risk systems, deployer obligations include: effective human oversight by competent staff, use consistent with the provider's instructions, monitoring and retention of system logs, transparency towards the people affected, and notification of serious incidents. These obligations cannot be delegated to an external vendor. They belong to your organisation.
You may already be deploying high-risk systems.
AI tools used in recruitment, performance evaluation, promotion management or termination decisions are classified as high-risk. If you use an ATS with AI scoring or an automated assessment tool, verify how each system is classified.
AI systems that influence credit decisions, insurance pricing or creditworthiness assessment are also high-risk. This covers customer scoring and risk-analysis tools that produce automated recommendations; note that systems used solely to detect financial fraud are expressly excluded from this category.
AI systems used in the management of energy, transport, water or other essential public services are high-risk. Deployers in these sectors face particularly extensive obligations.
What already applies and what is coming.
The Act entered into force on 1 August 2024 and applies in stages. The prohibitions on unacceptable-risk practices and the AI-literacy obligation have applied since 2 February 2025, and the rules for general-purpose AI models since 2 August 2025. The bulk of the high-risk regime applies from 2 August 2026, with a further extension to 2 August 2027 for AI embedded in products already covered by EU product legislation.
What you must do now.
1. Inventory and classify your AI systems. Which AI systems do you use? In which domains? Do they produce decisions or recommendations affecting people? This mapping is the prerequisite for any compliance approach; a minimal sketch of such a register follows this list.
2. Assess your deployer status. For each high-risk system identified, which obligations apply to your organisation as deployer? Is human oversight real or nominal? Does the documentation exist?
3. Structure governance at the appropriate level. EU AI Act obligations for high-risk systems must be carried by executive leadership, not solely by IT or the DPO. Compliance requires an executive decision on resources, processes and responsibilities.
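To make step 1 concrete, the sketch below shows what a minimal AI-system register could look like in Python. Everything in it is an illustrative assumption: the field names, the abbreviated Annex III areas and the example entries are invented for this article, and the output is a screening list to hand to legal counsel, not a classification under the Act.

```python
from dataclasses import dataclass

# Abbreviated Annex III areas most likely to appear in a corporate inventory.
# Illustrative only: the actual classification is a legal judgment.
ANNEX_III_AREAS = {
    "employment",               # recruitment, evaluation, promotion, termination
    "credit",                   # creditworthiness assessment and scoring
    "insurance",                # life and health insurance risk assessment and pricing
    "critical_infrastructure",  # energy, transport, water supply
}

@dataclass
class AISystem:
    name: str
    vendor: str                         # external provider or "in-house"
    domain: str                         # business domain where it is used
    affects_persons: bool               # produces decisions/recommendations about people?
    annex_iii_area: str | None = None   # matching Annex III area, if any
    oversight_owner: str | None = None  # named person responsible for human oversight

def flag_likely_high_risk(systems: list[AISystem]) -> list[AISystem]:
    """Screen the register for systems that warrant a formal legal classification."""
    return [
        s for s in systems
        if s.affects_persons and s.annex_iii_area in ANNEX_III_AREAS
    ]

register = [
    AISystem("CV screening module", "ATS vendor", "HR", True, "employment"),
    AISystem("Marketing copy assistant", "LLM vendor", "Marketing", False),
]

for s in flag_likely_high_risk(register):
    print(f"{s.name}: candidate high-risk system, oversight owner = {s.oversight_owner}")
```

The useful part is not the tooling but the discipline it forces: every system gets an owner, a domain and an explicit answer to the question "does this affect people?", which is exactly the information steps 2 and 3 consume.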