The EU AI Act doesn't
speak to your IT department.
It speaks to you.
tointelligence · omer taki

What the EU AI Act actually imposes on executives.

Most discussion of the EU AI Act focuses on AI system providers: OpenAI, Mistral, software vendors. This is a misreading. The regulation distinguishes, among other roles, providers (who develop systems) and deployers (who use them under their own authority). If your organisation uses an AI system, it is a deployer, and deployer obligations are substantial.

For high-risk systems, deployer obligations include: using the system in accordance with its instructions, assigning effective human oversight to trained staff, monitoring operation, retaining automatically generated logs, informing affected persons, and reporting serious incidents. These obligations cannot be delegated to an external vendor. They belong to your organisation.

If you deploy an AI system, you are a deployer. Deployer obligations don't disappear because you outsourced development.

You may already be operating high-risk systems.

· human resources

AI tools used in recruitment, performance evaluation, promotion or termination decisions are classified as high-risk. If you use an applicant tracking system with AI scoring or an automated assessment tool, verify how it is classified.

· credit and finance

AI systems that influence credit decisions, insurance pricing or creditworthiness assessment are high-risk. This includes customer scoring and risk-analysis tools that produce automated recommendations; systems used solely to detect financial fraud are expressly exempted.

· critical infrastructure

AI systems in energy, transport, water or essential public service management are high-risk. Deployers in these sectors face particularly extensive obligations.

What already applies and what is coming.

already in force
Absolute prohibitions
Prohibited AI practices (subliminal manipulation, social scoring, exploitation of vulnerabilities) have applied since 2 February 2025. If you use any of these practices, you are already in breach.
august 2026
High-risk systems
Full obligations for deployers of high-risk systems take effect on 2 August 2026. This is the critical deadline for organisations using AI in HR, credit or essential services.
august 2027
General purpose AI
Obligations for general-purpose AI (GPAI) models have applied to new models since August 2025; models placed on the market before that date must comply by 2 August 2027. This primarily affects providers, but it shapes the access conditions for the models you use.

What you must do now.

1. Inventory and classify your AI systems. Which AI systems do you use? In which domains? Do they produce decisions or recommendations affecting people? This mapping is the prerequisite for any compliance approach.

2. Assess your deployer status. For each high-risk system identified, which obligations apply to your organisation as deployer? Is human oversight real or nominal? Does documentation exist?

3. Structure governance at the appropriate level. EU AI Act obligations for high-risk systems must be carried by executive leadership, not solely by IT or the DPO. Compliance requires an executive decision on resources, processes and responsibilities.
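The inventory in step 1 does not require specialist tooling; a structured record per system is enough to start. A minimal sketch in Python, with hypothetical field names, an illustrative (non-exhaustive) set of high-risk domains, and fictional example entries; it is a first-pass triage aid, not a legal classification:

```python
from dataclasses import dataclass

# Illustrative subset of domains treated as high-risk under Annex III
# of the EU AI Act (simplified -- not legal advice).
HIGH_RISK_DOMAINS = {"hr", "credit", "critical-infrastructure"}

@dataclass
class AISystem:
    name: str             # internal name of the tool
    vendor: str           # external provider, if any
    domain: str           # business domain where it is deployed
    affects_people: bool  # does it produce decisions or recommendations about people?

    def likely_high_risk(self) -> bool:
        """Rough first-pass flag; final classification needs legal review."""
        return self.domain in HIGH_RISK_DOMAINS and self.affects_people

# Fictional inventory entries
inventory = [
    AISystem("cv-screener", "AcmeHR", "hr", True),
    AISystem("chat-summariser", "SomeVendor", "support", False),
]

# Systems to assess first under deployer obligations (step 2)
flagged = [s.name for s in inventory if s.likely_high_risk()]
print(flagged)
```

Even this crude mapping answers the three questions of step 1 (which systems, which domains, which effects on people) and produces the shortlist that step 2 works through.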