The EU AI Act (Regulation 2024/1689) entered into force on August 1, 2024. Most organisations know it exists. Far fewer know which of their AI deployments are already in scope, what August 2, 2026 specifically triggers, and what concrete obligations land on their desk.
The question most CTOs and CISOs ask us is not "does the AI Act apply to us?" It applies to any organisation that places AI systems on the EU market or uses them within the EU, full stop, whether you built the system or just use it. The real question is: which risk class do your deployments fall into, and what does that classification actually require?
This article is practical and opinionated. We are not going to restate Article 15 in plain English. We will walk through the classification criteria that determine whether you are in the "high-risk" category, the specific obligations that flow from that classification, the Article 6(3) exception that lets you document your way out of scope, and the interplay with GDPR that will keep your DPO busy for the next few months.
The real timeline: what has already kicked in, what arrives August 2
The AI Act rolls out in layers. Understanding what has already applied helps calibrate the urgency correctly.
February 2025 (already in force). The ban on unacceptable-risk AI practices took effect on February 2, 2025. The prohibited practices include social scoring, real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions), and systems that exploit psychological vulnerabilities to manipulate behaviour. If your organisation ran any of these, you have been non-compliant for over a year.
August 2025 (already in force). Obligations for general-purpose AI (GPAI) models applied from August 2, 2025. All providers placing GPAI models on the EU market (OpenAI, Anthropic, Google, Mistral, and others) had to comply with the transparency and copyright documentation requirements of Articles 53-55; the voluntary GPAI Code of Practice gives them a way to demonstrate that compliance, and most major providers signed it before the deadline.
February 2026 (already published). The European Commission published practical guidelines on Article 6 on February 2, 2026, including a comprehensive list of concrete examples of AI systems that are and are not high-risk. These guidelines are the foundation of any credible self-assessment.
August 2, 2026 (three months away). The main deadline. Full application of Articles 9-17 (provider requirements for high-risk systems) and Article 26 (deployer requirements). From this date, operating a high-risk AI system without conformity documentation, CE marking for regulated products, EU database registration, human oversight mechanisms, and log retention constitutes a clear breach. Fines reach EUR 15 million or 3% of global annual turnover, whichever is higher.
Also August 2, 2026. Transparency obligations toward end users (Article 50) take effect on the same date: you must inform users when they are interacting with an AI system and when content has been AI-generated.
Three months. That is the operational window before the main enforcement date.
High-risk classification: are you in scope?
The most consequential decision in your AI Act compliance programme is classification: do your AI systems qualify as high-risk under Annex III?
Annex III lists eight categories. For enterprises deploying AI in a professional context, three come up most often.
Employment and worker management. Any AI system used for recruitment (targeted job advertising, CV filtering, candidate ranking), performance evaluation, task allocation, or decisions on promotion and termination is high-risk. If your HR team uses a screening tool with automated scoring, it is high-risk. No interpretive wiggle room.
Access to essential services. Credit scoring and creditworthiness evaluation systems (excluding fraud detection), life and health insurance pricing, and systems determining eligibility for public benefits or healthcare. A credit scoring tool embedded in a SaaS banking or insurance platform falls squarely into this category.
Critical infrastructure. Safety components in the management of electricity grids, water, gas, road traffic, or critical digital infrastructure. If your AI agent supervises an industrial network classified as critical infrastructure, it is high-risk by statutory definition.
The remaining categories cover biometrics (identification, categorisation, emotion recognition), education (selection and assessment), law enforcement (profiling, recidivism risk), migration and border control, and the administration of justice and democratic processes.
One structural point: classification follows actual use, not marketing descriptions. A general-purpose LLM used to score job candidates is high-risk. The same LLM used to draft sales emails is not. Use determines classification, not architecture.
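To make that concrete, here is a minimal sketch of how a deployment inventory might encode use-based triage. The category labels and field names are our own shorthand, not the regulation's wording, and the mapping is deliberately simplistic: it only illustrates that the same system can land on either side of the line depending on what it is actually used for.

```python
from dataclasses import dataclass

# Illustrative only: a use-based triage record for an internal AI inventory.
# Category labels paraphrase Annex III areas; they are not legal definitions.
ANNEX_III_TRIGGERS = {
    "recruitment_screening": "employment and worker management",
    "candidate_ranking": "employment and worker management",
    "credit_scoring": "access to essential services",
    "insurance_pricing": "access to essential services",
    "grid_safety_control": "critical infrastructure",
}

@dataclass
class AIDeployment:
    name: str
    vendor: str
    intended_use: str  # what the system is actually used for, not how it is marketed

def triage(deployment: AIDeployment) -> str:
    """Return a provisional risk label based on actual use."""
    category = ANNEX_III_TRIGGERS.get(deployment.intended_use)
    if category:
        return f"high-risk candidate ({category}) - assess Article 6(3) before concluding"
    return "not matched to Annex III - document the reasoning and keep under review"

print(triage(AIDeployment("CVScreen", "SomeVendor", "candidate_ranking")))
print(triage(AIDeployment("MailDraft", "SameVendor", "sales_email_drafting")))
```

The same vendor, the same underlying model, two different answers: that is the use-based logic of the Act in one function call.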
The Article 6(3) exception: documenting your way out of scope
Your system appears in Annex III, but you are convinced it should not be classified as high-risk? Article 6(3) opens a door, with one strict condition: the burden of proof falls entirely on you.
The exemption applies when the system does not pose a significant risk to health, safety, or fundamental rights. More specifically, Article 6(3) addresses the case where the system does not materially influence the outcome of a decision: the AI provides information, a human decides freely, and the AI's recommendation is not determinative.
Two examples to calibrate the boundary. An HR tool that generates a narrative summary of a CV for a recruiter who decides freely, with no automated score or ranking: potentially out of high-risk scope with solid documentation. A tool that produces a ranked list of candidates by score, where the recruiter almost always validates the top positions: high-risk, because the human decision is materially influenced by the system's output.
You must document the assessment before deployment and register the system in the EU database (Article 49). The national competent authority can request that documentation at any time. An undocumented assertion is not a defensible position in an audit.
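Here is a sketch of what that documentation might look like as a structured internal record. The field names are ours, not a regulatory template; the point is that the reasoning, the date, and the database registration are captured somewhere auditable before go-live.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative structure for an Article 6(3) assessment kept before deployment.
# Field names are our own; the Act requires the documentation, not this format.
@dataclass
class Article6_3Assessment:
    system_name: str
    annex_iii_category: str
    assessed_on: date
    human_decides_freely: bool      # is the final decision genuinely human?
    output_is_determinative: bool   # does the output effectively settle the outcome?
    justification: str              # narrative reasoning, with evidence references
    registered_in_eu_database: bool

    def defensible(self) -> bool:
        # A bare assertion is not enough: reasoning and registration must both exist.
        return (self.human_decides_freely
                and not self.output_is_determinative
                and len(self.justification) > 0
                and self.registered_in_eu_database)

assessment = Article6_3Assessment(
    system_name="cv-summariser",
    annex_iii_category="employment",
    assessed_on=date(2026, 5, 1),
    human_decides_freely=True,
    output_is_determinative=False,
    justification="Narrative summaries only; no score, no ranking; recruiter decides.",
    registered_in_eu_database=True,
)
print(assessment.defensible())
```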
The practical guidelines published by the Commission on February 2, 2026 provide the reference list of examples for calibrating this self-assessment. Reading them before making a classification decision is not optional.
What Article 15 concretely requires
Article 15 applies to providers of high-risk systems, not deployers. But it shapes two things for deployers: what they must contractually demand from their vendors, and what they must implement if they become providers themselves (in-house development, or third-party systems they have substantially modified or rebranded).
Three concrete requirements stand out from the text.
Accuracy throughout the lifecycle. Accuracy metrics must be declared in the instructions for use. This is not a one-shot benchmark at launch. The system must maintain an appropriate level throughout its operational life. In practice: a documented eval set, verifiable metrics, and a process to detect drift over time.
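A minimal sketch of what "documented eval set plus drift detection" can mean in practice follows. The thresholds, the eval cases, and the predict callable are placeholders, not values the Act prescribes.

```python
# Minimal sketch of a lifecycle accuracy check: a frozen eval set, a declared
# baseline, and a drift alert when live accuracy drops below tolerance.
from typing import Callable, Sequence

def accuracy(predict: Callable[[str], str], inputs: Sequence[str], labels: Sequence[str]) -> float:
    correct = sum(predict(x) == y for x, y in zip(inputs, labels))
    return correct / len(labels)

def check_drift(current: float, declared_baseline: float, tolerance: float = 0.02) -> bool:
    """True if accuracy has drifted below what the instructions for use declare."""
    return current < declared_baseline - tolerance

# Example: re-run monthly against the same eval set documented at launch.
eval_inputs = ["case-001", "case-002"]   # representative, versioned cases
eval_labels = ["approve", "reject"]
live_model = lambda x: "approve"         # stand-in for the deployed system
baseline = 0.95                          # figure declared in the instructions for use

current = accuracy(live_model, eval_inputs, eval_labels)
if check_drift(current, baseline):
    print(f"Drift detected: {current:.2f} vs declared {baseline:.2f} - open an incident")
```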
Robustness, including feedback loop management. Systems that continue to learn after deployment (continuous fine-tuning, RAG fed by recent interactions) must be designed to prevent biased outputs from contaminating future training data. This is an architectural constraint to address at design time, not a checkbox to tick afterwards.
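One way to express that constraint in code is a gate in front of whatever pipeline feeds production interactions back into training or a retrieval store. The filtering criteria below are illustrative placeholders for the review process you actually define.

```python
# Illustrative gate before production interactions re-enter training or a RAG store.
def eligible_for_feedback(record: dict) -> bool:
    if record.get("source") == "model_output":   # never learn from your own generations unreviewed
        return False
    if record.get("flagged_biased") or record.get("under_appeal"):
        return False
    return record.get("human_reviewed", False)    # only human-validated outcomes re-enter the loop

interactions = [
    {"id": 1, "source": "user", "human_reviewed": True},
    {"id": 2, "source": "model_output"},
    {"id": 3, "source": "user", "flagged_biased": True},
]
training_candidates = [r for r in interactions if eligible_for_feedback(r)]
print([r["id"] for r in training_candidates])   # -> [1]
```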
Cybersecurity: resistance to AI-specific attacks. The text of Article 15 is remarkably specific. It explicitly names resistance to attempts to alter the system's outputs or performance by exploiting its vulnerabilities, and lists data poisoning, model poisoning, adversarial examples, and confidentiality breaches. This is not a vague general security obligation. It is a list of AI-specific attack vectors against which the system must be demonstrably resilient, with documentation to show it.
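As a very rough illustration of the kind of evidence this calls for, the sketch below runs a crude stability check on perturbed inputs. It is not a substitute for a proper adversarial and poisoning test programme; it only shows the shape of a documented, repeatable robustness check.

```python
import random

# Crude stability smoke test: does the decision survive small input perturbations?
def classify(features):                      # stand-in for the deployed model
    return "reject" if sum(features) > 1.5 else "approve"

def stability_rate(features, n_trials: int = 200, epsilon: float = 0.01) -> float:
    baseline = classify(features)
    stable = 0
    for _ in range(n_trials):
        perturbed = [x + random.uniform(-epsilon, epsilon) for x in features]
        stable += classify(perturbed) == baseline
    return stable / n_trials

print(stability_rate([0.7, 0.79]))   # a decision close to the boundary may flip
```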
For a CTO or CISO, the translation is direct: any high-risk AI solution purchased from a vendor must come with verifiable Article 15 documentation. If your SaaS provider cannot produce it, that is your compliance problem by August 2, not theirs.
Deployer obligations: Article 26 in practice
Article 26 specifically addresses deployers: organisations that use a high-risk AI system built by a third party. This is the situation of most enterprises integrating an HR scoring tool, a credit solution, or an anomaly detection tool into their infrastructure.
Five operational obligations to implement before August 2.
Use in accordance with instructions. You must use the system as documented by the provider. Deploying a tool in a context or for a population different from those on which it was validated puts you out of compliance, regardless of what the provider does.
Designating a person responsible for human oversight. A competent person with the resources and authority to monitor the system, understand its outputs, and intervene when necessary. Human oversight is not just "a human reviews the final decision". It requires genuine understanding of the system's limitations and an effective ability to override.
Log retention for at least six months. Logs automatically generated by the high-risk system must be retained for a minimum of six months. For organisations running AI via a SaaS provider, this requires an explicit contractual clause: the provider must give you access to logs or retain them on your behalf.
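A tiny sketch of the retention rule itself, assuming you have contractual access to the logs in the first place. The six-month floor is the Act's minimum; the storage and export details are placeholders.

```python
from datetime import datetime, timedelta, timezone

# Retention rule for logs pulled from (or exported by) the provider:
# nothing younger than six months is ever purged.
MIN_RETENTION = timedelta(days=183)   # "at least six months"

def purgeable(log_timestamp: datetime, now: datetime | None = None) -> bool:
    now = now or datetime.now(timezone.utc)
    return now - log_timestamp > MIN_RETENTION

print(purgeable(datetime(2026, 1, 10, tzinfo=timezone.utc),
                now=datetime(2026, 5, 2, tzinfo=timezone.utc)))   # False: keep it
```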
Input data quality. To the extent you control the data fed into the system, it must be relevant and sufficiently representative for the intended purpose. Feeding a scoring tool with historically biased data makes you jointly responsible for discriminatory outputs.
Informing workers before workplace deployment. Before deploying a high-risk system that applies to employees, you must inform worker representatives and the affected workers. This is a direct deployer obligation that comes on top of any existing works council or co-determination requirements under national labour law.
How the AI Act interacts with GDPR
Almost all enterprise AI systems process personal data. Both regulations apply simultaneously and cumulatively, as the CNIL confirmed in its first FAQ on the regulation's entry into force. It is not one or the other. It is both.
Two points of interaction deserve particular attention.
DPIA and fundamental rights impact assessment. GDPR requires a Data Protection Impact Assessment (DPIA) for high-risk processing. The AI Act requires a fundamental rights impact assessment for certain deployers of high-risk systems. These two assessments overlap substantially. The CNIL recommends conducting them together with shared documentation. If you already have a DPIA for your HR scoring or credit tool, it must be reviewed against AI Act criteria before August 2.
Training data governance. Article 10 of the AI Act requires that training data for high-risk systems be relevant, sufficiently representative and, to the best extent possible, free of errors and complete. This directly echoes GDPR's data quality principles and personal data audit obligations. If you are fine-tuning a model on customer data, you have a GDPR question (legal basis, purpose, retention period) and an Article 10 question (quality, representativeness, bias detection) to address in parallel, with the same teams.
The good news: organisations that have structured their GDPR compliance already have part of the documentary infrastructure needed for the AI Act. The processing register, existing DPIAs, and data mapping are a starting point. The bad news: many assume that "GDPR covers AI", which is inaccurate. The AI Act adds obligations specific to high-risk AI systems that GDPR does not address, including robustness requirements, technical documentation and record-keeping under Articles 11 and 12, and human oversight obligations.
What we set up at GettIA
When a client comes to us on AI Act readiness, we do not start with the legal texts. We start with a deployment inventory.
Over 60% of AI and SaaS applications run outside IT visibility, according to a Cloud Security Alliance analysis published in March 2026. Before you can know whether you are compliant, you need to know what you are actually running, shadow AI included. Organisations that think they have two or three AI systems in production typically find eight or ten once the inventory is done seriously.
Our four-step process.
Inventory and classification. We map all AI systems in production and development, then apply the Annex III and Article 6(3) decision tree to each. Output: a map of confirmed high-risk systems, systems to document under Article 6(3), and systems clearly out of scope.
Vendor gaps. For each high-risk system purchased from a third party, we verify the vendor can produce Articles 9-17 documentation, particularly Article 15 on robustness and cybersecurity. Where they cannot, we negotiate appropriate contractual clauses or identify compliant alternatives.
Deployer compliance. Designation of the people responsible for human oversight, log retention procedures, input data review, worker information plans, and updating DPIAs into combined AI Act and GDPR impact assessments.
Eval set and monitoring. For systems developed in-house or heavily configured: documenting accuracy metrics, building a documented eval set on representative cases, establishing a drift detection process, and a defined incident response plan. This is also what Article 15 implicitly requires.
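Pulled together, steps two to four can be tracked per system with something as simple as the readiness record sketched below. The checklist items are our working shorthand for the obligations discussed above, not the Act's own headings.

```python
from dataclasses import dataclass, field

# Illustrative per-system readiness tracker covering the deployer obligations above.
@dataclass
class ReadinessRecord:
    system: str
    checks: dict = field(default_factory=lambda: {
        "vendor_article_15_docs": False,   # step 2: verifiable provider documentation
        "oversight_lead_named": False,     # step 3: human oversight responsibility assigned
        "log_retention_in_place": False,   # step 3: six-month retention, contractually secured
        "combined_dpia_updated": False,    # step 3: AI Act + GDPR impact assessment
        "eval_set_documented": False,      # step 4: accuracy metrics and drift monitoring
    })

    def gaps(self) -> list[str]:
        return [item for item, done in self.checks.items() if not done]

record = ReadinessRecord("hr-screening-tool")
record.checks["oversight_lead_named"] = True
print(record.gaps())   # everything still open before August 2 shows up here
```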
This is a three-to-six-month project for a mid-sized enterprise with several high-risk systems in production. Organisations that wait until July to start are taking a real risk, not just a regulatory one: building a solid compliance file requires time and coordination across technical, legal, and HR teams.
Want to review your situation together? Book a slot, we will block 30 minutes to map your AI deployments and identify what falls under the August 2, 2026 high-risk obligations.