← All our expertise
Expertise · GDPR and AI Act compliance

Ship AI that's actually compliant.

Architect your AI deployments to comply with GDPR, the AI Act, and your sector-specific obligations. From model selection to data governance, including decision traceability.

0
Data outside EU (on request)
100%
Auditable decisions
6-8 wks
Mapping + audit
The problem

What's broken.

GDPR is 8 years old, the AI Act's main obligations apply from August 2, 2026, and most companies are flying blind. Lawyers are still reading the text, CTOs ship without a framework, and "compliant" off-the-shelf solutions are mostly marketing. For the regulated mid-market (healthcare, finance, public sector, HR), that's a major operational risk. Here's what we keep seeing.

  • 01

    The AI Act is coming and no one knows what really applies

    Major obligations on high-risk systems take effect on August 2, 2026 (Chapter III of the AI Act). Most companies don't know whether they're affected, or to what level of requirement. Early enforcement will come fast, with sanctions modeled on GDPR.

  • 02

    SaaS AI tools override your GDPR requirements

    OpenAI, Anthropic, Microsoft Copilot, and Google Gemini have opaque or shifting data-processing terms. For the regulated mid-market (healthcare, finance, HR, public sector), that doesn't fly. And an Enterprise contract isn't always sufficient either.

  • 03

    No one knows how to trace an AI decision

    Internal audit, CNIL inspection, GDPR access request. When asked "why did your AI reject this application?", you need an answer. Most architectures deployed today can't provide one: missing logs, prompts not retained, RAG sources not cited.

  • 04

    DPO and CTO don't talk to each other

    The DPO validates or blocks without being able to assess the technology. The CTO ships without visibility into the requirements. AI projects get delayed, throttled, or shelved for lack of a shared operating framework. Nobody acts in bad faith; the bridge just doesn't exist.

  • 05

    "Compliant" off-the-shelf solutions are marketing

    "GDPR-friendly" without proof, "AI Act ready" without a framework. The word "compliance" has become a sales pitch emptied of content. At the first audit, the promises collapse. You remain the data controller, not your vendor.

Our approach

How we do it.

AI compliance isn't a legal spec dropped onto a tech project. It's a dimension built into the architecture from day one and instrumented continuously. Here's our framework.

Regulatory mapping at the start

We identify which frameworks apply to you (GDPR, the AI Act, and sector-specific rules: DORA, NIS2, MDR for healthcare, MiFID II for finance). We classify your AI use cases by AI Act risk level (unacceptable, high, limited, minimal). The level of requirement follows from that classification.

Sovereignty-oriented architecture

The hosting choice isn't neutral. A local LLM on on-premise GPUs for sensitive use cases. A SecNumCloud-qualified sovereign cloud (OVHcloud, Bleu, S3NS) for the compliance-heavy mid-market. Public cloud with EU data residency for standard cases. The choice is driven by the use case, not made by default.

Native AI decision traceability

Every inference is logged with full context: model, version, prompt, RAG sources cited, user metadata. An audit is possible at any time, GDPR access requests (Article 15) are easy to serve, and the supporting documentation is almost ready out of the box.
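As a minimal sketch of what such a log entry can look like (field names and values here are illustrative, not a prescribed schema):

```python
import uuid
from datetime import datetime, timezone

def log_inference(model: str, version: str, prompt: str,
                  rag_sources: list[str], user_id: str, output: str) -> dict:
    """Build one audit-ready record per LLM inference.

    Field names are illustrative; adapt them to your own stack.
    """
    return {
        "inference_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "model_version": version,
        "prompt": prompt,              # or a pointer, if prompts contain personal data
        "rag_sources": rag_sources,    # documents cited in the answer
        "user_id": user_id,            # pseudonymized, so erasure requests stay feasible
        "output": output,
    }

# One append-only entry per inference; enough to answer
# "why did your AI reject this application?" months later.
record = log_inference("example-model", "2026-01", "Why was the claim rejected?",
                       ["policy_doc_4.pdf"], "user-7f3a", "The claim exceeds the coverage cap.")
```

Written to append-only storage, each record ties an answer back to the exact model version and sources that produced it.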

Hybrid DPO and CTO governance

Together with your DPO and CTO, we build a shared operating framework: an AI project checklist, a risk matrix, a phased validation process. No more blocking for lack of a shared reference, no more shadow deployments for lack of visibility.

Eval set that includes compliance

Not just precision and hallucination rates. We also measure adherence to business rules, absence of bias across protected classes (gender, age, origin where applicable), and correct refusals on out-of-scope topics. Compliance is a metric, not an intention.
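For illustration, a per-class performance-gap check of the kind described above (the toy data, class labels, and threshold are assumptions, not real results):

```python
from collections import defaultdict

def max_class_gap(results: list[dict]) -> float:
    """Largest accuracy gap between protected classes.

    `results` items look like {"group": str, "correct": bool}.
    Toy metric for illustration, not a full fairness audit.
    """
    totals, hits = defaultdict(int), defaultdict(int)
    for r in results:
        totals[r["group"]] += 1
        hits[r["group"]] += r["correct"]
    accuracy = {g: hits[g] / totals[g] for g in totals}
    return max(accuracy.values()) - min(accuracy.values())

# Hypothetical eval results: group A gets 2/2 right, group B gets 1/2.
evals = [
    {"group": "A", "correct": True}, {"group": "A", "correct": True},
    {"group": "B", "correct": True}, {"group": "B", "correct": False},
]
gap = max_class_gap(evals)  # 1.0 - 0.5 = 0.5
assert gap <= 0.6, "Gap exceeds the (illustrative) fairness threshold"
```

Run as part of the eval suite, a check like this turns "absence of bias" into a number that can gate a release.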

AI Act technical documentation delivered

For high-risk systems under the AI Act, we deliver the technical documentation required by Article 11 and Annex IV (system description, training data, eval methodology, risk management, instructions for use), ready for audit. Not a 2-page PDF: a dossier that holds up under inspection.

Case study

Shipped on a real mission.

On Peps Digital, the RAG chatbot runs on healthcare data, an ultra-sensitive sector under GDPR and soon the MDR. The project was designed from day one with the constraints baked in: no non-EU data transfers, source traceability on every answer, human escalation by default on sensitive cases. That's the level of rigor we bring to every project.

Peps Digital  ·  SaaS · Healthcare (PSDM)

80% of customer support digitized

A RAG-powered AI chatbot integrated into the Peps Digital platform, answering PSDM users' questions directly from the interface, 24/7.

Read the case study
Methodology

Our process.

01

Regulatory mapping

We identify the frameworks applicable to your sector, classify your AI use cases by AI Act risk level, and list your GDPR-specific obligations. You walk away with an actionable map of the regulatory landscape that affects you, useful well beyond the AI project itself.

02

Audit the existing setup

We look at your deployed AI (chatbots, internal copilots, agents), identify gaps versus the frameworks. You get a clear view of current risks (legal and operational) and a prioritized action list.

03

Target architecture and reference

Based on your constraints, we define the target architecture (models, hosting, traceability, governance). We deliver a technical reference framework and an architecture diagram validated by your DPO and legal team. It's the document that survives team turnover.

04

Implementation and training

We roll out the framework on a pilot project and train your teams. DPO, CTO, product, and data teams each get their own application guide. The goal: you can operate without us by the end of the mission.

FAQ

Frequently asked questions.

Got a question before we go further? Reach out directly.

  • 01Does the AI Act really apply to us?

    If you deploy AI internally or in a product accessible in the EU, yes. The level of requirement depends on classification (unacceptable, high, limited, minimal). Regulatory mapping at the start of the mission settles the question for each of your use cases.

  • 02GDPR and model training: how do we navigate it?

    If you fine-tune a model on personal data, you must document the legal basis, individual notice, and purpose, and enable erasure. For LLMs, this often means pseudonymization layers and an eval set built without real personal data. It's manageable, but it has to be designed in from day one.
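As a minimal sketch of a pseudonymization layer of the kind mentioned above (the keyed-hash approach, the email-only pattern, and the key handling are illustrative assumptions; a real pipeline would use a vetted PII-detection step and a secrets manager):

```python
import hashlib
import hmac
import re

SECRET_KEY = b"rotate-me"  # hypothetical key; never hard-code in production

def pseudonymize(text: str) -> str:
    """Replace email addresses with a stable keyed pseudonym.

    Stable tokens keep records joinable, so an erasure request
    can still target all of a person's data later.
    """
    def token(match: re.Match) -> str:
        digest = hmac.new(SECRET_KEY, match.group(0).encode(), hashlib.sha256)
        return f"<user:{digest.hexdigest()[:12]}>"
    # Illustrative email pattern; real PII detection covers far more.
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", token, text)

cleaned = pseudonymize("Contact jane.doe@example.com about her claim.")
```

The same input always yields the same token, which is what makes downstream erasure workable without keeping the raw value.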

  • 03Is SecNumCloud worth it for our company?

    For public services, operators of vital importance, and regulated sectors (healthcare, finance), yes. For standard SMBs, a public cloud with EU data residency is often enough. Regulatory mapping at the start of the mission settles the question.

  • 04How do you prove an AI isn't biased?

    An eval set that includes bias cases (balanced across protected classes, within GDPR limits). Performance-gap measurement per class. Trade-off documentation. If the training data comes from a vendor, we ask for the data sheet. No magic, just process.

  • 05We use OpenAI, is that compliant?

    It depends on the contract (Enterprise vs. standard API) and on the use case. OpenAI offers a Data Processing Agreement, a Zero Data Retention option, and EU data residency as a paid option. That's enough for many cases, not all. We help you decide based on your real constraints.

  • 06How long for a compliance project?

    For mapping + audit + target architecture, plan 6 to 8 weeks. For full implementation on an existing AI deployment, 12 to 20 weeks depending on technical complexity and the size of the AI estate.

  • 07Do you replace our legal counsel?

    No. We work with your DPO and your lawyers, we bring the technical and operational dimension. Legal validation stays in their hands. Our role: give them an actionable technical reading and a concrete framework to validate.

Let's build together

Ready to
automate everything

We listen. We analyze. We build. With you.