Compliance

EU AI Act Security Controls: A Practitioner's Mapping of Articles 9-15 to Concrete Technical Measures

Nicholas Falshaw
11 min read

The EU AI Act's obligations for high-risk AI systems became enforceable in 2026. For providers of those systems, Articles 9 through 15 impose concrete, auditable obligations. This post maps each article to specific technical controls and provides crosswalks to ISO 42001 and ISO 27001, so security teams that already operate under those frameworks can extend rather than rebuild.

Three observations frame what follows. First, most providers deploying high-risk AI systems also operate an ISO 27001-certified information security management system. The AI Act adds AI-specific controls; it does not replace existing ones. Second, the AI Act is intentionally framework-agnostic about the underlying certification path; ISO 42001 is becoming the de facto standard because it maps cleanly onto the Act's requirements. Third, the EU AI Office will issue sector-specific guidance over time, which will refine the practical interpretation. The mapping below is the framework-agnostic baseline.

Article 9 — Risk Management System

Article 9 requires a documented risk management system for high-risk AI systems, established and maintained throughout the lifecycle. Risks must be identified, analyzed, evaluated, and addressed. Residual risks must be communicated to the deployer.

Technical controls: a risk register specific to the AI system (separate from the organizational risk register), a per-risk treatment plan, residual risk acceptance with a named approver, and retraining triggers tied to risk re-evaluation.

ISO 42001 mapping: A.6 AI risk treatment, A.6.2 risk assessment process. ISO 27001 crosswalk: Annex A.5.7 (threat intelligence), A.5.20 (information security in supplier relationships).
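To make the first two controls concrete, here is a minimal sketch of an AI-specific risk register entry with a machine-checkable retraining trigger. The field names, the trigger mechanism, and the example values are illustrative assumptions, not anything prescribed by Article 9; the point is that each risk carries a treatment, a named approver for the residual risk, and a re-evaluation trigger that production monitoring can fire.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative structure only; field names are assumptions, not mandated by Article 9.
@dataclass
class RiskEntry:
    risk_id: str
    description: str
    severity: str                 # e.g. "low" / "medium" / "high"
    treatment_plan: str           # per-risk treatment
    residual_risk: str
    residual_risk_approver: str   # named approver, recorded for audit
    approved_on: date
    retraining_triggers: list[str] = field(default_factory=list)

def triggers_reevaluation(entry: RiskEntry, observed_events: set[str]) -> bool:
    """Return True if any observed production event matches a retraining trigger."""
    return any(t in observed_events for t in entry.retraining_triggers)

# Example: a drift alert in production forces a risk re-evaluation.
entry = RiskEntry(
    risk_id="AIR-007",
    description="Credit-scoring model underperforms on thin-file applicants",
    severity="high",
    treatment_plan="Augment training data; add per-segment accuracy gate",
    residual_risk="Residual accuracy gap below 2pp on thin-file segment",
    residual_risk_approver="Head of Model Risk",
    approved_on=date(2026, 3, 1),
    retraining_triggers=["segment_accuracy_drop", "data_drift_alert"],
)
assert triggers_reevaluation(entry, {"data_drift_alert"})
```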

Article 10 — Data and Data Governance

Article 10 sets quality criteria for training, validation, and testing data. Data must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. Bias detection and mitigation are explicit obligations.

Technical controls: data provenance tracking (source, collection date, license, consent basis), data quality validation pipelines, bias-detection metrics per protected attribute, data retention and deletion procedures with deletion proof.

ISO 42001 mapping: A.7 data management, A.7.4 data quality. ISO 27001 crosswalk: Annex A.5.34 (privacy, PII), A.8.10 (information deletion), A.8.11 (data masking).
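One way to make "bias-detection metrics per protected attribute" concrete is a demographic-parity check per group. The sketch below is an illustrative example, not a mandated metric; the record schema and the alert threshold are assumptions you would replace with values from your own risk assessment.

```python
from collections import defaultdict

def selection_rates(records: list[dict], attribute: str) -> dict:
    """Positive-outcome rate per value of a protected attribute.

    Each record is assumed to carry the protected attribute and a binary
    'outcome' field; this schema is illustrative, not an Article 10 requirement.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        group = r[attribute]
        totals[group] += 1
        positives[group] += r["outcome"]
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(records: list[dict], attribute: str) -> float:
    """Difference between the highest and lowest selection rate across groups."""
    rates = selection_rates(records, attribute)
    return max(rates.values()) - min(rates.values())

THRESHOLD = 0.2  # illustrative; set from your own risk assessment

records = [
    {"sex": "f", "outcome": 1}, {"sex": "f", "outcome": 0},
    {"sex": "m", "outcome": 1}, {"sex": "m", "outcome": 1},
]
gap = demographic_parity_gap(records, "sex")
if gap > THRESHOLD:
    print(f"Bias alert on 'sex': demographic parity gap {gap:.2f} exceeds {THRESHOLD}")
```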

Article 11 — Technical Documentation

Article 11 requires technical documentation drawn up before market placement and kept up to date. Annex IV specifies content: system description, design specifications, training data, validation results, monitoring plan.

Technical controls: model card with required fields, automated regeneration of validation results on each model version, version control of the documentation set itself.
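A minimal sketch of the model card control follows: a card regenerated on every model version, with a completeness check so the build fails when a required field is missing. The field set shown is an assumed subset of what Annex IV asks for, not the full list, and the CI wiring is implied rather than shown.

```python
import json
from datetime import datetime, timezone

# Illustrative field set; Annex IV requires more than is listed here.
REQUIRED_FIELDS = [
    "system_description", "intended_purpose", "model_version",
    "training_data_summary", "validation_results", "known_limitations",
]

def build_model_card(fields: dict, validation_results: dict) -> dict:
    """Assemble the card and fail loudly if a required field is missing."""
    card = {**fields,
            "validation_results": validation_results,
            "generated_at": datetime.now(timezone.utc).isoformat()}
    missing = [f for f in REQUIRED_FIELDS if f not in card]
    if missing:
        raise ValueError(f"Model card incomplete, missing: {missing}")
    return card

# Intended to run from CI on every model version so the documentation set
# stays current and lives under version control alongside the model artefacts.
card = build_model_card(
    {
        "system_description": "CV screening assistant",
        "intended_purpose": "Rank applications for human review",
        "model_version": "2.4.1",
        "training_data_summary": "See data provenance register DP-2026-03",
        "known_limitations": "Not validated for non-EU CV formats",
    },
    validation_results={"accuracy": 0.91, "f1": 0.88},
)
print(json.dumps(card, indent=2))
```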

Article 12 — Logging

Article 12 requires automatic recording of events while the AI system is operating. The records must enable identification of situations that may result in risks, traceability of system operation throughout its lifetime, and post-market monitoring.

Technical controls: a per-inference log entry (timestamp, input hash, output, model version, confidence where applicable), retention duration tied to product lifetime expectations, log integrity controls, and restricted access to log records with an audit trail.

ISO 42001 mapping: A.9 AI system operation, A.9.4 logging. ISO 27001 crosswalk: Annex A.8.15 (logging), A.8.16 (monitoring activities).
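The sketch below shows one possible shape for the per-inference log entry, with hash chaining as an illustrative integrity control: each entry embeds the hash of the previous one, so tampering with a historical record breaks the chain. The field names and the chaining scheme are assumptions, not the only way to satisfy Article 12.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_inference(prev_entry_hash: str, input_payload: bytes,
                  output: str, model_version: str,
                  confidence: float | None = None) -> dict:
    """Build one append-only log entry, chained to the previous entry's hash."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_hash": hashlib.sha256(input_payload).hexdigest(),
        "output": output,
        "model_version": model_version,
        "confidence": confidence,
        "prev_entry_hash": prev_entry_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

first = log_inference("genesis", b"applicant-42 features", "reject", "2.4.1", 0.83)
second = log_inference(first["entry_hash"], b"applicant-43 features", "accept", "2.4.1", 0.91)
```

Hashing the input rather than storing it verbatim also keeps the log usable as evidence without turning it into a second copy of potentially sensitive data, which matters for the Article 10 deletion obligations.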

Article 13 — Transparency and Information for Deployers

Article 13 requires the system to be sufficiently transparent for deployers to interpret the output and use the system properly. Instructions for use must be provided.

Technical controls: per-output confidence or uncertainty signal, documentation of known limitations and failure modes, deployer-facing instructions covering input requirements and output interpretation.
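As an illustration of the per-output confidence signal, the sketch below wraps each prediction in a deployer-facing structure that carries a calibrated confidence and a pointer into the instructions for use. The response shape, threshold, and section reference are assumptions, not anything Article 13 prescribes.

```python
from dataclasses import dataclass

# Illustrative deployer-facing response shape; not prescribed by Article 13.
@dataclass
class DeployerOutput:
    prediction: str
    confidence: float          # calibrated probability from the validation set
    limitations_note: str      # pointer into the instructions for use

LOW_CONFIDENCE = 0.6  # assumption; derive from your validation data

def wrap_prediction(prediction: str, confidence: float) -> DeployerOutput:
    """Attach an interpretation hint so the deployer knows when to trust the output."""
    note = ("Low confidence: route to human review per instructions for use."
            if confidence < LOW_CONFIDENCE
            else "Within validated operating range.")
    return DeployerOutput(prediction, confidence, note)

print(wrap_prediction("reject", 0.55))
```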

Article 14 — Human Oversight

Article 14 requires human oversight measures that allow humans to understand the system's capacities and limitations, monitor operation, decide when to use the system, override or reverse outputs, and stop the system.

Technical controls: a control plane separate from the inference plane, override mechanism that does not require model retraining or redeployment, kill switch with documented activation procedure, oversight operator training records, audit trail of override invocations.

Security implication: the control plane is a high-value target. Adversarial action there can either disable oversight or manipulate operators into accepting risky outputs. Treat the control plane as the most privileged zone in the AI system architecture.
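A minimal sketch of the separation follows: overrides and the kill switch live in a control plane outside the inference service, take effect without retraining or redeployment, and leave an audit trail. The class and method names are illustrative assumptions; in production the flag store would be an external, access-controlled service rather than in-process state.

```python
# Illustrative control-plane sketch; names and storage are assumptions.
class ControlPlane:
    def __init__(self):
        self._kill_switch = False
        self._overridden_outputs: dict[str, str] = {}
        self.audit_trail: list[tuple[str, str]] = []

    def activate_kill_switch(self, operator: str) -> None:
        """Stop the system entirely; activation is recorded for audit."""
        self._kill_switch = True
        self.audit_trail.append((operator, "kill_switch_activated"))

    def override(self, request_id: str, corrected_output: str, operator: str) -> None:
        """Replace a model output for a specific request without touching the model."""
        self._overridden_outputs[request_id] = corrected_output
        self.audit_trail.append((operator, f"override:{request_id}"))

    def resolve(self, request_id: str, model_output: str) -> str:
        """Called by the inference plane before returning any output."""
        if self._kill_switch:
            raise RuntimeError("System halted by human oversight")
        return self._overridden_outputs.get(request_id, model_output)

cp = ControlPlane()
cp.override("req-118", "escalate_to_human", operator="oversight-op-2")
assert cp.resolve("req-118", "reject") == "escalate_to_human"
```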

Article 15 — Accuracy, Robustness, and Cybersecurity

Article 15 is the cybersecurity-explicit article. High-risk AI systems must be designed to achieve appropriate levels of accuracy, robustness, and cybersecurity, and to perform consistently throughout their lifecycle.

Technical controls (accuracy): documented accuracy metrics per evaluation set, regression testing on each model update, and drift detection in production with thresholds.

Technical controls (robustness): adversarial example testing, out-of-distribution detection, fail-safe defaults when input is rejected, and redundancy where the AI system is a single point of failure.

Technical controls (cybersecurity): a threat model for the AI system specifically (the 7-class threat model from the agent-security post is one starting point), penetration testing covering AI-specific attacks (prompt injection, data poisoning, model extraction), and incident response procedures.

ISO 42001 mapping: A.6.2.6 risk treatment. ISO 27001 crosswalk: Annex A.8.7 (protection against malware), A.8.8 (technical vulnerabilities), A.8.29 (security testing).
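To tie the accuracy and robustness controls together, here is a minimal sketch of a drift check with a threshold and a fail-safe default on out-of-distribution input. The drift statistic (a simple z-score on the mean), the threshold, and the fallback action are illustrative assumptions; real deployments would use the drift detector and OOD method chosen in the validation work.

```python
import math
import statistics

# Illustrative thresholds; in practice they come from your validation baseline.
DRIFT_Z_THRESHOLD = 3.0
OOD_FALLBACK = "abstain_and_route_to_human"   # fail-safe default on rejected input

def drift_alert(baseline: list[float], live_window: list[float]) -> bool:
    """Flag drift when the live mean deviates from the baseline mean by more
    than DRIFT_Z_THRESHOLD standard errors (a deliberately simple statistic)."""
    mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
    se = sigma / math.sqrt(len(live_window))
    return abs(statistics.mean(live_window) - mu) / se > DRIFT_Z_THRESHOLD

def predict_with_failsafe(score: float, in_distribution: bool) -> str:
    """Fail safe: if the input is flagged out-of-distribution, emit no model
    decision at all and return the documented fallback instead."""
    if not in_distribution:
        return OOD_FALLBACK
    return "accept" if score >= 0.5 else "reject"

baseline = [0.42, 0.45, 0.47, 0.44, 0.46, 0.43]
live = [0.61, 0.63, 0.60, 0.62]
if drift_alert(baseline, live):
    print("Drift threshold exceeded: trigger risk re-evaluation (Article 9 loop)")
print(predict_with_failsafe(0.7, in_distribution=False))
```

Note how the drift alert feeds back into the Article 9 risk management loop: the retraining triggers in the risk register are what make "perform consistently throughout the lifecycle" operational rather than aspirational.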

Implementation Sequencing

For organizations with existing ISO 27001 certification, an AI Act readiness program in three phases over six months is realistic.

  • Months 1-2: AI system inventory, classification per AI Act risk tiers, gap analysis against Articles 9-15.
  • Months 3-4: implement missing technical controls (logging, oversight mechanisms, robustness testing). Extend ISO 27001 procedures with AI-specific addenda rather than separate frameworks.
  • Months 5-6: ISO 42001 readiness assessment, internal audit, gap remediation. Optional: pursue ISO 42001 certification to externalize the AI risk-management posture.

Organizations without existing ISO 27001 should plan a longer cycle. The AI Act assumes mature information security management as a foundation; building both in parallel doubles the timeline and the failure rate.

About the Author

Nicholas Falshaw is a Principal Security Architect with 17+ years of enterprise security experience across DAX-30 clients, KRITIS-regulated operators, and EU financial services. He authored the FwChange methodology and is currently focused on AI-system security architecture for regulated industries.



