Reference guide

ISO/IEC 42001

The international standard for AI Management Systems — clearly explained, logically structured and usable for organisations that want to develop, procure or deploy AI responsibly.

What this standard does

ISO/IEC 42001 defines requirements for an AI Management System (AIMS). It helps organisations design, deploy and maintain AI systems in a responsible, transparent and controllable way — much as ISO/IEC 27001 does for information security.

The goal is straightforward:

  • Manage the risks and impacts of AI systems on individuals and society.
  • Ensure responsible, transparent and explainable AI use across the organisation.
  • Build trust with clients, regulators and internal stakeholders.
  • Align with regulation such as the EU AI Act and existing management systems.

AI principles

Four core principles that make responsible AI concrete and open to discussion.

Accountability

Clear responsibilities for AI systems, from governance to operational roles and escalation paths.

Transparency

Understandable information about the operation, limitations and decisions of AI systems for relevant stakeholders.

Fairness

Protection against unintended bias, unfair outcomes and discriminatory effects in AI applications.

Safety & Security

Control of risks of physical, digital and organisational harm throughout the full lifecycle.

Management system clauses

The seven main clauses of the AIMS

ISO/IEC 42001 follows the ISO Harmonized Structure (formerly the High-Level Structure, HLS) shared by modern management system standards. These clauses connect context, leadership, planning and operation with monitoring and improvement of AI activities.

Cl. 4

Context of the organization

Determine internal and external issues, interested parties and the scope of the AI Management System (AIMS).

Cl. 5

Leadership

Top management demonstrates commitment, sets the AI policy and assigns roles, responsibilities and authorities.

Cl. 6

Planning

Identify risks and opportunities, assess AI impact and set measurable objectives for responsible AI use.

Cl. 7

Support

Provide resources, competences, awareness, communication and documented information around AI activities.

Cl. 8

Operation

Plan, implement and control operational AI processes, including impact assessments and lifecycle management.

Cl. 9

Performance evaluation

Monitor, measure, analyse and evaluate AIMS performance through internal audits and management reviews.

Cl. 10

Improvement

Manage non-conformities and drive continual improvement of the AI Management System in a structured way.

Annex A controls

The controls that make the AIMS concrete.

Annex A groups controls into domains that together make responsible AI management demonstrable — from policy and governance to data, lifecycle and use.

AI Policies

  • AI policy
  • Alignment with the organisation
  • Periodic review

Internal Organization

  • Roles & responsibilities
  • AI governance
  • Reporting of concerns

Resources for AI

  • Data resources
  • Tools & compute
  • Human expertise

Impact Assessment

  • AI system impact
  • Effects on individuals
  • Effects on society

AI System Lifecycle

  • Design & development
  • Verification & validation
  • Deployment & maintenance

Data for AI Systems

  • Data quality
  • Data provenance
  • Data preparation

Information for Interested Parties

  • Documentation for users
  • Incident communication
  • Transparency

Use of AI Systems

  • Responsible use
  • Intended use
  • Management of third-party AI

AI actors

The standard's requirements take concrete shape for each type of AI actor.

Each role in the AI chain has its own responsibilities. The guide shows where you need to record policy, controls and evidence, depending on your position in that chain.

AI Providers

  • Model development
  • Documentation & datasheets
  • Conformity statements

AI Developers

  • Engineering practices
  • Bias & robustness testing
  • Model versioning

AI Deployers

  • Operational controls
  • Monitoring in production
  • User instructions

AI Users & Subjects

  • Awareness & training
  • Feedback channels
  • Protection of rights

Core principles

Principles that keep ISO/IEC 42001 practical in your organisation.

Risk-based approach

Measures are proportional to AI risk, with explicit impact and risk assessments.

Human oversight

Humans stay in control of critical AI decisions and can intervene or correct where needed.

Lifecycle management

Manage AI systems from concept and data through deployment, monitoring and decommissioning.

Continual improvement

Lessons learned, audits and monitoring feed back into lasting improvements to policy and controls.
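
The risk-based approach above can be sketched as a small scoring routine: score a system on likelihood and impact, then choose a proportional response. The 1–5 scales, thresholds and example responses below are assumptions for illustration, not values defined in ISO/IEC 42001:

```python
# Illustrative sketch of a risk-based approach: the 1-5 scales,
# thresholds and suggested responses are assumptions, not values
# from ISO/IEC 42001.

def risk_level(likelihood: int, impact: int) -> str:
    """Map a likelihood x impact score (each 1-5) to a risk level."""
    score = likelihood * impact
    if score >= 15:
        return "high"    # e.g. full impact assessment, human oversight
    if score >= 6:
        return "medium"  # e.g. targeted controls, periodic review
    return "low"         # e.g. standard lifecycle controls

print(risk_level(5, 4))  # a likely, high-impact system
print(risk_level(1, 3))  # an unlikely, low-impact system
```

The point of the sketch is proportionality: the effort spent on controls, assessments and oversight scales with the assessed risk rather than being uniform across all AI systems.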

From guide to evidence

Make ISO/IEC 42001 directly applicable with templates.

Use the package to capture your AI policy, procedures, risk and impact assessments, lifecycle documentation and self-assessments in documents your team can use straight away.