CAIO – Specialized in Risk Management (ISO/IEC 42001)

Objective

Designed for organizations seeking to appoint a Chief AI Officer (CAIO) with the expertise required to implement a risk management system based on ISO/IEC 42001:2023, Information Technology – Artificial Intelligence – Management System.

The program provides participants with the theoretical and practical foundations necessary to initiate the implementation of an AI management system and to prepare for third-party audits, thereby strengthening governance and protecting business models.

Target Audience

This program is designed for executives, senior management, auditors, compliance officers, and professionals involved in activities related to:

  • Information technology
  • Deployment of artificial intelligence models
  • Risk management consulting
  • Corporate governance
  • Data science
  • Auditing
  • Corporate integrity and compliance programs
  • Supervision of AI systems
  • Risk management functions
  • Government and public policy
  • Data protection and privacy
  • Compliance and regulatory affairs

Program Modules and Objectives

Module 1: Artificial Intelligence Concepts and Terminology (ISO/IEC 22989:2022)

Objective: Provide a comprehensive understanding of the fundamental concepts and terminology of artificial intelligence according to the ISO/IEC 22989:2022 standard. Participants will learn to identify and apply these concepts within the context of AI management in their organizations, while also understanding emerging trends and the strategic value of data.

Module 2: Integration of Regulatory Frameworks for AI Risk Management

Objective: Understand how international regulations and local legal frameworks impact the implementation and operation of an AI risk management system under ISO/IEC 42001. Participants will learn how to integrate regulatory requirements into AI risk management frameworks, ensuring compliance and mitigating legal risks.

Module 3: AI Policy and Ethical Impact Assessments

Objective: Establish responsible policies for the use of AI and develop the capacity to evaluate the ethical and social impacts of AI systems. Participants will learn how to formulate and implement AI policies and conduct ethical impact assessments aligned with international best practices.

Module 4: Implementation of the AI Management System (ISO/IEC 42001)

Objective: Develop a comprehensive AI risk management system in accordance with the criteria and guidelines established in ISO/IEC 42001:2023. Participants will learn to identify, assess, and treat risks associated with the use of AI within their organizations.

Module 5: AI Crisis Management

Objective: Learn how to design crisis management strategies for AI deployment and incident response. Participants will develop the ability to communicate effectively with stakeholders and the public during AI-related crises.

Preparation for Case Defense Before an Evaluation Panel

Objective: Participants will work in group sessions to develop and present a practical case. During this process, they will produce four deliverables designed to support the implementation of an AI management framework.

Certification

Upon successful evaluation, participants receive a certification recognizing them as Artificial Intelligence Compliance Officers specialized in AI Risk Management under ISO/IEC 42001, issued by NYCE, Mexico’s National Standards Body and a leading organization in evaluation and certification, with representation in more than 85 countries.

Download course brochure

Register here

Contact

We welcome your inquiries. Please use the form below to request information, share feedback, or explore potential collaboration opportunities.

Calle Newton 186, Polanco, Miguel Hidalgo, 11560 Ciudad de México

Call us

(52) 55 4358 0019
(52) 55 1334 4054
(52) 55 2648 5116

Schedule

Monday to Friday from 9:00 a.m. to 6:00 p.m.
