Foreword
The objective of this article is to provide a cohesive overview of the controls outlined in Annex B of the recently released ISO/IEC 42001:2023 standard. While not intended to be an exhaustive examination of the entire standard, this article serves as a starting point for understanding the specific controls within Annex B. The focus is on facilitating comprehension and offering insight into the purpose and scope of these controls, allowing readers to gain a foundational understanding of their relevance within the ISO/IEC 42001:2023 framework.
AI Policy
Crafting a robust policy for the development of AI systems is essential for organizations seeking to navigate the complexities of artificial intelligence. This policy document should serve as a comprehensive guide, taking into consideration a myriad of factors crucial to its success. Business strategy, organizational values, and the organization’s risk appetite are key elements that must be factored into the policy framework. Equally important is an evaluation of the risk associated with AI systems, adherence to legal requirements, understanding the organizational risk environment, and assessing the impact on stakeholders.
Within this policy documentation, it becomes imperative to integrate guiding principles that align with the organization’s values and culture. Furthermore, the inclusion of processes for handling deviations and exceptions to policies ensures adaptability and flexibility in the face of evolving challenges. Specific aspects like managing AI resources, conducting impact assessments, and overseeing system development must also be explicitly addressed to provide a comprehensive framework.
These policies should not exist in isolation; they must guide the entire lifecycle of AI systems, influencing their development, purchase, operation, and usage. The organization is tasked with identifying and addressing how the introduction of the AI policy affects existing policies in other domains. A careful examination is necessary to ensure alignment and to implement any necessary updates in accordance with the AI policy.
To maintain the effectiveness of the AI policy, regular reviews are crucial. These reviews should be conducted at planned intervals or as needed to guarantee the ongoing suitability, adequacy, and effectiveness of the policy. The establishment of a designated role for the adjustment and evaluation of the AI policy is recommended, empowering individuals to assess opportunities for improvement in response to a changing environment and circumstances. This proactive approach ensures that the AI policy remains a dynamic and robust guide for the organization’s engagement with artificial intelligence.
Internal Organization
This control aims to instill accountability within the organization, ensuring a responsible approach to the implementation, operation, and management of AI systems. Defining clear roles and responsibilities for AI is crucial for accountability across the organization. These roles must align with the organization’s needs, taking into account the AI policy, objectives, and associated risks. The standard outlines specific areas of responsibility, including AI system impact assessment, security, safety, privacy, performance, and human oversight.
To foster accountability, the standard mandates a process for reporting concerns about the organization’s role in relation to AI systems. It provides a comprehensive set of mechanisms for this reporting process, emphasizing transparency and proactive management of concerns.
Resources for AI systems
This control underscores the importance of comprehensively understanding the AI system and its associated risks by meticulously documenting all assets and resources involved. These encompass a broad spectrum, including but not limited to data resources, tooling resources, system and computing resources, human resources, and AI system components. The standard provides specific recommendations on how to document each of these resources, acknowledging that the outlined recommendations may not be exhaustive. Organizations are encouraged to tailor their documentation based on the unique context and complexity of their AI system.
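As an illustration of what such documentation might look like in practice, a resource inventory could be kept as structured records. The field names, categories, and example entries below are hypothetical, not prescribed by the standard; this is a minimal sketch of the idea.

```python
from dataclasses import dataclass, field

@dataclass
class AIResource:
    """One entry in an AI system resource inventory (illustrative fields only)."""
    name: str
    category: str          # e.g. "data", "tooling", "compute", "human", "component"
    owner: str             # accountable role or team
    description: str = ""
    risks: list[str] = field(default_factory=list)

# A minimal inventory for a hypothetical credit-scoring system
inventory = [
    AIResource("loan-applications-2023", "data", "Data Engineering",
               "Training dataset of historical loan applications",
               risks=["contains personal data"]),
    AIResource("feature-pipeline", "tooling", "ML Platform",
               "Batch job that derives model features"),
]

# Every resource should have an accountable owner recorded
assert all(r.owner for r in inventory)
```

Keeping the inventory in a machine-readable form makes it straightforward to check completeness (for example, that every resource has an owner) as the system evolves.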
Assessing impacts of AI systems
This ISO/IEC 42001 Annex B control focuses on addressing the potential impact of an AI system and emphasizes the necessity for organizations to document and assess this impact comprehensively. The AI system’s impact assessment procedure is designed to evaluate its effects on various aspects, including the legal position and life opportunities of individuals, the physical or psychological well-being of individuals, universal human rights, and societies at large.
The procedure outlines circumstances triggering an AI system impact assessment, considering factors such as the system’s criticality, complexity, level of automation, and the sensitivity of data types. The assessment process involves identification, evaluation, treatment, and documentation. The responsible individual for conducting the assessment, the utilization of the assessment results, and potential impacts on individuals and societies based on the system’s intended purpose are crucial elements.
Documentation of impact assessments should be retained and updated, aligning with the original AI Impact Assessment documentation. Key considerations for documentation include the intended use of the AI system, both positive and negative impacts, predictable failures, system complexity, and the role of humans in relation to the system.
The impact assessment extends to individuals, groups, and societal impacts, requiring organizations to align with their governance principles, AI policies, and objectives. Areas of impact encompass fairness, accountability, transparency, security and privacy, safety and health, financial consequences, accessibility, and human rights. Consultations with domain experts are recommended to gain a comprehensive understanding of potential impacts.
Additionally, the impact assessment should evaluate societal impacts, considering both beneficial and detrimental aspects such as environmental sustainability, economic implications, and health and safety. This holistic approach ensures organizations are mindful of the broader consequences of deploying AI systems and are equipped to make informed decisions in line with ethical and responsible practices.
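To make the assessment elements above concrete, an organization might capture each assessment as a structured record covering the intended use, affected parties, impact areas, and chosen treatments. The record layout and all example values below are assumptions for illustration, not a format defined by the standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    """Illustrative AI system impact assessment record (field names are assumptions)."""
    system_name: str
    assessor: str            # the individual responsible for conducting the assessment
    assessed_on: date
    intended_use: str
    affected_groups: list[str]
    impact_areas: dict[str, str]          # area -> summary of identified impact
    treatments: list[str] = field(default_factory=list)

assessment = ImpactAssessment(
    system_name="resume-screener",
    assessor="AI Governance Lead",
    assessed_on=date(2024, 1, 15),
    intended_use="Shortlisting job applicants for human review",
    affected_groups=["job applicants", "recruiters"],
    impact_areas={
        "fairness": "risk of bias against protected groups",
        "transparency": "applicants must be informed of automated screening",
    },
    treatments=["bias testing before each release", "human review of all rejections"],
)
```

A record like this can then be retained and updated alongside the system, as the control requires, and reviewed whenever the intended purpose or operating context changes.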
AI systems life cycle
Management Guidance
Organizations are advised to articulate and document clear objectives that will serve as guiding principles throughout the development of AI systems. These objectives should be integrated into the entire development life cycle, ensuring a cohesive approach to achieving them.
To uphold responsible design and development practices, organizations must define and document specific processes tailored for AI systems. This encompasses various elements such as life cycle stages, testing requirements, expectations regarding training data, and protocols for change control. The standard provides an extensive, though not necessarily exhaustive, list of considerations that should be taken into account during this definition and documentation process. By incorporating these measures, organizations can foster a structured and ethical approach to the design and development of AI systems.
Development Life Cycle
The organization is mandated to thoroughly document and specify requirements for both new AI systems and substantial enhancements to existing ones. This documentation should include the rationale for system development, its goals, and details on training the model and meeting data requirements.
Moving through the system design and development phase, the organization is required to document key aspects aligned with objectives and specified requirements. This encompasses critical design choices, such as the chosen machine learning approach, methods for assessing model quality, hardware and software considerations, and security protocols, among other factors.
Verification and validation measures must be documented, outlining a comprehensive plan for evaluating the entire system’s risk-related impacts. This includes assessing the interpretability of AI system outputs by relevant decision-makers. Factors leading to potential performance deviations should be clearly outlined.
A detailed deployment plan, ensuring compliance with specified requirements, including performance, testing, and necessary management approvals, should be documented. Ongoing operation documentation should cover essential elements like system and performance monitoring, repair procedures, updates, and support. For AI systems evolving through continuous learning, supervision and safeguards against security threats, such as data poisoning, are crucial.
Determining necessary technical documentation, including technical limitations, usage instructions, and a general description of the system’s intended purpose, is crucial. Additionally, life cycle-related documentation may encompass design and system architecture details, assumptions made, impact assessments, and more. Technical information related to responsible AI system operation must also be documented.
The organization should decide the phase in the AI system life cycle at which logging should be enabled. This logging should include traceability information and support detection of system performance outside of its intended operating conditions. These comprehensive documentation practices ensure transparency, accountability, and effective management throughout the entire life cycle of AI systems.
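A minimal sketch of such logging is shown below: each prediction is logged with a trace identifier and timestamp, and inputs outside an intended operating range are flagged. The model, the operating range, and the record fields are all illustrative assumptions, not requirements of the standard.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-system")

# Hypothetical intended operating range for a single numeric input feature
INTENDED_RANGE = (0.0, 100.0)

def predict_and_log(value: float) -> float:
    """Run a toy model and emit a traceable log record for each prediction."""
    prediction = value * 0.5  # stand-in for a real model
    record = {
        "trace_id": str(uuid.uuid4()),  # traceability across downstream systems
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": value,
        "prediction": prediction,
        "out_of_bounds": not (INTENDED_RANGE[0] <= value <= INTENDED_RANGE[1]),
    }
    if record["out_of_bounds"]:
        log.warning("input outside intended operating conditions: %s",
                    json.dumps(record))
    else:
        log.info(json.dumps(record))
    return prediction

predict_and_log(42.0)    # normal operation
predict_and_log(250.0)   # flagged: outside the intended range
```

In a real deployment these records would feed a monitoring pipeline so that out-of-bounds activity can trigger review rather than merely being written to a log file.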
Data for AI systems
This ISO/IEC 42001 Annex B control centers on the crucial aspects of defining, documenting, and implementing data management processes, specifically addressing privacy, security implications, transparency, accuracy, and more in the context of AI systems.
The organization is required to thoroughly document the methods employed for acquiring and selecting data for use in the AI system. The specifics of this documentation will be heavily dependent on the attributes and type of data being utilized.
Furthermore, the organization must define and document the quality requirements for the data used in developing the AI system to ensure alignment with the specified performance and accuracy standards. This step is crucial in maintaining the integrity of the AI system’s output.
A robust process for recording the provenance of data used in AI systems must be established and documented by the organization. This includes information about the creation, updates, transcriptions, and other relevant details. According to the standard, data sharing and transformations must also be considered in the context of data provenance and should be accurately recorded.
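One way to sketch such a provenance process, under the assumption that an append-only event log suffices, is shown below. The event names, dataset identifiers, and hash-chaining scheme are illustrative choices, not mechanisms mandated by the standard.

```python
import hashlib
import json
from datetime import datetime, timezone

# In practice this would be durable, append-only storage, not an in-memory list
provenance_log: list[dict] = []

def record_provenance(dataset: str, event: str, actor: str, details: str) -> dict:
    """Append one provenance event (creation, update, transformation, sharing...)."""
    entry = {
        "dataset": dataset,
        "event": event,        # e.g. "created", "transformed", "shared"
        "actor": actor,
        "details": details,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Chain a hash of the previous entry so later tampering is detectable
    prev = provenance_log[-1]["hash"] if provenance_log else ""
    payload = prev + json.dumps(entry, sort_keys=True)
    entry["hash"] = hashlib.sha256(payload.encode()).hexdigest()
    provenance_log.append(entry)
    return entry

record_provenance("loan-applications-2023", "created", "data-eng",
                  "initial export from source system")
record_provenance("loan-applications-2023", "transformed", "ml-platform",
                  "personal identifiers removed")
```

Recording transformations and sharing events in the same chain, as the standard suggests, gives reviewers a single ordered history of how each dataset reached its current form.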
Prior to use in an AI system, data needs to undergo preparation to yield meaningful outputs. The organization is obligated to document the criteria used for selecting and preparing the data, providing clarity on the steps taken to ensure data readiness for the AI system. This meticulous documentation ensures a transparent and accountable approach to data management in the development and operation of AI systems.
Information for interested parties
This ISO/IEC 42001 Annex B control is designed to ensure that interested parties are well-informed and capable of comprehending and evaluating the risks associated with an AI system, emphasizing transparency and communication.
The organization is mandated to furnish users with necessary documentation and information pertaining to the AI system. This encompasses both technical details and general notifications, facilitating an understanding of how users interact with the AI system.
A structured process must be in place to enable interested parties to report impacts, issues, and failures related to the system. This feedback loop is integral to maintaining a continuous awareness of the system’s performance and addressing any concerns raised by users.
The organization is also required to define and document a plan for communicating incidents to users. The nature and extent of this documentation may vary based on legal requirements and the specific type of incident that has occurred. Therefore, the organization must thoroughly understand its obligations concerning incident reporting.
Furthermore, the organization must determine and document the information that will be reported about the AI system to various interested parties, which may include jurisdictions, regulatory authorities, or customers. This documentation ensures a clear and consistent approach to communication, aligning with legal and regulatory requirements and promoting transparency in the operation of the AI system.
Use of AI systems
This ISO/IEC 42001 Annex B control is in place to ensure that the organization responsibly employs AI systems in accordance with established organizational policies. Key elements include defining and documenting processes for the responsible use of AI systems, obtaining required approvals, and adhering to legal requirements.
The organization is tasked with creating and documenting guidelines for the responsible usage of the AI system, taking into account contextual factors. These guidelines may encompass objectives such as fairness, accountability, reliability, safety, and more. Subsequently, the organization is responsible for implementing processes to ensure these objectives are met, incorporating mechanisms like monitoring and human review.
An additional aspect of this control involves verifying that the AI system is used in alignment with its intended purpose, as outlined in the documentation. This verification can be achieved through human oversight and monitoring via data logs. These measures collectively contribute to the responsible and ethical use of AI systems within the organization, promoting transparency, accountability, and adherence to predefined guidelines and objectives.
Third-party and customer relationships
This ISO/IEC 42001 Annex B control is implemented to ensure that the organization comprehensively understands its responsibilities and accountability when involving third parties at any stage of the AI system life cycle.
Within the AI system life cycle, the organization is required to delineate and allocate accountability and responsibilities among itself, its partners, suppliers, customers, and any third parties involved. This allocation ensures clarity and transparency regarding each entity’s role in the AI system development and usage.
A crucial aspect of this control involves establishing processes with suppliers to align their services, products, or materials with the organization’s approach to AI system development and use. This may necessitate corrective action with suppliers to ensure adherence to the organization’s approach.
Additionally, the organization must ensure that its development approach takes into account the needs and expectations of the customer. This consideration may occur during the design or engineering phase and can be specified through requirements. It is imperative for the organization to provide customers with information about the risks associated with using the service or product, allowing them to assess and understand the potential risks involved in their engagement with the AI system. This disclosure promotes transparency and empowers customers to make informed decisions regarding the associated risks.