The regulatory landscape for AI continues to take shape as we close out a landmark year for AI capabilities and adoption. In just the past few weeks, the EU institutions reached agreement on the draft regulation known as the AI Act, and the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) published their first standard setting out requirements and guidance for AI governance and risk management.
Below are the key points you need to know about both the ISO/IEC 42001 standard and the EU AI Act.
ISO/IEC 42001
The ISO/IEC 42001 document outlines the requirements and guidelines for establishing, implementing, maintaining, and continually improving an AI management system within an organization. It applies to any organization, regardless of its size, type, or nature, that provides or uses products or services utilizing AI systems, and it aims to help organizations develop, provide, or use AI systems responsibly.

Key areas covered in the standard include:
- Scope and Normative References: The document defines its scope and references other necessary documents.
- Terms and Definitions: It outlines specific terms and definitions used throughout the document, ensuring clarity and consistency.
- Context of the Organization: This section addresses understanding the organization and its context, including the needs and expectations of interested parties and determining the scope of the AI management system.
- Leadership: Focuses on leadership commitment, AI policy, and assigning roles, responsibilities, and authorities.
- Planning: This involves actions to address risks and opportunities, AI objectives, planning to achieve them, and planning of changes.
- Support: It covers resources, competence, awareness, communication, and documented information.
- Operation: This section includes operational planning and control, AI risk assessment, AI risk treatment, and AI system impact assessment.
- Performance Evaluation: It entails monitoring, measurement, analysis, evaluation, internal audit, and management review.
- Improvement: The document discusses continual improvement, nonconformity, and corrective action.
In addition, the standard defines control objectives and controls across the following areas:
- Policies Related to AI:
  - Objective: Provide management direction and support for AI systems in line with business requirements.
  - Controls: Include documentation of an AI policy, alignment with other organizational policies, and regular review of the AI policy.
- Internal Organization:
  - Objective: Establish accountability within the organization for the implementation, operation, and management of AI systems.
  - Controls: Define and allocate AI roles and responsibilities and establish a process for reporting concerns related to AI systems.
- Resources for AI Systems:
  - Objective: Ensure that the organization accounts for all resources (including AI system components and assets) to fully understand and address risks and impacts.
  - Controls: Involve documentation of the resources required for AI system life cycle stages and other AI-related activities.
- Assessing Impacts of AI Systems:
  - Objective: Assess AI system impacts on individuals, groups, and societies affected by the AI system throughout its life cycle.
  - Controls: Establish a process for AI system impact assessment and document these assessments.
- AI System Life Cycle:
  - Objective: Define criteria and requirements for each stage of the AI system life cycle.
  - Controls: Include management guidance for AI system development, specification of AI system requirements, and documentation of AI system design and development.
- Data for AI Systems:
  - Objective: Ensure understanding of the role and impacts of data in AI systems throughout their life cycles.
  - Controls: Define data management processes, data acquisition, data quality requirements, data provenance, and data preparation methods.
- Information for Interested Parties of AI Systems:
  - Objective: Ensure relevant parties have the information necessary to understand and assess the risks and impacts of AI systems.
  - Controls: Include system documentation and information for users, external reporting, communication of incidents, and information sharing with interested parties.
- Use of AI Systems:
  - Objective: Ensure that the organization uses AI systems responsibly and in accordance with organizational policies.
  - Controls: Define processes for responsible use of AI systems and identify objectives to guide responsible use.
- Third-party and Customer Relationships:
  - Objective: Ensure understanding and accountability when third parties are involved at any stage of the AI system life cycle.
  - Controls: Allocate responsibilities between the organization, partners, suppliers, and customers, and establish processes for managing these relationships.
EU AI Act
Members of the European Parliament have reached consensus on a pivotal piece of legislation designed to shape the use of artificial intelligence (AI) across Europe, ensuring its alignment with fundamental rights and democratic values while fostering an environment in which businesses can innovate and grow. Negotiators from the Parliament and the Council settled on the terms of the Artificial Intelligence Act, which is intended to safeguard fundamental rights, democracy, and the rule of law from the risks posed by high-impact AI technologies while positioning Europe as a frontrunner in the AI domain. The act establishes obligations proportionate to the risk and impact level of different AI applications.

Key Prohibitions:
Acknowledging the serious risks certain AI applications could pose to citizens' rights and democracy, the legislators agreed to ban:
- AI that categorizes individuals based on sensitive traits such as political views, religious beliefs, or racial characteristics.
- The indiscriminate harvesting of facial recognition data from the internet or CCTV for database creation.
- The use of emotion recognition systems in work and educational settings.
- Social scoring systems that judge individuals based on social behavior or personal traits.
- AI tools designed to manipulate human behavior, undermining free will.
- AI that preys on individuals’ vulnerabilities based on factors like age, disability, or socio-economic status.