AI Act Guide Version 1.1 – September 2025
The "AI Act Guide" offers an overview of the main aspects of the AI Act, the legal framework that regulates Artificial Intelligence (AI) within the European Union (EU). The guide aims to provide insight into the rules that apply to organisations that develop, deploy, or use AI systems. The legal text of the AI Act always takes precedence over this guide, and readers are encouraged to give feedback to improve future versions.

Scope and Objectives of the AI Act:
The AI Act aims to ensure the responsible development and use of AI, protecting the safety, health, and fundamental rights of individuals. It applies to businesses, governments, and other organisations operating within the EU. The regulations are being introduced in phases, with some prohibitions already in effect.

Key Steps for Organisations:
The guide outlines a four-step process for organisations to determine how the AI Act applies to them:
- AI Classification: Ascertain whether the system qualifies as "AI" according to the AI Act's definition.
- Risk Assessment: Determine which of the defined risk categories the AI system falls under (a schematic sketch of these two steps follows the category list below).
- Role Identification: Identify whether the organisation is a provider or a deployer of the AI system.
- Obligations: Understand the obligations that must be complied with, based on the AI system's risk category and the organisation's role.
Risk Categories:
- Prohibited AI Practices: These are AI systems that pose an unacceptable risk to society and are banned. Examples include systems that manipulate human behaviour, exploit vulnerabilities, or engage in social scoring.
- High-Risk AI Systems: These systems have the potential to cause significant harm to people's health, safety, or fundamental rights. They are subject to strict requirements and conformity assessments. High-risk AI systems are further divided into:
  - High-Risk Products: AI systems that are also subject to existing product regulations, such as those used in medical devices or machinery.
  - High-Risk Applications: AI systems used in specific areas such as law enforcement, education, employment, or essential services.
- General Purpose AI Models and Systems: These are subject to specific information and documentation requirements; in certain cases, notably models posing systemic risk, additional risk-mitigation obligations apply.
- Generative AI and Chatbots: These applications are subject to specific transparency requirements, which apply whether or not the system also qualifies as high-risk.
- Other AI: AI systems not covered by the categories described above.
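To make the first two steps concrete, below is a minimal Python sketch of this triage. Every name in it (`SystemProfile`, `triage`, the boolean flags) is an illustrative placeholder rather than terminology from the Act, and the real legal tests are far more nuanced. In practice the categories can also overlap, for example a chatbot can simultaneously be high-risk, so the sketch simply returns the most restrictive match.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class RiskCategory(Enum):
    PROHIBITED = auto()        # banned outright
    HIGH_RISK = auto()         # strict requirements and conformity assessment
    GENERAL_PURPOSE = auto()   # information requirements (more if systemic risk)
    TRANSPARENCY = auto()      # generative AI / chatbots
    OTHER = auto()             # none of the above

@dataclass
class SystemProfile:
    """Illustrative flags standing in for the Act's legal tests."""
    qualifies_as_ai: bool                  # step 1: meets the AI definition
    prohibited_practice: bool              # manipulation, exploitation, social scoring
    regulated_product_or_listed_use: bool  # high-risk product or application area
    general_purpose: bool                  # general purpose AI model or system
    generative_or_chatbot: bool            # content generation or human interaction

def triage(p: SystemProfile) -> Optional[RiskCategory]:
    """Return the most restrictive matching category, or None if not 'AI'."""
    if not p.qualifies_as_ai:
        return None  # outside the AI Act's definition of AI entirely
    if p.prohibited_practice:
        return RiskCategory.PROHIBITED
    if p.regulated_product_or_listed_use:
        return RiskCategory.HIGH_RISK
    if p.general_purpose:
        return RiskCategory.GENERAL_PURPOSE
    if p.generative_or_chatbot:
        return RiskCategory.TRANSPARENCY
    return RiskCategory.OTHER

# Example: a CV-screening tool used in hiring (employment is a listed high-risk area).
cv_screener = SystemProfile(True, False, True, False, False)
assert triage(cv_screener) is RiskCategory.HIGH_RISK
```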
Roles under the AI Act:
- Providers: Entities that develop AI systems, or commission their development, and place them on the market or put them into service. Providers have the strictest obligations.
- Deployers: Entities that use AI systems under their authority. Deployers also have obligations to mitigate risks and ensure responsible use.
Obligations for Providers:
Providers of high-risk AI systems must comply with a range of requirements, including:
- Establishing a risk management system.
- Ensuring data quality and governance.
- Creating technical documentation.
- Maintaining records (logs); a minimal sketch of this requirement appears after this list.
- Providing transparency and information.
- Ensuring human oversight.
- Ensuring accuracy, robustness, and cybersecurity.
- Implementing a quality management system.
- Monitoring the AI system's performance.
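To give one of these requirements some shape, here is a minimal sketch of what the record-keeping (logging) obligation could look like in practice. The file name, event types, and record fields are assumptions for illustration only; the Act sets its own requirements for what logs must capture and how long they must be kept, so this shows the shape of an implementation, not a compliant one.

```python
import json
import time
import uuid

def log_event(logfile, event_type: str, payload: dict) -> None:
    """Append one structured record per system event (append-only JSON lines).

    A real implementation would also need integrity protection, access
    control, and a retention policy; this shows only the basic structure.
    """
    record = {
        "id": str(uuid.uuid4()),   # unique record identifier (assumed scheme)
        "timestamp": time.time(),  # when the event occurred
        "event": event_type,       # e.g. "inference", "human_override" (assumed)
        "payload": payload,        # references to inputs, outputs, operators
    }
    logfile.write(json.dumps(record) + "\n")

with open("ai_system_events.jsonl", "a", encoding="utf-8") as f:
    log_event(f, "inference", {"input_ref": "doc-123", "decision": "flagged"})
    log_event(f, "human_override", {"operator": "op-7", "reason": "false positive"})
```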
Obligations for Deployers:
Deployers of high-risk AI systems must comply with requirements including:
- Using the system in accordance with the provider's instructions for use.
- Assigning human oversight.
- Ensuring data relevance and representativeness.
- Monitoring system operation; a sketch of such a monitoring wrapper appears after this list.
- Informing the provider and ceasing use if the system no longer complies with requirements.
- Informing relevant parties about potential risks and serious incidents.
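The deployer duties of monitoring, ceasing use, and informing the provider can be pictured as a thin wrapper around the deployed system, as in the hypothetical sketch below. The `predict` callable, the `notify_provider` hook, and the error-budget threshold are all assumptions; in reality, deciding when use must stop is a legal and organisational judgement, not a counter.

```python
from typing import Any, Callable

class MonitoredSystem:
    """Wrap a deployed AI system so every call is monitored.

    `predict` stands in for the deployed system and `notify_provider` for
    whatever reporting channel the deployer uses; `error_budget` is an
    illustrative stop condition only.
    """

    def __init__(self, predict: Callable[[Any], Any],
                 notify_provider: Callable[[str], None],
                 error_budget: int = 10) -> None:
        self.predict = predict
        self.notify_provider = notify_provider
        self.error_budget = error_budget
        self.errors = 0
        self.suspended = False

    def __call__(self, x: Any) -> Any:
        if self.suspended:
            # Ceasing use: refuse further calls until the provider responds.
            raise RuntimeError("system suspended pending provider review")
        try:
            return self.predict(x)
        except Exception as exc:
            self.errors += 1
            if self.errors >= self.error_budget:
                self.suspended = True
                self.notify_provider(f"repeated failures, last error: {exc}")
            raise
```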
The original document can be accessed here.