AI Act Guide Version 1.1 – September 2025

The "AI Act Guide" offers an overview of the main aspects of the AI Act, the legal framework that regulates Artificial Intelligence (AI) within the European Union (EU). The guide aims to provide insights into the rules and regulations for organisations that develop, deploy, or use AI systems. It stresses that the legal text of the AI Act always takes precedence over the guide itself, and it encourages users to give feedback to improve future versions.

Scope and Objectives of the AI Act: The AI Act aims to ensure the responsible development and use of AI, protecting the safety, health, and fundamental rights of individuals. It applies to businesses, governments, and other organisations operating within the EU. The regulations are being introduced in phases, with some prohibitions already in effect.

Key Steps for Organisations: The guide outlines a four-step process for organisations to determine how the AI Act applies to them:

  1. Risk Assessment: Determine if the AI system falls under any of the defined risk categories.
  2. AI Classification: Ascertain whether the system qualifies as "AI" according to the AI Act's definition.
  3. Role Identification: Identify whether the organisation is a provider or a deployer of the AI system.
  4. Obligations: Understand the obligations that must be complied with based on the AI system's risk category and the organisation's role.
Risk Categories: The AI Act categorises AI systems based on risk levels (a minimal triage sketch follows this list):
  • Prohibited AI Practices: These are AI systems that pose an unacceptable risk to society and are banned. Examples include systems that manipulate human behaviour, exploit vulnerabilities, or engage in social scoring.
  • High-Risk AI Systems: These systems have the potential to cause significant harm to people's health, safety, or fundamental rights. They are subject to strict requirements and conformity assessments. High-risk AI systems are further divided into:
    • High-Risk Products: AI systems that are also subject to existing product regulations, such as those used in medical devices or machinery.
    • High-Risk Applications: AI systems used in specific areas such as law enforcement, education, employment, or essential services.
  • General Purpose AI Models and Systems: These models and systems are subject to specific information requirements; models that pose systemic risks must comply with additional risk-mitigation requirements.
  • Generative AI and Chatbots: These applications are subject to specific transparency requirements, which apply whether or not the system is also a high-risk system.
  • Other AI: AI systems not covered by the categories described above.
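Taken together, the four steps and the risk tiers amount to a triage: check the most severe category first and fall through to the least. The Python sketch below is purely illustrative; the SystemProfile fields are hypothetical yes/no stand-ins for a real legal assessment, and the Act's actual tests are far more nuanced.

from dataclasses import dataclass
from enum import Enum, auto


class RiskTier(Enum):
    PROHIBITED = auto()    # banned practices, e.g. social scoring
    HIGH_RISK = auto()     # regulated products or sensitive application areas
    TRANSPARENCY = auto()  # chatbots and generative AI content
    MINIMAL = auto()       # "other AI", outside the Act's requirements


@dataclass
class SystemProfile:
    # Hypothetical answers a legal assessment would produce; placeholders only.
    prohibited_practice: bool = False
    regulated_product: bool = False       # e.g. medical device, machinery
    high_risk_application: bool = False   # e.g. employment, law enforcement
    chatbot_or_generative: bool = False


def classify(profile: SystemProfile) -> RiskTier:
    # Check tiers from most to least severe, mirroring the guide's ordering.
    if profile.prohibited_practice:
        return RiskTier.PROHIBITED
    if profile.regulated_product or profile.high_risk_application:
        return RiskTier.HIGH_RISK
    if profile.chatbot_or_generative:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL


# Example: a CV-screening tool used for employment decisions.
print(classify(SystemProfile(high_risk_application=True)))  # RiskTier.HIGH_RISK

In practice the categories can overlap (a high-risk system that generates content also carries the transparency duties), so a real assessment would record every applicable obligation rather than a single tier.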
Definition of AI: The AI Act defines an AI system as a "machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." This definition includes elements such as autonomy, adaptiveness, and the capacity to infer outputs from inputs.

Roles and Responsibilities:
  • Providers: Entities that develop or commission the development of AI systems and place them on the market or put them into service. Providers have the strictest obligations.
  • Deployers: Entities that use AI systems under their authority. Deployers also have obligations to mitigate risks and ensure responsible use.
Obligations for High-Risk AI Systems:
Providers of high-risk AI systems must comply with a range of requirements, including:
  1. Establishing a risk management system.
  2. Ensuring data quality and governance.
  3. Creating technical documentation.
  4. Maintaining records (logs); a minimal logging sketch follows this list.
  5. Providing transparency and information.
  6. Ensuring human oversight.
  7. Ensuring accuracy, robustness, and cybersecurity.
  8. Implementing a quality management system.
  9. Monitoring the AI system's performance.
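The record-keeping obligation calls for automatically generated logs of the system's operation. The sketch below shows what structured, machine-readable event logging might look like, using Python's standard logging module; the field names are illustrative assumptions, not a schema prescribed by the Act.

import json
import logging
from datetime import datetime, timezone

# Write one JSON record per decision so events can be audited later.
logger = logging.getLogger("ai_system_audit")
handler = logging.FileHandler("ai_audit.log")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)


def log_decision(system_id: str, input_ref: str, output: str, operator: str) -> None:
    # Field names are illustrative; a real schema would follow the
    # provider's documented logging design for the specific system.
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,  # reference to the input data, not the data itself
        "output": output,
        "human_overseer": operator,
    }
    logger.info(json.dumps(event))


log_decision("cv-screener-v2", "application-4711", "shortlisted", "hr.reviewer.12")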
Deployers of high-risk AI systems also have obligations, such as:
  1. Using the system in accordance with instructions.
  2. Assigning human oversight.
  3. Ensuring data relevance and representativeness.
  4. Monitoring system operation.
  5. Informing the provider and ceasing use if the system no longer complies with requirements.
  6. Informing relevant parties about potential risks and serious incidents.
General Purpose AI Models and Systems Obligations: Providers of general purpose AI models have obligations related to technical documentation, information for downstream providers, copyright policies, and training data summaries. Providers of general purpose AI models with systemic risks must also implement model evaluations, mitigate systemic risks, and report serious incidents.

Transparency for Generative AI and Chatbots: Providers of chatbots must inform users that they are interacting with an AI system. Providers of systems that generate audio, image, video, or text content must ensure that the output is marked in a machine-readable format indicating that it is artificially generated or manipulated. Deployers of such systems must likewise make clear that the content is artificially generated or manipulated (a minimal marking sketch follows this section).

Other Considerations: AI systems not covered by the categories described above are not required to comply with the AI Act. However, if such a system is later used for a high-risk application, it automatically becomes a high-risk AI system, and the user must then comply with the requirements that apply to a provider.
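The Act requires machine-readable marking but does not prescribe a single format; provenance standards such as C2PA are one emerging approach. As a purely illustrative sketch, the example below embeds a marking flag in a PNG text chunk using the Pillow library; the key and value names are assumptions, not a standardised label.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Embed a machine-readable "AI-generated" flag in a PNG text chunk.
image = Image.new("RGB", (256, 256), "white")  # stand-in for generated output

metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-image-model")  # hypothetical model name

image.save("generated.png", pnginfo=metadata)

# A downstream tool can read the marking back:
with Image.open("generated.png") as reloaded:
    print(reloaded.text.get("ai_generated"))  # -> "true"

Different media need different carriers (metadata tags, watermarks, or provenance manifests), which is presumably why the Act speaks of machine-readable marking rather than mandating one fixed format.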


The original document can be accessed here.

