EU AI Act: First Set of Requirements Goes into Effect February 2, 2025

The first binding obligations of the European Union’s landmark AI legislation, the EU AI Act (the Act), came into effect on February 2, 2025. From this date, AI practices that present an unacceptable level of risk are prohibited, and organizations are required to ensure an appropriate level of AI literacy among staff. For a comprehensive overview of the Act, see our earlier client alert here.

Prohibited AI Practices from February 2, 2025
Article 5 prohibits the use of specific AI practices deemed harmful or inconsistent with EU values. The prohibited practices are:

  1. Manipulative AI: AI systems that use subliminal or deceptive techniques to materially distort an individual’s decision-making, causing significant harm.
  2. Exploitative AI: AI systems that exploit vulnerabilities of individuals or groups (e.g., due to age, disability or socio-economic status) to materially distort behavior, causing harm.
  3. Social Scoring: AI systems evaluating individuals or groups over time based on social behavior, resulting in discriminatory or detrimental outcomes.
  4. Predictive Policing: AI systems assessing or predicting an individual’s risk of committing a criminal offense based solely on profiling or the assessment of personality traits.
  5. Facial Recognition Databases: The creation or expansion of facial recognition databases through untargeted scraping of the internet or CCTV footage.
  6. Emotion Inference: AI systems inferring emotions of individuals in workplaces or educational institutions, except for narrowly defined medical or safety purposes. (The scope of this prohibition is subject to particular debate.)
  7. Biometric Categorization: Using biometric data to deduce sensitive attributes such as race, political opinion or sexual orientation, except for certain law enforcement purposes.
  8. Real-Time Biometric Identification: Public-space deployment of real-time remote biometric identification systems for law enforcement, subject to narrowly defined exceptions (e.g., targeted searches for missing persons).

Violating these prohibitions can result in substantial penalties under Article 99(3): fines of up to €35 million or 7% of global annual turnover, whichever is higher.

Mandatory AI Literacy
Also taking effect on February 2, 2025, is the obligation under Article 4 for providers and deployers of AI systems to ensure sufficient “AI literacy” among their staff and operators. Key aspects of the AI literacy obligation are as follows:

  • “AI literacy” is defined as the ability to make informed decisions regarding the deployment and risks of AI systems, as well as an understanding of the potential harms AI can cause.
  • Providers and deployers of AI systems must ensure individuals involved in the operation or use of AI systems have sufficient skills, knowledge and understanding to handle the systems responsibly.
  • Training must be tailored to the technical expertise of the staff, the context of the AI systems’ deployment, and the characteristics of the individuals or groups impacted by the AI systems.

While the Act does not specify penalties for non-compliance with Article 4, regulators are likely to treat insufficient training as an aggravating factor when determining penalties for other violations under the Act.

Remaining Implementation Timeline
To recap, beyond February 2025, additional obligations under the Act will come into force as follows:

  • August 2, 2025: Obligations for providers of general-purpose AI (GPAI) models take effect.
  • August 2, 2026: Remaining obligations for providers and deployers of AI systems take effect.
  • August 2, 2027: Obligations for AI systems that are safety components of products subject to third-party conformity assessments under existing EU regulations take effect.

Other Recent Developments

General-Purpose AI Code of Practice
The European Commission published the first draft of the General-Purpose AI Code of Practice on November 14, 2024. Once finalized (expected by May 2025), the Code will provide practical guidance to help providers of GPAI models comply with the Act before the underlying GPAI obligations become applicable in August 2025. A draft can be accessed here.

Key highlights of the Code include the following:

  • It encourages privacy-preserving techniques such as differential privacy and robust data selection.
  • It emphasizes the need for strong governance, adversarial testing and transparency in AI model development.
  • It stresses the importance of minimizing risks of personal data being revealed through model outputs.

Recent EDPB Guidance on AI and GDPR
On December 18, 2024, the European Data Protection Board (EDPB) issued Opinion 28/2024, which addresses data protection considerations in the context of AI models. The opinion offers practical guidance on determining whether AI models trained on personal data constitute personal data under the GDPR, and on establishing the legal basis for processing personal data during the development and deployment of AI models. It also provides guidance on managing the consequences of unlawful processing during AI model development.

With respect to the first point—whether an AI model trained on personal data can itself be considered personal data under the GDPR—the EDPB states that AI models trained on personal data must be assessed on a case-by-case basis, applying specific criteria.

To argue that a model is anonymous (and therefore not, itself, personal data), providers must demonstrate, using reasonable means, that personal data related to the training data cannot be extracted from the model and that any output produced when querying the model does not relate to a data subject whose personal data was used to train the model.

The EDPB offers some practical guidance for developers seeking to support anonymization in AI models. While achieving full anonymization may often be unattainable, these measures could well set a regulatory benchmark for responsible AI development in accordance with fundamental GDPR principles.

Key recommendations include:

  • Careful Data Selection: Limit personal data collection by carefully choosing training data sources.
  • Data Preparation: Employ processes such as anonymization, pseudonymization, data minimization, and filtering to reduce personal data processing.
  • Robust Training Methods: Use methodologies prioritizing generalization over memorization, while incorporating privacy-preserving techniques like differential privacy where feasible.
  • Output Safeguards: Implement measures to minimize the risk of revealing personal data through model outputs.
  • Governance and Audits: Ensure strong governance with audits to verify privacy measures’ effectiveness.
  • Testing for Resistance: Conduct adversarial testing to evaluate the model’s resilience against attempts to extract personal data, such as membership inference or model inversion.
  • Comprehensive Documentation: Maintain GDPR-compliant documentation, including Data Protection Impact Assessments (DPIAs) and advice from Data Protection Officers (DPOs).
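
Differential privacy, referenced in both the draft Code and the EDPB recommendations above, works by adding calibrated random noise to a computation so that any single individual’s data has only a bounded effect on the output. The following is a minimal, illustrative sketch of that idea (the classic Laplace mechanism applied to a simple count); the function names and parameters are hypothetical and not drawn from the Act, the Code, or the EDPB opinion:

```python
import math
import random


def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via the inverse CDF.

    Larger scale means more noise and therefore stronger privacy.
    """
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))


def dp_count(records, predicate, epsilon=1.0, sensitivity=1.0):
    """Differentially private count of records matching a predicate.

    The true count is perturbed with Laplace noise scaled to
    sensitivity / epsilon: a smaller epsilon (stricter privacy
    budget) yields noisier, less precise answers.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(sensitivity / epsilon)
```

In practice, training-time approaches such as DP-SGD apply the same principle to gradient updates during model training, which is the setting the EDPB’s “robust training methods” recommendation contemplates.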

Key Takeaways
In light of these latest developments, businesses developing or deploying AI systems or models should:

  • Review Prohibited AI Use Cases: Conduct an audit of existing and proposed AI systems and projects to ensure compliance with Article 5 prohibitions.
  • Implement AI Literacy Training: Develop and roll out comprehensive training programs to meet Article 4 requirements.
  • Prepare for Future Obligations: Stay ahead of upcoming deadlines, including obligations on GPAI models and high-risk AI systems.
  • Engage with Providers and Developers: Deployers of AI systems should consider raising questions with providers or developers about the steps they are taking to ensure AI models are anonymized, in light of the EDPB opinion. Additionally, it is important to ensure AI procurement terms address the risks posed by the Act and existing legislation such as the GDPR, particularly where models, systems, or outputs will be used within the EU.
  • Monitor Regulatory Updates: Follow developments related to the General-Purpose AI Code of Practice and guidance from EU regulators such as the EDPB.

The Act represents a seismic shift in the regulation of AI systems in Europe, with wide-reaching implications for providers and deployers alike. With the first obligations taking effect in February 2025, organizations should act now to ensure compliance and mitigate risks.