The European Artificial Intelligence Act (AI Act), the world’s first comprehensive regulation on artificial intelligence, entered into force on 1 August 2024. The AI Act is designed to ensure that AI developed and used in the EU is trustworthy, with safeguards to protect people’s fundamental rights.

The regulation aims to establish a harmonised internal market for AI in the EU, encouraging the uptake of this technology and creating a supportive environment for innovation and investment. The AI Act introduces a forward-looking definition of artificial intelligence, based on a product-safety and risk-based approach, and distinguishes four levels of risk:

  • Minimal risk: Most AI systems, such as AI-enabled recommender systems and spam filters, fall into this category. These systems face no obligations under the AI Act due to their minimal risk to citizens’ rights and safety.
  • Specific transparency risk: AI systems like chatbots must clearly disclose to users that they are interacting with a machine. Certain AI-generated content, including deep fakes, must be labelled as such, and users need to be informed when biometric categorisation or emotion recognition systems are being used.
  • High risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity. Examples include AI systems used for recruitment, for assessing whether somebody is entitled to a loan, or for running autonomous robots.
  • Unacceptable risk: AI systems considered a clear threat to people’s fundamental rights will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will, such as toys using voice assistance that encourage dangerous behaviour in minors, systems that allow ‘social scoring’ by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, for example emotion-recognition systems used in the workplace, some systems for categorising people, and real-time remote biometric identification for law-enforcement purposes in publicly accessible spaces.

EU member states have until 2 August 2025 to designate national competent authorities, which will oversee the application of the rules for AI systems and carry out market surveillance activities. The Commission’s AI Office will be the key implementation body for the AI Act at EU level, as well as the enforcer of the rules for general-purpose AI models.

Companies that do not comply with the rules will be fined. Fines can reach 7% of global annual turnover for violations of banned AI applications, 3% for violations of other obligations, and 1.5% for supplying incorrect information. The majority of the AI Act’s rules will start applying on 2 August 2026. However, prohibitions on AI systems deemed to present an unacceptable risk will apply already after six months, from 2 February 2025.

By EH