European Artificial Intelligence Act Comes into Force

European Commission

The European Artificial Intelligence Act (AI Act), the world's first comprehensive regulation on artificial intelligence, enters into force. The AI Act is designed to ensure that AI developed and used in the EU is trustworthy, with safeguards to protect people's fundamental rights. The regulation aims to establish a harmonised internal market for AI in the EU, encouraging the uptake of this technology and creating a supportive environment for innovation and investment.

The AI Act introduces a forward-looking definition of AI, based on a product-safety and risk-based approach in the EU. It distinguishes four levels of risk:

  • Minimal risk: Most AI systems, such as AI-enabled recommender systems and spam filters, fall into this category. These systems face no obligations under the AI Act due to their minimal risk to citizens' rights and safety. Companies can voluntarily adopt additional codes of conduct.
  • Specific transparency risk: AI systems like chatbots must clearly disclose to users that they are interacting with a machine. Certain AI-generated content, including deep fakes, must be labelled as such, and users must be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design their systems so that synthetic audio, video, text and image content is marked in a machine-readable format and detectable as artificially generated or manipulated (see the first sketch after this list).
  • High risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity (see the second sketch after this list). Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems. Such high-risk AI systems include, for example, AI systems used for recruitment, to assess whether somebody is entitled to a loan, or to run autonomous robots.
  • Unacceptable risk: AI systems considered a clear threat to people's fundamental rights will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users' free will, such as toys using voice assistance that encourage dangerous behaviour in minors, systems that allow 'social scoring' by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, for example emotion recognition systems used in the workplace and some systems for categorising people, or real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).
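
The machine-readable marking requirement is essentially a provenance-tagging protocol. The following minimal Python sketch shows one way a provider might attach and detect an "artificially generated" marker; the schema and field names are illustrative assumptions, not an official EU or industry format.

    import json
    import hashlib
    from datetime import datetime, timezone

    def mark_as_synthetic(content: bytes, generator: str) -> dict:
        """Attach a machine-readable provenance record to generated content."""
        return {
            "content_sha256": hashlib.sha256(content).hexdigest(),
            "artificially_generated": True,           # the disclosure itself
            "generator": generator,                    # which system produced it
            "created_utc": datetime.now(timezone.utc).isoformat(),
        }

    def is_marked_synthetic(record: dict) -> bool:
        """Detect the marker, e.g. before a platform redistributes the content."""
        return record.get("artificially_generated") is True

    text = "A fully synthetic news paragraph.".encode("utf-8")
    record = mark_as_synthetic(text, generator="example-model-v1")
    print(json.dumps(record, indent=2))
    print(is_marked_synthetic(record))  # True

In practice such a record would need to travel with the content itself, for example in file metadata or an invisible watermark, so that the marker survives redistribution.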
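Two of the high-risk obligations, logging of activity and human oversight, also map naturally onto code. This second sketch wraps a hypothetical loan-eligibility scorer so that every automated decision is logged for audit and borderline cases are routed to a human reviewer; the model, thresholds and field names are invented for illustration and are not prescribed by the AI Act.

    import logging
    from dataclasses import dataclass

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
    log = logging.getLogger("high_risk_ai")

    @dataclass
    class Decision:
        score: float
        approved: bool
        needs_human_review: bool

    def assess_loan(applicant_id: str, features: dict) -> Decision:
        # Hypothetical scoring function; any model could sit here.
        score = min(1.0, features.get("income", 0) / 100_000)
        approved = score >= 0.5
        # Human oversight: borderline scores go to a person rather than
        # being decided autonomously by the system.
        needs_review = 0.4 <= score <= 0.6
        decision = Decision(score, approved, needs_review)
        # Logging of activity: record every automated decision for audit.
        log.info("applicant=%s score=%.2f approved=%s human_review=%s",
                 applicant_id, score, approved, needs_review)
        return decision

    print(assess_loan("A-123", {"income": 55_000}))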

To complement this system, the AI Act also introduces rules for so-called general-purpose AI models: highly capable AI models designed to perform a wide variety of tasks, such as generating human-like text. General-purpose AI models are increasingly used as components of AI applications. The AI Act will ensure transparency along the value chain and address possible systemic risks of the most capable models.

Application and enforcement of the AI rules

Member States have until 2 August 2025 to designate national competent authorities, which will oversee the application of the rules for AI systems and carry out market surveillance activities. The Commission's AI Office will be the key implementation body for the AI Act at EU level, as well as the enforcer of the rules for general-purpose AI models.

Three advisory bodies will support the implementation of the rules. The European Artificial Intelligence Board will ensure a uniform application of the AI Act across EU Member States and will act as the main body for cooperation between the Commission and the Member States. A scientific panel of independent experts will offer technical advice and input on enforcement. In particular, this panel can issue alerts to the AI Office about risks associated with general-purpose AI models. The AI Office can also receive guidance from an advisory forum, composed of a diverse set of stakeholders.

Companies that do not comply with the rules will be fined. Fines can reach up to 7% of global annual turnover for violations involving banned AI applications, up to 3% for violations of other obligations, and up to 1.5% for supplying incorrect information.
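
To make the percentage caps concrete, here is a small worked example for a hypothetical company with EUR 200 million in global annual turnover (the Act additionally sets fixed amounts, not shown here):

    # Fine ceilings implied by the percentage caps quoted above, for a
    # hypothetical company with EUR 200 million global annual turnover.
    turnover_eur = 200_000_000
    caps = {
        "banned AI applications": 0.07,   # up to 7%
        "other obligations":      0.03,   # up to 3%
        "incorrect information":  0.015,  # up to 1.5%
    }
    for violation, pct in caps.items():
        print(f"{violation}: up to EUR {turnover_eur * pct:,.0f}")
    # banned AI applications: up to EUR 14,000,000
    # other obligations: up to EUR 6,000,000
    # incorrect information: up to EUR 3,000,000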

Next Steps

The majority of the AI Act's rules will start applying on 2 August 2026. However, prohibitions on AI systems deemed to present an unacceptable risk will already apply after six months, while the rules for general-purpose AI models will apply after 12 months.

To bridge the transitional period before full implementation, the Commission has launched the AI Pact. This initiative invites AI developers to voluntarily adopt key obligations of the AI Act ahead of the legal deadlines.

The Commission is also developing guidelines to define and detail how the AI Act should be implemented, and is facilitating co-regulatory instruments such as standards and codes of practice. The Commission has opened a call for expressions of interest to participate in drawing up the first general-purpose AI Code of Practice, as well as a multi-stakeholder consultation giving all stakeholders the opportunity to have their say on the first Code of Practice under the AI Act.

Background

On 9 December 2023, the Commission welcomed the political agreement on the AI Act. On 24 January 2024, the Commission launched a package of measures to support European startups and SMEs in the development of trustworthy AI. On 29 May 2024, the Commission unveiled the AI Office. On 9 July 2024, the amended EuroHPC JU Regulation entered into force, allowing the setting-up of AI Factories and enabling dedicated AI supercomputers to be used for the training of general-purpose AI (GPAI) models.

Continued independent, evidence-based research produced by the Joint Research Centre (JRC) has been fundamental in shaping the EU's AI policies and ensuring their effective implementation.

