What Does the EU's Recent AI Act Mean in Practice?

The European Union's law on artificial intelligence came into force on 1 August 2024. The new AI Act regulates what artificial intelligence can and cannot do in the EU. A team led by computer science professor Holger Hermanns of Saarland University and law professor Anne Lauber-Rönsberg of Dresden University of Technology has examined how the new legislation affects the practical work of programmers. The results of their analysis will be published in the autumn.

"The AI Act shows that politicians have understood that AI can potentially pose a danger, especially when it impacts sensitive or health-related areas," said Holger Hermanns, professor of computer science at Saarland University. But how does the AI Act affect the work of the programmers who actually create AI software? According to Hermanns, there is one question that almost all programmers are asking about the new law: "So what do I actually need to know?" After all, there aren't many programmers with the time or inclination to read the full 144-page regulation from start to finish.

But an answer to this frequently asked question can be found in the research paper "AI Act for the Working Programmer," which Holger Hermanns has written in collaboration with his doctoral student Sarah Sterz, postdoctoral researcher Hanwei Zhang, professor of law at TU Dresden Anne Lauber-Rönsberg and her research assistant Philip Meinel. Sarah Sterz summarized the main conclusion of the paper as follows: "On the whole, software developers and AI users won't really notice much of a difference. The provisions of the AI Act only really become relevant when developing high-risk AI systems."

The European AI Act aims to protect future users of a system from the possibility that an AI could treat them in a discriminatory, harmful or unjust manner. If an AI does not intrude into sensitive areas, it is not subject to the extensive regulations that apply to high-risk systems. Holger Hermanns offered a concrete example of what this means in practice: "If AI software is created with the aim of screening job applications and potentially filtering out applicants before a human HR professional is involved, then the developers of that software will be subject to the provisions of the AI Act as soon as the program is marketed or becomes operational. However, an AI that simulates the reactions of opponents in a computer game can still be developed and marketed without the app developers having to worry about the AI Act."

But high-risk systems, which, in addition to the applicant-screening software referred to above, also include algorithmic credit-rating systems, medical software and programs that manage access to educational institutions such as universities, must conform to a strict set of rules set out in the AI Act now coming into force. "Firstly, programmers must ensure that the training data is fit for purpose and that the AI trained on it can actually perform its task properly," explained Holger Hermanns. For example, it is not permissible for a group of applicants to be discriminated against because of representational biases in the training data. "These systems must also keep records (logs) so that it is possible to reconstruct which events occurred at what time, similar to the black box recorders fitted in planes," said Sarah Sterz. The AI Act also requires software providers to document how the system functions, much like a conventional user manual. The provider must also make all relevant information available to the deployer so that the system can be properly overseen during use and errors can be detected and corrected. (The researchers have recently discussed the search for effective 'human oversight' strategies in another paper.)
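The "black box" record-keeping obligation described above can be illustrated with a short sketch. This is a hypothetical example, not code from the paper or a prescribed compliance mechanism: a high-risk system might append a timestamped entry for every automated decision so that the sequence of events can later be reconstructed.

```python
import json
from datetime import datetime, timezone

def log_decision(log_file, model_version, input_summary, decision, score):
    """Append a timestamped record of one automated decision,
    so that it can later be reconstructed which events occurred when."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,  # e.g. anonymised applicant features
        "decision": decision,            # e.g. "shortlisted" / "rejected"
        "score": score,                  # the model's confidence or ranking score
    }
    # One JSON object per line (append-only, like a flight recorder)
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage: record a single screening decision
log_decision("decisions.log", "screening-model-1.3",
             {"years_experience": 7, "degree": "MSc"},
             "shortlisted", 0.82)
```

In practice, such logs would also need access controls and retention policies; the point here is only that each decision leaves an auditable, timestamped trace.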

Holger Hermanns summarized the impact of the AI Act as follows: "The AI Act introduces a number of very significant constraints, but most software applications will barely be affected." "Things that are already illegal today, such as the use of facial recognition algorithms to interpret emotions, will remain prohibited. Non-contentious AI systems, such as those used in video games or spam filters, will hardly be impacted by the AI Act. And the high-risk systems mentioned above will only be subject to legislative regulation when they enter the market or become operational," added Sarah Sterz. There will continue to be no restrictions on research and development, in either the public or private sphere.

"I see little risk of Europe being left behind by international developments as a result of the AI Act," said Hermanns. In fact, Hermanns and his colleagues take an overall favourable view of the AI Act - the first piece of legislation that provides a legal framework for the use of artificial intelligence across an entire continent. "The Act is an attempt to regulate AI in a reasonable and fair way, and we believe it has been successful."

Hermanns, H., Lauber-Rönsberg, A., Meinel, P., Sterz, S., Zhang, H. AI Act for the Working Programmer. 2024. doi: 10.48550/arXiv.2408.01449
