European Commission Moves to Ease AI Rules as WHO Warns of Patient Risks Due to Regulatory Vacuum
There is immense innovation in AI-driven robots, like this ARI-V2 robot, for use in healthcare, yet regulatory frameworks and ethical standards are lagging.

Technological advances in artificial intelligence (AI) applications for healthcare are quickly outpacing regulatory and ethical safeguards, creating a dangerous gap in patient safety, warns a milestone report on AI in Health Systems, published Wednesday by the World Health Organization’s European Region (WHO/EURO).

Paradoxically, the WHO’s urgent call for tighter AI regulation coincided with a far-reaching European Commission (EC) proposal Wednesday to loosen certain AI regulations in the European Union’s 27 member states – as part of a new “Digital Omnibus” package. The package aims to cut red tape for AI and other digital industries in the EU, but critics argue that it would severely water down data protection for individuals.

The WHO report’s findings are based on the first comprehensive survey of AI implementation conducted in the WHO European Region in 2024-2025. The results, culled from 50 of the 53 WHO European Region member states – whose borders extend from the United Kingdom to Russia, and through Central Asia to Turkey and Israel – highlight how countries are struggling to keep up with the pace of change.

“The rapid rise of AI in healthcare is happening without the basic legal safety nets needed to protect patients and healthcare workers,” warned Hans Kluge, WHO Regional Director for Europe.

The report comes at a time when AI is fundamentally transforming healthcare, helping doctors, nurses and other health workers diagnose and track diseases, and communicate better with patients. At the same time, the high costs involved in developing and adopting AI in public healthcare systems threaten to deepen the digital divide.

The report identifies “legal uncertainty” (reported by 86% of member states) and “financial affordability” (78%) as the biggest barriers to AI adoption. But along with these barriers to uptake, loose or non-existent regulatory standards pose new issues in terms of patient safety, liability and privacy.

WHO warning collides with EU deregulation moves

EU Commissioners Henna Virkkunen, Valdis Dombrovskis, and Michael McGrath present the “Digital Omnibus” package in Brussels.

The Commission, the EU’s executive branch, claims the proposed “Omnibus” package would simplify digital regulations, reducing the administrative costs of AI uptake, particularly for small and medium-sized enterprises, and improving the harmonisation of rules among EU member states.

But a key element of the proposal involves amendments to the EU’s General Data Protection Regulation (GDPR), which took effect in 2018 and has been trumpeted as the “toughest data privacy and security law in the world”, to alter the definition of sensitive data.

Critics claim that this will also have a negative impact on the protection of health data. Prior to the Commission’s announcement, over 120 civil society organisations had strongly criticised the Omnibus package, labelling it the “greatest setback for digital fundamental rights in the history of the EU”.

‘Our DNA could be used to train the AI systems of big companies’

Another proposed amendment to the GDPR would allow companies to use personal data to develop and operate AI systems based on “legitimate interest”.

Ella Jakubowska, EDRi

“According to that change, a giant healthcare company could simply declare the use of sensitive data to train their AI systems as legitimate interest,” said Ella Jakubowska, an AI policy expert with the NGO European Digital Rights (EDRi), an association of civil and human rights organisations from across Europe.

“Our DNA could be used to train the AI systems of big companies,” Jakubowska warned in an interview with Health Policy Watch.

The Commission, meanwhile, maintains that under the new Omnibus rules, companies would still have to adhere to specific transparency criteria, as well as preserve the unconditional right of the persons to whom the data relates to object to its use.

European Commission also aims to postpone rollout of new AI rules specific to medical devices

In another move that worries patient advocates, the Commission has also proposed postponing the rollout of new rules specific to medical devices in the EU’s Artificial Intelligence Act, which came into force last year. The rules aim to safeguard the health, safety and fundamental rights of patients with respect to high-risk AI systems used in certain medical procedures.

The rules were supposed to come into effect in August 2026, but the Commission wants to delay that by up to 16 months. The AI Act is the world’s first comprehensive set of AI regulations by a major regulatory authority.

The European Union AI Act came into force in 2024.

Industry groups had lobbied for an even longer delay, arguing that applying the AI Act alongside existing medical device laws would create overlapping requirements. They claimed that this “dual regulatory burden” would stifle innovation and drive the development of life-saving technology out of Europe.

The Commission did not respond to a request from Health Policy Watch to comment on the WHO report, or to elaborate on the logic of the Omnibus package with respect to the health sector, prior to publication.

In a statement from the EU’s Brussels headquarters, however, Michael McGrath, the EU Commissioner for Democracy, Justice, the Rule of Law and Consumer Protection, defended the new EU Omnibus legislation, saying: “The proposed amendments fully respect the high level of protection of personal data that we are committed to.”

He added that the “Digital Omnibus” proposal would still require approval from the EU Council of government ministers, as well as from the European Parliament.

Lack of liability rules puts patients at risk 

Gaps in existing laws and in liability standards for the use of AI are widespread; only four countries have health-specific AI rules in place.

The EU’s push to ease regulatory burdens for companies comes as the WHO report highlights the stark consequences of an already existing legal vacuum in healthcare, both within the EU and across the wider WHO European Region. The failure to regulate AI strictly has left vulnerable populations exposed to critical risks, particularly in the areas of liability and ethical standards, the report charges.

In the absence of clear regulations, hospital staff and patients face a critical liability question: who is responsible when an AI system makes a mistake?

Only four countries in the WHO European Region have established liability standards for AI in healthcare, the report reveals, with three more in the process of introducing legal requirements. This lack of clarity leaves doctors exposed, and patients vulnerable to shouldering the burden of erroneous diagnoses and treatments alone.

Beyond liability for a mistaken individual diagnosis or treatment lurk the dangers of algorithmic bias, the report states. For instance, if AI systems are trained on unrepresentative data, they can systematically discriminate against vulnerable populations. Critics say that such distortions frequently occur along lines of gender, origin or social status, leaving patients either invisible to the system or unfairly targeted by it.

Other critical ethical concerns highlighted include the lack of safeguards around data privacy.

Governments are also failing to listen to the public. While most nations consult AI developers and healthcare providers, only 42% of countries included patient associations in the conversation. Just 22% of countries consulted the general public. The report warns that this “limited engagement” could result in the development of tools that do not meet real-world needs.

A deepening digital divide in regulation as well

The broader public was consulted by only 22% of WHO/EURO member states in developing policies on the use of AI-driven technologies in health systems.

In terms of regulatory processes per se, the European region also suffers from severe fragmentation, with a clear divide between nations ready to govern AI, such as the United Kingdom and high-income nations in the EU and the European Economic Area, and less developed nations in Central Asia and elsewhere that are only just beginning to consider the issue. In addition, the vast majority of countries that have regulations (33) rely on cross-sector measures that often lack the specificity required to address risks to the health system.

Wealthier nations are, meanwhile, pushing ahead. The UK, for example, is proactively addressing regulatory gaps by testing AI medical devices in controlled clinical environments through initiatives like the AI Airlock system.

According to the WHO analysis, this ensures that new AI-based devices meet safety and efficacy standards before full deployment. This baseline requirement for medical devices is also preserved even in the looser regulatory measures of the EU’s “Digital Omnibus” proposal.

By contrast, countries such as Georgia report facing obstacles on every front, ranging from legal uncertainty to basic infrastructure deficiencies. 

Financial constraints were identified as a major hurdle by 78% of member states. The high cost of infrastructure and steep subscription fees for advanced systems risk turning AI into a luxury rather than a public service.

Kluge stressed that “equity must remain our guiding principle, ensuring that the benefits of AI extend not only across Member States but also within them, reaching all communities regardless of geography, income or digital capacity”.

WHO calls for strengthening funding and cross-border harmonisation

Private sector investments are concentrated in wealthier regions.

With private investment largely concentrated in Western and Northern Europe, the WHO is also calling on countries to clearly define which AI-related healthcare responsibilities should remain public and which are, or will be, delegated to private actors. Countries also need to ensure transparency in all public-private partnerships and secure access to AI technologies to uphold rights.

To overcome implementation challenges and harmonise regulation across the region, cross-border partnerships must also be strengthened, WHO says. 

Dedicated financing streams and AI-sensitive public health reimbursement models, similar to those used for medicines or medical procedures, are needed to ease the AI financing gap. Under such models, healthcare providers such as hospitals and clinics would be compensated for using an approved AI system in patient care, for instance.

The WHO emphasises the importance of adhering to core principles when integrating AI. These include placing patients at the centre of care, upholding equity and human rights, ensuring system safety and public well-being, maintaining transparency, and establishing clear lines of responsibility and accountability.

“We stand at a fork in the road,” said Natasha Azzopardi-Muscat, WHO Director of Health Systems. “Either AI will be used to improve people’s health and well-being, reduce the burden on our exhausted health workers and bring down health-care costs, or it could undermine patient safety, compromise privacy and entrench inequalities in care. The choice is ours.”

Image Credits: European Union, European Union, EDRi, EU, WHO/European Union, WHO/European Region, WHO/European Region.
