CPME: Maintain Strict AI Regulations for Medical Software

Mon 23 March 2026
AI
News

European doctors and healthcare institutions are calling on EU lawmakers to continue strictly regulating AI in medical devices under the AI Act. In a joint open letter, the Standing Committee of European Doctors (CPME) and the European Hospital and Healthcare Federation (HOPE) advocate for clear, harmonized, and future-proof regulations to ensure patient safety and trust in AI applications.

According to the organizations, the existing European regulations for medical devices (MDR) and in vitro diagnostics (IVDR) fall short when applied to modern, data-driven AI systems. These regulations were written primarily for traditional, predictable products and do not sufficiently account for adaptive technologies such as machine learning.

AI Act as a Framework

The AI Act, by contrast, does provide a framework for this new generation of systems. CPME and HOPE argue that keeping medical AI within the scope of this law creates a consistent and transparent assessment framework for risks specific to AI.

A key advantage of the AI Act is that it explicitly sets out how so-called “notified bodies” (certification bodies) must assess AI systems. This provides greater clarity for both manufacturers and users of AI in healthcare. Without this framework, fragmentation is a risk: member states or individual certification bodies would apply their own interpretations, which could lead to inequality and uncertainty in the market. The full open letter is available as a PDF.

Addressing Risks

The AI Act addresses risks that are barely covered in current medical regulations. These include:

  • bias in algorithms
  • deterioration of model performance over time (“model drift”)
  • insufficiently representative datasets
  • risks related to model training and validation

These factors can have direct consequences for clinical decision-making. Extra oversight is particularly necessary for applications such as large language models (LLMs), which can be continuously adapted and influenced.
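
To make the monitoring concern concrete, below is a minimal, hypothetical Python sketch of post-market performance monitoring: it compares a deployed model’s recent accuracy on cases with confirmed outcomes against its pre-deployment validation baseline and flags drift beyond a threshold. The baseline value, threshold, function names, and data are illustrative assumptions, not requirements taken from the AI Act or the open letter.

```python
# Minimal, hypothetical sketch of post-market performance monitoring.
# The baseline, threshold, and data below are illustrative only.

BASELINE_ACCURACY = 0.92   # accuracy measured during pre-deployment validation
DRIFT_THRESHOLD = 0.05     # maximum tolerated drop before raising an alert

def accuracy(predictions: list[int], labels: list[int]) -> float:
    """Fraction of predictions that match the confirmed clinical outcome."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def check_for_drift(predictions: list[int], labels: list[int]) -> bool:
    """Return True if recent performance has degraded beyond the threshold."""
    recent = accuracy(predictions, labels)
    drop = BASELINE_ACCURACY - recent
    if drop > DRIFT_THRESHOLD:
        print(f"ALERT: accuracy fell from {BASELINE_ACCURACY:.2f} to {recent:.2f}")
        return True
    print(f"OK: recent accuracy {recent:.2f} within tolerance")
    return False

# Example: predictions on a recent batch of cases with confirmed outcomes.
recent_predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
confirmed_labels   = [1, 0, 0, 1, 0, 0, 0, 1, 1, 1]
check_for_drift(recent_predictions, confirmed_labels)
```

In practice such checks would run against properly curated follow-up data rather than a ten-case batch, but the principle is the same: performance after deployment is measured and compared, not assumed.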

The organizations emphasize that the AI Act is not a replacement for existing regulations, but a supplement. The requirements for “high-risk AI” can be integrated into the existing conformity assessment of medical devices. Notably, the same notified bodies remain responsible for both medical devices and AI assessments, which promotes efficiency and consistency.

Data Quality and Transparency Are Crucial

Another key point is that the AI Act sets minimum requirements for dataset management. In healthcare, datasets must be validated and specifically tailored to medical applications. The use of unreliable data can lead to incorrect diagnoses or treatment recommendations.

Additionally, the law requires manufacturers to be transparent about the technology and training data used. This is essential at a time when new AI architectures, such as autonomous systems and machine-to-machine communication, are becoming increasingly common.
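
As one illustration of what such transparency could look like in software, here is a minimal, hypothetical sketch of a structured training-data disclosure record, loosely modeled on the “datasheets for datasets” idea. All field names and values are invented for illustration; they are not AI Act terminology.

```python
# Hypothetical sketch of a structured training-data disclosure record.
# Field names and values are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class DatasetDisclosure:
    name: str
    source: str                    # where the data was collected
    collection_period: str
    population: str                # who is represented in the data
    known_gaps: list[str] = field(default_factory=list)  # under-represented groups
    validation: str = ""           # how the dataset was validated for its intended use

disclosure = DatasetDisclosure(
    name="chest-xray-training-set",
    source="three anonymized hospital archives (illustrative)",
    collection_period="2018-2023",
    population="adults aged 18-90, mixed sex",
    known_gaps=["pediatric patients", "portable-scanner images"],
    validation="reviewed by two radiologists against the intended diagnostic use",
)
print(disclosure)
```

Recording known gaps explicitly is the point: an unrepresentative dataset is far less dangerous when its limits are documented than when they are discovered in the clinic.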

Trust as the Key to Adoption

According to CPME and HOPE, trust is the most important prerequisite for the widespread adoption of AI in healthcare. Without clear certification and oversight, implementation will lag behind.

The AI Act contributes to that trust by setting requirements for:

  • human oversight
  • transparency and explainability
  • validation of AI models
  • performance monitoring
  • logging and liability

This gives healthcare professionals greater insight into and control over AI systems.
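
As a rough illustration of how the first and last of these requirements might translate into software, the hypothetical Python sketch below logs every model suggestion to an audit trail and only acts on the clinician’s explicit decision. All names and the log format are assumptions made for illustration, not prescribed by the AI Act.

```python
# Hypothetical sketch combining human oversight with audit logging.
# Names and the log format are illustrative assumptions only.
import json
import time

def log_event(event: dict) -> None:
    """Append a timestamped record to an audit trail (stdout here for brevity)."""
    event["timestamp"] = time.strftime("%Y-%m-%dT%H:%M:%S")
    print(json.dumps(event))

def decide_with_oversight(case_id: str, model_suggestion: str,
                          clinician_decision: str) -> str:
    """The model only suggests; the clinician's decision is what is acted on."""
    log_event({
        "case": case_id,
        "model_suggestion": model_suggestion,
        "clinician_decision": clinician_decision,
        "overridden": model_suggestion != clinician_decision,
    })
    return clinician_decision

# Example: the clinician overrides the model's suggestion, and the
# disagreement is preserved in the audit trail.
decide_with_oversight("case-0042", model_suggestion="discharge",
                      clinician_decision="admit for observation")
```

Keeping the model’s suggestion and the clinician’s decision in the same record makes overrides traceable after the fact, which supports both liability questions and later performance review.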

Call to EU Lawmakers

The signatories call on the European Parliament and the member states to explicitly keep medical AI within the scope of the AI Act. Only in this way, they argue, can Europe create a safe and reliable digital healthcare environment where innovation goes hand in hand with patient safety.

As early as 2023, the Rathenau Institute warned, in a study commissioned by the Dutch Ministry of the Interior and Kingdom Relations, that generative AI amplifies existing risks of digitization and introduces new threats.

In the study (in Dutch), based on a literature review, expert interviews, and working sessions, the institute concludes that current and planned regulations may be insufficient to manage these risks. Among other things, generative AI increases the likelihood of discrimination, disinformation, and security risks, while transparency about how systems operate continues to decline.

In addition, new risks are emerging, such as intellectual property infringement and the growing influence of large technology companies on sectors such as healthcare, education, and science. This also puts pressure on democratic processes, for example through the manipulation of information and of public debate. The institute therefore calls for stricter and better-tailored regulation, including the possibility of banning high-risk AI systems.

At the same time, it emphasizes that organizations and citizens should already be considering whether generative AI can be used responsibly.