EU sees AI in healthcare gain momentum, but also issues a warning

Mon 11 May 2026
AI in health
News

AI is rapidly evolving from experimental technology to a structural component of European healthcare. According to recent analyses by both the European Parliament and the World Health Organisation (WHO Europe), virtually all EU Member States now use AI applications in healthcare processes, ranging from medical image analysis and clinical decision-making to administrative support and patient communication.

The scale on which AI is now being deployed marks a turning point. Whereas digitalisation revolved for years mainly around electronic patient records and telehealth, the focus is now shifting to systems that actively contribute to care, make predictions and support decisions. According to WHO Europe, nearly three-quarters of EU countries use AI-supported diagnostics, whilst 63 per cent use chatbots for patient contact.

Nevertheless, Brussels and international organisations are emphatically sending the same message: technological development is outpacing governance, skills and societal preparedness.

AI is making inroads in European healthcare

European analyses make it clear that AI is no longer seen as a standalone innovation, but as fundamental infrastructure for future healthcare systems. In particular, the combination of generative AI, predictive algorithms and medical data creates new opportunities for personalised care, early diagnosis and improved efficiency.

AI systems now assist radiologists with image interpretation, help doctors with triage and speed up administrative processes such as reporting and record-keeping. According to OECD analyses, AI can automate up to 30 per cent of routine administrative tasks in healthcare, leaving healthcare professionals with more time for direct patient contact.

As a result, AI is shifting from support software to a technology that directly influences work processes, decision-making and the organisation of healthcare. At the same time, this development is leading to new roles within hospitals and healthcare organisations, such as AI specialists, data scientists and clinical information managers. According to the WHO, almost half of EU Member States have now established dedicated AI and data roles for the healthcare sector.

Significant differences between countries

Despite the rapid adoption, the maturity of AI across Europe varies considerably. Some countries have been investing for years in national AI strategies, interoperable data systems and training programmes, whilst others are still struggling with basic requirements such as data quality and digital infrastructure.

According to the WHO, successful implementation is closely linked to three factors: governance, skills and trust. The latter, in particular, appears to be crucial. Healthcare professionals remain legally and ethically responsible for decisions that are partly based on AI, whilst many systems function as a ‘black box’ whose workings are difficult to explain.

This increases the pressure for explainable AI: systems that can transparently demonstrate how a conclusion or recommendation is reached. Without this explainability, there is a risk of reluctance among doctors and healthcare institutions. The public also appears to be critical. European policymakers fear that insufficient transparency and public engagement could lead to mistrust or even rejection of AI in healthcare. WHO Europe therefore emphasises that public participation is essential for successful implementation.

AI Act changes the playing field

Europe is now firmly positioning itself as a global leader in the field of AI regulation. With the introduction of the European AI Act, a comprehensive legal framework specifically focused on artificial intelligence is being established for the first time. For the healthcare sector, this represents a fundamental change. Many medical AI systems fall under the ‘high-risk’ category, meaning stricter requirements will apply regarding safety, transparency, data quality and human oversight.

The European Parliament briefing emphasises that healthcare is one of the sectors where the societal impact of AI, both positive and negative, could be greatest. Insufficiently regulated AI can lead to discrimination, incorrect diagnoses, privacy breaches and an undesirable dependence on technology.

In addition, concerns are growing about generative AI and AI companions. According to the European Parliament, such systems can simulate feelings of social connection, but at the same time also reinforce isolation or exacerbate mental health problems among vulnerable groups. This tension highlights that Europe is trying to strike a balance between innovation and protection.

Improving AI literacy among healthcare professionals

One of the most urgent challenges appears to be the preparation of healthcare professionals themselves. Although AI applications are being rolled out rapidly, many doctors, nurses and managers still lack the skills to critically assess AI or apply it responsibly.

WHO Europe therefore identifies a strong need for AI literacy within the healthcare sector. Several countries are now integrating AI training into medical education and continuing professional development programmes. This goes beyond technical knowledge alone. Healthcare professionals must learn to deal with bias, data quality, algorithmic decision-making and ethical considerations. Generative AI, in particular, gives rise to a new type of risk: systems that generate medical information that sounds convincing but is factually incorrect.

According to researchers at Stanford University and other international institutions, this new generation of foundation models requires intensive multidisciplinary collaboration, precisely because the systems are becoming increasingly complex and difficult to explain.

High-quality data is essential

Underlying virtually all reports is the same structural challenge: access to reliable and interoperable health data. Without high-quality data, AI systems remain of limited use or even pose a risk.

Many European healthcare systems still struggle with fragmented datasets, divergent standards and limited data exchange between institutions and countries. At the same time, there is a growing realisation that data forms the strategic raw material of future healthcare systems.

That is why the EU and its member states are investing increasingly heavily in European health data ecosystems, including the European Health Data Space (EHDS). The aim: to enable secure and standardised data exchange for healthcare, research and AI development. According to policymakers, it is precisely this infrastructure that will determine Europe’s international competitive position in the field of medical AI.

Crucial years

The central conclusion from the European analyses is twofold. On the one hand, AI is on the verge of fundamentally transforming healthcare, from diagnostics and workflow to prevention and personalised treatment. On the other hand, its success depends heavily on enabling conditions that are still very much under development. The coming years will therefore revolve not only around technological innovation, but above all around governance, trust, training and public acceptance.

Europe appears to be aware of the strategic stakes involved. Whilst the United States and China are primarily competing on speed and scale, the EU is attempting to establish an alternative model in which innovation is combined with regulation, ethics and human oversight.

Whether that model will actually prove successful will depend on one crucial question: Will Europe succeed in making AI not only a smart but also a responsible part of healthcare?

AI takes centre stage at the ICT&health World Conference 2027

During the ICT&health World Conference 2027, the annual kick-off of the new healthcare year, AI will also play a prominent role in the international programme. Over three days, healthcare professionals, policymakers, researchers, technology companies and governments will come together to discuss themes such as generative AI, European AI factories, interoperability, governance, ethics, data availability and the practical implementation of AI in healthcare. The focus is on applications that are already making an impact today, as well as on the strategic choices Europe must make in the coming years to ensure that AI becomes a responsible, scalable and future-proof part of healthcare.