The report “Artificial intelligence is reshaping health systems: state of readiness across the WHO European Region” presents the first region-wide assessment of AI integration in health care, based on data collected from 50 Member States through the 2024–2025 survey. Here are the key findings.
AI is being adopted, but rarely strategically
AI is rapidly moving from experimental pilots to real-world implementation, with 64% of Member States already using AI-assisted diagnostics and 50% deploying AI-powered chatbots for patient support.
“From triaging patients and analysing diagnostic images to enhancing national health surveillance and shaping precision public health, AI is now at work in clinics, hospitals, and ministries across our Region. In recent years, AI has shifted from a theoretical tool to a real-time companion in health care delivery,” said Dr Hans Henri P. Kluge, WHO Regional Director for Europe.
However, the report reveals that only 8% (four of 50) of Member States have published national health-specific AI strategies, while an additional 14% (seven) are developing one. Meanwhile, 66% (33 of 50) have national cross-sector AI strategies that include health as one of several domains. This indicates that while many countries acknowledge AI’s role in health, only a few have tailored policies specifically to health systems.
Cross-sector strategies, while beneficial for harmonization, often fail to address the health sector’s specific regulatory, ethical, and operational requirements. Health-specific strategies allow for targeted oversight, improved coordination, and focused investment, but risk fragmentation if not aligned with broader digital and AI policies.
When it comes to oversight, 46% (19 of 41) of Member States assign strategy implementation to an existing government agency, while another 46% distribute responsibilities across multiple agencies. Only 12% (five) have created new government agencies specifically for AI governance. This demonstrates a preference for integrating AI governance into existing structures, though it may also lead to diffuse accountability and slower implementation.
AI literacy lags behind technological developments
The report shows that 72% (36 of 50) of Member States have engaged stakeholders in discussing AI in health. Yet engagement remains limited primarily to experts, industry representatives, and government officials. The most consulted were government actors (81%), health care providers (75%), and AI developers (75%), while patient associations (42%) and the broader public (22%) were least involved. Only 28% made consultation findings publicly available, limiting transparency and shared learning.
This imbalance poses risks: AI solutions may fail to meet real-world needs, exacerbate bias, or erode public trust. Insufficient workforce training is another major barrier: only 24% (12 of 50) of Member States offer in-service AI training, and just 20% (10) offer preservice AI education. Furthermore, only 42% (21 of 50) have created new professional roles in AI and data science. These shortages suggest health systems are inadequately prepared for AI-enabled services.
To close these gaps, WHO recommends establishing codesign and coregulation models involving patients, clinicians, policymakers, and developers; expanding digital and AI education across medical, nursing, public health, and allied health curricula; and creating new specialized roles, such as clinical data scientists, AI ethicists, and AI safety auditors.
Accountability challenge: unsolved
Legal preparedness for AI in health is still emerging. While 46% (23 of 50) of Member States have assessed legal gaps, only 8% (four) have developed liability standards for AI in health. Furthermore, a mere 6% (three) have introduced legal requirements for generative AI systems in health care.
A significant concern is the lack of clear accountability when AI systems harm patients or malfunction. Without clear liability rules, clinicians may either hesitate to use AI or over-rely on it, risking patient safety. Additionally, adoption of ethical guidelines varies greatly, with post-market monitoring and real-world surveillance remaining rare.
The recommended actions include establishing clear liability frameworks that define the responsibilities of developers, clinicians, data providers, and institutions; implementing robust post-market surveillance to ensure AI systems remain accurate, safe, and unbiased after deployment; and promoting explainable AI (XAI) to improve transparency, traceability, and clinical trust.
Data infrastructure prioritized, but EHDS readiness is very low
AI depends on secure, high-quality, interoperable data. In total, 66% (33 of 50) of Member States have a national health data strategy, while 76% (38 of 50) have or are developing health data governance frameworks. Additionally, 66% (33 of 50) have established regional or national health data hubs, enabling data sharing and the training of AI models.
However, only 30% (15 of 50) have issued guidance on the secondary use of health data, and another 30% have rules for cross-border data sharing. These results are worrying given that Europe is implementing the European Health Data Space (EHDS), which aims to facilitate the exchange of primary data and the secondary use of data for research and innovation. Collaboration between public health institutions and private sector AI developers is also inconsistently regulated.
To invest or not to invest in AI?
AI tools are being used, but unevenly. AI-assisted diagnostics is the most widely adopted application, reported by 64% (32 of 50) of Member States, while 50% (25) reported using AI-powered chatbots for patient support. These applications address priority areas identified by Member States, such as improving patient care (98%), reducing workforce pressures (92%), and enhancing health system efficiency (90%).
However, only 52% (26 of 50) have designated priority AI areas, and even fewer have allocated specific funding. This creates a gap between strategy and execution.
The top barrier to AI adoption is legal uncertainty, reported by 86% (43 of 50) of Member States, followed by financial affordability at 78%. Most countries agree that clear liability rules (92%) and guidance on transparency and explainability (90%) are key enablers of AI adoption. To lower these barriers, countries can implement regulatory sandboxes that allow AI to be tested safely in real-world settings under supervised conditions.
The findings from the 2024–2025 WHO European Region survey show that AI is increasingly recognized as a strategic priority in health. Yet significant readiness gaps persist across governance, workforce capability, data infrastructure, and ethical regulation.