The debate on artificial intelligence in healthcare is increasingly polarised. Between bold promises and rising concerns about trust, safety, and hype, the question “Is AI in healthcare a bubble or real revolution?” is timely. Whether it delivers lasting impact will depend less on the technology itself and more on the health systems adopting it – an opinion by Anca del Río, Health Systems Innovation Thought Leader.
And nowhere is this tension more visible than in the domain we speak about the least yet rely upon the most: the mental health, wellbeing, and stability of the health and care workforce.
Across the European Region, health systems are struggling under the weight of short-term sick leave, workforce attrition, and deteriorating working conditions. The scale of the problem is no longer anecdotal; it is measurable, alarming, and systemic. The WHO’s Artificial intelligence is reshaping health systems report makes clear that AI is already embedded in clinical, operational and policy workflows across the Region, but readiness varies dramatically and governance gaps remain substantial. Meanwhile, the brief Accelerating the uptake of digital solutions by the health and care workforce highlights persistent barriers in training, digital literacy, trust, and organisational readiness – barriers that stand between technological potential and real-world adoption. Most urgent of all, the Mental Health of Nurses and Doctors (MeND) survey paints an unfiltered picture of the mental health toll on clinicians: one in three reporting anxiety or depression, one in ten experiencing suicidal thoughts, and chronic overwork common across the Region.
This confluence of pressures raises a fundamental question: Can AI move beyond the hype and effectively strengthen healthcare and workforce resilience?
A reality check from the frontlines
I have spent the past decade oscillating between two worlds: the frontline reality of hospitals grappling with burnout, and the policy and innovation tables where digital strategies are shaped, and technological solutions are deployed. The disconnect is often stark.
While policymakers debate governance frameworks and future health data infrastructures, wards and clinics struggle to maintain sufficient staffing to deliver timely and safe care. While industry champions new AI tools, nurses and doctors contend with understaffing, night shifts, and emotionally demanding environments that are directly linked to poor mental health outcomes, as repeatedly evidenced across the MeND survey’s analysis of working hours, violence, and lack of protective factors.
In this context, AI can easily feel like a solution looking for a problem unless it addresses what clinicians actually experience day after day: pressure building long before it becomes visible to leadership, and warning signs of deterioration hiding in plain sight within routine data.
The overlooked opportunity: secondary data as a signal of systemic distress
One of the biggest untapped opportunities in European health systems lies in the routine data they already hold. Staffing ratios, sick leave patterns, overtime records, waiting times, treatment delays, and after-hours documentation trends – none of these were designed for mental health forecasting. And yet, collectively, they form a remarkably sensitive proxy for organisational strain. The WHO AI report reinforces this, highlighting that health data governance and responsible use of system-level data are becoming central pillars for trustworthy AI adoption. Still, while countries increasingly explore AI for imaging, diagnostics, or triage, far fewer apply the same intelligence to understanding the health of the workforce keeping these systems afloat.
This is the space where a federated learning model – such as the one described in the AI Blueprint developed by public- and private-sector experts within WHO’s Strategic Partners’ Initiative for Data and Digital Health – offers a compelling illustration of what purpose-driven AI could look like. Federated learning, as envisioned in the Blueprint (to be published in 2026), enables pattern detection across institutions without centralising sensitive employee data, directly addressing one of the Region’s biggest public concerns: data privacy and trust.
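For readers unfamiliar with the mechanics, the core idea of federated learning can be sketched in a few lines. The hospital names, figures, and the simple linear model below are invented for illustration only – this is a minimal sketch of federated averaging in general, not the Blueprint’s actual design: each site trains on its own private records, and only model updates (never raw data) are averaged centrally.

```python
# Minimal sketch of federated averaging on synthetic data.
# Hypothetical: site names, data, and model are illustrative only.

def local_step(weights, data, lr=0.1):
    """One gradient-descent step on a site's private data
    (simple linear model y = w*x + b); raw records never leave the site."""
    w, b = weights
    gw = gb = 0.0
    for x, y in data:
        err = (w * x + b) - y
        gw += 2 * err * x / len(data)
        gb += 2 * err / len(data)
    return (w - lr * gw, b - lr * gb)

def federated_round(global_weights, sites):
    """Each site trains locally; only the weight updates are averaged."""
    updates = [local_step(global_weights, data) for data in sites.values()]
    w = sum(u[0] for u in updates) / len(updates)
    b = sum(u[1] for u in updates) / len(updates)
    return (w, b)

# Three hypothetical hospitals, each holding private (workload, strain) pairs.
sites = {
    "hospital_a": [(1.0, 2.1), (2.0, 4.0)],
    "hospital_b": [(1.5, 3.2), (3.0, 5.9)],
    "hospital_c": [(2.5, 5.1), (0.5, 1.2)],
}

weights = (0.0, 0.0)
for _ in range(200):
    weights = federated_round(weights, sites)
print(round(weights[0], 2))  # learned slope approaches the shared trend
```

The privacy property sits in `federated_round`: the central coordinator only ever sees aggregated weights, so no individual employee record crosses an institutional boundary.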
Crucially, it reframes AI not as an external layer added to strained systems, but as a tool to help identify “where the water starts heating up” before the frog boils, before burnout peaks, before spikes in short-term leave, and before the quality of care deteriorates.
The AI bubble doesn’t have to burst
AI will fall short of its promise if it becomes synonymous with efficiency alone. But it can succeed (quietly and meaningfully) when directed towards the human foundations of care.
1. AI can help restore balance in overstretched systems
Patterns in routine secondary data often reflect dysfunction long before people speak up. Recognising “red zones” of risk early allows for targeted, context-specific interventions that improve retention and quality of care. This mirrors the fact that working conditions, workload, and lack of support are directly associated with mental health deterioration, and that system-level visibility is essential to address them sustainably.
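A “red zone” does not require sophisticated modelling to surface; even a simple baseline comparison on routine data can flag where strain is building. The ward names and sick-leave figures below are entirely hypothetical – a real system would draw on staffing rosters, sick-leave records, and overtime logs – but the sketch shows the principle: compare each unit’s latest value against its own recent baseline.

```python
# Illustrative sketch: flagging "red zones" from routine workforce data.
# All ward names and figures are hypothetical.
from statistics import mean, stdev

# Weekly short-term sick-leave rates (%) per ward over the last 8 weeks.
sick_leave = {
    "ward_a": [3.1, 2.8, 3.0, 3.3, 2.9, 3.2, 3.0, 3.1],
    "ward_b": [3.0, 3.4, 3.9, 4.6, 5.2, 6.1, 6.8, 8.4],  # steadily rising
    "ward_c": [4.0, 3.8, 4.1, 3.9, 4.2, 4.0, 3.9, 4.1],
}

def red_zones(series_by_ward, z_threshold=2.0):
    """Flag wards whose latest value sits far above their own baseline
    (simple z-score against the preceding weeks)."""
    flagged = []
    for ward, series in series_by_ward.items():
        baseline, latest = series[:-1], series[-1]
        z = (latest - mean(baseline)) / stdev(baseline)
        if z > z_threshold:
            flagged.append(ward)
    return flagged

print(red_zones(sick_leave))  # only the rising ward is flagged
```

Because each ward is compared to its own history, the same threshold works for units with very different baselines – the point being system-level visibility, not surveillance of individuals.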
2. AI can rebuild trust if deployed transparently and collaboratively
We often speak about inclusive governance and workforce engagement, yet too many AI tools are still designed far from the realities of clinical practice. Trust and legitimacy will follow only when frontline staff are part of the design table, when the tools work within the messiness of real workflows, and when ethical protections are visible, not merely theoretical.
3. AI can move us from forecasting problems to preventing them
Predicting pressure points is only the first step; the real value lies in using those insights to redesign workflows, redistribute strain, and prevent the spiral into sick care. When AI is used to intervene upstream rather than merely anticipate downstream impact, health systems can protect workforce wellbeing long before crises unfold.
4. AI’s power lies in enhancing human judgement
The goal is not the automation of complex human experiences, but augmentation: giving leaders better visibility so they can make better human decisions.
A future built on shared responsibility
The real revolution in AI for healthcare will not come from algorithms alone. It will come from how systems choose to use them.
This includes how public–private partnerships are structured, how procurement models incentivise ethical design, how institutions measure value, and how leaders signal that workforce wellbeing is not a secondary outcome but a strategic priority. The field examples and early explorations embedded in the AI Blueprint mentioned above are useful not because they present a perfect model, but because they demonstrate what becomes possible when data, technology, ethics, and collective governance converge.
Ultimately, what Europe needs is not more pilots, more dashboards, or more high-level declarations, but more coherence. And coherence is only possible if governments, healthcare organisations, clinicians, digital innovators, and regulators move together with shared purpose.
From digital strain to digital strength
At a time when clinicians feel digitally burdened rather than digitally empowered, the question is not whether AI is a bubble or a revolution. The question is whether we are willing to allow AI to serve a purpose broader than automation: strengthening the systems on which healthcare depends.
AI can help prevent burnout. AI can help improve clinical quality and patient safety. AI can help leaders see the unseen. AI can help health systems breathe again.
But only if AI becomes a tool for healthcare and workforce resilience rather than mere operational optimisation. Then the bubble will not burst, because it will no longer be hype: it will form part of the foundation of a stronger, more sustainable, and more hopeful future for healthcare.