“The true impact of AI in healthcare is still ahead of us, but if we want to get there safely, we must start learning now,” according to Eva Deckers, Head of the AI Centre of Excellence at Catharina Hospital, Eindhoven, Netherlands. In an interview with ICT&health Global, she explains why building data foundations, setting realistic expectations, and fostering regional collaboration matter more than chasing the next algorithm or AI model.
What exactly does the AI Centre of Excellence do, why was it created, and what is your role as Head of the Centre?
The hospital understood early on that to safely and effectively develop and implement AI, with real return on investment – whether in experience, efficiency, or quality of care – you need to establish strong foundational infrastructure. The AI Centre of Excellence brought together people across the organization, including system engineers, software engineers, data engineers, data scientists, and general AI developers, to work on data pipelines and the broader architectural backbone required to enable AI.
What truly shaped the decision to create the Center was the hospital's culture and DNA. We are located in the Brainport region, near Eindhoven University of Technology, and we consider ourselves an engineering-minded hospital. We have around 20 professors, many of whom have ties to the technical university, which is unusual for a top clinical (not academic) hospital. That academic-engineering orientation means we naturally think long-term and strategically about technology.
We focus heavily on what I call ‘plumbing’. Setting up data pipelines means establishing the infrastructure needed for reliable, scalable, and secure AI use. Under my leadership, we have also expanded our project portfolio to include internal development projects, collaborations with startups, operational AI use cases, speech-to-text solutions, and partnerships with larger vendors. My role is to manage this portfolio, align it with our digital infrastructure, and ensure AI is legally, technically, and clinically embedded across the organization.
Do you think hospitals should now start investing in AI and set up similar centers to coordinate AI transformation?
I believe the actual impact of AI in healthcare is still ahead of us. If hospitals want to harness AI safely and sustainably, they need to create environments that support learning and experimentation. That doesn’t necessarily mean every hospital needs a Centre of Excellence like ours. Instead, each hospital should define its AI role and strategy.
Some hospitals may need to focus on implementing solutions from external providers. Others, like ours, may invest in building their own data pipelines and platforms. The main point is: learning must be intentional and structured. AI should not be fragmented across departments. You need coordination, but it must be deeply embedded in existing structures: procurement, privacy, legal, and IT. AI transformation can’t be isolated in a separate “AI club”.
How did you begin shaping the Centre when you assumed leadership?
My first goal was to create “power to act”: a central point of gravity that could steer projects and connect them directly to platform development. We had excellent people, but they were scattered across departments, and their work was not always consciously aligned. I focused on strategically aligning infrastructure development with project selection.
Then I began influencing other parts of the organization. The Centre of Excellence cannot do everything. For example, I worked with Healthcare Intelligence – our data center – to elevate data management as a hospital-wide priority. I collaborated with HR to ensure that digital skills, including data literacy and AI understanding, are included in staff development. Why? Because AI readiness must be both technical and organizational.
We now see the Centre more as a program with different stakeholders, where we take responsibility for the platform and strategic AI projects. However, other teams lead projects closely aligned with their workflows. For instance, speech-to-text projects are now led by process teams, as this previously experimental technology is becoming standard software from more mature vendors.
One of your priorities is improving AI readiness. How do you manage organizational change to ensure everyone is aligned with the technological transformation?
For us, this applies to all digital innovations, including hybrid care, remote monitoring, and, of course, AI. We use care pathway design as a structured approach. First, we map the current pathway to fully understand the relationships, needs, and pain points. We then redesign it to improve value, workflow, and the patient experience. AI may be part of the solution, but often it is not the first step. Our starting point is always human + AI = team. AI is not an add-on or a replacement; it is an active participant in the processes we work with.
In reality, we often find that workflows and processes need improvement before AI can have a meaningful impact. That’s why we focus first on value and process, and then on tech. And if we want AI to make an impact in five years, we must start today by collecting, defining, and cleaning data. Our strategy prioritizes care pathway design, which makes change readiness much easier because professionals see the purpose, not just the new tool.
What is your long-term vision of AI in healthcare?
In about 10–15 years, AI, robotics, and data will enable us to deliver truly personalized, precision medicine at scale. We will be able to define, in a standardized way, what each individual patient needs: the right treatment, at the right moment, in the right context.
But AI alone can’t achieve that. We need networks within regional care ecosystems. No single hospital can deliver personalized care alone. I expect AI to have a greater short-term impact in areas such as care coordination, capacity management, and demand forecasting. These are urgent challenges today, especially in systems dealing with scarcity.
How do you balance experimentation with new tech while not overburdening healthcare staff?
Interestingly, doctors often ask me, “Why don’t we have more AI?” At the same time, I'm also responsible for hybrid care, which has been in use for years, and there the resistance to change is much higher.
So expectations regarding AI are sometimes unrealistic. My role is to build a balanced portfolio: projects that help the entire hospital, projects that help us learn, and strategically scalable projects. I don’t want to suppress innovation ideas, but I also can’t support every startup or experiment. We use existing project management structures to evaluate whether an idea is valuable, affordable, and usable.
The biggest challenge is that, on the one hand, we have innovative but immature startups, and, on the other, large vendor platforms that are mature but restrictive. Both extremes can be problematic. That’s why we’re exploring a middle route: opening our platform as a regional AI marketplace where third parties can safely experiment with our data in a controlled environment. It would let us innovate faster without stretching our resources or compromising safety.
How can hospitals develop sustainable AI business models, given that healthcare still primarily pays for volume rather than quality?
This is one of the biggest questions for European healthcare. Most hospitals still operate under production-based payment models. You get paid for human-delivered care, not technology-enabled care. There is a systemic mismatch: technology is often categorized as “cost,” while physicians’ work is classified as “value.”
We need to rethink this, as AI can significantly improve capability, capacity management, coordination, and overall system efficiency. But the return on investment is often two or more years away and may benefit the population or the system, rather than the hospital directly.
So, hospitals must look at broader value. Sometimes we do things that are not financially beneficial to us but are valuable to society. Ideally, we need new funding models that reward value and view technology differently.
What, based on your work with professionals, are the key opportunities and challenges?
Many clinicians hope AI will address fragmented workflows and reduce administrative burden, particularly in documentation and EMR usability. And although we can learn from early point solutions, the real opportunity lies in redesigning workflows, defining what data is needed for whom and when, and investing in infrastructure. My challenge in all of that is to bring people along with the results we deliver, which are often technical in nature. For example, we have unique multimodal datasets that integrate data from multiple sources around a patient and make it accessible. This foundation sets us up for success, but its value is not directly tangible for everyone. It may also mean starting with improvements to care processes that have nothing to do with AI but deliver direct value for patients and staff, while ensuring we can work towards meaningful implementation of AI. It is great to see that our leadership fully supports this line and is walking the talk by making the right, responsible investments.
Interestingly, if I had to choose today between implementing five AI applications or focusing on regional collaboration and data availability, I would prefer the latter. AI only works if foundational systems are in place. Professionals come to understand this through hands-on experience. That’s why creating space to experiment – safely, on a small scale – is key. People need to feel it, not just hear about it. You cannot simply copy what other hospitals do. You need your own experience to match your abilities and ambitions, building trust and readiness.