For building an AI-powered post-visit medical app in just seven days, Michał Nedoszytko placed third in a prestigious hackathon organized by Anthropic, competing against 13,000 applicants. A cardiologist at Cliniques de l’Europe in Belgium and a long-time developer, he believes AI’s most immediate impact on healthcare may come from fixing the workflows that keep doctors away from patients.
How did you, as a cardiologist, become an AI developer?
I’ve been working in computer science for more than 20 years. Over that time, I built a drug search engine, electronic medical record systems, and platforms for managing on-call schedules. But no matter the project, the same issue kept surfacing: administration. Documentation and paperwork consume far too much of a doctor’s time.
About four years ago, I became deeply involved in artificial intelligence. In invasive cardiology, my specialty, I started training neural networks to recognize specific changes in coronary angiography images. At the time, that was still relatively novel. In medicine, AI was largely confined to radiology and image analysis.
When ChatGPT arrived, I immediately saw its potential for a very different part of healthcare: the administrative layer. AI could finally help with the “gray zone” of clinical work, the tasks that are essential but take doctors away from patients.
That was the starting point for my pre-visit project, a system designed to collect a patient’s medical history before the appointment. The idea was simple: streamline part of the history-taking process before the visit even begins. It turned out to work very well.
There is so much happening in AI and healthcare right now that it’s difficult to keep up. What are the most important developments?
I would divide the field into two main areas. The first is administrative AI, meaning everything related to history-taking, documentation, and collecting patient information. This is the part directly connected to the doctor’s visit. The clearest trend today is the rise of so-called AI scribes: systems that listen to the conversation between doctor and patient and automatically generate clinical notes.
Most of these tools are built to support the physician. They summarize the consultation and reduce documentation time. But there is also a third, increasingly important layer: support for the patient after the visit. That was the focus of a project I presented during an Anthropic hackathon. In that case, AI helps patients understand what the doctor recommended, what the next steps are, and what they are supposed to do once they leave the clinic.
The second area is clinical AI. This receives less media attention, but in some cases, it feels close to science fiction. In cardiology, for example, there are already systems that analyze ECGs and detect signals that are invisible to the human eye.
An ECG is no longer just about classic parameters. AI models can now predict things like heart failure, thyroid dysfunction, and, in some cases, even a patient’s sex. In radiology, systems can detect metabolic diseases, such as diabetes, from imaging data. These are remarkable developments. However, only a small proportion of these solutions have been fully clinically validated. There is a lot of promise, but much of it is still ahead of routine practice.
One example I find especially interesting is an ECG-based system that can detect coronary artery occlusion using logic completely different from the standard criteria used in cardiology. In some cases, it can outperform the physician.
At the same time, there is also a boom in patient-facing AI. In the United States, we are already seeing products such as ChatGPT Health from OpenAI, Claude for Healthcare from Anthropic, and Copilot Health from Microsoft. And I would expect Apple to make a serious move into this space sooner or later.
I was recently in San Francisco for the first time in eight years, and what I saw there honestly felt a little like science fiction.
Healthcare is still a conservative sector, largely because of patient safety. As a result, new technologies often enter the system slowly, while patients are already using tools like ChatGPT on their own. Is that a good or a bad development?
It is both an opportunity and a risk. Of course, AI systems can make mistakes, which is why an appropriate legislative framework is essential. But we also need to find the right balance. Personally, I actually like it when patients come to me after consulting ChatGPT. Years ago, they came in with “Dr. Google,” and then I often had to filter out a lot of poor-quality information from random websites.
With AI, the conversation is often more structured. Of course, today’s models are probabilistic, and responses can vary. Hallucinations do happen. But in clinical reality, people make mistakes too. When I ask a patient what medications they are taking, the answer often has to be verified anyway.
Medicine moves more slowly than computer science because it deals with people’s health and lives. We cannot afford to experiment without validation. We work according to the principles of evidence-based medicine. But AI also makes it possible to prototype ideas incredibly quickly.
Doctors have always had good ideas. In the past, they could mostly write about them in papers. Today, with tools like Claude or Codex, they can build a working prototype in a matter of days. Of course, that prototype still needs to be reviewed by professional engineers for security, code quality, and regulatory compliance.
That was one of the most striking things I saw at the Anthropic hackathon. Many of the winners were not computer scientists at all. A lawyer from California built a system to support administrative permitting. A road inspector from Uganda created a tool that analyzes dashcam footage to predict road repair costs. Someone else built a system to coordinate drone swarms for missing-person searches.
The lesson was clear: domain experts can now create meaningful technology much faster than before. But in healthcare, we still need to separate the hype from what is actually ready for clinical use. My post-visit system works very well, but it is still a hackathon prototype, not yet a clinically deployable product. Even so, the cost and time required to build something useful are dramatically lower today than they were even a year ago.
When you build AI solutions, what are you trying to change?
My goal is very straightforward: improve clinical workflow and make care more efficient.
For years, I’ve focused on improving processes. And very often, when you improve the process, patient outcomes improve as well.
The pre-visit system was originally built to save time and automate part of the medical history-taking process. In Belgium, I now spend much less time in the interventional cardiology lab and much more time in the outpatient clinic. In that setting, you quickly notice how repetitive many parts of the visit are. At some point, I realized how useful it would be to have an assistant that could help organize and structure this information.
But the benefits went beyond time savings. I noticed that when patients complete the medical history at home before the visit, they come in much better prepared. They already have a sense of what questions may come up and what direction the consultation will take. That improves the quality of the interaction.
Take medication lists, for example. In the clinic, asking “What medications are you taking?” can consume a surprising amount of time. At home, the patient can check the boxes, ask their partner, look at their prescriptions, and answer more carefully.
That allows the consultation to get to the heart of the matter more quickly. We can focus on diagnosis and treatment rather than on collecting basic information. In that sense, streamlining the process can directly improve the quality of care and diagnostic effectiveness. Of course, in medicine, you need evidence to prove that, which is why I’m currently working on a study.
This is also where the difference between Europe and the United States becomes visible, isn’t it?
Absolutely. I had this idea three years ago, and when I registered the domain for the project, my first thought was that it should eventually be available directly to patients. But I did not even try to launch it in Europe initially, because I knew how difficult that would be from a regulatory perspective.
We were the first hospital in Brussels to introduce ChatGPT into clinical practice as a pilot. And almost immediately, the data protection officer came asking what exactly we were doing. Fortunately, everything had been prepared carefully from a legal standpoint, so there were no consequences. But it illustrates the environment we operate in.
Very often in Europe, before we even begin building something, we are already thinking about all the reasons it might fail. I fully share European values. Privacy and data protection are absolutely essential. But we also need balance. If we become too cautious, Europe risks falling behind on innovation.
Technology is also changing the roles of doctors and patients. Could this cultural shift be harder than building the right technological solution?
We tested this solution across several specialties, and doctors' reactions varied widely. Some said immediately: “No, I prefer to take the history myself and stay fully in control of the conversation.”
There are also specialties where the history is only one part of the visit, and most of the time is spent on technical procedures. That is true, for example, of cardiology, gynecology, and ophthalmology. In those specialties, adoption was easier.
But it is important to understand that the value of a tool like this is not just time savings. It also prepares the patient for the consultation. The goal is not to replace the medical interview. The goal is to give the doctor a structured starting point.
The doctor still has to verify the information. This is not a system that automatically writes directly into the medical record without review. I often compare it to a nurse or secretary calling a patient before a planned procedure to collect preliminary information. By the time the anesthesiologist sees the patient, the basics are already there.
So no, I do not believe AI will replace doctors. But I do believe it can substantially improve the way they work. As with any new technology, adoption takes time. That said, studies are already showing that doctors who use AI scribes report higher job satisfaction.
In the United States, adoption is also faster because there are stronger financial incentives tied to documentation quality. That changes the equation.
In the U.S., big tech companies like Amazon and Apple are also entering the healthcare sector…
Yes, but most of those solutions are primarily aimed at end users, meaning patients.
In Europe, we are likely to see some delay simply because we do not yet have access to all of these tools. But in the U.S., the involvement of big tech may actually increase public trust in AI for healthcare. And healthcare accounts for roughly 20 percent of U.S. GDP, making it an enormous market.
There is a paradox in healthcare AI. We know there is a severe physician shortage that will persist for years, yet innovators keep saying that technology will not replace doctors. Why? Is it because healthcare is so sensitive and the doctor-patient relationship matters so much?
I think there are two reasons. The first is human. In medicine, there are moments when human presence is irreplaceable. I cannot imagine an AI system telling a patient they have cancer. The second reason is structural. A huge portion of a doctor’s work is administrative and does not actually require medical expertise.
When healthcare was digitized, we were promised that work would become easier. In reality, we often ended up with even more forms, more fields, and more boxes to tick. Doctors now spend an enormous share of their time on administrative work.
And that, to me, is the real opportunity of AI. Healthcare is one of the least optimized sectors of the economy. AI offers a chance to change that. It may finally allow doctors to return to their core role: caring for patients.