Artificial intelligence in public health: Wanted but not there yet

Tue 21 October 2025
AI
Interview

AI enables pandemic surveillance, identifies risk groups, and supports tailored communication when designing disease prevention campaigns. In an interview for ICT&health Global, Dr Stefan Buttigieg, Specialist in Public Health Medicine at the Ministry for Health in Malta, discusses whether ChatGPT is a good or bad health advisor, how AI can help fight misinformation, and why cooperation between health authorities and big tech companies is key to improving population health.

After the COVID-19 pandemic, there has been much debate about whether AI can support pandemic preparedness. Are we at that stage yet?

On a global level, we're still sorting out our data issues, and we haven't yet achieved the implementation momentum needed to start seeing clear, visible outcomes. However, we are seeing both binding and non-binding policies, as well as strategic momentum beyond what we could have imagined. Now we need to switch gears and apply the work outlined in all this strategic planning that has taken place.

Beyond pandemic response, where else can AI be applied in public health?

Here, I would point you to our recent work in The Lancet, where we outlined how AI can be used to deliver public health activities. There are multiple use cases for AI in public health, and I believe we're just getting started. AI techniques are used across the board: to identify potential outbreaks and issue timely warnings; to monitor trends in risk factors for non-communicable diseases by analyzing demographic, behavioral, and environmental data; and to conduct behavioral epidemiology, where data from mobile apps and social media are analyzed to track health behaviors such as diet, physical activity, and mobility. Even more interestingly, AI is increasingly applied in public health communication, helping tailor messages to specific populations.
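To make the outbreak-detection use case concrete, here is a deliberately simplified sketch of a statistical alert on weekly case counts. It is not any specific national surveillance system; the function name, window size, and threshold are all illustrative, and real systems use far more sophisticated methods.

```python
from statistics import mean, stdev

def flag_outbreaks(counts, baseline_weeks=8, threshold=2.0):
    """Flag weeks whose case count sits well above the recent baseline.

    A week is flagged when its count exceeds the baseline mean by more
    than `threshold` standard deviations (a simple z-score rule).
    """
    alerts = []
    for week in range(baseline_weeks, len(counts)):
        baseline = counts[week - baseline_weeks:week]
        mu = mean(baseline)
        sigma = stdev(baseline) or 1.0  # guard against flat baselines
        z = (counts[week] - mu) / sigma
        if z > threshold:
            alerts.append((week, round(z, 1)))
    return alerts

# Eight quiet weeks followed by a sudden spike in week 8.
weekly_cases = [10, 11, 9, 10, 12, 10, 11, 10, 30]
print(flag_outbreaks(weekly_cases))
```

The point of the sketch is the shape of the workflow, not the statistics: curated, timely count data in, timely warnings out.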

Of course, let's not forget the everyday use of these tools by public health professionals themselves – something that is challenging to measure at scale. At the European Public Health Conference in Lisbon in 2024, we found that “Translation,” “Writing,” and “Coding” were the most common reasons for using generative AI. I recently had the opportunity to follow up on this work at a conference in Australia, and there have been comparable insights, as well as some novelties that we hope to share soon.

These applications sound promising, but how realistic are they, considering that digitalization in healthcare often lags behind other economic sectors?

I think there are some excellent opportunities here, but of course, we haven't seen the scale we would like in multiple countries. We really need to work on perception, which means doubling down on AI literacy across public health organizations and showcasing these solutions at conferences, podcasts, webinars, and more.

I'm happy that I'm already seeing specialized conferences, such as the AI in Public Health Research Conference at the Robert Koch Institute in Germany, that are really raising awareness and understanding of these applications. We're also seeing that overall, countries – especially within the European Region – are still adapting their regulatory frameworks to align with higher-level regulations published by the European Union, such as the European Health Data Space (EHDS) and the AI Act, which play an integral role in the safe and ethical deployment of AI solutions at scale.

Nevertheless, money makes the world go round, and we first need to work on curating our data and building data cultures within health organizations to truly see the monumental impact these applications can have. I have a good feeling that the “Apply AI Strategy” will get us much closer to the tipping point.

People around the world are increasingly turning to ChatGPT and similar generative AI tools for health and mental health advice. Is this development beneficial or harmful?

I think it can be both beneficial and harmful. I highly recommend that large-scale tech companies ensure that the principles they adopt – especially those offering health-related advice – are closely aligned with those put forward by WHO. These basics cannot be ignored: protecting human autonomy, ensuring transparency, explainability, and intelligibility, and promoting inclusiveness and equity, among others.

AI also has a darker side, as it can help spread health-related misinformation on social media, for example, about vaccination. Can AI itself be used to fight medical misinformation?

Every technology has its dark and “holy” side, with a lot in between. Yes, AI can play a critical role in combating medical misinformation. One of the biggest barriers I’m seeing so far is the very low number of people working specifically in marketing and communication within public health organizations. Globally, these teams often work on shoestring budgets, and there’s only so much they can explore. Just look at the number of dismissals happening in public health organizations around the world. Nonetheless, there might be an opportunity in disguise for individuals and teams within these organizations to use AI tools to scale up their work.

Tech companies are increasingly having a significant impact on citizens’ health. They control the algorithms behind social media platforms that amplify misinformation and are often reluctant to cooperate with health authorities. How can this challenge be addressed?

Regulation and stakeholder alignment are key. Regulation is core to our mission to ensure that the interests of individuals and populations are prioritized equitably so that no one is left behind. In this area, we still have a lot of work to do as public health organizations. Secondly, and this is where we really need to work harder, we must engage closely with tech companies through regular meetings and forums to ensure they build the necessary capacity to put health front and center.

For one, I would love to see YouTube Health content, especially from verified health professionals and health authorities, extended to many more countries, and for Meta Verified to be enabled for trustworthy health professionals and authorities. Building trust networks and communicating relevant regulatory developments to these tech companies will make a real difference in the journey ahead.

We are moving toward large action models, and soon AI agents on our smartphones may advise us on what to eat or whether to vaccinate. In this new reality, will public health authorities still have a meaningful impact on society?

Of course they will! This comes with a caveat, though. We need public health authorities to lead the way, not necessarily by developing the agents themselves, but by providing high-quality, curated data sources that will feed credible content to these AI agents. Ideally, public health authorities should provide the best possible Model Context Protocol (MCP) servers for agents to connect with. More than ever, public health authorities need to double down on data and the relevant data cultures and strategies.
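To illustrate the idea of an authority feeding curated content to agents, here is a deliberately simplified, hypothetical sketch of a tool-style interface of the kind the Model Context Protocol standardizes. A real deployment would use an actual MCP SDK; the tool name, topics, and guidance text below are all invented for illustration.

```python
# Hypothetical sketch: a health authority exposing curated guidance to AI agents.
# Not a real MCP server; names, topics, and content are illustrative only.
CURATED_GUIDANCE = {
    "vaccination": "Refer to the national immunisation schedule published by the health authority.",
    "nutrition": "Refer to the national dietary guidelines published by the health authority.",
}

def list_tools():
    """Advertise the tools an agent may call (MCP-style discovery)."""
    return [{"name": "get_guidance", "parameters": ["topic"]}]

def call_tool(name, topic):
    """Serve curated content only; unknown topics get an explicit non-answer
    rather than a generated guess."""
    if name != "get_guidance":
        raise ValueError(f"unknown tool: {name}")
    return CURATED_GUIDANCE.get(topic, "No curated guidance is available for this topic.")
```

The design choice worth noting is the last line: the authority's server answers only from its curated store, which is exactly the property that makes it a trustworthy source for downstream agents.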

The purpose of public health authorities shouldn’t be popularity – it never really was – but rather the focus should be on providing high-quality content that is appropriately communicated to the right audiences in a way they understand.

I don’t know of any national health chatbot more popular than ChatGPT, Perplexity, or Copilot.

I have no problem with a national health chatbot being less popular than ChatGPT, Perplexity, or Copilot. What I do have a problem with is that public health authority data sources are not being used in the answers generated by these tools. That’s where we need to double down and work harder.

How do you imagine an AI-powered public health system, and what steps are needed to make it a reality?

First, data. We need to step up our game and ensure that all relevant stakeholders are aligned. Second, interoperability. We need to ensure interoperability on all levels (following the LOST model – legal, organizational, semantic, and technical), but even more importantly, human interoperability, where we set aside our agendas and focus on public health principles and the matters that truly count.

What recent applications of AI in healthcare have impressed or inspired you the most?

I am particularly impressed by the growth in agentic AI and by the deep integration of existing platforms with ChatGPT, Copilot, and other generative AI tools. Now we’re even hearing about hierarchical reasoning models and much more.

In your view, will AI ultimately improve public health or threaten it?

Here, I need to go back to the words of the inspirational health AI visionary, Dr Ricardo Baptista Leite, which I heard for the first time at the open sessions of the European Health Forum Gastein. The moment we’re facing with artificial intelligence is very similar to when the lift was first introduced to modern society. Once its safety was demonstrated to the general public and trust was put front and center, the revolution took place, and today we see modern and staggering feats of engineering everywhere.

I see a similar path ahead for AI in public health. We need to double down on AI safety and ensure that, in alignment with public health principles, it is properly evaluated. Following that, we will deploy artificial intelligence applications and techniques across multiple situations and use cases.