AI chatbots boost resilience to health misinformation

Fri 15 May 2026
AI in health
News

As generative AI tools become more sophisticated and widely accessible, concerns about misinformation are intensifying. From convincingly written articles to deepfake images and automated social media accounts, the line between fact and fiction is increasingly difficult to discern. While public awareness of misinformation has grown, so too has the complexity of identifying and resisting it, particularly in health-related contexts.

Misinformation about topics such as vaccination, mental health, or lifestyle choices often spreads through trusted social networks. This makes it harder to challenge, as individuals are more likely to accept claims shared by friends or family. According to researchers, overcoming this requires not only factual knowledge but also the conversational skills to question and counter misleading claims in everyday interactions.

Scalable support tool

Researchers from the University of Oulu and the University of Tokyo have explored whether AI can play a role in strengthening public resilience to misinformation. Their work focuses on AI-driven conversations as a scalable method to support critical thinking without replacing human judgment.

“AI gives us a way to support people at scale without replacing human judgment,” says Simo Hosio. “We wanted to bridge the gap between knowing misinformation exists and being able to respond to it in real-life conversations.”

The team developed an AI chatbot named Forty, designed to engage users in structured, evidence-based dialogues around common health misconceptions. The chatbot addresses topics such as oral hygiene, physical activity and mental well-being, alcohol use, and environmental health.

Cognitive inoculation

The approach is rooted in the concept of “cognitive inoculation,” a theory from social psychology that compares resistance to misinformation with the body’s immune response. By exposing individuals to weakened versions of misleading arguments in a controlled setting, they can build mental defenses against future persuasion attempts.

Although cognitive inoculation has proven effective in fields such as health education and prevention programs, its broader implementation has been limited by the need for trained professionals to deliver personalized, conversational interventions. AI, the researchers argue, offers a way to overcome this barrier.

“AI allows us to deliver consistent, evidence-based educational conversations at population scale,” explains Dániel Szabó. “Traditional methods simply cannot achieve this under current resource constraints.”

Study results

To evaluate the chatbot's effectiveness, the research team conducted a study with 65 participants. Participants interacted with Forty via a public website, and the researchers compared the experience to more traditional approaches, such as reading educational materials or writing reflective essays.

The results indicate that conversational AI may offer a more effective way to build resilience against health misinformation. Participants who engaged with the chatbot showed greater improvement in their ability to resist misleading claims compared to those using non-interactive methods.

The findings were presented at the ACM CHI Conference on Human Factors in Computing Systems, a leading conference in the field of human-computer interaction, where the study received an Honorable Mention Award.

While the exact mechanisms behind the chatbot’s effectiveness are not yet fully understood, researchers suggest that the interactive nature of dialogue, combined with tailored responses, may play a key role in strengthening critical thinking.

Enhance human resilience

Building on these findings, the research team is now exploring broader applications of AI-driven conversational support. In March 2026, they hosted a workshop at the Augmented Humans Conference in Okinawa, bringing together international researchers to examine how similar approaches could enhance other forms of human resilience.

The discussions extended beyond misinformation to include challenges such as negative thinking patterns, procrastination, and motivation. Participants from institutions including Aalborg University, UNSW Sydney, and the University of Melbourne collaborated on early-stage prototypes aimed at supporting mental well-being.

“What became clear is that resilience is not limited to misinformation alone,” says Szabó. “There is potential to apply these methods to a much wider range of everyday challenges.”

Next phase

The next phase of development will focus on expanding the capabilities of the Forty chatbot. In collaboration with international partners, the researchers aim to adapt the system to support users facing stress, uncertainty, and life transitions.

Rather than replacing human care, the goal is to provide accessible, preventive tools that help individuals build resilience before problems escalate. This aligns with a broader trend in digital health: shifting from reactive treatment to proactive support.

“Our long-term vision is to help people prepare mentally for challenges before they arise,” says Hosio. “AI can play a meaningful role in making that support widely available.”

These findings highlight the growing potential of conversational AI not just as an information tool, but as an active intervention in public health, capable of strengthening both individual and societal resilience in an increasingly complex information landscape.