Nosta on LLMs: More information won’t make us healthier

Mon 9 March 2026
AI
Interview

In his new book, The Borrowed Mind: Reclaiming Human Thought in the Age of AI, innovation theorist John Nosta argues that large language models are reshaping how people make decisions. In this interview for ICT&health Global, he explains how AI can extend thinking in clinical judgment and why it will take time before LLMs become trusted allies for patients.

You’ve spent years working at the intersection of healthcare, medicine, and innovation. What pulled you beyond healthcare into AI as a broader human question?

My attraction to AI felt inevitable. Digital health was advancing in meaningful ways, but large language models represented something categorically different. It felt less like another tool entering medicine and more like a structural shift in how knowledge itself functions.

I’ve often described it this way: Gutenberg unlocked words. The internet unlocked facts. Large language models are beginning to unlock thought.

That realization moved the conversation beyond healthcare almost immediately. Once I saw that these systems could participate in reasoning across disciplines, not just within a clinical niche, the question stopped being about medical innovation. It became about cognition itself. After that initial exposure, there was no going back.

Early in the book, you say this isn’t really a book about technology, but a book about us. What do you mean by that?

Technology is the catalyst, not the subject. The book examines what happens to human thinking when answers become immediate and compelling. Large language models generate coherence without lived experience. When we collaborate with them, effort shifts and the distance between question and answer narrows. That shift affects judgment and responsibility. The book explores how we adapt when thinking is no longer a solitary act.

You introduce the idea of the “borrowed mind.” How do we know when AI is extending our thinking and when it is replacing it? In medicine, is AI expanding doctors’ capabilities or quietly eroding their critical thinking?

AI extends thinking when it widens perspective and sharpens inquiry. It begins to replace thinking when it becomes the endpoint rather than the starting point. In medicine, the distinction is subtle but important. These systems can scan literature and surface patterns at remarkable speed. Yet clinical judgment develops through exposure to nuance and consequence over time. If AI supports that process, it functions as an amplifier. If it becomes the answer before doubt has done its work, the erosion is gradual and easy to overlook.

Patients are increasingly using chatbots to ask health-related questions. Soon, AI agents may orchestrate our lives and optimize how we stay healthy. Will they become trusted allies we follow, or will people start avoiding them to escape the constant advice and demands for behavior change?

People appreciate guidance, but they resist feeling managed. Continuous optimization can produce fatigue, even when the advice is accurate. My sense is that health unfolds inside competing priorities and imperfect habits (and people). AI may earn trust if it respects autonomy and acknowledges limits. If these systems operate as relentless advisors, some individuals will disengage simply to preserve a sense of control. That being said, I do believe there is a clear path to AI becoming a trusted ally, but that accomplishment will come with the bumpy progress of two steps forward and one step back.

AI offers access to knowledge we have never had before. But will this knowledge actually make us healthier, or simply better informed?

Access to knowledge has rarely been the central barrier to health, and I think it’s important to understand this. Information is abundant, but applying it consistently is harder. AI can make medical insight widely available, which is meaningful. Whether it improves health depends on interpretation and follow-through. Being informed does not automatically translate into exercising sound judgment, particularly in the downstream aspects of self-care.

Medicine is full of ambiguity, uncertainty, emotionally charged decisions, and patients with very different stories and needs. Is AI really ready to step into a doctor’s office?

AI performs well in structured environments where patterns are explicit and data are clear. Medicine often unfolds before clarity arrives. Symptoms evolve and context shapes meaning. In those moments, judgment involves more than statistical correlation. AI belongs in the clinical environment as a computational layer, but it does not replace the lived dimension of care. Simply put, we will almost certainly ask, “What did the computer say?” That query can often be followed by “What did the doctor do?”

Do you think doctors may become supervisors of algorithms, while decisions based on intuition or experience are increasingly seen as high risk?

In some settings, that shift is already underway. As models become more persuasive, deviating from them may appear risky from a liability standpoint. The issue is not supervision itself; it is preserving professional agency. Algorithms can inform decisions, but they cannot assume responsibility for them. If that boundary blurs, the role of the physician changes in ways we should examine carefully.

You argue that depth, responsibility, and virtue remain human responsibilities. Yet these values are often squeezed out of healthcare by ten-minute appointment slots. In overburdened health systems, could artificial depth and empathy be better than none at all?

Artificial empathy can create a sense of attentiveness and, in strained systems, may offer real comfort. Still, empathy involves more than well-formed language. It includes shared vulnerability and a willingness to stand inside the outcome with a patient. An algorithm does not carry the moral weight of a decision. Simulation may soften the interaction, but it does not substitute for responsibility.

What can go wrong in a healthcare system where patients are highly empowered by AI?

Empowerment can improve care when it strengthens dialogue between patient and clinician. It becomes problematic when confidence exceeds context. AI-generated explanations may sound complete while missing nuance. Fragmentation can increase if individuals follow algorithmic advice without coordination. The task ahead is not to reduce empowerment, but to integrate it into relationships built on trust and humility.

You explore the philosophical meaning of AI for humans. But most people do not ask these questions. They use AI simply because it makes life easier. Why do you believe this book is so important right now?

Ease is precisely why this moment deserves attention. When effort decreases, we rarely stop to ask what else is changing. The friction involved in reasoning – from the struggle of deep thought to the joy of discovery – is part of how judgment forms.

There’s a familiar phrase that happiness is found in the journey rather than the destination. There is something similar at play here. The struggle to understand, the slow shaping of an idea, even the mistakes along the way, are not inefficiencies. They are formative and build discernment. They create intellectual resilience.

Large language models compress that journey. They offer polished destinations without requiring the same interior process. That is an extraordinary capability. But human cognition is not only about arriving at answers. It is shaped by friction, by loss, by love, by lived consequence. AI computes without biography. We do not.

This perspective is critical now because we are entering an era where thinking can feel effortless. Effortless thinking is seductive. It can also be quietly transformative. The question is not whether we should use these systems. We should. The question is whether we remain active participants in our own reasoning, or gradually surrender the parts of thought that once formed us.

John Nosta - The Borrowed Mind
Starting March 16, The Borrowed Mind by John Nosta will be available on Amazon.