Generative AI is rapidly finding its way into medical education and clinical training. But without clear institutional policies and regulatory safeguards, heavy dependence on these tools may undermine the critical thinking skills of new and future doctors, warn experts in an editorial published in BMJ Evidence Based Medicine.
The authors highlight a growing tension: while AI can accelerate learning and decision-making, the technology also carries meaningful risks, especially for students and junior clinicians who are still developing foundational clinical judgment.
The risks behind unchecked AI adoption
The University of Missouri research team outlines several concerns that are increasingly relevant as AI becomes woven into routine medical work:
- Automation bias: the growing tendency to trust AI-generated information uncritically.
- Cognitive off-loading: outsourcing reasoning, evidence appraisal, and clinical synthesis to AI systems, weakening learners’ memory and analytical skills.
- Deskilling: reduced development of essential competencies among students and early-career clinicians who may lack the expertise to question AI outputs.
- Bias reinforcement: embedding inequities from biased training data into clinical guidance.
- Hallucinations: convincingly written but inaccurate or fabricated information.
- Privacy and security risks: especially problematic in healthcare, where highly sensitive patient data is involved.
Rethinking medical education in the AI era
To counter these risks, the authors advocate for substantial redesign of medical assessments and curricula. Rather than evaluating only final outputs, educators should grade the reasoning process, assuming that learners will use AI tools.
They also recommend creating AI-free assessments, including supervised practical exams and in-person evaluations that focus on bedside communication, physical examination, teamwork, and professional judgment: competencies that cannot be outsourced to algorithms.
AI literacy itself should be treated as a core competency. Future clinicians must understand how AI works, where it fits in clinical workflows, and how to evaluate its performance, limitations, and potential biases.
One effective approach is to integrate critical-thinking exercises in which learners evaluate AI responses containing both correct and intentionally flawed elements, then accept, modify, or reject each element based on primary evidence.
A call for regulatory guidance
The authors argue that educators cannot manage these challenges alone. Regulators, professional associations, and accreditation bodies should issue clear and regularly updated guidance on the role of AI in medical education and training.
Their conclusion is clear: while generative AI offers significant benefits, it also poses real risks to medical learners. Without proactive governance, AI could foster over-reliance, amplify bias, and disrupt the development of essential clinical reasoning skills. Medical programs must remain vigilant, continually adapt their training strategies, and ensure that AI strengthens, rather than weakens, the next generation of healthcare professionals.
GenAI to reshape clinical practice
Last month, we reported on a review in Nature Medicine that concluded that generative AI is beginning to reshape clinical practice, mainly as an assistant rather than an autonomous decision-maker. Modern transformer models such as GPT-5, Gemini 2.5 Pro, and DeepSeek-R1 can now reason through clinical problems, retrieve guidelines, write notes, and even generate code to test hypotheses. Crucially, these systems can be trained on smaller, domain-specific datasets, allowing hospitals to build tailored models without massive computational costs.
The authors emphasize AI’s collaborative potential, describing a “doctor–patient–AI triad” in which AI offers evidence-based insights while clinicians sustain judgment and empathy. Early research shows that human–AI teams outperform either alone in triage and diagnostic tasks.
Generative models can also create synthetic medical data, such as patient records, lab results, or imaging, supporting education and algorithm development while preserving privacy. But synthetic datasets carry risks, including overfitting and accidental re-identification, requiring strict validation and a balanced use of real and artificial data.
AI is already easing clinical workloads: drafting notes, summarizing charts, and reducing documentation time by over 70% in pilots. It also supports medical education through adaptive tutoring and realistic simulations. Still, hallucinations, bias, and documentation errors remain concerns.
The review called for rigorous clinical trials, transparent evaluation, ongoing monitoring for “performance drift,” and training clinicians to use AI safely. With thoughtful governance, generative AI could enhance care delivery, accelerate research, and broaden global access to expertise.