Artificial intelligence is becoming a routine part of psychological practice, yet professional unease is rising just as quickly. The American Psychological Association’s 2025 Practitioner Pulse Survey, completed by 1,742 psychologists, reveals a field that is rapidly embracing digital tools while simultaneously confronting deep ethical and clinical concerns. The result is a complex picture: growing experimentation, cautious integration and a clear demand for stronger safeguards.
Over the past year, the number of psychologists using AI tools has nearly doubled. Fifty-six percent reported using AI at least once in the past 12 months, compared with 29% in 2024. Monthly use also increased sharply, with nearly one-third now relying on AI at least once a month. Yet this expansion is happening primarily in the administrative and documentation layers of care, not in the clinical core. Psychologists most often turn to AI for drafting emails, summarizing notes and processing routine paperwork. These tools offer practical relief in an environment where administrative demands too often pull clinicians away from direct patient care.
Concerns about AI in mental healthcare
But familiarity has not translated into comfort. An overwhelming 92% of psychologists express concerns about AI in mental healthcare. Worries about privacy breaches, algorithmic bias, unanticipated social harms and unreliable outputs, such as hallucinated information, have all increased since the previous year. This shift suggests that psychologists are becoming more aware of AI’s risks as they engage with the technology in real-world contexts. Rather than rejecting innovation, they are calling for measured, transparent and ethically grounded adoption.
APA CEO Arthur C. Evans Jr., PhD, emphasizes that AI may help address critical pressures such as clinician burnout and access barriers, but cannot replace professional judgment. Patients, he stresses, must be able to trust that their provider is capable of identifying inaccuracies, mitigating bias and safeguarding data when AI is introduced into therapeutic settings. This stance reflects a broader principle emerging across digital health: AI can augment clinical care, but human oversight remains non-negotiable.
AI diagnostics
Notably, psychologists remain hesitant to use AI for diagnostic or patient-facing tasks. Only a small minority reported using AI-assisted diagnostic suggestions or chatbot-style patient interactions. This reluctance underscores concerns about accuracy, liability and the irreplaceable nuance of human clinical assessment. Despite worries about automation, only 38% of psychologists fear that AI may eventually replace aspects of their work. For most, the technology is seen less as a threat and more as a tool that must be carefully governed.
The survey also highlights persistent challenges that AI alone cannot solve. Nearly half of psychologists report that they cannot accept new patients, clear evidence that the mental health crisis continues unabated. Insurance issues, including low reimbursement rates and complex authorization requirements, remain major barriers to care. Even with better digital tools, systemic pressure on providers remains intense.
Informed consent
Against this backdrop, APA urges psychologists to adopt AI only with informed consent, clear communication about benefits and risks, and careful evaluation of each tool’s privacy, security and data practices. Before integrating AI into clinical care, psychologists must understand how patient information is stored and used, and whether the technology has been rigorously tested for bias and safety.
The 2025 results paint a picture of a profession at a critical inflection point. Psychologists are open to innovation and increasingly reliant on digital support, yet firmly committed to ethical guardrails that protect patient trust and therapeutic integrity. AI may help modernize mental healthcare, but its successful integration depends on thoughtful, transparent and human-centered implementation: values that remain foundational to the practice of psychology.
AI-driven personalized mental healthcare
Earlier this year, researchers at the University of Illinois Urbana-Champaign developed a generative AI–driven framework designed to improve personalized mental healthcare and clinical training. Led by professor Cortney VanHook, the project demonstrates how AI can simulate realistic patient journeys, giving clinicians a safe environment to explore treatment planning, cultural barriers and access challenges without relying on real patient data.
Using the platform, the team created a detailed case for “Marcus Johnson,” a fictional young Black man experiencing depression. The AI analyzed Johnson’s social environment, cultural context, and stressors, then generated a personalized treatment plan informed by established models such as Andersen’s Behavioral Model and Measurement-Based Care. This approach helped illustrate how AI can blend theoretical frameworks, patient characteristics, and clinical reasoning to produce credible care pathways.
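For readers curious what such a simulation might look like in code, the Python sketch below shows one hypothetical way a synthetic case could be assembled and handed to a large language model for a draft, training-only treatment plan. The patient fields, prompt wording and the use of the OpenAI client are illustrative assumptions for demonstration, not details of the VanHook team's actual framework.

```python
# Illustrative sketch only: one possible way a generative-AI framework might
# build a synthetic patient profile and prompt an LLM for a draft treatment
# plan. This is NOT the Urbana-Champaign implementation; profile fields,
# prompt wording, and the OpenAI client call are assumptions.
from dataclasses import dataclass

from openai import OpenAI


@dataclass
class SyntheticPatient:
    """Fictional patient profile; no real patient data is involved."""
    name: str
    presenting_concern: str
    social_context: str
    cultural_factors: str
    access_barriers: str


def draft_treatment_plan(patient: SyntheticPatient, model: str = "gpt-4o") -> str:
    """Ask an LLM for a draft plan framed by Andersen's Behavioral Model and
    Measurement-Based Care. Output is for clinician training, not patient care."""
    prompt = (
        "You are assisting with clinical training using a fictional case.\n"
        f"Patient: {patient.name}\n"
        f"Presenting concern: {patient.presenting_concern}\n"
        f"Social context: {patient.social_context}\n"
        f"Cultural factors: {patient.cultural_factors}\n"
        f"Access barriers: {patient.access_barriers}\n\n"
        "Using Andersen's Behavioral Model (predisposing, enabling and need "
        "factors) and Measurement-Based Care principles, outline a draft "
        "treatment plan with measurable follow-up points."
    )
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # A fictional case loosely analogous to the "Marcus Johnson" example.
    marcus = SyntheticPatient(
        name="Marcus Johnson (fictional)",
        presenting_concern="Persistent depressive symptoms over six months",
        social_context="Young Black man; limited local provider availability",
        cultural_factors="Stigma around help-seeking within his peer network",
        access_barriers="Insurance authorization delays; transportation",
    )
    print(draft_treatment_plan(marcus))
```

Even in this simplified form, the sketch captures the design idea the researchers describe: the simulated case, not real patient records, supplies the context, and established clinical frameworks anchor whatever the model generates so clinicians can critique the output safely.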