Artificial intelligence is increasingly where people turn when they are lonely, distressed, or searching for guidance. Conversational systems are available 24 hours a day, respond instantly, and often communicate in a tone that feels attentive, empathetic, and reassuring. For many people, that accessibility can feel supportive.

However, there is a growing concern that deserves careful consideration: AI systems are beginning to resemble psychotherapy without actually being capable of practising it safely.

My Personal Context

This concern is not purely theoretical for me.

Over the past year, I have found myself in an increasing number of conversations with friends, family members, and colleagues about how people are using AI systems in their daily lives. Disturbingly, these conversations include stories of people turning to AI platforms for something that looks a lot like emotional support and life coaching.

Sometimes the stories are lighthearted—people joking about asking an AI for advice about a disagreement with a partner or a difficult conversation at work. But other conversations have been more concerning. I have heard accounts of individuals who are spending significant amounts of time talking with AI systems about personal conflicts, emotional struggles, and deeply held grievances. The recent shocking and tragic events at a BC school bring into focus the reality of leaving these systems unchecked.

In several of the stories shared with me, what stood out was not simply the reliance on the technology, but the way the interactions with AI appeared to reinforce the person’s existing perspective.

As someone who views these dynamics through a clinical lens, that pattern immediately raises alarm.

In psychotherapy, validation is an important tool, but it is only one small part of the process. Skilled clinicians also help people examine assumptions, explore alternative perspectives, and recognise when their thinking may be shaped by fear, hurt, or cognitive distortion. Therapy is not simply about agreement—it is about reflection and growth. As a clinical supervisor, I see this reflected in our competency standards, which require that psychotherapists understand the necessity of challenging people, and have the skills to do so, in order to support new insights.

When a conversational AI responds in ways that appear consistently affirming or sympathetic, the interaction can begin to resemble therapeutic validation, but without the balancing elements that make therapy effective and safe. When AI systems appear overly agreeable and validating, AI researchers describe this behaviour as “sycophancy.”

The Rise of “Pseudo-Psychotherapy”

Large language models are trained to be helpful, cooperative, and supportive. This design goal generally produces pleasant interactions. Yet in conversations involving emotional distress or interpersonal conflict, this tendency can sometimes lead to responses that appear overly validating or agreeable.

In psychology, we would recognise this as a potential problem.

Effective therapy does not simply affirm a person’s perspective. Skilled clinicians carefully balance empathy with gentle challenge, helping individuals examine their thinking, regulate emotions, and consider alternative interpretations of events. This requires training, ethical accountability, and clinical judgment developed through years of supervision and practice.

AI systems, by contrast, do not possess clinical reasoning, diagnostic ability, or responsibility for outcomes.

When an AI system responds with supportive language to someone who is distressed, isolated, or experiencing distorted thinking, the interaction can unintentionally resemble therapeutic validation. In certain cases, this may reinforce rigid narratives, grievance-based thinking, or other patterns that a trained clinician would approach far more cautiously.

The Risk of Sycophantic AI

Researchers in artificial intelligence describe this phenomenon as “over-alignment with the user,” or sycophancy.

Sycophantic systems tend to mirror and agree with the user’s assumptions because their training emphasises being helpful and cooperative. While this behaviour may be harmless in many everyday contexts, it becomes more concerning in emotionally charged or psychologically vulnerable situations.

People who are struggling with loneliness, relational conflict, or mental health difficulties may interpret these responses as authoritative or therapeutic guidance.

Without appropriate safeguards, the result can be a subtle feedback loop in which an AI system inadvertently reinforces the user’s interpretation of events rather than helping them reflect on it.

Why This Matters for Public Safety

Mental health professionals operate within clear ethical frameworks. We are accountable to regulatory bodies, professional standards, and ongoing supervision. Our work involves assessing risk, recognising patterns of distress or pathology, and intervening in ways designed to protect both the individual and the people around them.

AI systems operate outside those professional structures.

While many developers are working to build comparable safeguards into their systems, the technology is evolving rapidly. At the same time, many users are already turning to AI platforms for advice about relationships, emotional pain, and life decisions.

This creates a gap between what the technology is capable of doing and what it can safely and responsibly provide.

The Much-Needed Conversation

None of this suggests that AI tools have no role in supporting wellbeing. Used thoughtfully, they can complement mental health services in certain contexts; in our own practice, for example, we use them to assist with session note-taking.

However, we should be extremely cautious about allowing conversational AI to creep into the role of a psychotherapist. The qualities that make these systems appealing—constant availability, emotional tone, and conversational fluency—are also the qualities that can make them psychologically persuasive and potentially harmful.

For individuals who are vulnerable, isolated, or seeking validation, that combination deserves careful ethical consideration.

Moving Forward Thoughtfully

As AI systems become more integrated into daily life, it will be important for developers, policymakers, and mental health professionals to work together to establish clear boundaries around their use.

Technology often evolves faster than the conversations we have about its impact. In the case of AI and mental health, slowing down long enough to ask the right questions may help ensure that innovation remains aligned with public wellbeing.

The goal should not be to resist technological progress, but to approach it with the same care and ethical awareness that guide responsible mental health practice globally.