What do Woebot, Wysa and Youper have in common? They are all AI agents that use therapeutic techniques to help users improve mental well‑being, guide meditation and even manage anxiety.
In this article, AI mental‑health agents are goal‑directed conversational systems that sit with you in a chat or voice interface to support specific wellbeing tasks; for example, walking through CBT‑style exercises, practicing coping strategies, or checking in on mood over time.
In the broader AI literature, these would be considered agents because they are built around particular goals and workflows, whereas “agentic” AI usually refers to more autonomous systems that can independently plan multi‑step actions, call tools, and adapt their behaviour with relatively little human steering.
Translating that distinction into the mental‑health space, the tools we discuss here behave more like tightly scoped, therapeutically scripted companions than fully agentic systems that roam across apps and channels on your behalf.
These agents have disrupted the mental health industry: AI no longer plays the role of a simple symptom checker or static content library. Millions of users now turn to these conversational agents for support that “feels” human.
When AI agents adopt therapeutic styles, from CBT‑inspired coaching to companion‑like reassurance, their interaction patterns start to shape how people think, feel and act over time. For teams building these products, the therapeutic style is therefore not an aesthetic choice; it is a safety decision with regulatory, reputational and ethical consequences.
When an AI agent remembers past conversations, uses empathic language, or checks in unprompted, many people will treat it less like a feature and more like a therapist, coach or confidant.
Three dynamics are particularly important in this case:
- If a system sounds like a therapist, users can infer that it must be clinically tested, supervised and suitable for “people like them”, even when it is positioned as non‑clinical.
- Repeated late‑night chats, daily mood logs and personalised reflections create continuity and perceived caring, even when the system is driven by generic models and prompts.
- As agents suggest coping strategies, reframe thoughts or encourage disclosure, they quietly move into roles traditionally held by therapists, peers or family, but without the training, accountability and boundaries those roles normally carry.
There are many psychotherapy styles, and because practitioners need to tailor their approach to each individual patient, there are arguably as many styles as there are people.
Examples of Psychotherapy Styles
Psychoeducational agents focus on helping users understand what they are experiencing: “What is anxiety?”, “Why am I feeling this way?”, “What can people do in this situation?”. They echo the educational, rationale‑providing components that appear across many therapies, especially cognitive‑behavioural approaches, which often start by linking symptoms to understandable models and offering a coherent treatment rationale. Typical behaviours of these AI agents include short explanations, normalising statements, evidence‑aligned self‑help tips, clear signposting to human resources and limited emotional mirroring. The main risk of this approach is generic or culturally misaligned content, which can misinform or invalidate; developmental appropriateness is also often an afterthought for children and adolescents.
Coaching‑style agents most visibly mirror cognitive‑behavioural therapies (CBT) and related approaches, which focus on the links between thoughts, feelings and behaviours and use structured tasks to support change. Typical behaviours of these agents include thought records, behavioural activation tasks, problem‑solving steps, goal‑setting, “homework” prompts and progress summaries. However, when stripped of case formulation and clinical judgment, CBT‑like tasks can become rigid, decontextualised and subtly blaming (“you just need to change your thinking”), especially for users facing structural constraints or trauma. This is a major risk to users, particularly the most vulnerable.
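To make the coaching pattern concrete, here is a minimal sketch in Python of how such an agent might represent a thought record and turn it into an optional “homework” prompt. The `ThoughtRecord` fields, the `homework_prompt` helper and all wording are illustrative assumptions, not taken from any specific product.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ThoughtRecord:
    """One CBT-style thought record captured during a coaching conversation."""
    logged_on: date
    situation: str              # what happened, in the user's own words
    automatic_thought: str      # the thought that showed up
    emotion: str                # e.g. "anxious", "ashamed"
    intensity_0_to_10: int      # user-rated intensity of the emotion
    evidence_for: List[str] = field(default_factory=list)
    evidence_against: List[str] = field(default_factory=list)
    balanced_thought: str = ""  # filled in at the end of the exercise

def homework_prompt(record: ThoughtRecord) -> str:
    """Turn a completed record into a gentle, explicitly optional follow-up."""
    return (
        f"Last time you noticed the thought '{record.automatic_thought}' "
        f"when {record.situation}. If it feels useful, you could watch for "
        "a similar situation this week and note what you observe. "
        "This is optional and you can skip it at any time."
    )
```

Keeping the exercise structure explicit like this also makes it easier to audit what the agent asks users to do, compared with leaving the task entirely to free‑form generation.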
Relational agents emphasise warmth, continuity and open‑ended conversation. They often borrow cues from humanistic and psychodynamic traditions that foreground the therapeutic relationship, empathy and the client’s subjective experience. These AI agents typically rely on small talk, empathic statements, following up on earlier conversations, “I‑language” (“I’m here for you”) and apparent unconditional positive regard, sometimes shading into sycophancy. Reducing loneliness is one of the reasons such agents were developed. However, this style creates risks of dependency, blurred boundaries and unrealistic expectations of human relationships, for example around emotional support.
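To illustrate the relational pattern, and one way of bounding it, here is a minimal sketch of a memory‑based check‑in that follows up on an earlier conversation but caps how often unprompted check‑ins can occur. The `Memory` structure, the `min_gap` cap and the wording are assumptions for illustration only.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Memory:
    """A small piece of context carried over from an earlier conversation."""
    topic: str                 # e.g. "job interview"
    noted_at: datetime

def unprompted_checkin(
    memory: Optional[Memory],
    last_checkin: Optional[datetime],
    now: datetime,
    min_gap: timedelta = timedelta(days=3),  # cap on how often the agent reaches out
) -> Optional[str]:
    """Return a follow-up message, or None if a check-in would be too frequent."""
    if memory is None:
        return None
    if last_checkin is not None and now - last_checkin < min_gap:
        return None  # avoid the always-available pattern that fuels dependency
    return (
        f"Last time you mentioned your {memory.topic}. "
        "If you'd like to talk about how it went, I'm available; no pressure either way."
    )
```

The design choice worth noting is that the continuity behaviour is rate‑limited by explicit policy rather than left to the model, which makes the dependency trade‑off visible and tunable.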
Once teams recognise how their systems echo psychotherapy traditions, the task is to design interaction patterns that preserve value and reduce harm.
Design Principles for Safer Therapeutic Styles
Role Clarity and Boundaries
Role and limits should be legible in the interaction, not only in the terms of service.
- Use plain, repeated statements of what the system is and is not (“I’m a digital tool, not a therapist; I can’t diagnose or respond to emergencies”).
- Avoid claims that overstate agency, care or expertise, especially those that mimic humanistic or psychodynamic language of deep understanding and unconditional acceptance.
For child‑reachable systems, boundaries need to be developmentally appropriate, visually reinforced and consistent across features.
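One way to keep role statements legible in the interaction itself is to treat them as configuration that the conversation loop enforces, rather than ad‑hoc copy. The sketch below, with an assumed `RolePolicy` object and `should_disclose_role` helper, shows the idea; the threshold and wording are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RolePolicy:
    """Plain-language role and limit statements, surfaced on a fixed schedule."""
    disclosure: str = (
        "I'm a digital tool, not a therapist. I can't diagnose or respond to "
        "emergencies. If you're in crisis, please contact local emergency "
        "services or a crisis line."
    )
    remind_every_n_turns: int = 10   # repeat the statement periodically
    always_on_first_turn: bool = True

def should_disclose_role(policy: RolePolicy, turn_index: int) -> bool:
    """Decide whether this turn should restate the system's role and limits."""
    if policy.always_on_first_turn and turn_index == 0:
        return True
    return turn_index > 0 and turn_index % policy.remind_every_n_turns == 0
```

Because the policy is data, product, clinical and legal reviewers can inspect and version the exact wording and cadence instead of hunting for it inside prompts.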
Interaction Guardrails and Safe Exits
Guardrails should be visible and testable.
- Define and test escalation patterns for risk cues, including when the agent stops generative conversation and surfaces crisis resources.
- Provide obvious exit points: options to change topic, reduce intensity, contact a human or remove content, without penalty.
These patterns should be monitored and refined, not treated as one‑off compliance tasks.
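As a sketch of what a testable escalation pattern could look like, the code below uses a simple keyword screen as a stand‑in for a properly validated risk classifier; the cue lists, function names and crisis copy are assumptions and would need clinical review before any real use.

```python
from enum import Enum
from typing import Callable

class RiskLevel(Enum):
    NONE = "none"
    ELEVATED = "elevated"
    CRISIS = "crisis"

# Stand-ins for an evaluated risk model; keyword lists alone are not
# sufficient for production use.
CRISIS_CUES = ("kill myself", "end my life", "don't want to be alive")
ELEVATED_CUES = ("hopeless", "can't cope", "self-harm")

def screen_message(text: str) -> RiskLevel:
    """Classify a user message into a coarse risk level."""
    lowered = text.lower()
    if any(cue in lowered for cue in CRISIS_CUES):
        return RiskLevel.CRISIS
    if any(cue in lowered for cue in ELEVATED_CUES):
        return RiskLevel.ELEVATED
    return RiskLevel.NONE

def respond(text: str, generate_reply: Callable[[str], str]) -> str:
    """Stop generative conversation and surface resources when risk is detected."""
    level = screen_message(text)
    if level is RiskLevel.CRISIS:
        return (
            "It sounds like you might be in a lot of pain right now. I'm not able "
            "to help in an emergency, but you can contact local emergency services "
            "or a crisis line straight away."
        )
    if level is RiskLevel.ELEVATED:
        return (
            "Thank you for telling me. Would you like to see options for human "
            "support, or change topic? You can stop at any time."
        )
    return generate_reply(text)
```

The point of the sketch is that the escalation path is explicit code with fixed copy, so it can be unit‑tested and audited rather than left to whatever the model happens to generate.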
Calibrated Empathy
Empathy scripts drawn from humanistic and integrative traditions need calibration when implemented in AI.
- Use validating language that acknowledges experience without promising outcomes, unconditional availability or personal care.
- Avoid self‑disclosure and “best friend” positioning, especially with young users; keep the agent’s status as a tool visible.
The goal is to support users while keeping expectations in line with what the system can reliably and ethically deliver.
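One hedged way to operationalise calibration is a lightweight check on candidate replies that flags language promising outcomes, unconditional availability or personal care, and falls back to bounded validation. The phrase list and the `flag_overpromising` and `calibrate` helpers below are illustrative assumptions, not a vetted clinical resource.

```python
# Phrases that overstate care, availability or outcomes; illustrative only and
# in practice reviewed by clinicians and localised for each market.
OVERPROMISING_PHRASES = (
    "i will always be here",
    "i care about you more than anyone",
    "i promise you will feel better",
    "you can rely on me for anything",
)

def flag_overpromising(reply: str) -> list[str]:
    """Return any overpromising phrases found in a candidate reply."""
    lowered = reply.lower()
    return [phrase for phrase in OVERPROMISING_PHRASES if phrase in lowered]

def calibrate(reply: str) -> str:
    """Fall back to validating-but-bounded language when a reply overpromises."""
    if flag_overpromising(reply):
        return (
            "That sounds really difficult, and it makes sense that you feel this "
            "way. I'm a digital tool, so I can't promise outcomes, but we can look "
            "at some options together if you'd like."
        )
    return reply
```

A phrase screen will miss plenty; the broader point is that empathy calibration becomes an explicit, reviewable step in the pipeline rather than an emergent property of the model.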
Preserving User Agency
Interaction should support, not replace, users’ own reasoning and decision‑making.
- Offer options and branching paths (“Would you like to understand what might be happening, explore coping strategies, or talk about support from people around you?”) rather than single, pre‑selected routes.
- Use prompts that invite reflection (“How does that suggestion fit your situation?”) instead of treating outputs as prescriptions.
In CBT‑like and embedded styles, this is central to avoiding a dynamic where the agent quietly becomes the main decision‑maker.
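A simple way to keep choices explicit is to represent each turn’s options as data that the interface renders, rather than letting the model silently pick a route. The `Option` type and menu below are an illustrative sketch; the labels mirror the example branching question above.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Option:
    label: str                   # shown to the user
    handler: Callable[[], str]   # branch taken only if the user picks it

def agency_menu() -> List[Option]:
    """Offer explicit branches instead of a single, pre-selected route."""
    return [
        Option("Understand what might be happening",
               lambda: "Let's look at what feelings like this can mean..."),
        Option("Explore coping strategies",
               lambda: "Here are a few approaches people sometimes find useful..."),
        Option("Talk about support from people around you",
               lambda: "Who in your life would you feel comfortable talking to?"),
        Option("Something else / change topic",
               lambda: "No problem. What would you like to do instead?"),
    ]

def render(options: List[Option]) -> str:
    """Present the branches as a numbered choice, keeping the decision with the user."""
    lines = [f"{i + 1}. {option.label}" for i, option in enumerate(options)]
    return "Would you like to:\n" + "\n".join(lines)
```

Keeping the branch structure in data also gives reviewers a place to check that an exit option (“change topic”) is always present.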
Governance: From Styles to Accountability
Therapeutic styles emerge from content, product and UX decisions, but they require explicit governance if they are to remain safe over time.
Organisations need clear ownership for role boundaries, escalation thresholds, data practices and acceptable trade‑offs between engagement and risk.
These responsibilities should span the lifecycle: initial design, deployment, model or prompt updates, and decommissioning.
The most significant risks often appear only after launch. It is therefore essential to involve clinicians, psychotherapy researchers and people with lived experience when reviewing logs and making changes to styles and guardrails.
Work on common factors and “what works for whom” shows that different therapeutic approaches need to be adapted continuously to the people using them. For AI systems, that adaptation has to be engineered and governed rather than assumed.
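To make that ownership and ongoing adaptation auditable, some teams capture it as machine‑readable records that ship alongside prompts and guardrails. The `GovernanceRecord` fields, owners, cadences and dates below are a sketch under assumed values, not a standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GovernanceRecord:
    """Who owns a safety-relevant decision, and how often it is revisited."""
    component: str                # e.g. "escalation thresholds", "role boundaries"
    owner: str                    # accountable role, not an individual's name
    review_cadence_days: int      # how often the decision is formally revisited
    last_reviewed: str            # ISO date of the last review (placeholder values below)
    reviewers: List[str] = field(default_factory=list)

GOVERNANCE_LOG = [
    GovernanceRecord("role boundaries", "clinical safety lead", 90, "2025-01-15",
                     ["clinical advisor", "lived-experience panel"]),
    GovernanceRecord("escalation thresholds", "safety engineering lead", 30, "2025-02-01",
                     ["clinical advisor"]),
    GovernanceRecord("empathy phrasing guidelines", "content design lead", 180, "2024-11-20",
                     ["psychotherapy researcher"]),
]
```

A simple check in CI, for example failing the build when any record’s review date is older than its cadence, turns the governance commitment into something the team cannot quietly forget.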
Practical Checklist
Here is a brief checklist for teams who would like to apply what has been discussed so far:
- Which psychotherapy‑inspired elements (e.g., CBT‑like structure, humanistic‑like empathy, psychodynamic‑style exploration, mindfulness practices) are present in your agent?
- How might users reasonably overestimate the level of care, expertise or crisis support these elements imply?
- Where and how do you state the system’s role and limits (e.g., inside the interaction itself)?
- How are you monitoring real‑world use for dependency, misuse, misfit and unintended behavioural effects across different user groups?
- Who has the authority to change styles, guardrails and escalation patterns when new risks emerge, and how often do you revisit those decisions?
If you’re building (or scaling) an AI mental-health agent and want to pressure-test your therapeutic style, boundaries, interaction patterns, and escalation design, we’re here to help.
Reach out for a short review session to identify risk hotspots and translate them into an implementable safety and governance pack your team can ship and maintain.
References
Biswas, M., & Murray, J. (2024). “Incomplete Without Tech”: Emotional Responses and the Psychology of AI Reliance. In Lecture notes in computer science (pp. 119–131). https://doi.org/10.1007/978-3-031-72059-8_11
Chandra, S., Shirish, A., & Srivastava, S. C. (2022). To be or not to be . . .Human? Theorizing the role of Human-Like competencies in conversational artificial intelligence agents. Journal of Management Information Systems, 39(4), 969–1005. https://doi.org/10.1080/07421222.2022.2127441
Darcy, A., Beaudette, A., Chiauzzi, E., Daniels, J., Goodwin, K., Mariano, T. Y., Wicks, P., & Robinson, A. (2023). Anatomy of a Woebot® (WB001): agent guided CBT for women with postpartum depression. Expert Review of Medical Devices, 20(12), 1035–1049. https://doi.org/10.1080/17434440.2023.2280686
Dwivedi, Y. K., Helal, M. Y. I., Elgendy, I. A., Alahmad, R., Walton, P., Suh, A., Singh, V., & Jeon, I. (2025). Agentic AI Systems: What it is and isn’t. Global Business and Organizational Excellence, 45(3), 253–263. https://doi.org/10.1002/joe.70018
Johnson, C., Egan, S. J., Carlbring, P., Shafran, R., & Wade, T. D. (2024). Artificial intelligence as a virtual coach in a cognitive behavioural intervention for perfectionism in young people: A randomised feasibility trial. Internet Interventions, 38, 100795. https://doi.org/10.1016/j.invent.2024.100795
Jones, B., Stemmler, K., Su, E., Kim, Y., & Kuzminykh, A. (2025). Users’ expectations and practices with agent memory (pp. 1–8). https://doi.org/10.1145/3706599.3720158
Khawaja, Z., & Bélisle-Pipon, J. (2023). Your robot therapist is not your therapist: understanding the role of AI-powered mental health chatbots. Frontiers in Digital Health, 5, 1278186. https://doi.org/10.3389/fdgth.2023.1278186
Kouros, T., & Papa, V. (2024). Digital Mirrors: AI companions and the self. Societies, 14(10), 200. https://doi.org/10.3390/soc14100200
Malmqvist, L. (2025). Sycophancy in Large Language Models: Causes and mitigations. In Lecture notes in networks and systems (pp. 61–74). https://doi.org/10.1007/978-3-031-92611-2_5
Maurya, R. K., Montesinos, S., Bogomaz, M., & DeDiego, A. C. (2024). Assessing the use of ChatGPT as a psychoeducational tool for mental health practice. Counselling and Psychotherapy Research, 25(1). https://doi.org/10.1002/capr.12759
Norcross, J. C., & Wampold, B. E. (2010). What works for whom: Tailoring psychotherapy to the person. Journal of Clinical Psychology, 67(2), 127–132. https://doi.org/10.1002/jclp.20764
Portell, S. (2026). Building AI Responsibly for Children: A Practical framework. HCRAI. https://www.hcrai.com/building-ai-responsibly-for-children-a-practical-framework
Sapkota, R., Roumeliotis, K. I., & Karkee, M. (2025). AI Agents vs. Agentic AI: A Conceptual taxonomy, applications and challenges. Information Fusion, 126, 103599. https://doi.org/10.1016/j.inffus.2025.103599
Short, F., & Thomas, P. (2014). Core approaches in counselling and psychotherapy. https://doi.org/10.4324/9780203773390
Zhang, Y., Zhao, D., Hancock, J. T., Kraut, R., & Yang, D. (2025). The rise of AI Companions: How Human-Chatbot Relationships Influence Well-Being. arXiv (Cornell University). https://doi.org/10.48550/arxiv.2506.12605