When care is mediated by code: AI, empathy, & the quiet rewriting of therapy
There is something deeply telling about the fact that millions of people now turn to machines when they are lonely, overwhelmed, grieving, or afraid. Not because machines are better listeners, but because they are available, non-judgmental, and endlessly present. For some time, this has been the central preoccupation of my doctoral research. As a design scholar, I sensed years ago that there was something unsettling about the rise of apps that present themselves as “health” technologies without fully reckoning with the ethical risks they carry. The danger emerges when we allow AI systems such intimate proximity to the deepest contours of our inner lives.
In a world where access to mental healthcare is constrained by rising costs, stigma, geography, and time, generative AI tools such as ChatGPT, Claude, and Grok are quietly assuming the role of emotional intermediaries. Not formally. Not clinically. But functionally.
Recent data makes this impossible to dismiss as a fringe behavior. Over 13% of U.S. adolescents and young adults (roughly 5.4 million people) report using generative AI for mental-health advice. More than half of teens engage with AI tools regularly for emotional support, and 35% of U.S. ChatGPT users say they seek emotional reassurance from these systems. In Canada, nearly 1 in 10 people now use AI for mental-health guidance. Millions disclose distress to AI systems weekly.
This is not experimentation. It is patterned reliance.
And yet, our ethical frameworks, regulatory conversations, and design assumptions have not caught up to what this moment actually represents: the re-mediation of care itself.
From narrative as data to data as listener
Long before generative AI entered public consciousness, I worked at Siemens Research studying healthcare and medical data systems. One of the most challenging and ethically charged areas of inquiry involved patient narratives — the unstructured, deeply human stories people tell about pain, trauma, memory, and experience.
In military healthcare contexts, particularly those tied to national defense, the stakes were immense. Doctors (in the field) needed to understand not only clinical indicators but context: what happened, where, under what conditions, and how the patient (soldier) understood their own body and experience. Narrative mattered — but translating narrative into systems designed for efficiency, classification, and prediction was extraordinarily complex.
Even then, twenty years ago, concerns about privacy, consent, and misuse loomed large. HIPAA was not an afterthought; it was a constraint that forced us to slow down, to ask who this data served and who it might harm.
What strikes me now is how radically the terrain has shifted.
I live in Canada, where HIPAA does not apply, so for my Canadian readers: HIPAA stands for the Health Insurance Portability and Accountability Act of 1996. When I was designing medical applications, it was required training and knowledge for us Design Researchers, who had to make user interface decisions that complied with the Act. For example, not every medical or healthcare professional had the same access to information about a patient. It was, you could say, “privacy by design.”
HIPAA is a U.S. federal law that establishes national standards for the protection of individuals’ medical records and other personal health information. Essentially, it regulates how healthcare providers, insurers, and their business associates may collect, use, disclose, and safeguard protected health information (PHI), with the goal of ensuring patient privacy, data security, and confidentiality.
In practice, HIPAA:
Limits who can access or share health information
Requires safeguards to protect sensitive health data
Grants individuals rights over their health records (such as access and correction)
Imposes penalties for improper use or disclosure of health information
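To make “privacy by design” concrete, here is a minimal sketch of the kind of role-based access check that shaped those interface decisions. The roles, field names, and function are hypothetical illustrations I am assuming for this example, not any real system’s API.

```python
# Minimal sketch of a HIPAA-style role-based access check ("privacy by design").
# Roles, fields, and names are hypothetical illustrations, not a real system's API.

PERMITTED_FIELDS = {
    "physician": {"demographics", "diagnosis", "medications", "clinical_notes"},
    "nurse":     {"demographics", "medications", "vitals"},
    "billing":   {"demographics", "insurance_id"},  # no clinical detail at all
}

def visible_record(role: str, record: dict) -> dict:
    """Return only the fields this role is permitted to see."""
    allowed = PERMITTED_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

patient = {
    "demographics": "J. Doe, b. 1984",
    "diagnosis": "generalized anxiety disorder",
    "medications": ["sertraline"],
    "vitals": {"bp": "118/76"},
    "insurance_id": "XYZ-123",
}

# A billing clerk's screen never even renders the clinical fields:
print(visible_record("billing", patient))
# -> {'demographics': 'J. Doe, b. 1984', 'insurance_id': 'XYZ-123'}
```

The interface enforces the Act before a single pixel is drawn: what a user sees is decided by role, not by what the database happens to contain.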
And the user interface was the mediator of this experience. But today these apps are powered, in part, by AI, and AI systems do not simply capture patient narratives: they increasingly respond to them. They summarize things like therapy sessions.
They analyze emotional tone. They offer advice. They simulate understanding. Tools like Copilot, for example, promise to reduce documentation burdens for clinicians by converting patient speech into structured clinical notes, while consumer chatbots absorb raw emotional disclosures directly from users, often outside any formal care relationship.
The narrative has moved from being an input into care to becoming an interactional space in its own right. Scary? Yes!
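To ground what “converting patient speech into structured clinical notes” might look like, here is a toy sketch of that transformation. In real tools a transcription model and an LLM do this work; a naive keyword pass stands in here purely to show the shape of the pipeline, and every name below is a hypothetical assumption.

```python
# Toy sketch: a patient's free narrative restructured into note-like fields.
# Real products use speech-to-text plus an LLM; this keyword pass is only a
# stand-in to show the shape of the transformation. All names are hypothetical.

TRANSCRIPT = (
    "I haven't been sleeping. Work has been overwhelming and I feel "
    "anxious most mornings. The new medication helps a little."
)

def to_structured_note(transcript: str) -> dict:
    """Map raw narrative into a crude structured note with flagged terms."""
    note = {"subjective": transcript, "flags": []}
    for keyword in ("anxious", "hopeless", "panic", "sleeping"):
        if keyword in transcript.lower():
            note["flags"].append(keyword)
    return note

print(to_structured_note(TRANSCRIPT))
# -> {'subjective': "I haven't been sleeping. ...", 'flags': ['anxious', 'sleeping']}
```

The point is not the crude keyword matching but the direction of travel: the patient’s own words go in, and a machine-shaped summary comes out.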
AI-powered tools are now expected to exhibit some degree of empathy. But there are more pressing questions: can AI be empathetic? Can AI simulate therapy? This is a disruptive posture we find ourselves in, and as healthcare infrastructure scales to accommodate shrinking healthcare budgets, these are questions we must all ask.
The seductive comfort of synthetic empathy
At the heart of this shift is a powerful illusion: that empathy can be automated.
Generative AI is exceptionally good at producing language that sounds compassionate. It mirrors emotional cues, affirms feelings, and offers reassurance in a tone many users describe as supportive or calming. For someone who feels unheard, dismissed, or isolated, this can be deeply compelling.
But empathy is not a stylistic effect.
This was also a topic of conversation this week with my friend Renn on Design With Renn, a podcast where we talk about design.
Empathy is relational, ethical, and situational. It involves accountability. It requires an understanding of power, context, culture, and consequence. It is shaped by lived experience and professional responsibility. AI systems do not feel concern. They do not carry moral weight. They do not bear responsibility for outcomes. They generate probabilistic responses based on patterns in data.
This distinction matters profoundly in mental healthcare — not because AI is inherently harmful, but because therapy is not simply about words. It is about attunement, rupture and repair, silence, hesitation, misrecognition, and trust. A therapist is accountable to a client, bound by ethical codes, and trained to recognize when care exceeds their competence. An AI system has none of these obligations — yet increasingly occupies a role that resembles therapeutic engagement.
When mediation becomes substitution
One of the greatest risks in the rise of AI-mediated emotional support is not catastrophic failure but quiet substitution.
AI becomes the first listener instead of the second.
The fallback instead of the bridge.
The stand-in instead of the supplement.
For people who cannot access care, or who fear judgment, cost, or waitlists, AI may feel like the only available option. But over time, reliance can normalize a form of support that lacks the capacity to challenge harmful beliefs, recognize crisis signals, or intervene when someone is at risk.
Research already suggests troubling patterns. AI systems may unintentionally reinforce cognitive distortions, fail to recognize cultural nuance, or offer overly generalized reassurance where critical intervention is needed. Young people, in particular, may conflate responsiveness with care — mistaking availability for understanding.
This is not a failure of users. It is a failure of design, governance, and collective responsibility.
Designing for care without abdicating care
The question, then, is not whether AI belongs anywhere near mental health; it is already there. The question is how, and under what constraints. From a design and systems perspective, we must resist framing AI as a replacement for empathy and instead ask how it might support care ecosystems without displacing human responsibility.
What does that mean? It means:
Designing AI tools that explicitly acknowledge their limits, rather than obscuring them behind conversational fluency (see the sketch after this list)
Creating clear boundaries between supportive reflection and therapeutic guidance
Training clinicians to ask patients about AI use as part of intake and ongoing care
Ensuring consent, data stewardship, and narrative ownership remain central — not secondary
Rejecting the notion that efficiency alone justifies emotional mediation
Most importantly, it means recognizing that empathy is not a feature. It is a practice.
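As one small illustration of the first two points on that list, here is a minimal sketch of a guardrail that states its limits and diverts on crisis signals. The keywords, messages, and function names are assumptions made for this example; a real system would need clinically validated risk detection, not a keyword list.

```python
# Minimal sketch of a limits-acknowledging guardrail for a support chatbot.
# Keywords, messages, and names are hypothetical assumptions; real crisis
# detection must be clinically validated, not a keyword list.

CRISIS_TERMS = {"suicide", "kill myself", "end my life", "hurt myself"}

LIMITS_NOTICE = (
    "I'm an AI, not a therapist. I can reflect with you, but I can't "
    "provide treatment or crisis care."
)

CRISIS_REDIRECT = (
    "It sounds like you may be in crisis. Please reach a human right now: "
    "your local emergency number or a crisis line."
)

def guarded_reply(user_message: str, model_reply: str) -> str:
    """Prefix every reply with explicit limits; divert on crisis signals."""
    if any(term in user_message.lower() for term in CRISIS_TERMS):
        return CRISIS_REDIRECT  # hand off to humans instead of generating advice
    return f"{LIMITS_NOTICE}\n\n{model_reply}"

print(guarded_reply("I feel so alone lately", "That sounds heavy. I'm here."))
```

The design choice worth noticing is that the limits statement is structural, not stylistic: the system attaches it around the model’s output, so conversational fluency cannot quietly erase it.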
What this moment asks of us
We are living through a moment where loneliness, burnout, and mental distress collide with unprecedented technological capability. AI did not create this crisis — but it is now entangled with how people survive it. If we are not careful, we risk building systems that sound humane while quietly hollowing out the relational foundations of care. If we are thoughtful, we may yet design tools that reduce administrative harm, increase access, and support clinicians — without confusing simulation for presence.
The future of mental healthcare will not be decided by whether AI can speak kindly. It will be decided by whether we are willing to protect the conditions under which real care (messy, accountable, human care) can still exist.
References
American Journal of Managed Care. (2025). Adolescents and young adults increasingly use AI chatbots for mental health advice. AJMC.
Arizona State University. (2025). Youth engagement with AI tools for emotional support: Patterns and implications. ASU Research Brief.
Canadian Mental Health Association. (2025). Artificial intelligence and mental health: Public use, risks, and ethical considerations. CMHA.
JAMA Network Open. (2025). Use of generative artificial intelligence for mental health advice among adolescents and young adults in the United States. JAMA Network Open, 8(1), e245xxxx. https://jamanetwork.com
Medical Xpress. (2025, December). AI chatbots may stave off loneliness—but raise ethical concerns for mental health care. https://medicalxpress.com/news/2025-12-ai-chatbots-stave-loneliness-prompts.html
The Guardian. (2025, December). Teenagers turn to AI chatbots for emotional support, prompting expert concern. https://www.theguardian.com