When therapy meets the algorithm
Rethinking human futures in the age of AI
A few weeks ago, at a weekly community meetup, I found myself sharing a soft couch with someone I had never met before. As often happens in these informal networking moments, we began talking. When she learned that I study AI, her eyes instantly lit up. She told me, without hesitation, that she uses AI the way some people use a journal, a sounding board, even a therapist.
“It’s like… my best friend,” she beamed. “It just gets me.”
She couldn’t have been more than twenty-three.
As I listened, something in her admission unsettled me—not because I doubted the comfort she felt, but because of the certainty with which she attributed understanding, emotional attunement, and therapeutic presence to a machine. Her enthusiasm was sincere, but beneath it was a deeper truth: she had no sense of the complexities, risks, or psychological consequences of relying on an algorithm for care.
And I realized, sitting there on that couch, that we are entering a moment where loneliness, technological intimacy, and emotional outsourcing are converging faster than our collective understanding of them.
Her story intersected directly with my work on human futures, human-centred design, and the impact of AI on emotional and cognitive life. It echoed the themes I encounter daily in my research on responsible and safe AI—systems that must centre human dignity, reduce harm, and preserve the fragile ecosystems of care that make us who we are.
That brief conversation stayed with me. I think about her often—and about the many young people like her who have no one to turn to when they need even the simplest guidance about their mental well-being. I remembered my own twenty-three-year-old self and felt an ache in my chest for her. She told me she had come from a war-torn country, living here with only an older sister. That was her entire support system. In that context, of course an always-available algorithm might feel like a lifeline. It made perfect sense. And yet, it was precisely that sense of isolation—and her complete trust in a machine—that unsettled me most.
All of this is happening at a time when some countries are banning social-media use for children. Recently, the Australian government introduced a groundbreaking national law banning access to major social-media platforms for users under 16. The law requires platforms such as Facebook, Instagram, TikTok, Snapchat, and YouTube to deactivate under-16 accounts or face heavy fines; some companies have already begun removing accounts. But enforcement may not halt the shift: early responses suggest many teens are migrating to lesser-known or newer apps outside the ban's scope.
The ban underscores a larger pattern: as formal regulation pushes young people off mainstream platforms, many are turning elsewhere for connection, comfort, or "digital presence." It is a living, collective experiment in emotional outsourcing, and a powerful precursor to how AI-mediated therapy might embed itself in everyday life.
This moment is not lost on me. We are living through a loneliness epidemic, one in which people, especially young people, are turning to AI not just for answers but for companionship. And that is why I felt compelled to write this post, as both a fellow human and a parent.
AI has quietly slipped into some of the most intimate corners of our lives, including our conversations, our memories, our identities, and now the spaces where we go to heal. Therapy is no longer a purely human encounter. Increasingly, it unfolds inside digital infrastructures governed by algorithms, data pipelines, and design decisions that most people never see.
My own pathway into this topic is personal as much as it is scholarly. I am not a clinician, but my world has always been shaped by the healing professions. All my paternal aunts are nurses; my father-in-law is a surgeon; my sister and one of my closest friends are psychotherapists. Through them, I have witnessed the emotional, ethical, and cultural stakes of therapeutic work. And I am concerned. Okay – disturbed.
At the same time, my research sits squarely at the intersection of AI, healthcare data, and rhetoric, examining how algorithmic systems participate in decisions once held exclusively by human caregivers. I speak from this vantage point: close enough to understand the emotional labour of care and deeply embedded in the computational logics now reshaping it.
This proximity raises the question at the centre of my work:
What happens when the systems that shape us emotionally also begin shaping us computationally?
Therapy has always been a profound human practice. It is rooted in trust, vulnerability, cultural nuance, and the fragile but transformative experience of being fully seen by another person. But we now stand at a crossroads where the therapist’s couch meets the algorithmic interface. And AI does not enter quietly. It becomes a mediator of meaning, a translator of emotion, a third presence in the room.
A presence that listens differently. Interprets differently. Retains information indefinitely. And reduces human complexity to patterns, probabilities, and predictions.
This shift reframes therapy from a relational practice grounded in human ethics to a computational event shaped by models, datasets, and design decisions most clients never consent to or understand. And that changes everything.
The Rise of Algorithmic Empathy
Across mental-health ecosystems, AI is marketed as the great solution: scalable, affordable, able to reduce waitlists and support overburdened clinicians. Chatbots, journaling apps, emotion trackers, sentiment-analysis tools, and predictive diagnostics now populate the mental-health landscape. But here is the rhetorical trick we need to be mindful of: These systems simulate empathy. They do not experience it. Machine-generated warmth is not relational care. It does not carry cultural understanding. It cannot be held ethically accountable.
In my research, I call this algorithmic ethopoeia—the performance of moral character through language, even when the system’s underlying logic may be biased, extractive, or fundamentally misaligned with the lived realities of clients.
The danger is subtle:
· Simulation starts to feel like care.
· Care becomes automation.
· And automation becomes harm.
Data as the New Clinical Archive
Traditional therapy protects what is shared within its walls. Digital therapy does something else: it turns healing into data. Every confession becomes training material. Every emotional pattern contributes to model optimization. Every vulnerability becomes a potential commercial asset. This is not paranoia; it is the political economy of AI. And it raises an uncomfortable question: Can healing be genuine within systems that profit from vulnerability?
Therapy Without Cultural Context Is Not Therapy
One of the most dangerous assumptions in AI development is that emotional experience is universal. That vulnerability looks the same across cultures. That healing has a single grammar. But therapy is profoundly cultural. In Caribbean, African, Indigenous, and Global South traditions, for example, healing is relational and communal. Silence holds meaning. Indirectness is care. Anger can be a form of truth. Spiritual idioms are expressions of connection—not signs of disorder.
Yet when these cultural grammars enter AI datasets, they often become:
- “anomalies”
- “risk indicators”
- “avoidance behaviours”
- “instability markers”
This is not simply a technical oversight. It is a rhetorical one.
AI quietly redefines what it means to be “healthy,” “logical,” or “well-adjusted”—often through Western, middle-class defaults that erase the diversity of human expression. And when AI mediates therapy, it risks reproducing the same clinical and colonial harms that marginalized communities have endured for generations.
Misrecognition Is Not a Glitch—It Is a Structure
A model trained primarily on white, Western data will not “hear”:
- the emotional cadence of a Black immigrant woman dealing with racism
- Caribbean indirectness as legitimate communication
- Indigenous relationality as a knowledge system
- collectivist values as interdependence rather than co-dependence
Misrecognition is not an error; it is built in. And in therapeutic contexts, misrecognition becomes clinical harm: misdiagnosis, misinterpretation, escalation, and the quiet erosion of trust. This is why my work advances Ethotic Heuristics: a framework for designing AI systems that centre dignity, cultural nuance, and interpretive ethics. We need tools that honour the fullness of human experience rather than flatten it into data.
The Future of Healing Depends on the Choices We Make Now
AI is already reshaping therapy. That part is not optional. The real questions are:
· Who gets to design the emotional landscapes of the future?
· Will therapy become another site of extraction, where human complexity is reduced to behavioural metrics?
· Or can AI be designed to extend care, honour cultural identity, and remain accountable to the people it claims to serve?
The future is not predetermined. It is rhetorical. It is designed. And it is shaped by the stories we allow our technologies to tell about us. Our responsibility—my work, our work—is to ensure that those technologies do not erase what makes healing possible: dignity, nuance, cultural rootedness, and human connection.
