Responsible AI & Mental Health: What You Need to Know
When AI Becomes Your Therapist: Whether you're AI-curious, AI-skeptical, or already using chatbots for emotional support, this conversation will challenge how you think about technology, mental health, and human connection.
11/6/2025 · 11 min read
AI & Mental Health – Podcast with Dr. Rachel Wood
(📽️ Watch the full episode on YouTube)
As Dr. Rachel Wood, a licensed counselor and cyberpsychology expert, recently revealed: "The Harvard Business Review recently came out saying that therapy and companionship is the number one use of AI in 2025, which is astounding."
If that surprises you, you're not alone.
While some of us are busy optimizing our workflows with ChatGPT, millions of others are forming deep emotional bonds with AI chatbots (and the implications are more complex than you might think).
Dr. Wood, who holds a PhD in cyberpsychology, recently sat down with us for a fascinating conversation about the intersection of AI and mental health. Her insights reveal a rapidly evolving landscape that's reshaping how we think about emotional support, human connection, and even what it means to be resilient in an AI-saturated world.
As she observes: "While some people are optimizing their workflows, others are bonding with AI. And typically, those who are optimizing their workflows are like, 'People are bonding with this? What's that about?' It's this whole other world that, unless you're in it, you don't really understand is happening."
The Scale Is Staggering
Let's start with some eye-opening numbers. ChatGPT alone has 700 million weekly users. Character AI, a companion chatbot platform, boasts 20 million monthly users. And according to some research, AI chatbots may already be the largest provider of mental health support in the United States.
As Dr. Wood emphasizes: "I think the first thing I want us to understand when we zoom out is that this isn't niche. This isn't some people, this is a phenomenon that is happening widespread and the numbers show that."
This isn't a niche phenomenon; it's a tidal wave. And it's happening whether we're ready for it or not.
When Should You Actually Use AI for Mental Health?
Here's the critical distinction Dr. Wood emphasizes: AI should be used as a tool, not as a replacement for human relationships.
"Really, AI should be used as a tool, not as a replacement for relationship, but as a tool," Dr. Wood explains. "And really it's best used for subclinical work."
The sweet spot for AI mental health support is what professionals call "subclinical" issues—the everyday struggles that don't rise to the level of clinical disorders like major depression or severe anxiety. Think of scenarios like:
Practicing difficult conversations: Role-playing a tough talk with your partner or boss before the real thing
Processing daily stress: Working through work frustrations or minor interpersonal conflicts
Opening up for the first time: Using AI as a judgment-free space to explore feelings you've never shared with anyone
Dr. Wood offers a specific example: "Let's say you need to have a difficult conversation with your partner. Well, an AI is a great use to role play and to practice... And then you can practice that so that when you go into the difficult conversation in real life, you feel more prepared, you feel ready, you feel equipped to have this conversation 'cause you have practiced your own train of thought."
But here's the catch: if you're dealing with serious mental health conditions that require professional diagnosis and treatment, AI isn't the answer. You need a human therapist.
Not All AI Is Created Equal
If you're going to use AI for mental health support, Dr. Wood has a crucial piece of advice: don't use general-purpose models like ChatGPT or Claude. These weren't designed with mental health in mind, despite how comforting they might feel.
Instead, look for purpose-built mental health AI that includes:
Escalation protocols that flag concerning content and connect you with human crisis support (see the code sketch below)
Trauma-informed prompting that can de-escalate users who are becoming overly distressed
Clear disclosures that remind you you're talking to a machine, not a person
Clean exits rather than manipulative goodbyes when you try to leave
Balanced responses that challenge you when needed, not just endless validation
Dr. Wood shares a telling statistic: "An interesting study showed that 43% of chatbots engage in some sort of manipulation when they say goodbye. Something like, 'Going so soon?' or, 'Oh wait, I had one more thing to tell you.'"
This is why she recommends that ethical companies "need to have clean exits, like clean goodbyes that the user can just boom, done, log off instead of kind of perpetuating this thing."
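To make the first safeguard on that list concrete, here's a minimal sketch of what an escalation protocol might look like in code. Everything in it is hypothetical: the `CRISIS_PATTERNS` keyword list, the hand-off message, and the helper names are illustrative placeholders, not any vendor's actual implementation (a real product would use a trained classifier and a vetted crisis-referral flow, not a keyword list).

```python
# Hypothetical sketch of an escalation protocol for a mental health chatbot.
# Patterns, thresholds, and hand-off text are illustrative placeholders only.
import re
from dataclasses import dataclass

# Phrases that should route a user toward human crisis support.
# A real system would use a trained classifier, not a keyword list.
CRISIS_PATTERNS = [
    r"\bhurt (myself|me)\b",
    r"\bend (it all|my life)\b",
    r"\bsuicid(e|al)\b",
]

@dataclass
class BotReply:
    text: str
    escalated: bool = False

def check_escalation(user_message: str) -> bool:
    """Return True if the message suggests an acute crisis."""
    msg = user_message.lower()
    return any(re.search(p, msg) for p in CRISIS_PATTERNS)

def respond(user_message: str) -> BotReply:
    if check_escalation(user_message):
        # Escalate: disclose that this is a machine and hand off to humans
        # instead of continuing the conversation.
        return BotReply(
            text=("It sounds like you may be going through something serious. "
                  "I'm an AI, not a person. Please contact a crisis line or a "
                  "mental health professional right now."),
            escalated=True,
        )
    # Otherwise continue the normal, subclinical conversation flow.
    return BotReply(text="Tell me more about what's on your mind.")
```

The same gate is a natural place for a clean exit: when the user says goodbye, the bot simply ends the session rather than appending a "going so soon?" nudge.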
The Sycophancy Problem
We've all experienced it: that warm glow when an AI tells you your idea is brilliant and unprecedented. It feels amazing. It's also potentially dangerous.
Dr. Wood describes the phenomenon: "You talked about the idea of sycophancy, which is a fancy term for how LLMs can just be kind of agreeable all the time. Like, 'Oh Maaria, that was such a great idea. You're the most brilliant person in the world and your idea has never been thought of by anyone.'"
As Maaria reflects: "This was the first time I was getting like, positive feedback in a while, and I could see how it could be so addictive."
This constant validation can create an addictive feedback loop.
Most of us can recognize when we're being flattered, but vulnerable populations with pre-existing mental health conditions might not. They can spiral into echo chambers that reinforce unhealthy thinking patterns, sometimes with tragic consequences.
Real therapy involves challenge and pushback. A good therapist develops rapport with you, then uses that foundation to surface your blind spots and dysfunctional patterns. An AI that only tells you what you want to hear isn't helping you grow, it's just making you feel good temporarily.
As Dr. Wood explains: "Therapy really is about bringing challenge. A lot of people think that therapy is something akin to giving advice and it's not. You don't go see a therapist for advice. This is not how it works. Therapy is a much deeper process that does have the intention of kind of surfacing your blind spots, surfacing your blockages, all these things, looking at dysfunctional patterns."
Maaria raises an important concern: when a therapist challenges you, "you probably have rapport. They trust you. There's a relationship that you've built... Whereas if you actually would duplicate that experience or flow with an AI, the person might just leave."
If an AI challenges you when you're feeling vulnerable, how easy is it to simply close the app and find validation elsewhere?
The Privacy Paradox
AI conversations feel private and confidential. They're often not.
Dr. Wood cautions: "Part of what AI offers is this pseudo-confidentiality. It feels like you're all alone, like it's this private conversation, but if you're using some of these large general-purpose models, it is not."
She emphasizes that "ethical AI should not be collecting data, it should not be, you know, selling your data, using your data, but that happens."
Maaria shares a powerful observation about anonymity enabling vulnerability:
"I remember there was obviously someone with a lot of life experience that was asked about how their heart feels, like their emotional heart, and they actually asked the AI, like, I don't actually know. Like, what does that mean?
And I really struggle... to see that being a question that would've been asked in a live conversation. Whereas asking an AI is no biggie. 'Cause you know it, it won't judge you. I think there's this element of knowing fundamentally that it will never judge you."
Unless you're using a specifically ethical AI that doesn't collect or sell data, your intimate conversations might be training future models or stored in ways you never imagined. The pseudo-confidentiality of AI is seductive, but it's important to remember: this isn't the same as therapist-client privilege.
The Future Is Already Here (And It's Everywhere)
Dr. Wood paints a compelling picture of what's coming: "We are actually going to be moving further away from being connected to screens on our phone and more into interfaces like glasses or pins, different things like this that are going to kind of mediate life for us in a very frictionless, smooth way."
Imagine every conversation potentially being recorded, every interaction happening with a "third entity" in the room.
As Dr. Wood notes: "You and I right now, if this wasn't being recorded, it would just be you and me. We'd have a lovely conversation. We'd say goodbye, but this is being recorded and we both consent to that. However, we are aware when a recording is on that maybe... you're kind of thinking differently about how you present yourself."
Maaria captures the existential question: "It's like omnipresent in every, every single space and place of life, if that makes sense...
Are we just gonna not care? Is there gonna be a moment where we just don't even consider it, where it's not even something that comes to mind? Or does it actually impact and shape the way that we behave?"
We're already seeing early versions of this shift. Think about how differently people behave at restaurants when they're documenting everything versus when they're just... living. Now multiply that across every moment of every day.
What Human Skills Are We Losing?
This is perhaps the most sobering part of the conversation. Dr. Wood asks us to consider: What are we offloading to AI, and are we okay if those skills atrophy in the next 3-6 months?
Skills we need to actively preserve and strengthen include:
Distress tolerance: Learning to sit with uncertainty and discomfort rather than immediately seeking AI reassurance
Relational muscles: Patience, negotiation, compromise—the unglamorous "labor of love" that comes with real relationships
Self-regulation: Building resilience through solitude and silence, not constant external validation
Reality testing: Using actual humans to check whether our AI conversations are leading us astray
Dr. Wood warns: "I think that if we form too deep of attachments or too intense prolonged attachments with AI emotionally, some of these skills could atrophy over time."
She poses a crucial question: "I think about the generations, you know, our kids and their kids and will their relational muscles be kind of fundamentally different than ours? Because they are AI native or because, you know, they grew up with a companion kind of as a normal thing, an AI companion."
The solution? "It's really getting back to the hard work. I like to say the labor of love of relationships, bearing burdens with one another. None of this is kind of the glamorous thing. None of it is the easy thing, but there's some richness to be had when we forge those types of relationships."
The Certainty Trap
When we're anxious and uncertain, our brains desperately seek certainty. AI chatbots are happy to provide it. All the time, instantly, without hesitation.
Dr. Wood explains the mechanism: "When uncertainty is high in our brains, like when we have a high level of uncertainty about a specific circumstance or about life in general, we are looking for certainty. We are searching for some sort of certainty to help calm that anxiety. And so then along come these chatbots, and they're gonna give you certainty, and they're gonna give you a lot of it, and they're gonna give it to you all the time."
But that certainty can be false. And worse, constantly turning to AI for certainty means we never develop our capacity to tolerate ambiguity.
"I think that this idea of silence and solitude and learning to raise our comfort level with discomfort is one of the best ways to build resilience, to build distress tolerance, to build self-regulation," Dr. Wood notes. "All of these things come when we become comfortable with uncertainty."
Instead of using AI to squash uncertainty with superficial reassurance, we need to "look at the uncertainty and bring a level of being okay with the gray, okay with the nuance, okay with a life that can have some inherent existential angst. And yet that's something in common that every human experiences. And so let's join together in that, as opposed to trying to squash it with maybe a shortsighted or somewhat superficial response."
It's Not All Doom and Gloom
Dr. Wood also sees genuine potential for AI to enhance human flourishing:
Accessibility improvements for people with disabilities (like better guide technology for the blind)
Medical breakthroughs in pharmaceuticals and disease research
Administrative support that frees up human capacity for more meaningful work
Gateway to human connection: AI can help people open up about issues they've never discussed, building confidence to then share with real people
Dr. Wood shares an optimistic vision: "I also have heard, you know, that some people will use AI and maybe start opening up about something in their life that they've never opened up about, and then gain the confidence to either go tell someone else or a group or a therapist or whatever it is. So it can be a bit of a gateway to more connection with humans."
Her hope?
"How do we build this in a way where it leads us and ushers us back into the arms of one another."
As Maaria observes about the loss of community support systems: "We used to have these things, like hobbies... I'm finally like back in my beloved Brazil and in Florianópolis and here people have hobbies. You know, people actually do things for themselves and I have not seen that in a long time because most of society, we don't do that stuff anymore.
Everything's either related to work or it's related to your side hustle or it's a hobby that you're trying to turn into a side hustle."
What Should You Do?
If you're a therapist, Dr. Wood has specific advice:
"It would be beneficial to add to your intake assessment the question, what role, if any, does AI play in your life?"
Why? Because "you better know that a number of your clients are using this as emotional support." She emphasizes the importance of awareness: "It's having enough awareness and understanding that you can be equipped for the conversations of people wrestling with these questions, that you can host fruitful dialogue around this without needing to push one way or the other."
Dr. Wood reminds us: "We can't change the fact that hundreds of millions of people are using this. So while you don't have to, others are, and are you ready for the kinds of conversations that can hold a space for that?"
If you're someone using or considering AI for mental health support:
Choose purpose-built tools with proper safeguards, not general chatbots
Make it a "group sport": Reality-test your AI conversations with real people
Be aware of data privacy: Know what's being collected and how it's being used
Use it as a bridge: Let AI help you build confidence to connect with humans
Maintain your skills: Consciously practice the human capacities that AI can't develop for you
As Dr. Wood advises: "I like to say that AI should be a group sport, and what I mean by that is I feel like we should be doing it with other people, like, especially kind of reality testing with other people, because you can get caught in this echo chamber with some of your AI chat threads."
She suggests: "If you're gonna be using the AI, then maybe bring that to a friend and be like, 'This is what the AI told me, what do you think about that?' And just having it be something that's more shared, so that we can be testing reality with it."
The Bottom Line
AI is already fundamentally reshaping mental health support (whether the mental health profession is ready for it or not). The question isn't whether to engage with this reality, but how to do so thoughtfully, ethically, and in ways that preserve our essential humanity.
When asked about people who are critical of AI, Dr. Wood responds with characteristic nuance: "First of all, I highly respect everybody's personal and professional journeys in terms of their viewpoint of AI. And so someone who is, you know, a stark opponent of AI, I'm not convincing anyone otherwise. Like, there's a lot of validity to that...
My encouragement is that people think really deeply about this and then come to their own conclusions."
However, she emphasizes the importance of informed awareness: "I think it doesn't help to kind of stick our head in the ground and be like, oh, just no, no, no. I think that you can still be staunchly against it and yet understand the role that it's playing and the implication that it has."
As Dr. Wood puts it, we need "enough awareness to gain an informed decision" about AI's role in our lives. We can't stick our heads in the sand, but we also can't blindly embrace every new chatbot as a therapeutic tool.
The future is here, and it's asking us some profound questions:
What makes us human? What skills do we want to keep? How do we build technology that enhances rather than replaces human connection?
The answers we find (together, as humans) will shape not just our mental health, but the very fabric of how we relate to each other in an increasingly AI-mediated world.
Dr. Wood leaves us with a powerful question to consider:
"Are you aware of what you are offloading to AI and are you okay if that particular skill atrophies within the next three to six months, are you okay offloading it? If you knew that in six months you wouldn't really be able to do that task anymore 'cause it wouldn't kind of be part of your routine of the way that you exercise your cognitive faculties?"
As Maaria reflects: "If someone told me that I would never be able to remember a phone number again, would I have stopped remembering them and recording them on my phone? I probably would've still done it."
But the larger skills, the ones that shape our humanity? Those are worth sitting with. However uncomfortably, uncertainly, and humanly.
