[Image: Close-up of one hand holding a phone while the other taps the screen.]

AI for Mental Health: Pros, Cons, and Expert Opinions (Part 1)

As National Mental Health Awareness Month begins in May, let’s be aware that disabled adults in the U.S. experience “mental distress” five times as often as non-disabled adults. One reason is that disabled people deal with more stress: stress from pain and fatigue, stress from medical expenses, stress from fear of being “thought less of.” Even in an age of inclusion, the “stigma shadow” haunts disability and mental illness alike.

Another ubiquitous but much-misunderstood topic: artificial intelligence (AI). AI can be either a help or a hindrance to people struggling with mental illness—and experts continue to debate when and how it should be used.

[Image: A person in a white t-shirt gently consoles another person in a gray hoodie, who appears thoughtful and pensive. Both are wearing backpacks and standing outdoors in a grassy area with a building in the background.]

When Mental Illness Meets Artificial Intelligence

One place with experts in both AI and mental health is the Division of Digital Psychiatry at Beth Israel Deaconess Medical Center (an affiliate of Harvard Medical School). They have shared their DOORS digital-literacy training program (Digital Outreach for Obtaining Resources and Skills) with us before, and they were at Easter Seals Greater Houston last March for an in-person presentation on “DOORS AI for Providers: Understanding AI to Better Support Patients.”

One presenter was Mason Granof, Clinical Research Coordinator, who generously agreed to share his expertise—while emphasizing that “these are my opinions, and I would encourage anyone interested to follow the debates as they evolve, but not to take the opinions expressed here as definitive recommendations.”

The Benefits and the Bugaboos

Q: What would you say are the top pros and cons of using AI for emotional support?

Granof: In my view, the biggest “pro” is AI’s non-judgmental accessibility. An AI chatbot is always available, no matter the hour, to offer space for reflection. Traditional forms of care and emotional support simply cannot offer that: your therapist eventually has to go to sleep. The very fact that AI is not human may also make it feel like a safer, more inviting space. For individuals with SMI [serious mental illness], there are very few avenues to discuss one’s experience without fear of judgment, or of the structural power held by a mental health provider who has the capacity to alter your course of care.

However, the structure of AI also shapes its responses in ways that could harm emotional well-being. Foremost among these tendencies is sycophancy: AI chatbots are trained to agree with the user, which can feel affirming but can also subtly reinforce harmful patterns. For example, if I go to AI every time I’ve argued with my partner, I am likely just going to get confirmation that I was right. While hearing that may feel good, it may ultimately be harmful by reinforcing whatever outlook I had at the start. Over time, it could decrease my mental flexibility and my capacity to see my partner’s point of view.

The problem is that generative AI lacks the “productive friction” we get from human interaction. When another person pushes back or offers an alternative perspective, it helps recalibrate your own sense of reality. In my view, that productive friction is absolutely necessary for mental health.

Q: What do you see as the pros and cons of “guardrails”—programming elements designed to keep AI from offering advice that could lead to users harming themselves or others?

Granof: As it currently stands, AI guardrails are most often triggered by one or more of the following:

  1. Direct threats of harm to oneself or others.
  2. Requests for advice on illegal activity.
  3. Topics where legal liability may be a problem for the company running the chatbot—such as a user asking for medical advice.
  4. Sexual content, at least with most of the major chatbots, such as ChatGPT (though many AI-companionship platforms do allow sexual content).

What’s important to understand is that these guardrails are still inconsistent, and users are continually inventing new ways to get around them: the term for this is “jailbreaking.” One jailbreak discovered in late 2025 involved presenting harmful requests to a chatbot as “adversarial poetry.” While I believe that particular weakness has been fixed, it illustrates how persistent users keep finding new workarounds.

Regardless, users should be aware that no chatbot is completely “safe” in the sense of appropriately navigating every potentially unsafe situation. And even when such a situation is detected, there is still debate as to the appropriate response. For example, should a chatbot shut down when a user expresses a desire to self-harm, or should it engage the user in further conversation? Some would argue that more harm could be caused by a shutdown [especially if that reinforces the user’s “no one understands” feelings].

Coming soon: “AI for Mental Health” Part 2, where Granof discusses ways to understand AI and use it responsibly.
