May is National Mental Health Awareness Month, and Easter Seals clients should know about mental health services: disabled adults in the U.S. experience “mental distress” five times as often as non-disabled adults. Artificial intelligence (AI) can support assistive technology for mental health, but it can also trigger new problems. And as yet, there’s no large-scale expert agreement on the whens and hows of “prescribing” AI.
In our first “AI for Mental Health” post, we interviewed Mason Granof, Clinical Research Coordinator at the Division of Digital Psychiatry, Beth Israel Deaconess Medical Center. (BIDMC is an affiliate of Harvard Medical School and provider of the DOORS digital literacy program—Digital Outreach for Obtaining Resources and Skills.) Today, Granof talks about understanding AI and using it responsibly. Again, he asks the reader to remember that these are one expert’s opinions, and not official recommendations.
(Human) Knowledge Is Power
Q: What do you think are the most important things for the general public to understand about AI?
Granof: People should understand the basics of how this technology works. I believe that education is the best approach to safe implementation. Understanding something allows you to relate to it in healthier ways.
Broadly speaking, AI technology works on probabilities in language, not on “intelligence” in the way we traditionally think of it. When AI outputs a response, it is pulling together “likely words” that are associated with a user prompt. That’s different from “thinking,” “understanding,” or “feeling” in the way humans understand those terms. As psychiatrist Thomas Fuchs argues in his article “Understanding Sophia?,” AI can never replace human interaction because it doesn’t share in an embodied form of consciousness grounded in a shared, physical world. AI is grounded in language and probabilities, which are not equivalent to reality no matter how “real” they appear.
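To picture what “pulling together likely words” means, here is a deliberately tiny, made-up sketch in Python. The phrases and probabilities below are invented purely for illustration; real systems are vastly larger and more sophisticated, but the basic move is the same: selecting a probable continuation, not “understanding” anything.

```python
# Toy illustration of "likely next words" (invented data, not a real AI system).
# The model simply looks up which word is most probable after a given phrase.

next_word_probabilities = {
    "I feel": {"anxious": 0.42, "fine": 0.31, "hungry": 0.27},
    "I feel anxious": {"today": 0.55, "about": 0.45},
}

def most_likely_next_word(prompt: str) -> str:
    """Return the word this toy model rates as most probable after the prompt."""
    options = next_word_probabilities.get(prompt, {})
    if not options:
        return "(no prediction)"
    return max(options, key=options.get)

print(most_likely_next_word("I feel"))          # -> "anxious"
print(most_likely_next_word("I feel anxious"))  # -> "today"
```

Notice that nothing here feels anxious or knows what anxiety is; the program only ranks continuations by probability, which is the distinction Granof is drawing.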
The common factor between human and artificial intelligence is language, and because humans also communicate with each other that way, it’s understandable that we’d experience AI as something alive and conscious. AI even responds in grammatical structures that imply consciousness: “I think … I feel … We can do this.” So it’s easy to forget that real consciousness is not present in AI, at least not in any way comparable to human consciousness. The long-term effects of this have yet to be seen.
Q: How can we keep those effects as positive as possible?
Granof: Again: education. Learning about AI, including how it works, how it doesn’t work, and the errors it can make, can help people develop safer, more productive, and more effective relationships with the technology.
For professionals talking with clients in a mental health context [or even for laypeople trying to advise a family member or friend], I’d recommend focusing on understanding a few things:
- What platform(s) does the person use, and why?
- What is the “use pattern”? Is AI used primarily for functional information, companionship, metaphysical inquiry, or something else?
- What is the “attributed role” of the AI? That is, how does the individual understand [and relate to] the AI: as simply a knowledgeable source? Or as a sentient entity or higher power?
- What is the duration, frequency, and trajectory of use? [“Trajectory” refers to the ongoing evolution of use patterns.]
- Have there been visible functional shifts, e.g. social substitution [replacing human interaction with AI use]?
Understanding how a person already relates to AI technology allows for more comprehensive conversations about how to establish healthy use patterns that meet specific individual needs.
Want to test your knowledge of AI and how to use it safely? Check out these DOORS AI Activities for a hands-on overview.

