For our series, A Fordham Focus on AI, we’re speaking with faculty experts about the impact of this rapidly evolving technology and what to expect next. In this last installment, we sat down with Lauri Goldkind, PhD, a professor at the Graduate School of Social Service. Goldkind has been at the forefront of technology in the social services sector. She recently spearheaded an initiative to help Bronx nonprofits identify ways to use AI, and she’s currently working on a book about how colleges can train future social workers to use AI in their practices. Now that more and more individuals are turning to AI for therapy, we spoke with Goldkind about where the technology can help and where it may fall short.

Why are so many people turning to AI for mental health support?

The simplest answer is that people can’t get an appointment. If you’ve ever tried to find a psychologist, social worker, or psychiatrist who has availability and takes insurance, you know how hard that is. ChatGPT never shuts down or tells you to wait three weeks. And for some people, there’s another factor—it’s actually easier to be vulnerable with a chatbot than with a human practitioner. Recent statistics indicate that one in six Americans now uses generative AI for mental health information, including therapy. That tells me that the mental health treatment landscape is getting more complicated. 

Could relying on AI for mental health advice ever make things worse?

Yes, I think it’s a real concern. There’s a phenomenon known as sycophancy bias, in which a chatbot is overly agreeable and tells you what you want to hear. Large language models are designed to keep you engaged, much like social media platforms. The model is built to be “sticky,” to keep you coming back, rather than to tell you the truth or disagree with you.

Let’s say I use ChatGPT every day when I come home from school because it helps me navigate a complicated relationship, and it tells me, “Yes, breaking up with your girlfriend is a great idea.” If I were to visit a human therapist the next day and they asked, “Do you really think breaking up with your girlfriend is a good idea, even though she supports you in X, Y, Z ways?” that friction would make for a much harder conversation. And the repercussions can be quite serious; there are documented cases of people who died by suicide, or who developed or experienced worsening psychosis, after engaging with these models for extended periods of time.

How are the mental health professionals you work with actually using AI in their practices?

The tools they’re most excited about are documentation platforms. Every time they have a clinical encounter or a therapy session, they have to write notes, a treatment plan, and/or treatment goals, and these platforms can generate that documentation for them.

These systems have the potential to improve services by allowing therapists to be more present in their sessions—they’re not trying to do two things at once. We know that therapists often do six to eight sessions a day and complete their documentation after work hours. So practitioners really like it because it reduces that administrative burden.

Are there risks to that, too?

Yes. If they use these programs, they need to watch out for automation bias: humans are predisposed to think a computer’s output is more accurate than their own. So, for instance, we have found that a person is more likely to trust a note written by a computer than one written by hand.

That can be risky for practitioners, because they may not thoroughly check the notes AI generates. These systems do make mistakes, and those errors can affect a client’s treatment.

What is one thing mental health professionals should know about AI?

I would love it if therapists started asking clients whether they’re using large language models for therapy, and, if so, what they’re getting out of them. That opens up a conversation and helps destigmatize it for someone who may have never been in therapy before and might not tell a therapist right away that they’re using a chatbot. For mental health professionals, these tools can be useful brainstorming partners—but they should always ask, “How does this advice or suggestion align with my intuition?”

What would you say to someone who’s considering using AI as a substitute for therapy?

It’s really important to understand that frontier large language models, like ChatGPT, Gemini, and Claude, are not designed to give mental health advice or support, or to deliver treatment. They are built with all the biases that their developers bring to their work, and they are trained on data from a range of sources we don’t fully understand.

The shortage of practitioners is a real problem worth solving, but the answer isn’t to hand that work off to a system that isn’t built for it. I’d use ChatGPT for recipes and meal planning—but psychotherapy? Not so much.

Learn more about AI for the greater good at Fordham.


Patrick Verel is a news producer for Fordham Now. He can be reached at [email protected] or (212) 636-7790.