Psychologists weigh in on benefits, pitfalls of AI

By Ami Albernaz
July 1, 2024

Can it improve assessment, diagnosis?

Five, 10, 20 years from now, what role will AI play in psychology? Chatbots-as-therapists may come to mind for some (indeed, several commercially available apps already offer AI-based mental health support), but psychologists also see considerable potential in other areas.

New England Psychologist spoke with a few psychologists to understand some of the ways they see AI advancing assessment, diagnosis, and treatment, as well as the potential pitfalls of using AI.

AI’s ability to analyze large amounts of data in real time is opening new possibilities for more robust patient monitoring and greater precision in assessment.

Nicholas Jacobson, Ph.D., is an assistant professor of biomedical data science and psychiatry at Dartmouth College whose research explores technology-based assessment and treatment for anxiety and depression. He sees promise in pairing data collected from patients’ daily lives (via wearable devices, for instance) with ongoing, longitudinal assessments.

“[Data that shows] how we are engaging in our day — that’s pretty seamless and easy to track,” he said. Being able to combine vast amounts of this real-time data with periodic behavioral assessments can provide a more complete picture of a patient’s well-being, helping providers know when and how best to intervene, he added.

“Psychologists have been talking about this as the gold standard for many decades,” he said.

Rachel Sava, Ph.D., program director at the McLean Institute for Technology in Psychiatry at McLean Hospital in Belmont, Mass., sees potential for AI algorithms in predicting future mental health status or even psychiatric conditions.

She noted that algorithms now being developed can process changes in data from wearables (for example, in sleep patterns) as well as in speech patterns, typing, facial movements, phone use, and more, and then make predictions based on those changes. Early detection can give “clinicians and patients time to implement early intervention strategies,” she said.
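Sava did not describe a specific implementation, but the general pattern is familiar from applied machine learning: summarize each person’s wearable data into features and train a classifier that outputs a risk score. The sketch below is a minimal illustration of that pattern on synthetic data; the feature names, labels, and relationships are invented for illustration, not drawn from any clinical model.

```python
# Illustrative sketch only: trains a classifier on made-up wearable-derived
# features (sleep duration, restlessness, heart-rate variability) to flag
# elevated mental health risk. All data, features, and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-person wearable summaries:
# [nightly sleep hours, fraction of night restless, heart-rate variability]
n = 500
X = np.column_stack([
    rng.normal(7.0, 1.2, n),    # sleep duration (hours)
    rng.normal(0.3, 0.1, n),    # restlessness fraction
    rng.normal(55.0, 12.0, n),  # HRV (ms)
])
# Synthetic labels: shorter, more restless sleep loosely tied to higher risk
risk = -0.8 * X[:, 0] + 6.0 * X[:, 1] - 0.02 * X[:, 2]
y = (risk + rng.normal(0, 1, n) > np.median(risk)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Probability of elevated risk for each held-out individual
probs = model.predict_proba(X_test)[:, 1]
print(f"AUC on synthetic data: {roc_auc_score(y_test, probs):.2f}")
```

On real data, the labels would come from clinical assessments, and the predicted probabilities would flag patients for earlier follow-up rather than serve as a diagnosis.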

Psychologists also see potential for AI-based assessment to make diagnosis and care more accessible. Researchers at Massachusetts General Hospital recently identified an AI model that accurately screens for childbirth-related post-traumatic stress disorder (CB-PTSD), and hope that it might one day be used in routine obstetric care.

With roughly 8 million women worldwide affected by CB-PTSD each year, and potentially adverse health effects for both mothers and children when the condition goes untreated, an easily accessible and affordable screening tool could have an enormous impact, said Sharon Dekel, Ph.D., M.Phil., senior author of the research, which was published in “Scientific Reports.”

“Traditional screening methods that rely on extensive clinical evaluation can be a challenge — for example, for people living in rural areas,” said Dekel, who also leads MGH’s Postpartum Traumatic Stress Disorders Research Program. “So, our question was how can we overcome these barriers and develop screening tools that would be low-cost and accessible, especially as early screening, which means prevention.”

Dekel and her colleagues used an OpenAI model in combination with a machine learning algorithm to accurately detect CB-PTSD in short written narratives from women who had recently given birth. While acknowledging that more research needs to be done, Dekel said she hopes the model — which worked with narratives as short as 30 words — could someday be incorporated into an app and even used at home.
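The published article is the authority on the team’s exact method; the sketch below only illustrates one common way to pair an OpenAI model with a machine learning classifier: embed each narrative, then train a standard classifier on the embedding vectors. The embedding model name, the example narratives, and the labels are placeholders, not the study’s materials.

```python
# Illustrative sketch of one way to pair an OpenAI model with a classifier:
# embed short written narratives, then fit a logistic regression on the
# embeddings. Narratives, labels, and the chosen model are placeholders.
from openai import OpenAI
import numpy as np
from sklearn.linear_model import LogisticRegression

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> np.ndarray:
    """Return one embedding vector per narrative."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

# Placeholder training data: (narrative, 1 = screened positive, 0 = negative)
narratives = [
    "The delivery felt out of control and I still relive it at night.",
    "The birth went smoothly and I felt supported the whole time.",
    # ...many more labeled narratives would be needed in practice
]
labels = [1, 0]

X = embed(narratives)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Score a new narrative (the study worked with texts as short as ~30 words)
new_text = ["I keep avoiding anything that reminds me of the hospital."]
print("P(positive screen):", clf.predict_proba(embed(new_text))[0, 1])
```

In practice such a classifier would be trained on hundreds of labeled narratives and validated against clinician-administered assessments before any screening use.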

Some psychologists also see a potential role for AI in determining the best course of treatment for a patient. It could be used in predictive models geared toward matching treatment types with patients, Jacobson said — for instance, determining whether someone with depression would benefit most from an SSRI or traditional talk therapy.

Sava said she also sees enormous potential for AI to reduce administrative burden for mental health professionals.

AI “scribes” that transcribe patient sessions and summarize visits, along with AI-powered scheduling assistants, are just a couple of the tools that can reduce administrative burden and even help psychologists avoid burnout, she noted.

“Clinicians spend so much time on paperwork when they want to spend time with patients,” Sava said. She added that she expects these tools to be among the earliest AI tools adopted in healthcare; some hospitals are already using them.

Sava said she also expects to see more AI-based chatbots and conversational agents in healthcare, but — despite the hype around these tools — in an auxiliary rather than a leading role.

“While some companies may be looking to offer chatbots as a replacement to clinicians, I see their niche as being supplementary and enhancing care alongside a clinician,” she said. Chatbots could support patients between clinical visits or assist people on long waitlists or in areas with limited healthcare access, she added.

The downsides of AI

Reaping the benefits of a technology often involves tradeoffs, and AI is no exception. Given that some of AI’s greatest potential benefits stem from its vast data processing capabilities, privacy is a key concern, said Jacobson.

“There are major upsides [for using AI], but there is the potential for bad actors to become involved,” he said. “The space is highly unregulated, and there’s a market for [personal] data.”

When AI is used in a clinical setting — in either an administrative or a therapeutic capacity — psychologists (and patients) should understand where personal information is stored, how long AI systems will retain data, and whether those systems can be trained to “forget” it, Sava added.

When it comes to chatbots used for therapeutic purposes, it’s important to keep in mind that generative AI is capable of “hallucinations” that sound convincing but are inaccurate, Sava said. These can be particularly dangerous for young people, among whom chatbots for therapy have gained traction.

“I worry about there not being enough guardrails in place to make sure that youth aren’t provided with bad advice or advice that might be triggering,” she said, noting the case of a chatbot developed by the National Eating Disorders Association that was taken offline after it recommended weight loss and counting calories.

Though chatbots can be useful in providing some level of companionship and an outlet for people to express their thoughts, “one can imagine scenarios when they do more harm than good,” she said.
