World’s first therapy chatbot shows AI can provide ‘gold-standard’ care

New study shows AI chatbot Therabot significantly reduces depression, anxiety, and eating disorder symptoms in clinical trial.

Geisel School of Medicine professors Michael Heinz, left, and Nicholas Jacobson led a trial of their virtual therapist, Therabot. (CREDIT: Katie Lenhart)

The number of people living with depression, anxiety, and eating disorders has grown fast over the last 30 years. Even as mental health issues rise, treatment remains out of reach for many. Fewer than half of those affected get the care they need. Traditional therapy, while effective, is expensive and hard to scale. There simply aren’t enough trained professionals to meet demand.

For every licensed therapist in the U.S., there are an estimated 1,600 patients in need of support. That leaves millions with few or no options. In this growing gap, a new kind of care is emerging—one powered not by people, but by machines.

Digital Tools Offer a Scalable Alternative

Digital therapeutics (DTx) are software-based tools designed to diagnose or treat medical conditions. For mental health, these tools can bring evidence-based care to people who might never otherwise see a therapist. Yet despite their promise, many digital tools struggle to keep users engaged. Dropout rates are high, and users often stop interacting before they see benefits.

Key design features of the Therabot application: (A) Therabot login screen; (B) main chat interface; (C) emergency module deployed in response to model detection of high-risk content (e.g., suicidal ideation); (D) conversation thread interface for users to initiate a thread or return to a prior thread. (CREDIT: New England Journal of Medicine AI)

One reason may be the lack of a human touch. In traditional therapy, connection matters. Factors like empathy, shared goals, and a strong bond between therapist and patient—known as therapeutic alliance—help people feel seen and supported. Digital tools often lack these qualities.

Generative AI, a fast-growing field of artificial intelligence, may change that. These systems can respond with open-ended, natural-sounding replies that adapt to what a person says. Unlike rule-based chatbots, which rely on pre-programmed decision trees, generative AI can learn, evolve, and even mimic parts of human conversation. That opens the door to more personal, engaging, and human-like interactions.

Enter Therabot: A New Kind of Therapist

In 2019, a team at Dartmouth College set out to build a different kind of chatbot—one that could act like a therapist and deliver meaningful support. The result was Therabot, a generative AI chatbot trained on thousands of therapist-patient dialogues based on the latest cognitive behavioral therapy (CBT) methods.

Therabot was developed with over 100,000 hours of expert input. The chatbot was designed to be available at any time, offering support in the moment, not weeks later. Using a smartphone app, users could talk to Therabot by typing messages, just like texting a friend.

The idea was simple: use AI to deliver therapy-like conversations that help people manage depression, anxiety, or concerns about eating and body image. But would it work?

To find out, the Dartmouth team launched a rigorous clinical trial. It was the first randomized controlled trial (RCT) of its kind to test whether a generative AI chatbot could treat mental health conditions safely and effectively.

Strong Results Across Multiple Conditions

The trial included 106 participants from across the U.S. All had been diagnosed with major depressive disorder (MDD) or generalized anxiety disorder (GAD), or were at clinically high risk for feeding and eating disorders (CHR-FED). Participants were randomly assigned to either use Therabot or join a control group without access to the tool.

For four weeks, people in the Therabot group could use the app as often as they liked. Nearly 75% of them were not receiving any other form of mental health treatment at the time.

After just one month, the results were clear.

People with depression reported an average 51% drop in symptoms. For many, this meant a significant improvement in mood and everyday functioning. Those with anxiety showed a 31% average reduction in symptoms—enough to shift from moderate to mild anxiety, or from mild anxiety to no longer meeting the clinical threshold.

Even among users at high risk for eating disorders—a group often harder to treat—Therabot led to a 19% reduction in worries about weight and body image. That improvement outpaced the control group, showing the AI tool’s real potential.

All of these results were based on standard clinical scales used by doctors to track progress. When researchers followed up after eight weeks, the gains remained. Across all groups, people had better outcomes than expected from most self-guided tools.

More Than a Bot: Building a Therapeutic Relationship

What surprised researchers most was how deeply people connected with Therabot. Users didn’t just respond to questions—they started conversations on their own. Many interacted with the app late at night, a time when feelings of distress can rise and support is often unavailable.

Therabot users, all previously diagnosed with a mental health disorder, experienced significant improvements in symptoms after eight weeks. (CREDIT: LaDarius Dennison)

One of the study’s leads, Nicholas Jacobson, said, “We did not expect that people would almost treat the software like a friend. It says to me that they were actually forming relationships with Therabot.” Jacobson is a professor of biomedical data science and psychiatry and helped lead the development of Therabot.

People reported levels of trust and engagement similar to what they might feel with a human therapist. That sense of bond—therapeutic alliance—is a key part of successful mental health treatment.

“Users engaged with Therabot for an average of six hours, the equivalent of eight therapy sessions,” Jacobson said. “Our results are comparable to what we would see for people with access to gold-standard cognitive therapy.”

Risks and Oversight Remain Critical

Still, the researchers are quick to point out that Therabot is not a replacement for trained clinicians. While AI can offer support, it also carries risks.

“Patients can say anything to it, and it can say anything back,” said Michael Heinz, the trial’s lead author and a psychiatry professor at Dartmouth. That unpredictability is part of what makes generative AI powerful—but also potentially dangerous.

If someone mentions suicidal thoughts, for example, the chatbot must respond appropriately. Therabot was trained to detect such high-risk statements and direct users to emergency services. But safety must always be monitored closely, researchers said.

Heinz added, “No generative AI agent is ready to operate fully autonomously in mental health. We still need to better understand the risks.”

The trial team set strict safety rules. Conversations were reviewed to ensure they followed best therapeutic practices. When needed, the team was ready to step in and intervene.

More than two years before the trial, early versions of Therabot had already shown that over 90% of responses followed accepted therapy guidelines. That gave the team confidence to move forward with human testing.

Looking Ahead: AI as a Mental Health Partner

Therabot’s success points to a future where generative AI may act as a partner to human therapists, not a replacement. In places where therapists are scarce or expensive, AI could offer critical support. With ongoing monitoring and input from trained mental health experts, tools like Therabot might help close the gap in care.

“There are a lot of folks rushing into this space since the release of ChatGPT,” Jacobson said. “But safety and efficacy are not always well established. This is one of those cases where diligent oversight is needed, and providing that really sets us apart.”

The team sees Therabot as part of a larger mental health system—one where person-to-person and software-based support work side by side. As long as safety and standards are prioritized, generative AI could become a vital tool for reaching people who need help.

“We would like to see generative AI help provide mental health support to the huge number of people outside the in-person care system,” Jacobson said. “My sense is that people felt comfortable talking to a bot because it won’t judge them.”

And that comfort, combined with accessibility and evidence-based design, could finally bring timely, affordable care to millions.

Research findings were published in the New England Journal of Medicine AI.

Note: The article above was provided by The Brighter Side of News.




Joseph Shavit
Head Science News Writer | Communicating Innovation & Discovery

Based in Los Angeles, Joseph Shavit is an accomplished science journalist, head science news writer and co-founder at The Brighter Side of News, where he translates cutting-edge discoveries into compelling stories for a broad audience. With a strong background spanning science, business, product management, media leadership, and entrepreneurship, Joseph brings a unique perspective to science communication. His expertise allows him to uncover the intersection of technological advancements and market potential, shedding light on how groundbreaking research evolves into transformative products and industries.