Groundbreaking AI helps identify patients at risk of suicide

AI-driven clinical alerts improve suicide risk screenings, helping doctors identify high-risk patients in routine medical settings.


A Vanderbilt study shows how AI-powered clinical alerts significantly improve suicide risk assessments, offering a new tool in prevention efforts. (CREDIT: CC BY-SA 4.0)

Suicide remains a major public health crisis, with a death rate of roughly 14.2 per 100,000 Americans each year. Despite its prevalence, many people who die by suicide have seen a healthcare provider in the year leading up to their death, often for reasons unrelated to mental health.

This underscores a critical gap in routine risk identification and the need for innovative solutions to enhance suicide prevention efforts.

A recent study conducted by researchers at Vanderbilt University Medical Center offers promising insights into how artificial intelligence (AI) can bridge this gap.

Published in the journal JAMA Network Open, the research focused on the Vanderbilt Suicide Attempt and Ideation Likelihood model (VSAIL), an AI system designed to analyze routine data from electronic health records (EHRs) to calculate a patient’s 30-day risk of suicide. By leveraging AI-driven clinical decision support (CDS) systems, the study aimed to improve suicide risk assessments during regular healthcare visits, particularly in neurology clinics.
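The article does not describe VSAIL’s internals, but a risk model of this kind can be sketched as a logistic scoring function over routine EHR features. Everything below — the feature names, weights, and intercept — is purely illustrative and not the actual VSAIL model.

```python
import math

# Hypothetical EHR-derived features and weights; VSAIL's real inputs
# and coefficients are not public in this article.
EXAMPLE_WEIGHTS = {
    "prior_psych_diagnosis": 1.2,
    "recent_ed_visit": 0.8,
    "age_norm": -0.3,
}
INTERCEPT = -4.0  # illustrative; chosen so most visits score low risk


def risk_30_day(features: dict) -> float:
    """Return a 0-1 score: a logistic-regression-style estimate of
    30-day suicide risk from routine EHR data (a sketch, not the real model)."""
    z = INTERCEPT + sum(EXAMPLE_WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))


# Example visit: two risk markers present, slightly above-average age.
score = risk_30_day({"prior_psych_diagnosis": 1, "recent_ed_visit": 1, "age_norm": 0.5})
```

In a real deployment, a score like this would be computed automatically from the chart at each visit, with only the highest-scoring visits surfaced to clinicians.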

The study was a randomized controlled trial (RCT) involving 7,732 patient visits over six months across three neurology clinics at Vanderbilt. It compared two CDS approaches: interruptive alerts, which actively disrupted a clinician’s workflow to prompt suicide risk assessment, and noninterruptive alerts, which passively displayed risk information in the patient’s electronic chart.
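The two CDS approaches compared in the trial can be illustrated with a small routing sketch. The threshold value and the returned messages are hypothetical, standing in for however the real system surfaces alerts in the EHR.

```python
FLAG_THRESHOLD = 0.92  # hypothetical cutoff; the study flagged roughly 8% of visits


def route_alert(risk_score: float, arm: str) -> str:
    """Decide what the clinician sees for a visit, by trial arm (sketch).
    'interruptive' disrupts the workflow with a prompt; 'noninterruptive'
    passively annotates the patient's electronic chart."""
    if risk_score < FLAG_THRESHOLD:
        return "no alert"
    if arm == "interruptive":
        return "pop-up: prompt suicide risk assessment now"
    return "passive note added to electronic chart"
```

The design question the trial answers is exactly the difference between the last two branches: does forcing a pause in the workflow produce more screenings than a passive note?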

Risk Model-Guided Clinical Decision Support for Suicide Screening (CREDIT: JAMA Network Open)

These clinics were selected due to their patient populations, which included individuals with neurological conditions associated with higher suicide risk, such as Huntington’s disease and certain movement disorders.

Researchers hypothesized that interruptive alerts would be more effective in prompting in-person suicide screenings. The study’s primary aim was to assess whether interruptive CDS led to higher screening rates compared to noninterruptive CDS. A secondary aim examined how these alerts compared to screening rates from the previous year.

The research team’s focus on neurology clinics was strategic. Unlike high-risk settings such as emergency departments, these clinics lack universal screening protocols. However, certain neurological disorders are linked to increased suicide risk, highlighting the need for targeted interventions in these settings. This trial marks one of the first attempts to evaluate suicide-preventive CDS in a randomized clinical framework.

The study’s results highlighted the potential of AI-driven CDS systems to enhance suicide prevention in medical settings. Interruptive alerts led to suicide risk assessments in 42% of flagged visits, significantly outperforming the noninterruptive system, which prompted screenings in only 4% of cases.
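The reported gap between the two arms comes down to simple arithmetic. The counts below are hypothetical, chosen only to reproduce the published percentages (42% vs. 4%); they are not taken from the paper.

```python
# Hypothetical flagged-visit counts per arm, matching the reported rates.
interruptive = {"flagged": 300, "screened": 126}     # 126/300 = 42%
noninterruptive = {"flagged": 300, "screened": 12}   # 12/300 = 4%


def screening_rate(arm: dict) -> float:
    """Fraction of flagged visits that resulted in a suicide risk assessment."""
    return arm["screened"] / arm["flagged"]


rate_interruptive = screening_rate(interruptive)
rate_noninterruptive = screening_rate(noninterruptive)
```

With these illustrative numbers, the interruptive arm screens at more than ten times the rate of the noninterruptive arm, mirroring the study's headline finding.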

While the automated system flagged approximately 8% of all patient visits, its selective nature was deemed feasible for implementation in busy clinical environments.

Dr. Colin Walsh, the study’s lead author and an associate professor of Biomedical Informatics, Medicine, and Psychiatry, emphasized the importance of targeted interventions. “Universal screening isn’t practical in every setting,” Walsh explained. “We developed VSAIL to help identify high-risk patients and prompt focused screening conversations.”

Despite the effectiveness of interruptive alerts, the study acknowledged potential downsides, such as alert fatigue, where frequent notifications could overwhelm clinicians. Future research is needed to balance the benefits of these alerts with their impact on workflow. “Health care systems need to balance the effectiveness of interruptive alerts against their potential downsides,” Walsh added.

Suicide risk screening has traditionally relied on clinical judgment and validated instruments, such as the Patient Health Questionnaire and the Columbia Suicide Severity Rating Scale.

However, gaps in reliable screening persist, particularly in non-mental health settings. Studies indicate that 77% of individuals who die by suicide have seen a primary care provider in the year prior to their death, underscoring the importance of improving risk identification in these encounters.

Participant Flow Diagram (CREDIT: JAMA Network Open)

The VSAIL model represents a shift toward computational risk estimation, which can complement traditional methods. By integrating predictive modeling into EHR systems, the study demonstrated that AI could bolster clinicians’ ability to identify and assess at-risk patients.

Earlier testing of the model, which ran silently in the background without triggering alerts, confirmed its accuracy in identifying high-risk individuals. Among flagged patients, one in 23 later reported suicidal thoughts.

While the study primarily focused on neurology clinics, its implications extend to other healthcare settings. Primary care, for example, remains a critical point of contact: as noted, most people who die by suicide see a primary care provider in their final year. Implementing AI-driven CDS in these settings could significantly enhance prevention efforts.

The researchers also noted that this selective flagging makes the model practical for busy clinics: clinicians are not overwhelmed with alerts and can focus on high-risk individuals without sacrificing the quality of care.

The study’s findings have significant implications for suicide prevention efforts across diverse medical settings. While neurology clinics were the focus of this research, the researchers suggested that similar AI-driven systems could be tested in primary care and other specialties. Expanding the application of such models could help address the broader challenge of timely suicide risk assessment.

Flowchart of Trial Outcomes by Arm (CREDIT: JAMA Network Open)

During the study’s 30-day follow-up period, no patients flagged in either alert group experienced suicidal ideation or attempts. While this outcome is encouraging, the researchers cautioned against complacency, emphasizing the need for ongoing evaluation of CDS systems to ensure their effectiveness and minimize unintended consequences.

The research also highlighted the importance of human-centered design in developing CDS systems. By tailoring alerts to clinicians’ workflows and considering their feedback, the study aimed to create tools that are both effective and minimally disruptive.

“The automated system flagged only about 8% of all patient visits for screening,” Walsh said. “This selective approach makes it more feasible for busy clinics to implement suicide prevention efforts.” This practicality ensures that clinicians can focus on meaningful interactions with patients rather than being burdened by excessive alerts.

Furthermore, the researchers underscored the need to address alert fatigue in future studies. While interruptive alerts proved more effective, excessive notifications could potentially diminish their impact over time. Future research should explore strategies to mitigate this issue, ensuring that alerts remain effective without overwhelming healthcare providers.

The study also provides a framework for integrating AI-driven tools into existing healthcare systems. By leveraging predictive models like VSAIL, medical professionals can enhance their ability to identify and support at-risk patients, making risk assessment a more routine part of care.

As suicide rates continue to rise, innovative approaches like the VSAIL model offer a promising pathway to enhance prevention efforts. By integrating AI-driven tools into routine healthcare encounters, clinicians can more effectively identify and support at-risk patients.

While challenges such as alert fatigue remain, the study’s findings underscore the potential of technology to transform suicide prevention strategies in medical settings. With further research and refinement, these systems could play a pivotal role in reducing suicide rates and saving lives.

Note: Materials provided above by The Brighter Side of News. Content may be edited for style and length.




Joshua Shavit, Science and Good News Writer