The Edge

Facebook is using A.I. to help predict when users may be suicidal

Key Points
  • The number of Facebook users who see support content for suicide prevention has doubled since the company switched on a detection system.
  • Facebook has hosted summits for tech company employees to talk suicide prevention for several years.
  • The company has been deploying the technology in more languages.

Joseph Gerace, the sheriff of New York's Chautauqua County, has seen a lot of suicides. As a child, well before his years in public service, his best friend's father took his own life.

So when Gerace heard about a call last July from a Facebook security team member in Ireland to a dispatcher at the 911 center in his county, it struck a familiar chord. The Facebook representative was calling to alert local officials about a resident who needed urgent assistance.

"This is helping us in public safety," Gerace, who's been in law enforcement for 39 years, told CNBC. "We're not intruding on people's personal lives. We're trying to intervene when there's a crisis."

The Chautauqua County case, first reported in August by the local Post-Journal, was pursued by Facebook because the company had been informed that a woman "had posted threats of harming herself on her Facebook page," the newspaper said.

For years, the company has allowed users to report suicidal content to in-house reviewers, who evaluate it and decide whether a person should be offered support from a suicide prevention hotline or, in extreme cases, have Facebook's law enforcement response team intervene.

Land of algorithms

But this is Facebook, the land of algorithms and artificial intelligence, so there must be an engineering solution.

About a year ago, Facebook added technology that automatically flags posts with expressions of suicidal thoughts for the company's human reviewers to analyze. And in November, Facebook offered evidence that the new system was having an impact.

"Over the last month, we've worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts," the company said in a blog post at the time.

Facebook now says the enhanced program is flagging 20 times more cases of suicidal thoughts for content reviewers, and twice as many people are receiving Facebook's suicide prevention support materials. The company has been deploying the updated system in more languages and improving suicide prevention tools on Instagram, though those tools are at an earlier stage of development.

On Wednesday, Facebook provided more details on the underlying technology.

"We feel like it's very important to get people help as quickly as we possibly can and to get as many people help as we can," said Dan Muriello, a software engineer on Facebook's compassion team, which was formed in 2015 and deals with topics like breakups and deaths.

While posts showing suicidal thoughts are very rare — they might make up one in a million — suicide is a pervasive threat. It's one of the top 10 causes of death in the U.S. and the second-leading cause among people ages 15 to 34, behind only "unintentional injury," according to the Centers for Disease Control and Prevention.

For Facebook, which has over 2.1 billion monthly active users, there's a clear role to play in mitigating the problem.

Facebook has built up considerable AI talent and technology since 2013, when it established an AI research lab and hired prominent researcher Yann LeCun to lead it. The lab has created an impressive roster of new capabilities, including technology that recognizes objects in users' photos, translates text posts into other languages and transcribes conversations in video ads.


All the tech giants, including Amazon, Apple, Google and Microsoft, have been investing in AI for use across their services and platforms. But suicide prevention hasn't been a hot topic among AI researchers.

"I haven't heard it externally," said Umut Ozertem, a Facebook research scientist who previously worked on AI at Yahoo and Microsoft. "It's very interesting for me to hear about. I have a personal interest in this."

Ozertem said he's lost three friends to suicide.

Facebook has been exploring suicide prevention for more than a decade. It started with information in the social network's help center, where people in need could go to find resources, said Lizzy Donahue, an engineer on the compassion team and a member of the LGBTQ community, which has been disproportionately affected by suicide.

The next step for Facebook was making it possible for users to report suicidal content to the company for human review.

That's roughly where Google and Microsoft stand today. For example, a Google search for "I want to kill myself" brings up a phone number for the National Suicide Prevention Lifeline. Microsoft shows relevant crisis center information in search results and lets users report self-harm on its Xbox Live services.

For Facebook, this was just the start. The company now has more than 7,500 community operations staffers reviewing cases of potential self-harm, as well as other sensitive issues like bullying and sexual violence.

"Anything that is safety related, like a threat of suicide or self-harm, is actually prioritized, so it's sent for faster review," said Monika Bickert, Facebook's head of global policy management.

Facebook employees also sought help from Dan Reidenberg, the executive director of SAVE (Suicide Awareness Voices of Education). More than a decade ago, employees struggling with the deaths of people they knew reached out because they felt they had to take action, Reidenberg said.

The first thing Reidenberg did was deliver a list of the phrases most commonly used by people at risk of suicide. He also began working with the company on the technology summits Facebook hosts every two years, where representatives of companies large and small discuss challenges and issues in the field.

"Tech companies, they're so global in nature," Reidenberg said. "They really do need to be monitoring on a continual basis."

But the compassion team saw a bolder opportunity to make a difference, taking advantage of Facebook's vast engineering resources. It turned to the company's AI lab.

Early last year, Muriello and others from the compassion team gave an internal talk at the company's Silicon Valley headquarters on how they were starting to use AI in the realm of suicide prevention. Ozertem attended the talk with a colleague from the Applied Machine Learning group, which helps various teams implement core technology from Facebook's research lab.


"We asked them a lot of questions like, 'How exactly do you do this?'" Ozertem said. The discussion went on for an hour or more.

Muriello said that the goal was to create an automated system that understands context. For example, a post that says, "If I hear this song one more time, I'm going to kill myself," needs to be recognized as not suicidal, he said. But the technology also needs to grasp the subtleties in behavior when a person is actually contemplating suicide.

The compassion team sent data to Ozertem, including posts that had been reported to Facebook and validated by content reviewers. There were also reported posts that, following human review, didn't meet the threshold of suicidal content.

The data was limited. There were about 500 posts with real suicidal content and about the same number without it. After exploring the data for three days, Ozertem asked his managers for permission to spend more time on the project.
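The setup Ozertem describes, a small set of reviewer-labeled posts used to train a text classifier, can be sketched in a few lines. The snippet below is only an illustration: the scikit-learn pipeline, the placeholder strings and the unigram-plus-bigram features are assumptions, not Facebook's production system.

# Illustrative sketch only, not Facebook's system. It assumes a tiny labeled set
# like the one described above: roughly 500 posts reviewers confirmed as suicidal
# content and roughly 500 they rejected, represented here by placeholder strings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

posts = [
    "placeholder for a post reviewers confirmed",   # real text omitted
    "placeholder for another confirmed post",
    "placeholder for a post reviewers rejected",
    "placeholder for another rejected post",
] * 250                                             # ~1,000 examples in total
labels = [1, 1, 0, 0] * 250                         # 1 = confirmed by reviewers

# Word unigrams and bigrams give the model a little phrasing context, so a joking
# "kill myself" can score differently than the same words in a despairing post.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)

train_posts, test_posts, train_labels, test_labels = train_test_split(
    posts, labels, test_size=0.2, random_state=0)
model.fit(train_posts, train_labels)
print(model.score(test_posts, test_labels))         # accuracy on held-out posts

In practice, a model trained on so few examples would only be a starting point, and under Facebook's process the human reviewers, not the classifier, still make the final call.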

A key objective for Ozertem was making the system more effective at immediately flagging suicidal messages, like when someone expresses extreme sadness or threatens to take action. That meant not depending so much on signals like reactions and comments from friends, which can take a while to accumulate and which at-risk users with few friends might never receive. A minimal sketch of that idea follows below.
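That design choice can be made concrete: score a post from its text alone, at the moment it is created, instead of waiting for engagement signals to arrive. The short function below is a hypothetical sketch reusing the kind of text model trained above; the function name and the threshold are assumptions, not details Facebook has disclosed.

# Hypothetical sketch: flag a post for human review from its text alone, at
# creation time. Reaction and comment counts are deliberately not inputs, so a
# post from someone with few friends can be surfaced just as quickly.
REVIEW_THRESHOLD = 0.8   # assumed cutoff; a real system would tune this carefully

def flag_for_review(post_text, text_model):
    """Return True if the post should be queued for a human content reviewer."""
    score = text_model.predict_proba([post_text])[0][1]   # probability of class 1
    return score >= REVIEW_THRESHOLD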

Ultimately, Ozertem said, time is the most critical consideration.

"Time it took since the creation — that's really important for the real-life impact," Ozertem said. The faster the response, the more lives can be saved.

'Big brother'

Reidenberg from SAVE is optimistic that Facebook's use of AI can help prevent suicide.

"AI not only helps more quickly identify somebody who's at risk — at least that's the hope, that it can get there," he said. But "it can do it in much closer to real time."

Facebook has plenty of hurdles to clear in scaling the technology. Some people might not be forthcoming about their feelings once they've learned that Facebook is always listening for suicidal thoughts, said Aileen Cho, a therapist in San Francisco.

Sheriff Gerace pointed out that one person who commented on the Post-Journal article invoked the phrase "big brother." That type of concern has become more prevalent as public confidence in Facebook has eroded, after the platform was manipulated by outside forces during the 2016 presidential election.

Still, the AI detection software has had quantifiable results. And a Facebook spokeswoman said its intention is simply to provide help to people who are sharing on the social network.

For law enforcement, the hope is that the rewards far outweigh the risks.

Some people, Gerace said, may imagine that agencies like his have the time and manpower to sit and peruse social media.

"It just can't happen," he said.

If you're facing distress or suicidal crisis in the U.S., you can immediately talk with someone at the National Suicide Prevention Lifeline (800-273-8255, suicidepreventionlifeline.org) or the Crisis Text Line (text HOME to 741-741).