How Facebook uses AI for suicide prevention

For Facebook, this was just the start. The company now has more than 7,500 community operations staffers reviewing cases of potential self-harm, as well as other sensitive issues like bullying and sexual violence.

"Anything that is safety related, like a threat of suicide or self-harm, is actually prioritized, so it's sent for faster review," said Monika Bickert, Facebook's head of global policy management.

Facebook employees also sought help from Dan Reidenberg, the executive director of SAVE (Suicide Awareness Voices of Education). Employees first reached out more than a decade ago, Reidenberg said, struggling with the deaths of people they knew and feeling they had to take action.

The first thing Reidenberg did was deliver a list of phrases most commonly used by people at risk of suicide. He also began working with the company on technology summits that Facebook hosts every two years, where representatives from companies large and small discuss challenges and issues in the field.
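
The article doesn't spell out how such a phrase list gets used, but a plausible shape is simple matching that bumps flagged posts to the front of the human-review queue, consistent with the prioritization Bickert describes above. The sketch below is a minimal, hypothetical Python illustration; the phrases, weights, and the `risk_score` and `enqueue` helpers are all invented for this example and are not Facebook's system or SAVE's actual list.

```python
# Hypothetical sketch, not Facebook's implementation: a phrase list
# feeding a prioritized human-review queue. Phrases and weights below
# are invented placeholders.
from dataclasses import dataclass, field
import heapq

RISK_PHRASES = {
    "i can't go on": 3,          # placeholder phrase and weight
    "nobody would miss me": 3,   # placeholder
    "saying goodbye": 1,         # placeholder
}

@dataclass(order=True)
class ReviewItem:
    priority: int                       # lower number = reviewed sooner
    post_id: str = field(compare=False)
    text: str = field(compare=False)

def risk_score(text: str) -> int:
    """Sum the weights of known at-risk phrases found in a post."""
    lowered = text.lower()
    return sum(w for phrase, w in RISK_PHRASES.items() if phrase in lowered)

def enqueue(queue: list, post_id: str, text: str) -> None:
    # Safety-related posts (score > 0) jump ahead of routine reports,
    # mirroring the "faster review" prioritization described above.
    priority = 0 if risk_score(text) > 0 else 1
    heapq.heappush(queue, ReviewItem(priority, post_id, text))

queue: list = []
enqueue(queue, "p1", "Check out my vacation photos")
enqueue(queue, "p2", "I can't go on like this")
print(heapq.heappop(queue).post_id)  # "p2" is reviewed first
```

In practice, a pure keyword match over-flags and under-flags; the later sections of the article describe Facebook moving beyond this toward trained classifiers, which is why the compassion team turned to the AI lab.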

"Tech companies, they're so global in nature," Reidenberg said. "They really do need to be monitoring on a continual basis."

But the compassion team saw a bolder opportunity to make a difference, taking advantage of Facebook's vast engineering resources. It turned to the company's AI lab.

Early last year, Muriello and others from the compassion team gave an internal talk at the company's Silicon Valley headquarters on how they were starting to use AI in the realm of suicide prevention. Ozertem attended the talk with a colleague from the Applied Machine Learning group, which helps various teams implement core technology from Facebook's research lab.
