Can Facebook prevent suicides using AI?

Facebook claims that its AI technology can spot users at risk of suicide by recognizing patterns in their posts, tackling a problem that is widespread on social media.

For many people considering suicide, intervention from any source can bring the welcome relief and sense of worth that may prevent an attempt on their own life. With modern developments in artificial intelligence, a human may no longer have to be the one who detects and acts on warning signs. Facebook thinks that it can be that intervening force for suicidal users.

In a blog post dated March 1, Facebook announced that it would build new suicide prevention features into the site, including integrated prevention tools on Facebook Live, live chat support through Messenger and AI-assisted reporting of potentially suicidal users.

Facebook’s current tools allow users to file reports on potentially suicidal users by visiting their profile, clicking the report button and selecting the option, “I want to help (Name).” But the new features promise to integrate suicide prevention options into Facebook Live, the live streaming video service, and Facebook Messenger as well. 

Facebook has received ample media attention in recent months after several suicides were attempted or committed on Facebook Live, prompting a stronger response and more widely integrated suicide prevention tools.

The social media site plans to use AI-driven pattern recognition to identify posts that show characteristics typical of suicidal users; flagged posts are then sent to a human community reviewer to determine whether further action should be taken.

Pattern recognition

Pattern recognition AI will look for references to sadness and pain, as well as for messages from friends along the lines of "I'm worried about you." Users suspected of considering suicide will be shown a message offering advice and suggestions, such as reaching out to a friend or contacting a suicide helpline.
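To make the idea concrete, here is a minimal, hypothetical sketch in Python of how pattern-based flagging of this kind might work. The phrases, scoring and threshold below are invented for illustration; they are not Facebook's actual patterns or model, which would be a trained classifier rather than a hand-written list.

```python
# Hypothetical sketch of pattern-recognition flagging -- NOT Facebook's
# actual system. Score a post against phrase patterns drawn from at-risk
# posts and from concerned replies, then queue high scores for human review.
import re

# Illustrative patterns only; a real system would learn these from data.
POST_PATTERNS = [
    r"\bcan'?t go on\b",
    r"\bno reason to live\b",
    r"\bso much pain\b",
]
REPLY_PATTERNS = [
    r"\bare you ok(ay)?\b",
    r"\bi'?m worried about you\b",
]

def risk_score(post: str, replies: list[str]) -> float:
    """Crude score: fraction of patterns matched in the post and its replies."""
    post_hits = sum(bool(re.search(p, post, re.I)) for p in POST_PATTERNS)
    reply_text = " ".join(replies)
    reply_hits = sum(bool(re.search(p, reply_text, re.I)) for p in REPLY_PATTERNS)
    return post_hits / len(POST_PATTERNS) + reply_hits / len(REPLY_PATTERNS)

def should_send_to_reviewer(post: str, replies: list[str],
                            threshold: float = 0.5) -> bool:
    """Queue the post for human review; the AI never acts on its own."""
    return risk_score(post, replies) >= threshold

if __name__ == "__main__":
    post = "I'm in so much pain, I can't go on like this."
    replies = ["I'm worried about you, please call me."]
    print(should_send_to_reviewer(post, replies))  # True
```

The design choice mirrored here matches Facebook's description: the program only flags posts for a human review queue; it never contacts the user directly.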

AI will also identify posts that are obvious flags for suicidal behavior and more prominently recommend suicide prevention tools to other users seeing the post, reminding them of Facebook's built-in prevention tools and prodding them to flag the post. The program is currently limited to the United States, but Facebook expects to expand the technology internationally once it completes an initial test run.

The problem with relying on AI technology is twofold: it is still relatively easy to trick, and an AI is not a person. Even Google, at the forefront of public AI use, found that some of its AI programs could be fooled or manipulated by well-placed punctuation or typos, a reminder that for all its development and coding, AI cannot offer the intuitive, sensitive emotional support a human being can.

Suicidal people seeking help don’t want a computer to recognize they need help; they want a person to recognize it. An auto-generated message recommending someone take particular actions won’t replace a live person speaking to someone and making a connection.

Facebook does have a human monitoring team that will individually review flagged posts to separate genuinely at-risk users from false flags, but that only underscores the point: the new AI technology cannot function without a team of people responding to users, not just to flagged messages.

Facebook currently has some remarkably intuitive tools in place to help suicidal users, including identifying close friends and suggesting the user contact them, providing pre-filled text to begin the painful conversation of admitting you're experiencing suicidal thoughts, and offering connections to emergency helplines. Programs like these, which try to put users in touch with real people, are more along the lines of what Facebook should focus on.

Perhaps if the program directly connected users to suicide prevention ambassadors, or initiated those conversations when a user is flagged rather than directing the user to take action alone, the AI program could be a significant step forward. But as of now, Facebook's options mostly continue to rely on a suicidal person choosing to seek help, which is often an impossible decision to make without an active, intervening force.

Facebook has made significant strides in providing suicide prevention tools, going well beyond expectations for a social media site. But it has a great opportunity to take the final step and connect users directly to helplines staffed by people who can talk to them. This AI development has potential, but it doesn't yet take that crucial next step.
