Using Artificial Intelligence to fight Cyber Bullying

Bullying is becoming a serious problem in society, with a quarter of parents reporting that their kids experience it. Just this past week, a young girl named Amanda Todd committed suicide after being bullied for several years, both online and offline. She had posted a video on YouTube describing her experience and her pain. It was a sad account of what she went through and a stark indication of the growing problem of cyber bullying. The incident has reignited the debate on bullying, especially cyber bullying and how to prevent it. Facebook already provides some education and tools for reporting bullying, but I think it can be taken further.

It got me thinking about possible solutions for prevention, and one idea that came to mind was inspired by some of my previous work in the banking industry. The banking industry is challenged by debit and credit card fraud, where criminals use techniques such as debit card skimming to steal millions of dollars from banks. To combat this growing trend, banks and credit card companies use very advanced systems that leverage complex algorithms to estimate, detect and prevent undesirable behavior.

These systems are capable of dynamic profiling that condenses enormous amounts of historical data, capturing the most predictive variables for real-time analytics. Updated with every transaction, these dynamic profiles enable the analytics to “learn” behavior patterns as they evolve. It’s this technology that I think could be used to detect bullying as it happens and educate those involved.
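To make the idea of a dynamic profile concrete, here is a minimal sketch of one that learns per-transaction: a running mean and variance of spending, updated with every transaction, so a sudden deviation stands out. The update rule and learning rate are illustrative assumptions, not how any real fraud system works.

```python
from dataclasses import dataclass

@dataclass
class DynamicProfile:
    """Running behavioral profile, updated with every transaction."""
    mean: float = 0.0    # smoothed typical transaction amount
    var: float = 1.0     # smoothed spread around that mean
    alpha: float = 0.1   # learning rate: how fast the profile adapts

    def score(self, amount: float) -> float:
        """Return a rough anomaly score (in standard deviations), then learn."""
        deviation = abs(amount - self.mean) / (self.var ** 0.5)
        # Update the profile so it "learns" the new behavior pattern.
        self.mean += self.alpha * (amount - self.mean)
        self.var += self.alpha * ((amount - self.mean) ** 2 - self.var)
        return deviation

profile = DynamicProfile()
for amount in [20, 25, 22, 19, 24]:
    profile.score(amount)           # typical spending: profile converges
print(profile.score(500) > 3)       # a $500 charge stands out sharply: True
```

The same shape of model could track a user's typical posting behavior instead of spending, which is what makes the analogy to social networks plausible.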

Proposed Solution

Social networks such as Facebook and Twitter could leverage these same artificial intelligence technologies to estimate, detect and prevent cyber bullying behavior.

As with credit and debit card fraud, the first step is understanding how to identify bullying conversations or actions on social networks. Natural Language Processing and Natural Language Understanding techniques could be used to identify conversations between users that appear to have bullying intent. By looking for curse words, insults and specific phrases, a severity scoring model could flag conversations, and a specific course of action could then be taken based on the type of bullying occurring.
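A minimal sketch of the scoring idea, assuming a hand-made severity lexicon and threshold purely for illustration (a real system would use trained NLP models, not a word list):

```python
# Illustrative severity weights -- these phrases and numbers are made up
# to show the mechanism, not drawn from any real moderation system.
SEVERITY = {"idiot": 1, "loser": 2, "nobody likes you": 4, "kill yourself": 5}
FLAG_THRESHOLD = 4  # assumed cutoff for flagging a conversation

def bullying_score(message: str) -> int:
    """Sum the severity weights of abusive phrases found in the message."""
    text = message.lower()
    return sum(w for phrase, w in SEVERITY.items() if phrase in text)

def should_flag(message: str) -> bool:
    """Flag the conversation once the cumulative severity crosses the cutoff."""
    return bullying_score(message) >= FLAG_THRESHOLD

print(should_flag("you're such an idiot"))      # False: mild, below threshold
print(should_flag("nobody likes you, loser"))   # True: severe enough to flag
```

The threshold is what lets the response scale with severity: a low score might trigger only education, while a high score escalates further.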

These courses of action should begin with education, but could range from a warning to the user all the way to terminating the account and reporting to the authorities.

For example, let’s say a user starts to bully another user by posting an abusive statement on their Facebook timeline. The bully detection system would recognize the statement, warn the offending user that this type of interaction is not acceptable, and suggest that they review some bullying education videos. If the offender continues the abusive behavior and the system detects another incident, the offender’s account access could be suspended until they complete a bullying education course, which they would have to pass in order to have their account restored. If the offending user completes the course and still continues to bully other users, their Facebook account would be suspended indefinitely. The user could then enter an appeal process if they wish to have their account reinstated.

The process looks something like this:
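The escalation described above can be sketched as a simple mapping from incident count to response; the enum names and cutoffs are my own shorthand for the steps in the example, not any platform's actual policy:

```python
from enum import Enum, auto

class Action(Enum):
    WARN_AND_EDUCATE = auto()      # first offense: warning + education videos
    SUSPEND_UNTIL_COURSE = auto()  # second offense: suspend, require a course
    SUSPEND_INDEFINITELY = auto()  # continued abuse: indefinite suspension

def next_action(incident_count: int) -> Action:
    """Map the number of detected bullying incidents to a response."""
    if incident_count <= 1:
        return Action.WARN_AND_EDUCATE
    if incident_count == 2:
        return Action.SUSPEND_UNTIL_COURSE
    return Action.SUSPEND_INDEFINITELY  # appeal process handled separately

print(next_action(1).name)  # WARN_AND_EDUCATE
print(next_action(3).name)  # SUSPEND_INDEFINITELY
```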

These courses of action could be expanded to cover many different situations and severities, including reporting to the authorities or other bullying management groups. The goal would be to educate offenders as much as possible, with suspension from Facebook as the ultimate consequence.

This type of model enforces education and consequences for unacceptable actions.

It would be good if Facebook were working on something like this. They already enforce many of their policies, but I’m not sure about the extent of their detection and prevention strategies. Based on this statement, they rely on their users to report specific behaviors:

“We maintain a robust reporting infrastructure that leverages the 500 million people who use our site to keep an eye out for offensive or potentially dangerous content. This reporting infrastructure includes report links on pages across the Facebook site, systems to prioritize the most serious reports, and a trained team of reviewers who respond to reports and escalate them to law enforcement as needed. This team treats reports of harassing messages and impostor profiles as a priority.” — Facebook

Either way, more needs to be done to detect, prevent and educate everyone on bullying.
