Using Artificial Intelligence to Tackle Malicious Content: Facebook Paves the Way

Of late, hate speech and offensive images on the internet have been a growing concern for policymakers, internet giants and the general public across the world. While sharing information now takes a single touch or click, that same ease has unleashed a darker side: people sharing offensive images, videos and messages to disturb or torment others.

Offensive posts that violate Facebook’s or Twitter’s terms of service can include content that constitutes hate speech, is threatening or pornographic, incites violence, or contains nudity or graphic or gratuitous violence.


Typically, content has to be seen and judged by at least one human, either a user or a paid worker, before it can be deemed malicious or inappropriate. Previously, Twitter and Facebook relied extensively on outside human contractors, hired through startups, to sift through content and flag what was malicious or inappropriate. The work is, needless to say, grim: it psychologically traumatizes workers, who must wade through horrific material ranging from child pornography to beheadings. Because of this constant exposure to negative content, burnout happens quickly, and the symptoms resemble post-traumatic stress disorder.


Recently, however, Artificial Intelligence has been helping Facebook avoid subjecting humans to such a terrible job. Instead of exposing people to a flood of negative content and relying heavily on human subjectivity, AI could unlock active moderation at scale by having computers scan every image uploaded before anyone sees it. AI already has a multitude of practical uses at Facebook, where 25 percent of engineers now regularly use the company’s internal AI platform to build features and run the business. Facebook analyzes trillions of data samples along billions of parameters.
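The idea of scanning every upload before anyone sees it can be sketched in a few lines. The snippet below is only an illustration under assumed thresholds, not Facebook’s actual (unpublished) pipeline: a classifier assigns each image a violation score, high-confidence cases are blocked or allowed automatically, and only the ambiguous middle band is routed to a human reviewer, shrinking the volume of disturbing content people must see.

```python
# Hypothetical sketch of pre-publication image moderation. The classifier
# score is assumed to come from an upstream model; names and thresholds
# are illustrative, not Facebook's real system.

from dataclasses import dataclass

@dataclass
class ModerationResult:
    image_id: str
    score: float   # assumed probability that the image violates policy
    action: str    # "block", "allow", or "human_review"

def moderate(image_id: str, score: float,
             block_threshold: float = 0.95,
             allow_threshold: float = 0.05) -> ModerationResult:
    """Route an upload based on a classifier's violation score.

    Clear violations are blocked automatically and clearly safe images
    pass through; only the uncertain middle band reaches a human.
    """
    if score >= block_threshold:
        action = "block"
    elif score <= allow_threshold:
        action = "allow"
    else:
        action = "human_review"
    return ModerationResult(image_id, score, action)
```

As the classifier improves and the two thresholds close in on each other, the human-review band narrows toward zero, which is exactly the direction the quote below describes.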

Presenting on Artificial Intelligence at a conference at MIT, Facebook’s Joaquin Candela said: “One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people. The higher we push that to 100 percent, the fewer offensive photos have actually been seen by a human.” Facebook is also working to apply AI to assessing hate speech.

It is also important to remember that AI is not completely fool-proof or error-free, and overly stringent filtering might remove creative and artistic images and content from online portals, severely hampering creative freedom. These are issues that can be sorted out, however, and the psychological stress on human moderators can be minimized, if not prevented entirely. The use of AI, if adopted across online platforms, could prove a giant leap forward for all of cyberspace. Hate speech and malicious content are, in some ways, weapons of our times, and preventing them is definitely better than reacting to them once they are out in the open.

Our program #SocialSurfing, in collaboration with Facebook, covered a total of 36 educational institutions last year, aiming to spread awareness about online safety as well as counter speech. The second phase of this program, #SocialSurfing 2.0, will now target 70 universities across the country with the same aim of spreading the message of safe internet usage and eradicating hate speech. Only when the youth of the country are enlightened about the importance of counter speech and how to maintain their privacy online can the internet truly be a safe place.


