An increase in hate speech, extremism, fake news, and other content that violates community standards has pushed social media platforms to strengthen policies, rework algorithms, and add staff to curb the menace. In this article, we examine what works and what doesn't when algorithms are applied to social media moderation.

Online platforms are plagued by inappropriate text, images, and videos that somehow manage to slip through the cracks. In many cases, platforms respond by implementing smarter algorithms to help identify inappropriate content. But what is artificial intelligence capable of catching, and where does it fail miserably?


With the help of natural language processing, A.I. can be trained to recognize text across multiple languages. This means it can be trained to identify posts that violate community guidelines, such as racial slurs or text related to extremist propaganda.
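As a toy illustration of this kind of training (not any platform's actual system, and using placeholder phrases rather than real moderation data), a bag-of-words Naive Bayes classifier can learn from hand-labeled examples to flag likely policy violations:

```python
from collections import Counter
import math

# Toy hand-labeled training data (placeholder phrases, not real moderation data)
TRAIN = [
    ("join our movement destroy them all", "violating"),
    ("spread hate against the outsiders", "violating"),
    ("lovely weather for a picnic today", "ok"),
    ("check out my new recipe blog", "ok"),
]

def train(examples):
    """Count word frequencies per label for a Naive Bayes model."""
    word_counts = {"violating": Counter(), "ok": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest (log) posterior probability."""
    vocab = len({w for c in word_counts.values() for w in c})
    scores = {}
    for label in label_counts:
        total = sum(word_counts[label].values())
        # Log prior plus add-one-smoothed log likelihood of each word
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / (total + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, label_counts = train(TRAIN)
print(classify("destroy the outsiders", word_counts, label_counts))  # → violating
print(classify("my picnic recipe", word_counts, label_counts))       # → ok
```

Real systems use far larger models and datasets, but the principle is the same: the classifier only knows what its hand-labeled examples teach it.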

A.I. can also be trained to identify images, to curb nudity, or to recognize symbols such as the swastika. To a large extent, these algorithms are getting the job done well. However, there have been some grievous failures by A.I. For example, Google Photos was criticized for tagging images of dark-skinned people with the keyword “gorilla.” Several years later, Google still hadn't found a solution to this problem; instead, it removed the program's ability to tag gorillas entirely.

Also, algorithms have to be updated as the meanings of certain words evolve, or to understand their contextual usage. For example, the LGBT community recently noticed that searches for #gay and #bisexual returned no results, and many felt Twitter was censoring the words. However, Twitter apologized, explaining that an outdated algorithm had falsely identified posts tagged with these terms as offensive. The algorithm was supposed to consider the contextual usage of these terms, but it didn't.
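Twitter's actual system isn't public, but the gap between blunt keyword filtering and context-aware filtering can be sketched with a hypothetical rule set (the tag and token lists below are invented stand-ins):

```python
# Hypothetical example: a blunt filter vs. one that checks surrounding context.
SENSITIVE_TAGS = {"#gay", "#bisexual"}
ABUSIVE_TOKENS = {"slur1", "slur2"}  # placeholders standing in for abusive words

def blunt_filter(post: str) -> bool:
    """Hide any post containing a sensitive tag, regardless of context."""
    return any(tag in post.lower().split() for tag in SENSITIVE_TAGS)

def context_aware_filter(post: str) -> bool:
    """Hide a post only when a sensitive tag co-occurs with abusive language."""
    words = set(post.lower().split())
    return bool(words & SENSITIVE_TAGS) and bool(words & ABUSIVE_TOKENS)

post = "#gay pride parade this weekend"
print(blunt_filter(post))          # True: wrongly hidden
print(context_aware_filter(post))  # False: correctly allowed
```

The blunt version is effectively what an outdated rule does: it treats the term itself as the offense, rather than how the term is used.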




The gorilla example above is an illustration of A.I.'s bias coming into play. A.I. is trained by watching people complete tasks and then feeding in the results of those tasks. For example, programs aimed at identifying objects in a photograph are trained by feeding the system thousands of images that were tagged by hand.

The human element makes it possible for A.I. to complete tasks, but it also inadvertently passes human bias on to the computer. An A.I. is only as good as its training data, so if it were trained mainly on images of white males, it would have trouble identifying people with other skin tones.
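A contrived numerical sketch makes the skew concrete (synthetic one-dimensional values, nothing like a real vision model): a nearest-centroid classifier trained almost entirely on one group ends up misclassifying the under-represented one.

```python
# Contrived sketch: a 1-D "face detector" trained on a skewed sample.
# The single feature is average pixel brightness (synthetic values).
def centroid(values):
    return sum(values) / len(values)

# Training set: 9 light-skinned examples, only 1 dark-skinned example
faces_train = [200, 210, 205, 198, 202, 207, 195, 203, 201, 90]
non_faces_train = [40, 55, 35, 60, 45]  # dark backgrounds, not faces

face_c = centroid(faces_train)          # pulled toward the majority group
non_face_c = centroid(non_faces_train)

def is_face(brightness):
    """Label an input by whichever class centroid it sits closer to."""
    return abs(brightness - face_c) < abs(brightness - non_face_c)

print(is_face(205))  # light-skinned face: detected
print(is_face(85))   # dark-skinned face: missed, pulled toward "non-face"
```

Because the "face" centroid was computed almost entirely from one group, inputs from the other group land closer to the wrong class. The fix is representative training data, not a cleverer distance metric.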

Also, once a training set is developed, the data is often shared among developers, which means this bias spreads across other programs as well.


A.I. can identify a swastika, but it cannot determine how it's being used. Quite recently, Facebook removed a post showing a swastika that was accompanied by a plea to stop the spread of hate.

This is an example of A.I.'s failure to determine intent. A.I. helps with initial screening, but the human element is still needed to determine whether content violates community standards.


Although the human brain will still be needed now and again, A.I. has made the process much more efficient, as it can help humans determine which posts require review and even prioritize them. For example, in 2017, Facebook shared an A.I. designed to spot suicidal tendencies, and in one month it resulted in 100 calls to emergency responders. The idea behind this A.I. was to prioritize these posts in our news feeds.
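Facebook hasn't published how its queueing works, but the "prioritize for human review" idea can be sketched with a standard priority queue: a classifier assigns each post a risk score (the scores and posts below are invented), and reviewers see the highest-risk items first.

```python
import heapq

# Hypothetical risk scores a classifier might assign (higher = more urgent).
posts = [
    ("vacation photos", 0.02),
    ("I can't take this anymore", 0.91),
    ("new job announcement", 0.05),
    ("nobody would miss me", 0.87),
]

# heapq is a min-heap, so negate scores to pop the highest risk first.
queue = [(-score, text) for text, score in posts]
heapq.heapify(queue)

review_order = []
while queue:
    neg_score, text = heapq.heappop(queue)
    review_order.append((round(-neg_score, 2), text))

print(review_order[0])  # the most urgent post surfaces first
```

The point isn't the data structure; it's that A.I. doesn't replace the reviewer, it decides what the reviewer looks at first.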


Technology grows faster than laws and ethics can keep up, and social media moderation is no exception. This means an increased demand for employees with a background in the humanities or ethics, something most programmers lack.

We are at a juncture where the pace of technology is so fast that we have to ensure the ethical component doesn’t drag too far behind.

Do not hesitate to contact us: subscribe to our blog for free, click here to arrange a FREE consultancy meeting, send me an email, or follow me below on Facebook, Twitter, LinkedIn, and Instagram.


Nicholas is a social entrepreneur, passionate marketeer, career and life coach, consultant, speaker, and community builder. He does this through 1-on-1 coaching, non-profit and business consulting, and on a larger scale as Co-founder and Managing Director of CFM Group. He is an internationally recognized strategist, coach, and speaker, and is in the process of writing his first book. He possesses over 13 years' experience in helping clients realise their potential by clarifying their vision, message, and market to design the strategies and roadmaps needed to succeed. Utilising this extensive background in strategic planning, pitch and message design, marketing and communications, and executive and speaker coaching was his pathway to founding the company. His knowledge was fundamental in building the company with an investment capital of £1 and a large social impact community and professional development hub in Cambridge, UK. Feel free to comment on any of our articles that interest you, or message our CEO directly! We hope you enjoy our blog!
