An increase in hate speech, extremism, fake news and other content that violates community standards has pushed social platforms to strengthen policies, rework algorithms and add moderation staff to curb the problem. In this article, we examine what works with algorithms, and what doesn't, when it comes to social moderation.

Online platforms are plagued by inappropriate text, images, and videos that somehow slip through the cracks. In many cases, platforms respond by implementing smarter algorithms to help identify inappropriate content. But what is artificial intelligence actually capable of catching, and where does it fail?

A.I. CAN READ BOTH TEXT AND IMAGES, BUT ACCURACY VARIES

With the help of natural language processing, A.I. can be trained to recognize text across multiple languages. A model can be trained to identify posts that violate community guidelines, such as racial slurs or text related to extremist propaganda. A.I. can also be trained…
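To make the idea of automated text flagging concrete, here is a minimal, purely illustrative sketch in Python. The function name and term list are hypothetical, and real moderation systems rely on trained NLP classifiers rather than keyword matching; this only shows the basic flag-or-allow interface such a system exposes.

```python
# Toy illustration of text flagging for moderation. Real systems use
# trained NLP models, not word lists; this sketches the interface only.

def flag_post(text: str, blocked_terms: set[str]) -> list[str]:
    """Return the blocked terms found in a post (case-insensitive)."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return sorted(words & blocked_terms)

# Placeholder terms standing in for a real policy list.
blocked = {"slurword", "propagandaterm"}
post = "This post contains slurword and nothing else."
hits = flag_post(post, blocked)
if hits:
    print(f"Post flagged for review: {hits}")
```

In practice, a production pipeline would route flagged posts to human reviewers rather than removing them automatically, since keyword matches alone produce many false positives.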