Browsing Tag: A.I.

In interesting news last month, research from the Massachusetts Institute of Technology (MIT) demonstrated that 'blind spots' in the artificial intelligence (AI) of self-driving cars can be identified and corrected using input from humans. The MIT team, in collaboration with Microsoft, developed an ingenious model in which the AI learns the changes in behaviour it needs to make by observing a human in the same scenario. To achieve this, the AI system is first put through simulation training; a human is then put through the same scenario in the real world, allowing the system to pick up on the human's visual and reactive signals and amend its behaviour accordingly in similar circumstances. So far, the system has only been tested in video games, but nonetheless study author Ramya Ramakrishnan (a graduate student in MIT's Computer Science and Artificial Intelligence Laboratory) said:…
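As a rough illustration of the idea (not the MIT team's actual algorithm), a blind-spot detector can compare the actions a simulator-trained policy would take against the actions a human demonstrator takes in the same situations, and flag situations where they frequently disagree. The policy, the states, and the threshold below are all hypothetical:

```python
from collections import defaultdict

def find_blind_spots(policy, human_demos, disagreement_threshold=0.5):
    """Flag states where a simulator-trained policy disagrees with a human.

    policy: callable mapping a (hashable) state to an action.
    human_demos: iterable of (state, human_action) pairs observed
        in the real world.
    Returns the set of states whose disagreement rate exceeds the threshold.
    """
    counts = defaultdict(lambda: [0, 0])  # state -> [disagreements, visits]
    for state, human_action in human_demos:
        counts[state][1] += 1
        if policy(state) != human_action:
            counts[state][0] += 1
    return {
        state
        for state, (bad, total) in counts.items()
        if bad / total > disagreement_threshold
    }

# Toy usage: the simulator-trained policy brakes for 'pedestrian' but not
# for 'white truck', while the human brakes for both, so 'white truck' is
# flagged as a candidate blind spot.
policy = lambda s: "brake" if s == "pedestrian" else "cruise"
demos = [("pedestrian", "brake"), ("white truck", "brake"),
         ("white truck", "brake"), ("empty road", "cruise")]
print(find_blind_spots(policy, demos))  # {'white truck'}
```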

An increase in hate speech, extremism, fake news, and other content that violates community standards has seen online platforms strengthening policies, re-working algorithms, and adding staff to curb the problem. In this article, we examine what algorithms do well, and where they fall short, when it comes to moderating social media. Online platforms are plagued by inappropriate text, images, and videos that somehow manage to slip through the cracks. In many cases, platforms respond by implementing smarter algorithms to help identify inappropriate content. But what is artificial intelligence actually capable of catching, and where does it fail?

A.I. CAN READ BOTH TEXT AND IMAGES, BUT ACCURACY VARIES

With the help of natural language processing, A.I. can be trained to recognize text across multiple languages. This means these systems can be trained to identify posts that violate community guidelines, such as racial slurs or text related to extremist propaganda. A.I. can also be trained…
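As a hedged sketch of what such training can look like (not any platform's production system), a minimal text classifier can be fit on labelled examples of acceptable and violating posts. The tiny dataset, labels, and feature choices below are purely illustrative; real moderation pipelines use far larger models, datasets, and human review:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Purely illustrative labelled posts: 1 = violates guidelines, 0 = acceptable.
posts = [
    "join our cause and take up arms against them",
    "you people are subhuman and should disappear",
    "lovely weather for a picnic this weekend",
    "just published my notes from the conference",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression: a common baseline for
# flagging policy-violating text before it reaches human reviewers.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

for post in ["take up arms now", "notes from my weekend picnic"]:
    score = model.predict_proba([post])[0][1]
    print(f"{score:.2f}  {post}")  # higher score = more likely to be flagged
```

A real system would feed borderline scores to human moderators rather than acting on them automatically, which is exactly the accuracy gap the heading above refers to.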