Facebook is contending with large volumes of misinformation circulating on social media, along with hateful and disrespectful memes. To deal with this, Facebook uses both artificial intelligence and human fact-checking moderators to enforce its guidelines, and it now relies more on its AI software than on its human workforce.
In April, Facebook put warning labels on more than 50 million posts related to the coronavirus. According to the company, 95 percent of the time a user who sees a post with a warning label does not open the link. Warning labels are easy to apply to text and article links but much harder to apply to images that contain text, and to videos.
Memes that combine text and images are a more complicated task for AI to handle: analyzing the context of an image and the text on it takes time because of puns, styles of humor, and language barriers. Another hurdle is duplicated content that differs only slightly from the original. That is where SimSearchNet, which Facebook had been developing for years, comes in; it is trained to recognize both the original image and modified copies of it.
Once independent fact-checkers determine that an image contains misleading information, SimSearchNet detects whether other content duplicates it, which makes it easy to apply warning labels at scale. The system runs on both Instagram and Facebook, checking millions of images per day.
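To make the near-duplicate idea concrete, here is a minimal sketch using average hashing, a classic perceptual-hashing technique. This is not SimSearchNet itself (which uses learned embeddings rather than pixel hashes); it only illustrates the general approach the article describes: reduce each image to a compact fingerprint, then compare fingerprints so that a slightly altered copy still matches the original while a different image does not. The tiny 2x2 "images" are invented for the example.

```python
# Illustrative near-duplicate detection via average hashing.
# Assumption: images are already downscaled to small grayscale grids
# (real systems resize to e.g. 8x8 first).

def average_hash(pixels):
    """pixels: 2D list of grayscale values.
    Returns a bit vector: 1 where a pixel exceeds the mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits; small distance = likely duplicate."""
    return sum(x != y for x, y in zip(a, b))

original = [[10, 200], [200, 10]]   # toy 2x2 "image"
near_copy = [[12, 198], [201, 9]]   # slightly altered copy (e.g. re-encoded)
unrelated = [[200, 10], [10, 200]]  # different image entirely

h0, h1, h2 = (average_hash(img) for img in (original, near_copy, unrelated))
print(hamming(h0, h1))  # 0 -> fingerprints match, flag as duplicate
print(hamming(h0, h2))  # 4 -> fingerprints differ, treat as distinct
```

In practice a distance threshold decides what counts as "the same" image, which is exactly the hard tradeoff the article mentions: too loose and genuine posts get mislabeled, too strict and altered screenshots slip through.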
The main aim of all this is to limit duplicated images that spread misinformation while protecting genuine posts. Many creators take screenshots of an original post and alter them, completely changing the post's meaning; a model built specifically for this problem can label one copy as misinformation and the other as genuine. Accuracy is critical, because a single mistake can lead to action against content creators who have not violated company policy.
Moreover, Facebook is also focusing on hate speech. In the first quarter of 2020, its AI proactively detected 88.8 percent of the hate-speech material the company removed.
However, detecting hate speech in memes is very hard, even with the latest tools. The company is therefore building a data set of at least 10,000 hateful memes so that models can learn to flag a hateful meme by analyzing its image and text together.
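The reason memes are hard is that neither the image nor the text is hateful on its own; only the combination is. The toy sketch below illustrates that joint-analysis requirement. The phrase list, image tags, and scoring rule are entirely invented for illustration and have nothing to do with Facebook's actual models, which learn such combinations from data rather than from hand-written rules.

```python
# Hypothetical illustration of multimodal (image + text) meme screening.
# Assumption: an upstream image recognizer supplies tags and an OCR step
# supplies the caption text. All names and rules here are made up.

HATEFUL_PHRASES = {"go back"}  # toy lexicon, for demonstration only

def classify_meme(image_tags, text):
    """Flag a meme only when text and image signals combine to be hateful."""
    text_hit = any(phrase in text.lower() for phrase in HATEFUL_PHRASES)
    # "Go back" is benign in a board-game meme but hateful when the image
    # targets a group of people -- the text alone cannot decide.
    return text_hit and "people" in image_tags

print(classify_meme({"desert", "people"}, "Go back to where you came from"))  # True
print(classify_meme({"board_game"}, "Go back three spaces"))                  # False
```

A real system replaces the hand-written rule with a classifier trained on paired image and text features, which is precisely what a labeled data set of hateful memes makes possible.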
Facebook is also offering $100,000 in prizes for researchers who create models that can effectively detect hateful speech and help Facebook take it down.