As Facebook uses artificial intelligence to combat Covid-19 misinformation, marketers continue to grapple with how to advertise during the crisis, both on the platform and off it. This week the social network reported that in April it applied warning labels to 50 million pieces of coronavirus-related content and removed hundreds of thousands of posts. During a conference call with reporters on Tuesday, Facebook CEO Mark Zuckerberg said the company sorts Covid-19 misinformation into two categories: content that could lead to physical harm, which is removed outright, and broader misinformation, which is labeled based on fact-checking by third parties.
Facebook uses machine learning to identify and label such content, and Zuckerberg said that in 95% of cases users do not click through to posts that carry a misinformation warning. The company has also removed 2.5 million pieces of content related to the sale of masks, sanitizing wipes and Covid-19 test kits.
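The article does not describe how the labeling works internally; Facebook's production systems rely on large trained models. As a purely illustrative sketch, the score-and-threshold pattern behind "classify a post, then attach a warning label" might look like the following, where the keyword scorer, the phrase list and the threshold are all hypothetical stand-ins for a real classifier:

```python
# Toy moderation pipeline: score a post, attach a warning label above a
# threshold. The keyword scorer is a stand-in for a trained ML model.

WARNING_THRESHOLD = 0.5  # hypothetical cutoff, not Facebook's

# Hypothetical phrases with hand-set suspicion weights.
SUSPECT_PHRASES = {
    "miracle cure": 0.8,
    "doctors won't tell you": 0.7,
}

def misinformation_score(text: str) -> float:
    """Return a crude score in [0, 1]; real systems use trained classifiers."""
    lowered = text.lower()
    return max(
        (weight for phrase, weight in SUSPECT_PHRASES.items() if phrase in lowered),
        default=0.0,
    )

def moderate(post: str) -> dict:
    """Attach a misinformation warning when the score crosses the threshold."""
    score = misinformation_score(post)
    return {"post": post, "score": score, "warning_label": score >= WARNING_THRESHOLD}

results = [
    moderate("Try this miracle cure before it's banned!"),
    moderate("Stay home and wash your hands."),
]
```

In this toy run the first post gets a warning label and the second does not; the interesting production problems (training the model, choosing the threshold, measuring the 95% click-through deterrence) all live outside this sketch.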
“The problem is that for anyone trying to exploit the situation with disinformation, there are also many people who are genuinely trying to help and get masks and other equipment to the people who really need them,” Zuckerberg said.
The disclosure was part of a broader semi-annual report detailing Facebook's enforcement actions against roughly 2 billion pieces of content that violated company policies on issues ranging from fake accounts to spam and suicide. The report also showed how Facebook uses AI to detect and remove hate speech, which has risen sharply in recent months.
For example, the company removed 9.6 million hate-speech posts in the first quarter of 2020, up from 5.7 million in the fourth quarter of 2019. It also removed 4.7 million posts tied to organized hate, compared with 1.6 million in the same period a year earlier. But memes remain difficult for artificial intelligence to parse. To address this, the company built a dataset of 10,000 hateful memes, which it uses to train machine-learning software to understand the relationship between an image and its meaning, so that it can improve its moderation systems.
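The reason memes are hard is that the image and the overlaid text can each be benign on their own and only become hateful in combination, so a classifier has to judge both modalities jointly. A minimal sketch of that fusion idea, using toy fixed vectors in place of learned image and text embeddings (the embeddings, weights and bias here are all invented for illustration):

```python
# Illustrative multimodal fusion: concatenate an image embedding and a
# text embedding, then score the fused vector. Real systems learn the
# embeddings and the classifier jointly; these numbers are toy values.

from typing import List

def fuse(image_emb: List[float], text_emb: List[float]) -> List[float]:
    """Concatenate the two modality embeddings into one joint vector."""
    return image_emb + text_emb

def classify(fused: List[float], weights: List[float], bias: float) -> bool:
    """Linear scorer over the fused vector; True means 'flag for review'."""
    score = sum(f * w for f, w in zip(fused, weights)) + bias
    return score > 0.0

# Hypothetical 2-dim embeddings per modality; zeros mean "modality absent".
image_emb = [0.9, 0.1]
text_emb = [0.1, 0.9]
weights = [1.0, 0.0, 0.0, 1.0]
bias = -1.5

together = classify(fuse(image_emb, text_emb), weights, bias)   # both modalities
image_only = classify(fuse(image_emb, [0.0, 0.0]), weights, bias)
text_only = classify(fuse([0.0, 0.0], text_emb), weights, bias)
```

With these toy numbers the image alone and the text alone both score below the decision boundary, but the combination crosses it, which is exactly the failure mode the hateful-memes dataset is meant to expose: a unimodal classifier looking at either signal in isolation would let the meme through.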
There is also the looming question of whether people believe what they read, misinformation or not. According to Robert Brotherton, an associate professor of psychology at Barnard College who has studied conspiracy theorists, people are often more worried that others will fall for misinformation than about their own susceptibility. He said it is also hard to tell who is more or less influenced by misinformation, adding that someone's engagement with a post online is “an imperfect measure of belief.” “I think what this highlights more than anything is that we don't trust other people,” Brotherton said. “We all think that we're good enough at recognizing dubious information, but we don't trust others to do the same.”