Facebook has clarified in a blog post that its policies for moderating content on the platform are not “secret”, are carefully considered and not “ad hoc” responses, after a New York Times article claimed Facebook’s rulebooks have “numerous gaps, biases and outright errors”.
The NYT claims to have accessed 1,400 pages from the rulebooks used by moderators to monitor posts on Facebook and tackle issues of extremism, hate speech and more across different countries. The article goes on to say that at Facebook, highly complex issues are distilled into simple yes-or-no rules, which leads to errors in moderating content properly.
For instance, moderators in India were mistakenly told to take down comments critical of religion. The article cited another example from Myanmar, where a prominent extremist group was allowed to stay on the platform for months due to a paperwork error, which Facebook has admitted.
“The Facebook employees who meet to set the guidelines, mostly young engineers and lawyers, try to distil highly complex issues into simple yes-or-no rules,” the article reads. “Those moderators, at times relying on Google Translate, have mere seconds to recall countless rules and apply them to the hundreds of posts that dash across their screens each day,” it adds.
Facebook said in a blog post that its gathering “over breakfast” is, in fact, a global forum that is attended by “experts from around the world with deep knowledge of relevant laws, online safety, counter-terrorism, operations, public policy, communications, product, and diversity”. In addition to lawyers and engineers, the meeting that is held every two weeks also includes human rights experts.
The social media giant explained that it has close to 15,000 content reviewers around the world, who are “supplied with training and supporting resources” instead of relying on Google Translate. Facebook said it reviews content in more than 50 languages.