Going forward, people who search for terms associated with white supremacy will be directed to resources that combat hate groups, such as Life After Hate, an organization founded by former violent extremists that provides crisis intervention, education, support groups and outreach, Facebook said.
Facebook also acknowledged that it needs to get "better and faster" at identifying and taking down hate from its sites. Machine learning and artificial intelligence help, but the company said "we know we have a lot more work to do." Facebook will begin enforcing the new policy next week.
Facebook built a content scanning system that over the years has added rules in response to shifts in user behavior or to public uproar after incidents such as the New Zealand mass shooting. When the website's users or computer systems flag posts as problematic, the posts are sent to one of the company's 15,000 content moderators around the world, who are allowed to take content down only if it violates a rule.
Civil rights group Color of Change said it is "glad to see the company's leadership take this critical step forward in updating its policy on white nationalism." The group also called upon Twitter, Google's YouTube, and Amazon.com to "act urgently to stem the growth of white nationalist ideologies."
Twitter prohibits hateful conduct and imagery, but it has been criticized for enforcing that policy inconsistently, removing some white supremacists and extremists while allowing others to remain. For instance, the site has permitted Richard Spencer, the head of the white nationalist group the National Policy Institute, and Jason Kessler, the main organizer of the "Unite the Right" rally in Charlottesville, to keep their accounts.
The problem of white nationalism on Facebook was magnified in 2017 after a deadly neo-Nazi rally in Charlottesville, Virginia, where white nationalists used the social network to fuel hatred and connect far-right groups protesting the removal of a Confederate statue.
The killer responsible for the deaths of 50 people in New Zealand earlier this month also used Facebook to post a racist manifesto before recording the massacre at two mosques live on the platform.