Just days after top officials at four international technology companies appeared before a Congressional committee, one of these companies—Facebook—responded to complaints that the social media giant is still not doing enough to protect users from bad actors online.
Users continue to complain that violent, racist, and other objectionable content keeps popping up on their timelines, no matter what the company says or does. Facebook, in response, said the COVID-19 pandemic has made policing its platform even more difficult. In a recent blog post, a company representative, Guy Rosen, said:
“Today’s report shows the impact of COVID-19 on our content moderation and demonstrates that, while our technology for identifying and removing violating content is improving, there will continue to be areas where we rely on people to both review content and train our technology…”
The company added that COVID-19 had “affected Facebook’s ability to remove harmful and forbidden material…” saying that having to “send content moderators to work from home” led to a reduction in the company’s ability to “remove material around suicide, self-injury, child nudity, and sexual exploitation…” according to media reports.
In other words, more technology, rather than more people, was left searching for offending posts and other content that violated the platform's rules.
As an addendum, Facebook released a statement saying it was beginning to bring some employees back into offices considered safe, where smaller numbers of workers made protections against the spread of COVID-19 easier and more effective.
Facebook also claims to have improved its ability to detect and eliminate "hate speech" and other objectionable content, citing a company-provided statistic that its rate of proactively detecting hate speech had increased from 89 percent to 95 percent, a figure that includes taking action on 22.5 million individual pieces of content. The company credits advances in its language- and word-detection software for the increase.
The company also announced that it is tightening its rules on what will be allowed on the platform, banning images considered to be racial caricatures or otherwise "dehumanizing" depictions of groups of people.
No stranger to controversy
Questions and criticisms related to Facebook's handling of racially sensitive or derogatory content have been a hot consumer and technology PR topic on both sides of the Atlantic in recent years. Content ranging from depictions of classic literature characters to costumes representing traditional holiday figures to modern representations of political figures continues to divide users between those who want Facebook to censor more of its content and those who feel the platform is unfairly censoring their freedom of expression.
Publicly, the company has continued to stress that it will crack down on any content it finds objectionable, while strongly maintaining that this curation does not constitute editorializing. If it were deemed to be editorializing, Facebook could be seen, legally, as a publisher rather than a technology platform, which would fundamentally shift where and how the company fits as a business.
This means that every time Facebook wades into a controversial decision about censoring objectionable material, the company has to weigh both its business position and the PR message it's sending. Challenges over which side of the divide the company actually stands on are likely to continue.