Three things are certain in life: death, taxes, and Facebook getting embroiled in controversy. After a slew of leaked internal documents shed light on the social media giant’s dark side, a recent report based on the leak claimed that Facebook’s AI tools weren’t sufficient to deal with hate speech on the platform. Now, the Mark Zuckerberg-led company has come forward to defend its AI moderation of online content.
Arguably, the biggest story in tech right now is The Facebook Files, an investigation by The Wall Street Journal (WSJ). The exposé implies Zuckerberg’s company prioritizes profits over public safety, putting the firm in a tough spot. Previous installments uncovered the leeway VIP accounts get on Facebook and the company’s inaction over Instagram’s known negative effects on teens, among other insider revelations.
In its latest report, WSJ revealed that Facebook relies mainly on an AI algorithm its own employees consider inadequate to remove hate speech. The report noted that these employees didn’t trust the AI to sufficiently tackle posts violating the company’s Community Standards, and the appalling stats shared by the publication justified this lack of trust. In response, Facebook today came out with its own report defending its automated content moderation.
Facebook Ranks Prevalence Above Removal as a Metric for Curtailing Hate Speech
According to Guy Rosen, Facebook’s VP of Integrity, the amount of hate speech people encounter on the platform has dropped by about 50%. In his view, the prevalence of hate speech, i.e. how often users actually see it, is a better measure of the AI’s effectiveness than the number of such posts removed.
This remark clearly responds to WSJ’s claim that when the AI isn’t sure what to do, it doesn’t remove hateful content but merely reduces its visibility, which means the accounts responsible face no consequences for violating Facebook’s rules.
Turning to facts and figures, Rosen cited the latest Community Standards Enforcement Report, which shows hate speech prevalence declining: 0.05% of content viewed on the platform contained hate speech. To underline the improvement, he noted that this figure has fallen by roughly 50% over the last three quarters.
The leaks showed that Facebook cut the time human reviewers spent detecting hate speech, shifting more of the workload to AI. The algorithm reportedly has a proactive detection rate of over 97% – the share of removed hate speech it flags before anyone reports it. However, several users told WSJ that this figure never matched their actual experience on the platform.
Arguing that the proactive detection rate doesn’t tell the whole story, Rosen pointed out that it only counts removed content, not posts whose visibility the AI reduces on suspicion of hate speech. He once again stressed that prevalence, rather than content removal, is the more accurate yardstick for measuring the response to hate speech.
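The difference between the two metrics can be made concrete with a minimal sketch. The numbers below are hypothetical placeholders chosen only to mirror the figures quoted in the article (0.05% prevalence, 97% proactive detection), not Facebook’s actual data:

```python
# Hypothetical counts for illustration only; not Facebook's real data.
total_views = 10_000          # all content views in a sample
hate_views = 5                # views that contained hate speech
removed = 100                 # hate-speech posts removed
proactively_flagged = 97      # removed posts flagged by AI before any user report

# Prevalence (Rosen's preferred metric): share of ALL views that
# contained hate speech, regardless of whether it was ever removed.
prevalence = hate_views / total_views            # 0.0005, i.e. 0.05%

# Proactive detection rate: share of REMOVED posts the AI caught first.
# It says nothing about hate speech that stayed up or was only demoted.
proactive_rate = proactively_flagged / removed   # 0.97, i.e. 97%

print(f"prevalence: {prevalence:.2%}")           # prevalence: 0.05%
print(f"proactive rate: {proactive_rate:.0%}")   # proactive rate: 97%
```

The sketch shows why the two numbers can diverge: a platform could post a high proactive detection rate over a small set of removals while users still encounter plenty of hate speech, which is exactly the gap WSJ’s reporting highlights.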