The recent incident in which Facebook’s AI labeled a video featuring Black men as “primates” is objectionable and offensive. Facebook has since apologized, calling it an “unacceptable error,” and said it is looking into ways to prevent it from happening again.
As first reported by The New York Times, the error affected users who watched a video posted by the British tabloid the Daily Mail on June 27th, 2020. After watching, users reported receiving an automated prompt asking whether they wanted to “keep seeing videos on primates.”
Since the incident, Facebook has disabled the entire recommendation feature, which is clearly broken at this point. Facebook spokesperson Dani Lever said in a statement, “As we have said, while we have made improvements to our A.I., we know it’s not perfect, and we have more progress to make. We apologize to anyone who may have seen these offensive recommendations.”
Unfortunately, Not the First One
What’s more worrying than Facebook’s AI labeling a video of Black men as primates is that the incident is not the first of its kind. In the past, Google and Amazon have come under fire for biases in their AI systems, particularly racial discrimination against people of color.
In 2015, Google, which hosts one of the largest photo repositories on the web, apologized after its Photos app labeled photos of Black people as “gorillas.” These incidents make you wonder: are these AI algorithms really biased, or are they simply terrible at what they’re supposed to do?
In April, the U.S. Federal Trade Commission warned against using AI tools that have exhibited racial and gender discrimination, noting that such use could violate consumer protection laws in areas like credit, housing, or employment. FTC attorney Elisa Jillson wrote, “Hold yourself accountable — or be ready for the FTC to do it for you.”