Facial Recognition Still Struggles To Identify Black Faces: Report

Facial recognition was used by U.S. Customs and Border Protection to screen passengers on an international flight

Facial recognition has been put to use by local law enforcement agencies throughout the U.S. The FBI uses it to scan DMV databases, and police apply it to identify and arrest individuals at protests.

An analysis of U.S. National Institute of Standards and Technology (NIST) benchmarks by VentureBeat shows how uneven facial recognition accuracy can be. The report points out that the technology has a higher error rate when identifying Black faces, and such errors translate into bias against people of color.

The NIST benchmarks analyzed in the report measure the rates at which “Black men are misidentified as white men,” “white women are misidentified as Black women,” and vice versa. These measures determine the false match rate (FMR), in other words the error rate. The benchmark tests the algorithms against a mugshot corpus collected over 17 years.

Vendors submit optimized algorithms for the NIST benchmarks. However, the versions operating in the real world are different and often less accurate. An FMR of 0.0001 means one mistaken match in every ten thousand comparisons, while an FMR of 0.1 means one in every ten.
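To make the metric concrete, here is a minimal sketch of how an FMR is computed and read. The numbers and function names are illustrative assumptions for this article, not NIST data or NIST tooling:

```python
# Minimal sketch of a false match rate (FMR) calculation.
# All figures are illustrative, not actual NIST benchmark results.

def false_match_rate(false_matches: int, impostor_comparisons: int) -> float:
    """FMR = share of comparisons between *different* people
    that the algorithm wrongly declares a match."""
    return false_matches / impostor_comparisons

def one_in_n(fmr: float) -> str:
    """Express an FMR in the 'one in N' phrasing used above."""
    return f"one mistaken match in every {round(1 / fmr):,} comparisons"

print(one_in_n(0.0001))  # one mistaken match in every 10,000 comparisons
print(one_in_n(0.1))     # one mistaken match in every 10 comparisons
```

The same raw FMR number reads very differently depending on demographic group, which is exactly the disparity the NIST benchmarks surface.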

Portland Bans Facial Recognition


One of the most recent developments in the use of facial recognition is the ban imposed by Portland authorities. Through a pair of ordinances, the city has barred both local government agencies and private companies from using the technology in public areas.

The draft ordinances also cite the technology’s “biases against Black people, women, and older people.” City Council Commissioner Jo Ann Hardesty said, “No one should have something as private as their face photographed, stored, and sold to third parties for a profit.”

Errors in Facial Recognition

Facial recognition FMR in RankOne’s algorithms

These benchmarks come in handy when looking at the vendors supplying facial recognition technology to different parts of the world. Smaller providers have multiplied to fill the space left by tech giants like Microsoft, Amazon, and IBM, which have pulled back from the market.

Companies like TrueFace, which is set to deliver facial recognition to a U.S. Air Force base next year, are part of the analysis. TrueFace’s algorithm recorded an FMR of 0.15 to 0.20 when misidentifying Black women.

Another company, RealNetworks, deploys facial recognition for body cameras and drones through its subsidiary SAFR. Its algorithm also recorded high error rates, especially when misidentifying one Black man as another.

Is there a Pattern?

The answer is yes. There is a definite pattern across the FMRs of different companies’ facial recognition algorithms, and it points to a recurring bias against people of color. Every algorithm registered errors when it “misidentified one black female as another black female,” and likewise when it “misidentified one black man as another black man.”

Such errors may look trivial when framed as one mistake in a thousand instances, but the number of mistakes grows with every search, and it escalates quickly when the technology is applied across the population of an entire city.
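A rough back-of-the-envelope calculation shows the scaling. The FMR, population figure, and one-search-per-resident assumption below are all hypothetical, chosen only to illustrate the arithmetic:

```python
# Illustrative scaling of false matches, not real deployment data.

def expected_false_matches(fmr: float, searches: int) -> float:
    """Expected number of wrong matches across a batch of searches."""
    return fmr * searches

# Hypothetical scenario: one face search per resident of a city of
# about 650,000 people (roughly Portland-sized), at an FMR of 0.001.
print(expected_false_matches(0.001, 650_000))  # 650.0 expected false matches
```

Under those assumptions, a seemingly tiny one-in-a-thousand error rate still yields hundreds of misidentified people.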

Manik Berry

With a Master’s degree in journalism, Manik writes about big tech and has a keen eye for political-tech news. In his free time, he’s browsing the Kindle store for new stuff to read. Manik also adores his motorcycle and looks for new routes on weekends. He likes tea and cat memes. You can reach him at [email protected]
