Short Bytes: A paper published by a team of researchers has revealed that an AI system learning a human language can pick up the implicit race and gender biases observed in humans. The system can associate women with words relating to family or the home rather than work, and link white-sounding names to more pleasant words than black-sounding names.
We have always wanted our modern computer programs and AIs to replicate human intelligence. A system trained using machine learning can understand the language we speak. But we didn't realise we have also passed on to them our less admirable behavioral traits: racism and gender bias. These biases have existed for ages and are hard-wired into our brains, and so into the brains of AI systems. New research published in the journal Science reveals that AIs have started to sponge up these entrenched beliefs in their quest to acquire human-like language abilities.
“A lot of people are saying this is showing that AI is prejudiced. No,” said Joanna Bryson, a computer scientist at the University of Bath and a co-author of the paper. “This is showing we’re prejudiced and that AI is learning it.”
The research involved testing an AI model, trained to understand words using a statistical approach called word embedding, for implicit bias. The researchers created a test scenario modelled on the IAT (Implicit Association Test), in which people are asked to establish relations between entities; for instance, people can be asked to tag images of white and black faces as pleasant or unpleasant.
In word embedding, words are mapped to vectors of real numbers. In the language space so created, the words for flowers may sit in close proximity to words related to pleasantness, while insects may sit closer to words related to unpleasantness.
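To make that idea concrete, here is a minimal Python sketch of how closeness in such a space is typically measured, using cosine similarity. The four-dimensional vectors below are invented for illustration only; real models such as GloVe use hundreds of dimensions learned from text.

import numpy as np

# Toy embeddings invented for this example -- not taken from the paper.
embeddings = {
    "flower":     np.array([ 0.9,  0.8,  0.1,  0.0]),
    "pleasant":   np.array([ 0.8,  0.9,  0.0,  0.1]),
    "insect":     np.array([-0.7,  0.1,  0.9,  0.2]),
    "unpleasant": np.array([-0.6,  0.0,  0.8,  0.3]),
}

def cosine_similarity(a, b):
    # 1.0 means the two words point the same way in the language space,
    # 0.0 means unrelated, -1.0 means opposite.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(embeddings["flower"], embeddings["pleasant"]))  # high
print(cosine_similarity(embeddings["insect"], embeddings["pleasant"]))  # low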
The system was trained on a dataset of around 840bn words sourced from text published on the web. According to the paper, the system absorbs implicit human biases along with the language.
The words man or male were more strongly linked with engineering and maths; for woman or female, it was arts, humanities, or the home. Similarly, the system was more likely to tie European American names to words such as happy or gift, and African American names to unpleasant words.
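The paper quantifies such associations with a measure it calls the Word-Embedding Association Test (WEAT), a statistical analogue of the IAT. The sketch below is a rough reading of that idea, not the authors' exact code; the vectors here are random stand-ins, whereas the study used pretrained embeddings of real word lists.

import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    # How much closer the word vector w sits to attribute set A than to B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    # Standardised difference in association between the two target sets,
    # e.g. X = male terms, Y = female terms, A = career words, B = family words.
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc)

# With random vectors the score hovers near zero; the paper reported
# large positive effects when real embeddings were plugged in.
rng = np.random.default_rng(0)
X, Y, A, B = ([rng.standard_normal(50) for _ in range(8)] for _ in range(4))
print(effect_size(X, Y, A, B))

A positive score means the X targets lean toward the A attributes more strongly than the Y targets do, which is exactly the kind of skew described above for gendered words and names.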
However, the team was only able to measure bias associated with single words; extending the analysis to phrases is left for future research.
There is a more optimistic side to the picture, outlined by Oxford researcher Sandra Wachter. Talking to the Guardian, Wachter said she wasn't surprised by the biased results, because the historical data these systems learn from is itself biased.
In fact, bias in an algorithm may be easier to deal with than bias in a human: humans can lie about their beliefs, while such behavior is not expected from AI systems, at least until they are smart enough to do so.
The real challenge will be reducing the level of bias without compromising the systems' learning abilities. Systems could be built that “detect biased decision-making, and then act on it,” Wachter said.