IBM’s New Software Detects Bias And Explains Automated Decisions Taken By AI
IBM has released a software service that detects biased decisions made by AI systems and also explains the factors behind those automated decisions.
The software addresses an important issue with the credibility of AI-based systems: the decisions such systems make are not always fair. For instance, facial recognition software from Microsoft and IBM was infamously less accurate at identifying dark-skinned women than light-skinned women.
Running on IBM Cloud, the software is compatible with widely used machine learning frameworks and AI build environments such as TensorFlow, SparkML, AWS SageMaker, Azure ML, and IBM’s own Watson technology.
According to IBM, the software tracks the decision making of AI systems in real time, which means it catches “potentially unfair outcomes as they occur.”
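IBM has not published the internals of the service, but the kind of real-time check it describes can be illustrated with a simple monitor that tracks favorable-outcome rates per group and raises a flag when the disparity crosses a threshold. The class, field names, and the 0.8 “four-fifths rule” threshold below are illustrative assumptions, not IBM’s actual API:

```python
from collections import defaultdict

class BiasMonitor:
    """Flags potential bias as decisions stream in (illustrative sketch)."""

    def __init__(self, protected_attr, threshold=0.8):
        self.protected_attr = protected_attr   # e.g. "gender" (hypothetical field)
        self.threshold = threshold             # four-fifths rule, a common heuristic
        self.totals = defaultdict(int)         # decisions seen per group
        self.favorable = defaultdict(int)      # favorable decisions per group

    def record(self, decision):
        group = decision[self.protected_attr]
        self.totals[group] += 1
        if decision["outcome"] == "approved":  # illustrative favorable label
            self.favorable[group] += 1

    def disparate_impact(self):
        # Ratio of the lowest group's favorable-outcome rate to the highest's.
        rates = {g: self.favorable[g] / self.totals[g] for g in self.totals}
        return min(rates.values()) / max(rates.values()), rates

monitor = BiasMonitor("gender")
for d in [{"gender": "F", "outcome": "denied"},
          {"gender": "F", "outcome": "approved"},
          {"gender": "M", "outcome": "approved"},
          {"gender": "M", "outcome": "approved"}]:
    monitor.record(d)

ratio, rates = monitor.disparate_impact()
if ratio < monitor.threshold:
    print(f"Potential bias: disparate impact {ratio:.2f}, rates {rates}")
```

On this toy stream, women receive favorable outcomes at half the rate of men, so the monitor flags a disparate impact of 0.50, well below the 0.8 threshold.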
When a bias is detected, the software can also recommend data to add to the model to help mitigate it. The accompanying explanation includes the factors that led to a particular biased decision.
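The article does not detail how these recommendations are generated, but IBM released its AI Fairness 360 (AIF360) toolkit to the open-source community alongside this service, and it includes data-level mitigations of this kind. Below is a minimal sketch using AIF360’s Reweighing preprocessor; the toy data and column names are made up, and note that reweighing rebalances existing training examples rather than literally adding data:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing

# Toy, all-numeric data: "sex" is the protected attribute (1 = privileged),
# "label" is the decision (1 = favorable). Column names are illustrative.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1],
    "score": [0.4, 0.6, 0.7, 0.5, 0.8, 0.9],
    "label": [0, 0, 1, 1, 1, 1],
})

dataset = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["sex"])

# Reweighing assigns instance weights that balance favorable outcomes
# across the privileged and unprivileged groups.
rw = Reweighing(unprivileged_groups=[{"sex": 0}],
                privileged_groups=[{"sex": 1}])
balanced = rw.fit_transform(dataset)
print(balanced.instance_weights)
```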
Besides explaining automated decisions, the software also keeps track of the accuracy, fairness, performance, and lineage of the system, and features visual dashboards that break down how automated decisions were reached.
Given the general lack of transparency in AI systems, the flaw-detection software is a major push toward accountability. IBM is not the first company working in this field; earlier this year, Accenture launched a similar tool for detecting underlying problems in the decision-making algorithms of AI systems.