Faced with the widespread fear that artificial intelligence perpetuates (or even amplifies) real-life biases, IBM has created software that automatically detects bias and explains the decisions an AI makes based on the data it has obtained.
A little over a year ago, Ruchir Puri, chief architect of IBM Watson, said in Las Vegas that “an artificial intelligence is worth only as much as the data behind it”. That was his starting point for discussing the great fear of the big companies working on intelligent algorithms and machine learning: the creation of biases, which skew the proper representation of people of different races or genders and can even perpetuate the stereotypes and bad practices of flesh-and-blood society, but without the reasoning and empathy inherent in humans. “Detecting flaws in the bias of that information is fundamental, but complex when the data are unstructured. Luckily, it is easier to remove the biases from a machine than from a man,” said Puri.
Now, IBM has gone a step further in its fight against bias in artificial intelligence, moving from fears and internal projects that mitigate the problem to a more holistic approach (and, why not, a commercial one too). The result is the launch of a new range of cloud services from Big Blue, among them software capable of automatically detecting bias and providing an explanation of the decisions an AI makes based on the data it has obtained.
The new IBM developments work with models built on a wide variety of machine learning frameworks and custom AI environments, such as Watson, TensorFlow, SparkML, AWS SageMaker and AzureML. In other words, they are not confined to the North American multinational's own ecosystem, but address the phenomenon across the main platforms on the market.
The software service can also be programmed to monitor the specific decision factors taken into consideration for any business workflow, allowing each organization to customize it for its own use. Fully automated, the service describes decision making and detects bias in AI models at runtime, as decisions are made. That is, it captures potentially unfair results as they occur. It also automatically recommends data to add to the model to help reduce any bias that has been detected.
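The article does not describe how such runtime detection works internally, but the general idea can be sketched: keep a sliding window of recent decisions per group and flag when the rate of favorable outcomes for one group falls well below another's (the "four-fifths rule" heuristic used in fairness testing). The `BiasMonitor` class, its method names, and the threshold below are hypothetical illustrations, not IBM's actual implementation.

```python
from collections import deque

class BiasMonitor:
    """Hypothetical sketch of runtime bias monitoring: track model
    decisions per group as they are made and flag disparate outcomes."""

    def __init__(self, window=1000, threshold=0.8):
        # Recent (group, favorable_outcome) pairs; old entries roll off.
        self.window = deque(maxlen=window)
        # Four-fifths rule: flag if one group's favorable rate is
        # below 80% of the reference group's rate.
        self.threshold = threshold

    def record(self, group, favorable):
        """Log one decision at the moment it is made."""
        self.window.append((group, bool(favorable)))

    def disparate_impact(self, protected, reference):
        """Ratio of favorable-outcome rates: protected vs reference group."""
        def rate(g):
            outcomes = [fav for grp, fav in self.window if grp == g]
            return sum(outcomes) / len(outcomes) if outcomes else 0.0
        ref_rate = rate(reference)
        return rate(protected) / ref_rate if ref_rate else 0.0

    def is_biased(self, protected, reference):
        return self.disparate_impact(protected, reference) < self.threshold
```

A monitor like this catches unfairness while the model is serving traffic, rather than only in an offline audit of training data.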
IBM Research will also make available to the open source community a toolkit for detecting and reducing bias in AI, to promote global collaboration on how to deal with bias in artificial intelligence. AI Fairness 360 is a library of innovative algorithms, code and tutorials that gives academics, researchers and data scientists the tools and knowledge to integrate bias detection mechanisms while building and deploying machine learning models.
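To give a flavor of the metrics a toolkit like AI Fairness 360 provides, here is a minimal plain-Python illustration of one classic fairness measure, statistical parity difference: the gap between the favorable-outcome rates of an unprivileged and a privileged group. This is a standalone sketch of the metric itself, not the library's own API; the function name and parameters are our own.

```python
def statistical_parity_difference(labels, groups, unprivileged, privileged,
                                  favorable=1):
    """P(favorable | unprivileged) - P(favorable | privileged).

    0.0 means parity; negative values mean the unprivileged group
    receives the favorable outcome less often.
    """
    def rate(g):
        # Favorable-outcome rate within group g.
        outcomes = [lab for lab, grp in zip(labels, groups) if grp == g]
        return sum(1 for lab in outcomes if lab == favorable) / len(outcomes)

    return rate(unprivileged) - rate(privileged)
```

A data scientist would compute such a metric on a model's predictions during development, then apply one of the toolkit's mitigation algorithms if the gap is too large.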