Friday 21 September 2018

IBM Launches Tool Aimed at Detecting AI Bias


IBM is launching a tool that will analyse how and why algorithms make decisions in real time. The Fairness 360 Kit will also scan for signs of bias and recommend adjustments.


There is increasing concern that algorithms used by both tech giants and other firms are not always fair in their decision-making.

For example, in the past, image recognition systems have failed to identify non-white faces.

However, as algorithms increasingly make automated decisions about a wide variety of issues, such as policing, insurance and what information people see online, the implications of their recommendations become broader.

Often algorithms operate within what is known as a "black box" - meaning their owners can't see how they are making decisions.

The IBM cloud-based software will be open-source, and will work with a variety of commonly used frameworks for building algorithms.

Customers will be able to see, via a visual dashboard, how their algorithms are making decisions and which factors are being used in making the final recommendations.
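To give a sense of what "scanning for signs of bias" can mean in practice, here is a minimal sketch of one widely used fairness metric, disparate impact, which compares the rate of favourable outcomes between groups. This is an illustration only; the function names and data are hypothetical and do not reflect IBM's actual toolkit or API.

```python
# Illustrative sketch (hypothetical code, not IBM's API): "disparate impact"
# compares the favourable-outcome rate of an unprivileged group against a
# privileged one. A ratio near 1.0 suggests parity; values well below 1.0
# are often flagged as potential bias.

def disparate_impact(outcomes, groups, privileged):
    """outcomes: list of 0/1 decisions; groups: group label per decision;
    privileged: the label of the privileged group."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    priv_rate = sum(priv) / len(priv)      # favourable rate, privileged group
    unpriv_rate = sum(unpriv) / len(unpriv)  # favourable rate, other groups
    return unpriv_rate / priv_rate

# Example: six loan decisions across two groups
outcomes = [1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B"]
print(disparate_impact(outcomes, groups, privileged="A"))  # 0.5
```

A dashboard like the one described could surface such a ratio per decision factor, letting a customer see which inputs drive unequal outcomes.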
