Building Trust Into AI
Artificial intelligence and machine learning are becoming foundational technologies, used to inform decisions with real-world consequences. As a result, addressing issues of bias and fairness in these systems and applications is essential. “AI is now being used in many different consequential applications, from natural language interaction to flagging compliance challenges. The issue is in building machine learning models that we trust,” says Kush Varshney, IBM researcher and founding co-director of IBM Science for Social Good.
One of IBM’s core Trust and Transparency Principles is that new technology, including AI, must be transparent and explainable. IBM’s AI Fairness 360 toolkit contains more than 70 fairness metrics and 10 state-of-the-art bias mitigation algorithms, designed to translate algorithmic research from the lab into practice in fields as far-reaching as finance, human capital management, healthcare, and education.
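To make this concrete, below is a minimal sketch of a typical AI Fairness 360 (aif360) workflow: load tabular data into a dataset, compute one of the toolkit’s fairness metrics, then apply one of its pre-processing mitigation algorithms (Reweighing). The DataFrame columns, group definitions, and data values here are hypothetical placeholders, not part of any real application.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical tabular data: `sex` is the protected attribute
# (1 = privileged group), `label` is the binary outcome.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 0, 0, 0, 1, 0],
    "age":   [34, 51, 29, 42, 37, 23, 45, 31],
    "label": [1, 1, 0, 0, 0, 1, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# One of the 70+ metrics: statistical parity difference, the rate of
# favorable outcomes for the unprivileged group minus that of the
# privileged group (0 indicates parity).
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Before mitigation:", metric.statistical_parity_difference())

# One of the 10 mitigation algorithms: Reweighing assigns instance
# weights so outcomes become independent of the protected attribute.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("After mitigation:", metric_transf.statistical_parity_difference())
```

Reweighing is a pre-processing approach, meaning it adjusts the training data itself rather than the model, so any downstream classifier that honors instance weights can benefit without modification.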
Lack of trust and transparency in machine learning and AI models can impede their ability to deliver significant, measurable benefits for enterprises at scale. The AI Fairness 360 toolkit and other IBM Trusted AI efforts aim to bring more fairness and accountability into the equation, enabling businesses to tap into historic levels of opportunity while remaining aligned with our core human values.