October 30th, 2020

The Importance of Explainable AI

Category: Community, Machine Learning Interpretability, Responsible AI

This blog post was written by Nick Patience, Co-Founder & Research Director, AI Applications & Platforms at 451 Research, a part of S&P Global Market Intelligence

From its inception in the mid-twentieth century, AI technology has come a long way. What was once purely the topic of science fiction and academic discussion is now a widespread technology being adopted by enterprises across the world. AI is versatile, with applications ranging from drug discovery and patient data analysis to fraud detection, customer engagement, and workflow optimization. The technology’s scope is indisputable, and companies looking to stay ahead are increasingly adopting it into their business operations.

That said, AI systems are notorious for their ‘black-box’ nature, leaving many users without visibility into how or why decisions were made. This is where explainable AI comes into play. Explainable AI makes AI decisions understandable and interpretable by humans. According to 451 Research’s Voice of the Enterprise: AI and Machine Learning Use Cases 2020, 92% of enterprises believe that explainable AI is important; however, fewer than half of them have built or purchased explainability tools for their AI systems. This leaves them open to significant risk; without a human in the loop during development, AI models can generate biased outcomes that may lead to both ethical and regulatory compliance issues later.

So why haven’t more companies incorporated explainability tools into their AI strategy to mitigate this risk? One reason may simply be a lack of available tools, features, and stand-alone products. The industry has been slow to address this critical issue, in part due to the long-standing belief among many data scientists that explainability must be traded for accuracy in AI models. This, however, is a misconception; visibility into the AI decision-making process allows users to screen their data and algorithms for bias and deviation, producing accurate and robust outcomes that can easily be explained to customers and regulators.
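To make this concrete, here is a minimal sketch of one common, model-agnostic explainability technique: permutation importance, which scores each feature by how much model performance drops when that feature's values are shuffled. The dataset and model below are illustrative choices, not anything specific to the report; this assumes scikit-learn is installed.

```python
# Illustrative sketch: explaining a black-box model with permutation
# importance (scikit-learn). Dataset and model choices are arbitrary.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a small tabular dataset as a DataFrame so features keep their names.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque ensemble model -- the kind users often can't inspect directly.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in accuracy;
# larger drops mean the model leans more heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=5, random_state=0)

# Report the five most influential features -- a simple, human-readable
# explanation of what drives the model's decisions.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Ranked importances like these are exactly the kind of artifact that can be reviewed for unwanted dependencies (for example, a model leaning on a proxy for a protected attribute) before the model reaches production.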

Many AI implementations – particularly in the healthcare and financial sectors – deal with personal data, and customers need to know that this data is being handled with the utmost care and sensitivity. In Europe, the General Data Protection Regulation (GDPR) requires companies to provide customers with an explanation of decisions made by AI, and similar regulations exist in countries across the globe. With explainable AI systems, companies can show customers exactly where data is coming from and how it’s being used, meeting these regulatory requirements and building trust and confidence over time.

As companies map out their AI strategies, explainability should be a central consideration to safeguard against unnecessary risk while maximizing business value.

For more information on explainable AI, check out our recent report ‘Driving Value with Explainable AI’.
