October 30th, 2020

The Importance of Explainable AI

Category: Community, Machine Learning Interpretability, Responsible AI

This blog post was written by Nick Patience, Co-Founder & Research Director, AI Applications & Platforms at 451 Research, a part of S&P Global Market Intelligence.

From its inception in the mid-twentieth century, AI technology has come a long way. What was once purely the topic of science fiction and academic discussion is now a widespread technology being adopted by enterprises across the world. AI is versatile, with applications ranging from drug discovery and patient data analysis to fraud detection, customer engagement, and workflow optimization. The technology’s scope is indisputable, and companies looking to stay ahead are increasingly adopting it into their business operations.

That said, AI systems are notorious for their 'black-box' nature, leaving many users without visibility into how or why decisions have been made. This is where explainable AI comes into play. Explainable AI makes AI decisions both understandable and interpretable by humans. According to 451 Research's Voice of the Enterprise: AI and Machine Learning Use Cases 2020, 92% of enterprises believe that explainable AI is important; however, less than half of them have built or purchased explainability tools for their AI systems. This leaves them open to significant risk: without a human in the loop during the development process, AI models can generate biased outcomes that may lead to both ethical and regulatory compliance issues later.

So why haven't more companies incorporated explainability tools into their AI strategy to mitigate this risk? One reason may simply be a lack of available tools, features, and stand-alone products. The industry has been slow to respond to this critical issue, in part due to the long-standing belief held by many data scientists that explainability must be traded for accuracy in AI models. This, however, is a misconception: visibility into the AI decisioning process allows users to screen their data and algorithms for bias and deviation, producing accurate and robust outcomes that can easily be explained to customers and regulators.
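To make the idea of "screening algorithms for bias and deviation" concrete, here is a minimal, hypothetical sketch of one common explainability technique: permutation importance, which reveals which features a model actually relies on. It uses scikit-learn; the dataset and model are illustrative stand-ins, not part of any specific product mentioned in this post.

```python
# Minimal sketch of permutation importance, a model-agnostic way to see
# which features drive a model's predictions. Dataset and model choices
# here are illustrative assumptions, not a reference implementation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in held-out accuracy:
# features whose shuffling hurts most are the ones the model depends on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean),
                 key=lambda t: -t[1])
for name, score in ranking[:5]:
    print(f"{name}: {score:.3f}")
```

A ranking like this gives reviewers a starting point for questions such as whether a sensitive or proxy feature is carrying more weight than it should.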

Many AI implementations – particularly in the healthcare and financial sectors – deal with personal data, and customers need to know that this data is being handled with the utmost care and sensitivity. In Europe, the General Data Protection Regulation (GDPR) requires companies to provide customers with an explanation of decisions made by AI, and similar regulations exist in countries across the globe. With explainable AI systems, companies can show customers exactly where data is coming from and how it’s being used, meeting these regulatory requirements and building trust and confidence over time.
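For the kind of per-decision explanation a customer or regulator might be shown, here is a hypothetical sketch using a linear model, where each feature's contribution to a single prediction is simply its coefficient times the feature value. The feature names and data are made up for illustration.

```python
# Hypothetical sketch: explaining one individual decision. For a logistic
# regression, the contribution of each feature to a single prediction's
# logit is coefficient * feature value. Feature names and data are
# illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "years_at_address"]
X = rng.normal(size=(200, 3))
y = (X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = X[0]
# Per-feature push toward approval (+) or denial (-) for this applicant.
contributions = model.coef_[0] * applicant
for name, c in zip(features, contributions):
    print(f"{name}: {c:+.2f}")
```

The contributions plus the intercept reconstruct the model's decision score exactly, which is what makes linear models a common baseline for regulator-facing explanations; non-linear models need attribution methods (such as the permutation approach above, or Shapley-value techniques) to produce comparable breakdowns.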

As companies map out their AI strategies, explainability should be a central consideration to safeguard against unnecessary risk while maximizing business value.

For more information on explainable AI, check out our recent report, 'Driving Value with Explainable AI'.
