October 30th, 2020

The Importance of Explainable AI

Category: Community, Machine Learning Interpretability, Responsible AI

This blog post was written by Nick Patience, Co-Founder & Research Director, AI Applications & Platforms at 451 Research, a part of S&P Global Market Intelligence

From its inception in the mid-twentieth century, AI technology has come a long way. What was once purely the topic of science fiction and academic discussion is now a widespread technology being adopted by enterprises across the world. AI is versatile, with applications ranging from drug discovery and patient data analysis to fraud detection, customer engagement, and workflow optimization. The technology’s scope is indisputable, and companies looking to stay ahead are increasingly adopting it into their business operations.

That being said, AI systems are notorious for their ‘black-box’ nature, leaving many users without visibility into how or why decisions have been made. This is where explainable AI comes into play. Explainable AI makes AI decisions both understandable and interpretable by humans. According to 451 Research’s Voice of the Enterprise: AI and Machine Learning Use Cases 2020, 92% of enterprises believe that explainable AI is important; however, fewer than half of them have built or purchased explainability tools for their AI systems. This leaves them open to significant risk; without a human in the loop during development, AI models can generate biased outcomes that may lead to both ethical and regulatory compliance issues later.

So why haven’t more companies incorporated explainability tools into their AI strategy to mitigate this risk? One reason may simply be a lack of available tools, features, and stand-alone products. The industry has been slow to address this critical issue, in part due to the long-standing belief held by many data scientists that explainability must be traded for accuracy in AI models. This, however, is a misconception; visibility into the AI decisioning process allows users to screen their data and algorithms for bias and deviation, producing accurate and robust outcomes that can easily be explained to customers and regulators.
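Explainability tooling of this kind commonly works by attributing a model’s predictions to its input features, so that practitioners can spot when a model leans on a suspect variable. As a minimal, hypothetical sketch (not tied to any specific product discussed here), scikit-learn’s permutation importance shows which features most influence a trained classifier on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a real decisioning dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy:
# large drops mark the features the model actually relies on.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)

for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {imp:.3f}")
```

If a feature that should be irrelevant to the decision (for example, a proxy for a protected attribute) ranks highly here, that is a signal to revisit the data or the model before deployment.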

Many AI implementations – particularly in the healthcare and financial sectors – deal with personal data, and customers need to know that this data is being handled with the utmost care and sensitivity. In Europe, the General Data Protection Regulation (GDPR) requires companies to provide customers with an explanation of decisions made by AI, and similar regulations exist in countries across the globe. With explainable AI systems, companies can show customers exactly where data is coming from and how it’s being used, meeting these regulatory requirements and building trust and confidence over time.

As companies map out their AI strategies, explainability should be a central consideration to safeguard against unnecessary risk while maximizing business value.

For more information on explainable AI, check out our recent report ‘Driving Value with Explainable AI’.

