
Responsible Machine Learning

Actionable Strategies for Mitigating Risks & Driving Adoption

Overview

What is Responsible AI?

Explainable AI, the pursuit of using technology and statistical methods to explain machine learning models, quickly became part of a much larger question. Applying AI well is not just a statistical problem but also a people and process problem, and these elements together form Responsible AI. Achieving genuine transparency and understanding of AI requires a full view of models and their impact. Six categories comprise the most critical themes of Responsible AI: Explainable AI, Interpretable Machine Learning, Ethical AI, Secure AI, Human-Centered AI, and Compliance.
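To make the Explainable AI idea concrete, here is a minimal, illustrative sketch (not H2O.ai product code) of permutation importance, one simple post-hoc explanation technique: shuffle one feature at a time and measure how much model accuracy drops. A large drop means the model relies heavily on that feature. The toy model and data below are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_importance(predict, X, y, metric, n_repeats=5):
    """Average drop in the metric when each feature is shuffled."""
    base = metric(y, predict(X))
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy feature j's signal
            drops.append(base - metric(y, predict(Xp)))
        importances.append(float(np.mean(drops)))
    return importances

# Toy setup: the label depends only on feature 0.
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
predict = lambda X: (X[:, 0] > 0).astype(int)
accuracy = lambda y, p: float(np.mean(y == p))

imp = permutation_importance(predict, X, y, accuracy)
# Feature 0 shows a large accuracy drop; features 1 and 2 stay near zero.
```

Techniques like this explain a model after it has been built, which is exactly the distinction drawn below between Explainable AI and Interpretable Machine Learning.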

[Figure: Venn diagram of the six Responsible AI themes]

The Importance of Responsibility

The last few years have brought to light many cases of companies deploying AI without adequately considering or analyzing the risks of their models. To mitigate the risks of models overfitting, perpetuating historical human bias, or failing to adapt to data drift, among others, AI deployers should consider our core themes of Responsible AI:

  • Explainable AI (XAI): The ability to explain a model after it has been developed
  • Interpretable Machine Learning: Transparent model architectures and techniques that make ML models more intuitive and understandable
  • Ethical AI: Sociological fairness in machine learning predictions (i.e., whether one category of person is being weighted unequally)
  • Secure AI: Debugging and deploying ML models with countermeasures against insider and cyber threats similar to those used for traditional software
  • Human-Centered AI: User interactions with AI and ML systems
  • Compliance: Making sure your AI systems meet the relevant regulatory requirements, whether that’s GDPR, CCPA, FCRA, ECOA, or other regulations
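The Ethical AI theme above can be made concrete with a simple check. One common (simplified) test for unequal treatment across groups is the adverse impact ratio, the ratio of positive-outcome rates between a protected group and a reference group, often compared against a 0.8 threshold (the "four-fifths rule"). The sketch below is purely illustrative; the numbers are invented, and real fairness testing involves far more than this single metric.

```python
def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of positive-outcome rates between two demographic groups."""
    rate_a = selected_a / total_a  # protected group's approval rate
    rate_b = selected_b / total_b  # reference group's approval rate
    return rate_a / rate_b

# Hypothetical model: approves 45 of 100 applicants in group A,
# and 60 of 100 applicants in group B.
air = adverse_impact_ratio(45, 100, 60, 100)
print(round(air, 2))  # 0.75 -- below the common 0.8 threshold, flag for review
```

A ratio this far below 1.0 would prompt a closer look at whether one category of person is being weighted unequally by the model.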

We’ve written a primer blog on key terms and ideas in Responsible AI to help define a list of critical industry terminology as we view them at H2O.ai with respect to our research and products.

An Introduction to Machine Learning Interpretability, Second Edition

Download this book to learn to make the most of recent and disruptive breakthroughs in debugging, explainability, fairness, and interpretability techniques for machine learning. In this report you’ll find:

  • Definitions and examples
  • Social and Commercial Motivations for Machine Learning
  • A Machine Learning Interpretability Taxonomy for Applied Practitioners
  • Common Interpretability Techniques
  • Limitations and Precautions
  • Testing Interpretability and Fairness
  • Machine Learning Interpretability in Action

Our Most Current Thoughts & Resources on Responsible AI