A Brief Overview of AI Governance for Responsible Machine Learning Systems
November 30, 2022 · AI Governance, Machine Learning, Responsible AI

Our paper “A Brief Overview of AI Governance for Responsible Machine Learning Systems” was recently accepted to the Trustworthy and Socially Responsible Machine Learning (TSRML) workshop at NeurIPS 2022 (New Orleans). In this paper, we discuss the framework and value of AI Governance for organizations of all sizes, across all industries and domains. Our paper […]
Using AI to unearth the unconscious bias in job descriptions
January 19, 2021 · Responsible AI, Wave

“Diversity is the collective strength of any successful organization.” Unconscious bias is a phenomenon that affects us all in one way or another. It is defined as prejudice or an unsupported judgment in favor of or against one thing, person, or group as compared to another, in a way […]
H2O Driverless AI 1.9.1: Continuing to Push the Boundaries for Responsible AI
January 18, 2021 · H2O Driverless AI, Responsible AI

At H2O.ai, we have been busy. Not only do we have our most significant new software launch coming up (details here), but we are also thrilled to announce the latest release of our flagship enterprise platform, H2O Driverless AI 1.9.1. With that said, let’s jump into what is new: faster Python scoring pipelines with embedded […]
The Importance of Explainable AI
October 30, 2020 · Community, Machine Learning Interpretability, Responsible AI

This blog post was written by Nick Patience, Co-Founder & Research Director, AI Applications & Platforms at 451 Research, a part of S&P Global Market Intelligence. From its inception in the mid-twentieth century, AI technology has come a long way. What was once purely the topic of science fiction and academic discussion is now a […]
Building an AI Aware Organization
October 26, 2020 · Business, Explainable AI, Machine Learning, Machine Learning Interpretability, Responsible AI

Responsible AI is paramount when we think about models that impact humans, either directly or indirectly. All models that make decisions about people, whether concerning creditworthiness, insurance claims, HR functions, or even self-driving cars, have a huge impact on humans. We recently hosted James Orton, Parul Pandey, and Sudalai Rajkumar for a […]
The Challenges and Benefits of AutoML
October 14, 2020 · AutoML, H2O Driverless AI, Machine Learning, Responsible AI

Machine learning and artificial intelligence have revolutionized how organizations utilize their data. AutoML, or automatic machine learning, automates and improves the end-to-end data science process. This includes everything from cleaning the data and engineering features to tuning, explaining, and deploying the model into production. AutoML accelerates your AI initiatives and can help make […]
3 Ways to Ensure Responsible AI Tools are Effective
October 7, 2020 · Explainable AI, H2O Driverless AI, Machine Learning, Machine Learning Interpretability, Responsible AI

Since we began our journey making tools for explainable AI (XAI) in late 2016, we’ve learned many lessons, often the hard way. Through headlines, we’ve seen others grapple with the difficulties of deploying AI systems too, whether it’s a healthcare resource allocation system that likely discriminated against millions of Black people, data privacy violations […]
5 Key Considerations for Machine Learning in Fair Lending
September 21, 2020 · Financial Services, Machine Learning, Machine Learning Interpretability, Responsible AI, Shapley

This month, we hosted a virtual panel with industry leaders and explainable AI experts from Discover, BLDS, and H2O.ai to discuss the considerations in using machine learning to expand access to credit fairly and transparently, as well as the challenges of governance and regulatory compliance. The event was moderated by Sri Ambati, Founder and CEO at H2O.ai. […]
From GLM to GBM – Part 2
July 9, 2020 · Data Science, Explainable AI, GBM, GLM, Machine Learning Interpretability, Responsible AI, Shapley

How an Economics Nobel Prize could revolutionize insurance and lending. Part 2: The Business Value of a Better Model. In Part 1, we made the case for improving revenue and managing regulatory requirements with machine learning (ML). We made the first part of the argument by showing how gradient boosting machines (GBM), a type of ML, can […]
From GLM to GBM – Part 1
June 9, 2020 · Data Science, Explainable AI, GBM, GLM, Machine Learning Interpretability, Responsible AI, Shapley

How an Economics Nobel Prize could revolutionize insurance and lending. Part 1: A New Solution to an Old Problem. Insurance and credit lending are highly regulated industries that have relied heavily on mathematical modeling for decades. In order to provide explainable results for their models, data scientists and statisticians in both industries relied heavily […]