
H2O.ai Blog

Explaining models built in H2O-3 — Part 1
by Parul Pandey | December 22, 2022 Explainable AI, H2O-3, Machine Learning Interpretability, Python

Machine Learning explainability refers to understanding and interpreting the decisions and predictions made by a machine learning model. Explainability is crucial for ensuring the trustworthiness and transparency of machine learning models, particularly in high-stakes situations where the consequences of incorrect predictions can be signi...

Read more
AI-Driven Predictive Maintenance with H2O AI Cloud
by Parul Pandey, Asghar Ghorbani | August 02, 2021 AutoML, H2O AI Cloud, Machine Learning Interpretability, Manufacturing

According to a study conducted by the Wall Street Journal, unplanned downtime costs industrial manufacturers an estimated $50 billion annually. Forty-two percent of this unplanned downtime can be attributed to equipment failure alone. These downtimes cause unnecessary delays and, as a result, affect the business. A better and superior al...

Read more
Unwrap Deep Neural Networks Using H2O Wave and Aletheia for Interpretability and Diagnostics

The use cases and the impact of machine learning can be observed clearly in almost every industry and in applications such as drug discovery and patient data analysis, fraud detection, customer engagement, and workflow optimization. The impact of leveraging AI is clear and understood by the business; however, AI systems are also seen as b...

Read more
Shapley summary plots: the latest addition to H2O.ai’s Explainability arsenal
by Parul Pandey | April 21, 2021 AutoML, H2O Driverless AI, Machine Learning Interpretability

It is impossible to deploy successful AI models without analyzing the risks involved. Model overfitting, perpetuating historical human bias, and data drift are some of the concerns that need to be addressed before putting models into production. At H2O.ai, explainability is an integral part of our ML ...

Read more
Safer Sailing with AI
by Ana Visneski, Jo-Fai Chow, Kim Montgomery | April 01, 2021 Customers, Data Science, H2O Hydrogen Torch, H2O-3, Machine Learning Interpretability

In the last week, the world watched as responders tried to free a cargo ship that had run aground in the Suez Canal. This incident blocked traffic through a waterway that is critical for commerce. While the location was an unusual one, ship collisions, allisions, and groundings are not uncommon. With all the technology that mariners hav...

Read more
The Importance of Explainable AI
by H2O.ai Team | October 30, 2020 Community, Machine Learning Interpretability, Responsible AI

This blog post was written by Nick Patience, Co-Founder & Research Director, AI Applications & Platforms at 451 Research, a part of S&P Global Market Intelligence. From its inception in the mid-twentieth century, AI technology has come a long way. What was once purely the topic of science fiction and academic discussion is now...

Read more
Building an AI Aware Organization

Responsible AI is paramount when we think about models that impact humans, either directly or indirectly. All models that make decisions about people, whether about creditworthiness, insurance claims, HR functions, or even self-driving cars, have a huge impact on humans. We recently hosted James Orton, Parul Pandey, and Sudala...

Read more
Making AI a Reality
by Ellen Friedman | October 16, 2020 Business, Machine Learning, Machine Learning Interpretability

This blog post focuses on the content discussed in more depth in the free ebook “Practical Advice for Making AI Part of Your Company’s Future”. Do you want to make AI a part of your company? You can’t just mandate AI. But you can lead by example. All too often, especially in companies new to AI and machine learning, team leaders may be ta...

Read more
3 Ways to Ensure Responsible AI Tools are Effective

Since we began our journey making tools for explainable AI (XAI) in late 2016, we’ve learned many lessons, often the hard way. Through headlines, we’ve seen others grapple with the difficulties of deploying AI systems too. Whether it’s a healthcare resource allocation system that likely discriminated against millions of black peop...

Read more
5 Key Considerations for Machine Learning in Fair Lending

This month, we hosted a virtual panel with industry leaders and explainable AI experts from Discover, BLDS, and H2O.ai to discuss the considerations in using machine learning to expand access to credit fairly and transparently and the challenges of governance and regulatory compliance. The event was moderated by Sri Ambati, Founder and CE...

Read more
In a World Where… AI is an Everyday Part of Business
by Ellen Friedman | July 22, 2020 Company, H2O Driverless AI, Machine Learning Interpretability

Imagine a dramatically deep voice-over saying “In a world where…” This phrase from old movie trailers conjures up all sorts of futuristic settings, from an alien “world where the sun burns cold” and a Mad Max “world without gas” to a cyborg “world of the not too distant future”. Often the epic science fiction or futuristic stories also have a...

Read more
From GLM to GBM – Part 2

How an Economics Nobel Prize could revolutionize insurance and lending. Part 2: The Business Value of a Better Model. Introduction: In Part 1, we made the case for better revenue and managing regulatory requirements with machine learning (ML). We made the first part of the argument by showing how gradient boosting machines (GBM), a type of ML, can mat...

Read more
From GLM to GBM - Part 1

How an Economics Nobel Prize could revolutionize insurance and lending. Part 1: A New Solution to an Old Problem. Introduction: Insurance and credit lending are highly regulated industries that have relied heavily on mathematical modeling for decades. In order to provide explainable results for their models, data scientists and statisticians i...

Read more
Summary of a Responsible Machine Learning Workflow

A paper resulting from a collaboration between H2O.ai and BLDS, LLC was recently published in a special “Machine Learning with Python” issue of the journal Information (https://www.mdpi.com/2078-2489/11/3/137). In “A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing...

Read more
Insights From the New 2020 Gartner Magic Quadrant For Cloud AI Developer Services

We are excited to be named a Visionary in the new Gartner Magic Quadrant for Cloud AI Developer Services (Feb 2020), and have been recognized for both our completeness of vision and ability to execute in the emerging market for cloud-hosted artificial intelligence (AI) services for application developers. This is the second Gartner MQ tha...

Read more
Interview with Patrick Hall | Machine Learning, H2O.ai & Machine Learning Interpretability

In this episode of Chai Time Data Science, Sanyam Bhutani interviews Patrick Hall, Sr. Director of Product at H2O.ai. Patrick has a background in Math and has completed an MS course in Analytics. In this interview, they talk all about Patrick’s journey into ML, ML interpretability, and his journey at H2O.ai, how his work has ev...

Read more
Novel Ways To Use Driverless AI
by Thomas Ott | November 14, 2019 H2O Driverless AI, Machine Learning Interpretability

I am biased when I write that Driverless AI is amazing, but what’s more amazing is how I see customers using it. As a Sales Engineer, my job has been to help our customers and prospects use our flagship product. In return, they give us valuable feedback and talk about how they used it. Feedback is gold to us. Driverless AI has evolved in...

Read more
Takeaways from the World’s largest Kaggle Grandmaster Panel
by Sanyam Bhutani | October 31, 2019 Community, Data Science, Machine Learning Interpretability, Makers

Disclaimer: We were made aware by Kaggle of adversarial actions by one of the members of this panel. This panelist is no longer a Kaggle Grandmaster and no longer affiliated with H2O.ai as of January 10th, 2020. Personally, I’m a firm believer and fan of Kaggle and definitely look at it as the home of Data Science. ...

Read more
A Full-Time ML Role, 1 Million Blog Views, 10k Podcast Downloads: A Community Taught ML Engineer
by Sanyam Bhutani | October 17, 2019 Data Science, Machine Learning Interpretability, Makers

Content originally posted in HackerNoon and Towards Data Science. The 15th of October, 2019 marks a special milestone, actually quite a few milestones. So I considered sharing it in the form of a blog post, on a publication that has been home to all of my posts. The online community has been too kind to me, and these blog posts have been a method ...

Read more
New Innovations in Driverless AI

What’s new in Driverless AI: We’re super excited to announce the latest release of H2O Driverless AI. This is a major release with a ton of new features and functionality. Let’s quickly dig into all of that: Make Your Own AI with Recipes for Every Use Case: In the last year, Driverless AI introduced time-series and NLP recipes to meet the...

Read more
Mitigating Bias in AI/ML Models with Disparate Impact Analysis
by Karthik Guruswamy | August 02, 2019 AutoML, H2O Driverless AI, Machine Learning Interpretability

Everyone understands that the biggest benefits of using AI/ML models are better automation of day-to-day business decisions, personalized customer service, enhanced user experience, waste elimination, better ROI, etc. The common question that comes up often, though, is: How can we be sure that the AI/ML decisions are free from bias/discrimina...

Read more
Toward AutoML for Regulated Industry with H2O Driverless AI

Predictive models in financial services must comply with a complex regime of regulations including the Equal Credit Opportunity Act (ECOA), the Fair Credit Reporting Act (FCRA), and the Federal Reserve’s S.R. 11-7 Guidance on Model Risk Management. Among many other requirements, these and other applicable regulations stipulate predictive ...

Read more
Underwrite.ai Transforms Credit Risk Decision-Making Using AI

Credit decisions have been made with traditional techniques for decades. The challenge with traditional credit underwriting is that it doesn’t take into account all of the various aspects or features of an individual’s credit ability. Underwrite.ai, a new credit startup, saw this as an opportunity to apply machine learning and AI to impro...

Read more
Can Your Machine Learning Model Be Hacked?!

I recently published a longer piece on security vulnerabilities and potential defenses for machine learning models. Here’s a synopsis. Today it seems like there are about five major varieties of attacks against machine learning (ML) models, along with some general concerns and solutions of which to be aware. I’ll address them one-by-o...

Read more
H2O World Explainable Machine Learning Discussions Recap

Earlier this year, in the lead up to and during H2O World, I was lucky enough to moderate discussions around applications of explainable machine learning (ML) with industry-leading practitioners and thinkers. This post contains links to these discussions, written answers and pertinent resources for some of the most common questions asked ...

Read more
How to explain a model with H2O Driverless AI

The ability to explain and trust the outcome of an AI-driven business decision is now a crucial aspect of the data science journey. There are many tools in the marketplace that claim to provide transparency and interpretability around machine learning models, but how does one actually explain a model? H2O Driverless AI provides robust inte...

Read more
What is Your AI Thinking? Part 3

In the past two posts, we’ve learned a little about interpretable machine learning in general. In this post, we will focus on how to accomplish interpretable machine learning using H2O Driverless AI. To review, the past two posts discussed: exploratory data analysis (EDA), accurate and interpretable models, global explanations, local...

Read more
Key Takeaways from the Gartner Magic Quadrant For Data Science & Machine Learning
by H2O.ai Team | January 30, 2019 Gartner, H2O-3, Machine Learning, Machine Learning Interpretability

The Gartner Magic Quadrant for Data Science and Machine Learning Platforms (Jan 2019) is out, and H2O.ai has been named a Visionary. The Gartner MQ evaluates platforms that enable expert data scientists, citizen data scientists, and application developers to create, deploy, and manage their own advanced analytic models. H2O.ai Key Highlights...

Read more
What is Your AI Thinking? Part 2

Explaining AI to the Business Person. Welcome to part 2 of our blog series: What is Your AI Thinking? We will explore some of the most promising testing methods for enhancing trust in AI and machine learning models and systems. We will also cover the best practice of model documentation from a business and regulatory standpoint. More Techniq...

Read more
What is Your AI Thinking? Part 1

Explaining AI to the Business Person. Explainable AI is in the news, and for good reason. Financial services companies have cited the ability to explain AI-based decisions as one of the critical roadblocks to further adoption of AI for their industry. Moreover, interpretability, fairness, and transparency of data-driven decision support sy...

Read more
