From GLM to GBM – Part 2
by Patrick Moran July 9, 2020 Data Science Explainable AI GBM GLM Machine Learning Interpretability Responsible AI Shapley

How an Economics Nobel Prize could revolutionize insurance and lending Part 2: The Business Value of a Better Model Introduction In Part 1, we made the case for increasing revenue and managing regulatory requirements with machine learning (ML). We made the first part of the argument by showing how gradient boosting machines (GBM), a type of ML, can […]

Read More
From GLM to GBM – Part 1
by Patrick Moran June 9, 2020 Data Science Explainable AI GBM GLM Machine Learning Interpretability Responsible AI Shapley

How an Economics Nobel Prize could revolutionize insurance and lending Part 1: A New Solution to an Old Problem Introduction Insurance and credit lending are highly regulated industries that have relied on mathematical modeling for decades. To provide explainable results for their models, data scientists and statisticians in both industries relied heavily […]

Read More
Modelling Currently Infected Cases of COVID-19 Using H2O Driverless AI
by Erika Kamholz March 30, 2020 AI4Good Explainable AI GLM H2O Driverless AI Healthcare Machine Learning Machine Learning Interpretability Responsible AI Technical Time Series

In the wake of the COVID-19 pandemic, H2O.ai organized a panel discussion covering AI in healthcare and some best practices to put in place in order to achieve better outcomes. The attendees had many questions that we did not have time to cover thoroughly during that 1-hour […]

Read More
Summary of a Responsible Machine Learning Workflow
by Patrick Moran March 20, 2020 Data Science Deep Learning Machine Learning Machine Learning Interpretability Neural Networks Python Responsible AI

A paper resulting from a collaboration between H2O.ai and BLDS, LLC was recently published in a special “Machine Learning with Python” issue of the journal Information (https://www.mdpi.com/2078-2489/11/3/137). In “A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing,” coauthors Navdeep Gill, Patrick Hall, Kim Montgomery, and Nicholas Schmidt compare model accuracy […]

Read More
Insights From the New 2020 Gartner Magic Quadrant For Cloud AI Developer Services
by Erika Kamholz February 26, 2020 AutoML Cloud Explainable AI Gartner H2O H2O Driverless AI Machine Learning Machine Learning Interpretability NLP

We are excited to be named a Visionary in the new Gartner Magic Quadrant for Cloud AI Developer Services (February 2020), recognized for both our completeness of vision and our ability to execute in the emerging market for cloud-hosted artificial intelligence (AI) services for application developers. This is the second Gartner MQ that […]

Read More
Interview with Patrick Hall | Machine Learning, H2O.ai & Machine Learning Interpretability
by Erika Kamholz February 20, 2020 Data Science Explainable AI H2O Driverless AI Machine Learning Interpretability Makers

Audio Link: In this episode of Chai Time Data Science, Sanyam Bhutani interviews Patrick Hall, Sr. Director of Product at H2O.ai. Patrick has a background in math and holds an MS in Analytics. In this interview they discuss Patrick’s journey into ML and ML interpretability, his work at H2O.ai, how his […]

Read More
Novel Ways To Use Driverless AI
by Bruna Smith November 14, 2019 H2O Driverless AI Machine Learning Interpretability

I am biased when I write that Driverless AI is amazing, but what’s more amazing is how I see customers using it. As a Sales Engineer, my job has been to help our customers and prospects use our flagship product. In return, they give us valuable feedback and tell us how they used it. Feedback […]

Read More
Useful Machine Learning Sessions from the H2O World New York
by Bruna Smith November 13, 2019 H2O World Machine Learning Interpretability Makers

Conferences not only help us learn new skills but also enable us to build brand-new relationships and networks along the way. H2O World is one such interactive community event featuring advancements in AI, machine learning, and explainable AI. It is a platform where people not only get to connect with the fantastic community but […]

Read More
Takeaways from the World’s largest Kaggle Grandmaster Panel
by Bruna Smith October 31, 2019 Community Data Science Machine Learning Interpretability Makers

Disclaimer: We were made aware by Kaggle of adversarial actions by one of the members of this panel. This panelist is no longer a Kaggle Grandmaster and no longer affiliated with H2O.ai as of January 10th, 2020. Personally, I’m a firm believer and fan of Kaggle and definitely look at it as the home of […]

Read More
A Full-Time ML Role, 1 Million Blog Views, 10k Podcast Downloads: A Community Taught ML Engineer
by Bruna Smith October 17, 2019 Data Science Machine Learning Interpretability Makers Personal

Content originally posted in HackerNoon and Towards Data Science. The 15th of October, 2019 marks a special milestone, actually quite a few milestones. So I considered sharing it in the form of a blog post, on a publication that has been home to all of my posts 🙂 The online community has been too kind to me […]

Read More