From GLM to GBM – Part 2
July 9, 2020 | Data Science, Explainable AI, GBM, GLM, Machine Learning Interpretability, Responsible AI, Shapley
How an Economics Nobel Prize could revolutionize insurance and lending. Part 2: The Business Value of a Better Model. In Part 1, we proposed using machine learning (ML) to improve revenue and manage regulatory requirements. We made the first part of the argument by showing how gradient boosting machines (GBMs), a type of ML, can […]
From GLM to GBM – Part 1
June 9, 2020 | Data Science, Explainable AI, GBM, GLM, Machine Learning Interpretability, Responsible AI, Shapley
How an Economics Nobel Prize could revolutionize insurance and lending. Part 1: A New Solution to an Old Problem. Insurance and credit lending are highly regulated industries that have relied heavily on mathematical modeling for decades. In order to provide explainable results for their models, data scientists and statisticians in both industries relied heavily […]
Modelling Currently Infected Cases of COVID-19 Using H2O Driverless AI
March 30, 2020 | AI4Good, Explainable AI, GLM, H2O Driverless AI, Healthcare, Machine Learning, Machine Learning Interpretability, Responsible AI, Technical, Time Series
In the wake of the COVID-19 pandemic, H2O.ai organized a panel discussion covering AI in healthcare and some best practices to put in place in order to achieve better outcomes. The attendees had many questions that we did not have time to cover thoroughly over the course of that 1-hour […]
Summary of a Responsible Machine Learning Workflow
March 20, 2020 | Data Science, Deep Learning, Machine Learning, Machine Learning Interpretability, Neural Networks, Python, Responsible AI
A paper resulting from a collaboration between H2O.ai and BLDS, LLC was recently published in a special “Machine Learning with Python” issue of the journal Information (https://www.mdpi.com/2078-2489/11/3/137). In “A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing,” coauthors Navdeep Gill, Patrick Hall, Kim Montgomery, and Nicholas Schmidt compare model accuracy […]
Insights From the New 2020 Gartner Magic Quadrant For Cloud AI Developer Services
February 26, 2020 | AutoML, Cloud, Explainable AI, Gartner, H2O, H2O Driverless AI, Machine Learning, Machine Learning Interpretability, NLP
We are excited to be named a Visionary in the new Gartner Magic Quadrant for Cloud AI Developer Services (Feb 2020), and have been recognized for both our completeness of vision and ability to execute in the emerging market for cloud-hosted artificial intelligence (AI) services for application developers. This is the second Gartner MQ that […]
Interview with Patrick Hall | Machine Learning, H2O.ai & Machine Learning Interpretability
February 20, 2020 | Data Science, Explainable AI, H2O Driverless AI, Machine Learning Interpretability, Makers
Audio Link: In this episode of Chai Time Data Science, Sanyam Bhutani interviews Patrick Hall, Sr. Director of Product at H2O.ai. Patrick has a background in Math and has completed an MS course in Analytics. In this interview, they talk all about Patrick’s journey into ML, ML Interpretability, and his journey at H2O.ai, how his […]
Novel Ways To Use Driverless AI
November 14, 2019 | H2O Driverless AI, Machine Learning Interpretability
I am biased when I write that Driverless AI is amazing, but what’s more amazing is how I see customers using it. As a Sales Engineer, my job has been to help our customers and prospects use our flagship product. In return, they give us valuable feedback and talk about how they used it. Feedback […]
Useful Machine Learning Sessions from the H2O World New York
November 13, 2019 | H2O World, Machine Learning Interpretability, Makers
Conferences not only help us learn new skills but also enable us to build brand new relationships and networks along the way. H2O World is one such interactive community event featuring advancements in AI, machine learning, and explainable AI. It is a platform where people not only get to connect with the fantastic community but […]
Takeaways from the World’s largest Kaggle Grandmaster Panel
October 31, 2019 | Community, Data Science, Machine Learning Interpretability, Makers
Disclaimer: We were made aware by Kaggle of adversarial actions by one of the members of this panel. This panelist is no longer a Kaggle Grandmaster and no longer affiliated with H2O.ai as of January 10th, 2020. Personally, I’m a firm believer in and fan of Kaggle and definitely look at it as the home of […]
A Full-Time ML Role, 1 Million Blog Views, 10k Podcast Downloads: A Community Taught ML Engineer
October 17, 2019 | Data Science, Machine Learning Interpretability, Makers, Personal
Content originally posted in HackerNoon and Towards Data Science. The 15th of October, 2019 marks a special milestone, actually quite a few milestones. So I considered sharing it in the form of a blog post, on a publication that has been home to all of my posts 🙂 The online community has been too kind to me […]