November 12th, 2018

New features in H2O 3.22

Category: H2O Release

Xia Release (H2O 3.22)

There’s a new major release of H2O and it’s packed with new features and fixes! Among the big new features in this release, we introduce Isolation Forest to our portfolio of machine learning algorithms and integrate the XGBoost algorithm into our AutoML framework. The release is named after Zhihong Xia.

Isolation Forest

Isolation Forest is an unsupervised machine learning algorithm used for anomaly detection. Anomaly detection is applicable to a variety of use cases, including fraud detection and intrusion detection. The Isolation Forest algorithm differs from other methods typically used for anomaly detection: it directly identifies the exceptional observations instead of learning the pattern of the normal observations (as is done in the H2O deep learning-based autoencoder). The H2O implementation of Isolation Forest is based on the Distributed Random Forest algorithm, so it is capable of analyzing large datasets in multi-node clusters. Note that Isolation Forest is currently in a Beta state; additional enhancements and improvements will be made in future releases. A blog post is available for more information.
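As a quick illustration, the snippet below is a minimal sketch of training an Isolation Forest with the Python client and scoring the same data; the file path and parameter values are placeholders, not recommendations.

```python
import h2o
from h2o.estimators.isolation_forest import H2OIsolationForestEstimator

h2o.init()

# Load a dataset to screen for anomalies (placeholder path)
data = h2o.import_file("path/to/transactions.csv")

# Train an Isolation Forest; no response column is needed since the algorithm is unsupervised
iso_forest = H2OIsolationForestEstimator(ntrees=100, sample_size=256, seed=1234)
iso_forest.train(training_frame=data)

# Predictions include an anomaly score and the mean path length across the trees;
# observations that are isolated in fewer splits are more likely to be anomalies
predictions = iso_forest.predict(data)
predictions.head()
```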

Inspection of Tree-Based Models

During the development of H2O-3 version 3.21.x, an API for tree inspection was introduced for both the Python and R clients. With the Tree API, it is possible to download, traverse, and inspect individual trees inside tree-based models. As of this release, the API can be used to fetch any tree from any tree-based model (Gradient Boosting Machine, Distributed Random Forest, XGBoost, and Isolation Forest). For more details, please see our latest documentation for Python and for R. There is also a blog post available.
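As a minimal sketch, a single tree can be fetched from a trained model with the Python client like this (the dataset path and column names are placeholders):

```python
import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator
from h2o.tree import H2OTree

h2o.init()

# Train a small GBM on a placeholder dataset
data = h2o.import_file("path/to/data.csv")
gbm = H2OGradientBoostingEstimator(ntrees=10, seed=1234)
gbm.train(x=["feature_1", "feature_2"], y="response", training_frame=data)

# Fetch the first tree of the model; for classification models, a tree_class
# argument selects which class the tree was built for
tree = H2OTree(model=gbm, tree_number=0)

# The tree can now be traversed and inspected through its node arrays
print(tree.root_node)
print(tree.features)    # split feature used in each node
print(tree.thresholds)  # split threshold used in each node
```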

XGBoost in AutoML

Our AutoML framework now includes the XGBoost algorithm, one of the most popular and powerful machine learning algorithms. H2O users have been able to leverage the power of XGBoost for quite some time; in the 3.22 release, we focused on further performance and stability improvements of our XGBoost implementation. Thanks to these improvements, we were able to include XGBoost in the fully automated setting of AutoML. XGBoost models built during the AutoML process are also included in the final Stacked Ensemble models. Because XGBoost models are typically some of the top performers on the AutoML Leaderboard, and because Stacked Ensemble models benefit from the added diversity of models, users can expect the final performance of H2O AutoML to improve on many datasets.
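No extra configuration is required to benefit from this: XGBoost models are simply part of a regular AutoML run. A minimal sketch with the Python client (the dataset path and column names are placeholders):

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()

train = h2o.import_file("path/to/train.csv")

# Run AutoML for up to 10 minutes; XGBoost models are now part of the search
aml = H2OAutoML(max_runtime_secs=600, seed=1234)
aml.train(y="response", training_frame=train)

# XGBoost models typically rank near the top of the leaderboard,
# and they also feed into the final Stacked Ensemble models
print(aml.leaderboard)
```

If needed, individual algorithms can still be excluded from the run; see the AutoML documentation for the available options.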

Target Encoding

Feature engineering in H2O has been enhanced with the ability to encode categorical variables using the mean of a target variable. It can be performed in two easy steps. The first step is to create a target-encoding map; because mean encoding is prone to overfitting, several strategies to mitigate it are included. The second step is simply to apply the target-encoding map created in the first step. New columns with target-encoded values are then added to the data. Previously, target encoding had only been available in R, but in 3.22 it is now available in Java and Python as well. For details, please see the documentation.
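To illustrate the idea behind mean target encoding (a conceptual sketch in plain pandas, not the H2O API itself): the encoding map is the per-category mean of the target computed on the training data, which is then joined back onto the frame as a new column.

```python
import pandas as pd

# Toy training data: a categorical column and a binary target
train = pd.DataFrame({
    "city":   ["NYC", "NYC", "SF", "SF", "SF", "LA"],
    "target": [1, 0, 1, 1, 0, 0],
})

# Step 1: build the target-encoding map (mean of the target per category)
encoding_map = train.groupby("city")["target"].mean().rename("city_te")

# Step 2: apply the map, adding the encoded column to the data
train = train.join(encoding_map, on="city")
print(train)
```

The H2O implementation additionally offers regularization options (such as holdout schemes and added noise) to reduce the overfitting mentioned above; see the documentation for the exact API.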

Additional Highlights

Below is a list of some of the highlights from the 3.22 release. As usual, you can see a list of all the items that went into this release in the Changes.md file in the h2o-3 GitHub repository.

New Features:

  • [PUBDEV-5170] – Individual predictions of GBM trees are now exposed in the MOJO API.
  • [PUBDEV-5775] – It is now possible to combine two models into one MOJO, with the second model using the prediction from the first model as a feature. These models can be from any algorithm or combination of algorithms except Word2Vec.
  • [PUBDEV-5988] – Users can now specify a `-features` parameter when starting H2O from the command line. This allows users to exclude experimental or beta algorithms when starting H2O-3. Available options for this parameter include `beta`, `stable`, and `experimental`.
  • [PUBDEV-5695] – Created an R demo for CoxPH, available here.

Bugs:

  • [PUBDEV-5746] – Improved efficiency of the `keep_cross_validation_models` parameter in AutoML.
  • [PUBDEV-5903] – In AutoML, Stacked Ensemble models are now always trained, even if the `max_runtime_secs` limit has been reached.
  • [PUBDEV-5998] – Exposed H2OXGBoost parameters used to train a model to the Python API. Previously, this information was visible in the Java backend but was not passed back to the Python API.
  • [PUBDEV-6005] – When running AutoML in Flow, updated the list of algorithms that can be selected in the “Exclude These Algorithms” section.

Docs:

  • [PUBDEV-4505] – Added Scala and Java examples to the Building and Extracting a MOJO topic.
  • [PUBDEV-4590] – Added a Scala example to the Stacked Ensembles topic.
  • [PUBDEV-5756] – Added Python examples to the Cross-Validation topic in the User Guide.
  • [PUBDEV-5982] – Added documentation for Isolation Forest (beta).

Download our latest release: http://h2o-release.s3.amazonaws.com/h2o/latest_stable.html

About the Authors

Erin LeDell

Erin is the Chief Machine Learning Scientist at H2O.ai. Erin has a Ph.D. in Biostatistics with a Designated Emphasis in Computational Science and Engineering from the University of California, Berkeley. Her research focuses on automatic machine learning, ensemble machine learning, and statistical computing. She also holds a B.S. and M.A. in Mathematics. Before joining H2O.ai, she was the Principal Data Scientist at Wise.io (acquired by GE Digital in 2016) and Marvin Mobile Security (acquired by Veracode in 2012), and the founder of DataScientific, Inc.

Michal Kurka

Michal is a software engineer with a passion for crafting code in Java and other JVM languages. He started his professional career as a J2EE developer and spent his time building all sorts of web and desktop applications. Four years ago, he truly found himself when he entered the world of big data processing and Hadoop. Since then, he has enjoyed working with distributed platforms and implementing scalable applications on top of them. He holds a Master of Computer Science from Charles University in Prague. His field of study was Discrete Models and Algorithms, with a focus on Optimization.
