H2O AutoML provides an easy-to-use interface that automates data pre-processing, training, and tuning of a large selection of candidate models (including multiple stacked ensemble models for superior performance). The result of an AutoML run is a “leaderboard” of H2O models that can be easily exported for use in production.
The library can be used from R, Python, and Scala, and even through a Web GUI. The talk also briefly covers R and Python code examples for getting started; a minimal Python example follows.
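To give a flavor of the Python API, here is a minimal quick-start sketch. The file name `train.csv` and the response column `response` are illustrative placeholders, not part of the talk:

```python
import h2o
from h2o.automl import H2OAutoML

# Start (or connect to) a local H2O cluster
h2o.init()

# Load the training data; "train.csv" and the "response" column
# are placeholders for illustration
train = h2o.import_file("train.csv")
x = [c for c in train.columns if c != "response"]
y = "response"

# Train up to 20 models and rank them on the leaderboard
aml = H2OAutoML(max_models=20, seed=1)
aml.train(x=x, y=y, training_frame=train)
print(aml.leaderboard)
```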
Goals and Features of AutoML
AutoML has been an active area of research for a long time, and it has recently seen development at the enterprise level. The goals of AutoML:
Train the best model in the least amount of time. Time here refers to the time spent by the user or expert, including the time spent writing code; essentially, the aim is to save human hours.
Reduce the human effort and expertise required in ML: effort is reduced by cutting down manual coding and tuning time, and the entry barrier can be lowered by building an AutoML system whose performance is at least that of an average data scientist.
Improve the performance of Machine Learning models.
Increase reproducibility and establish a baseline for scientific research or applications: running an AutoML experiment provides a good baseline to build upon, while making sure that the baseline itself is reproducible (see the sketch below).
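As a sketch of that last point, the snippet below pins down a reproducible H2O AutoML baseline. The frame `train` and the `response` column are placeholders carried over from the earlier example; constraining the run by model count rather than wall-clock time is what makes repeated runs comparable:

```python
from h2o.automl import H2OAutoML

# Constrain the run by model count (not wall-clock time) and fix the seed,
# so that repeated runs are comparable; deep learning is excluded because
# it is not reproducible when trained in parallel.
aml = H2OAutoML(
    max_models=10,
    seed=42,
    exclude_algos=["DeepLearning"],
)
aml.train(y="response", training_frame=train)  # `train` from the earlier sketch
print(aml.leaderboard.head())
```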
AutoML/ML Pipeline
The three essential steps of an AutoML Pipeline can be represented as:
The different parts of each step could be as follows. Note that the ones currently supported by H2O AutoML are shown in bold; most of the remaining items are in progress or on the roadmap.
The following are part of the H2O AutoML Pipeline:
Basic data pre-processing (as in all H2O algos).
Trains a random grid of GBMs, DNNs, GLMs, etc., using a carefully chosen hyper-parameter space. The team has spent a lot of time deciding which algorithms to use, how much time to spend on each, and which parameters to search over. This is, if you will, a kind of “smart brute force” that avoids the common mistakes.
Individual models are tuned using cross-validation, to avoid overfitting.
Two Stacked Ensembles are trained:
– “All Models”: usually the best-performing ensemble; it contains all of the models trained.
– “Best of Family”: the best model of each family (e.g. best GBM, best XGBoost, best RF, etc.); usually lighter weight than the all-models ensemble. Consider a case where you have 1,000 models: the smaller ensemble is a far better candidate to export for production.
All models are easily exportable for productionizing (see the sketch after this list).
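As a sketch of those last two points, the snippet below inspects the leaderboard (including both stacked ensembles) and exports the leading model as a MOJO, H2O's portable format for production scoring. It assumes `aml` is the finished run from the earlier sketches, and the output directory `./models` is a placeholder:

```python
# `aml` is the finished AutoML run from the earlier sketches
lb = aml.leaderboard
print(lb.head(rows=lb.nrows))  # full leaderboard, including both stacked ensembles

# The leader is typically the "All Models" stacked ensemble
best = aml.leader

# Export the leader as a MOJO for production scoring;
# "./models" is an illustrative output directory
mojo_path = best.download_mojo(path="./models", get_genmodel_jar=True)
print(mojo_path)
```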
H2O AutoML: Web GUI
The Web GUI (H2O Flow) allows simple point-and-click selection of all of the parameters inside H2O-3.
Note: Flow is spun up by default whenever you start an H2O cluster on a machine.
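For example, starting a cluster from Python also serves Flow; the cluster listens on port 54321 by default, and `h2o.init()` prints the actual URL:

```python
import h2o

# Starting (or connecting to) a cluster also serves the Flow Web GUI.
h2o.init()
# By default Flow is reachable at http://localhost:54321;
# h2o.init() prints the actual URL of the cluster it connected to.
```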
Sanyam Bhutani is a Machine Learning Engineer and AI Content Creator at H2O.ai, and a Machine Learning Practitioner recognized by inc42 and the Economic Times. Sanyam is an active Kaggler, a triple-tier Expert ranked in the global top 1% in all categories, and an active AI blogger on Medium and Hackernoon with over 1 million views overall. He is also the host of the Chai Time Data Science Podcast, where he interviews top practitioners, researchers, and Kagglers. You can follow him on Twitter or subscribe to his podcast.