
BLOG Automatic Machine Learning on Red Hat OpenShift Container Platform Delivers Data Science Ease and Flexibility at Scale


By Vinod Iyengar | May 14, 2019


Last week at Red Hat Summit in Boston, Sri Ambati, CEO and Founder of H2O.ai, demonstrated how to use our award-winning automatic machine learning platform, H2O Driverless AI, on Red Hat OpenShift Container Platform. You can watch the replay here.

What we showed not only helps data scientists achieve results, but also enables them to scale their machine learning efforts and easily deploy their models in the enterprise. Sri walked through the five easy steps for doing automatic machine learning with Driverless AI on Red Hat OpenShift.

  1. Drag and Drop Data: Bring in your data, whether it lives on premises or in the cloud, by dragging and dropping it from a variety of sources. H2O Driverless AI has more than 10 connectors, including Amazon S3, Google BigQuery, Snowflake, HDFS, and more.
  2. Automatic Visualization: Next, run the data through automatic visualization, which performs a number of statistical checks to surface the most interesting patterns and helps you fix data quality issues as well.
  3. Automatic Machine Learning: After that, run the data through our automatic machine learning engine, which performs automatic feature engineering, modeling, and ensembling for you. If you are an expert data scientist, you can select your own algorithms or tweak the parameters; otherwise, it does it all for you.
  4. Automatic Scoring Pipelines: Driverless AI automatically generates a scoring pipeline that can easily be deployed as a Java or Python object.
  5. Interpret the Results: Using Driverless AI machine learning interpretability, it is easy to see the reason codes behind why a model was selected or a prediction was made. Driverless AI also automatically creates a document that walks through the entire workflow and records every step of the process.

Now let’s see how all of these come together on OpenShift. Using OpenShift templates, we can train a model with one template and deploy it with another.


The demonstration focused on sentiment analysis, which we applied to a tweet at the end of the presentation. We started with a sentiment dataset that we had pre-loaded.


We can visualize the data to get a snapshot of how the dataset looks. Auto Visualization, part of Driverless AI, lets us examine the data in many ways, from detecting outliers to exploring correlations, heat maps, and more.
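Driverless AI's automatic checks are far richer than this, but the flavor of two of them can be sketched in plain Python: flagging outliers by z-score and measuring correlation between two columns. The function names and the toy data below are ours, purely for illustration:

```python
import math
import statistics

def zscore_outliers(values, threshold=3.0):
    """Flag points more than `threshold` sample standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length columns."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ages = [23, 25, 24, 26, 25, 24, 99]          # 99 is an injected data-entry error
print(zscore_outliers(ages, threshold=2.0))  # → [99]
print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))   # perfectly linear → 1.0
```

An auto-visualization layer runs checks like these across every column and column pair, then surfaces only the interesting ones as plots.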


We want to find out whether the sentiment is positive or negative, and the only thing we need to do at this point is set accuracy, time, and interpretability, the “knobs and dials” that determine how complex the model should be, how long it should train, and how interpretable it will be. Driverless AI automatically detected that this is an NLP problem and applied one of our NLP recipes to it.


We saw about 93% accuracy, which is quite good for a model trained on a small dataset of only 12,000 rows.


You can look at different charts, such as the ROC curve or lift and gains, and quickly review the summary. You can see that we created about 352 features from the single feature we were given originally: we provided only the text column, and from it Driverless AI created word embeddings, which were the features the model used.
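One simple way to turn a text column into fixed-length numeric features is to average per-word embedding vectors; Driverless AI's actual embeddings are learned and far higher-dimensional, but the averaging idea can be sketched with a made-up three-dimensional vocabulary (all names and vectors below are invented for illustration):

```python
# Illustrative sketch: a tiny, hand-written embedding table. Real word
# embeddings are learned from data and have hundreds of dimensions.
TOY_EMBEDDINGS = {
    "red":     [0.9, 0.1, 0.0],
    "hat":     [0.8, 0.2, 0.1],
    "keynote": [0.1, 0.9, 0.3],
    "awesome": [0.0, 0.2, 0.9],
}
DIM = 3

def embed_text(text):
    """Average the embeddings of known words; unknown words are skipped."""
    vectors = [TOY_EMBEDDINGS[w] for w in text.lower().split() if w in TOY_EMBEDDINGS]
    if not vectors:
        return [0.0] * DIM
    # Average each dimension across all matched word vectors.
    return [sum(col) / len(vectors) for col in zip(*vectors)]

features = embed_text("The Red Hat keynote was awesome")
print(features)  # one fixed-length feature vector per row of the text column
```

Each row of the text column becomes one such vector, which is how a single text feature can fan out into hundreds of model inputs.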

Once the models are built, you can download the scoring pipeline or interpret the model. Here, we’ll show how to deploy it.


In the OpenShift console, we now have a template to deploy this MOJO (our format for deploying models), which is optimized for low-latency scoring.
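The general shape of such a template follows the standard OpenShift Template format. The sketch below is ours, not H2O.ai's published template: the template name, image parameter, labels, and port are all placeholders.

```yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: mojo-scorer            # placeholder name, not the real template
parameters:
  - name: SCORER_IMAGE         # placeholder: an image bundling the MOJO + runtime
    required: true
objects:
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mojo-scorer
    spec:
      replicas: 1
      selector:
        matchLabels: {app: mojo-scorer}
      template:
        metadata:
          labels: {app: mojo-scorer}
        spec:
          containers:
            - name: scorer
              image: ${SCORER_IMAGE}
              ports:
                - containerPort: 8080   # placeholder scoring port
  - apiVersion: v1
    kind: Service
    metadata:
      name: mojo-scorer
    spec:
      selector: {app: mojo-scorer}
      ports:
        - port: 80
          targetPort: 8080
```

A template like this would be instantiated with `oc process -f template.yaml -p SCORER_IMAGE=<image> | oc apply -f -`, which is what makes one-click train and deploy possible from the console.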


We had a small web app running and typed in the following to see the sentiment predicted by the model:

“The Red Hat keynote was beautiful & awesome”

The model returned a positive sentiment. This was a fairly simple demonstration of Driverless AI on OpenShift, but it showed how easy and seamless it is to start an instance, build and interpret a model, and finally publish that model to score live data.
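The real scoring happens inside the deployed MOJO, which encapsulates the full learned pipeline. As a rough stand-in for what a sentiment scorer does, here is a tiny keyword-weight logistic model; the words, weights, and function names are invented for the sketch and have nothing to do with the actual trained model:

```python
import math

# Invented keyword weights standing in for a learned model.
WEIGHTS = {"beautiful": 1.2, "awesome": 1.5, "terrible": -1.8, "broken": -1.1}
BIAS = 0.0

def score_sentiment(text):
    """Return P(positive) via a logistic function over keyword weights."""
    z = BIAS + sum(WEIGHTS.get(w.strip("&.,!?").lower(), 0.0) for w in text.split())
    return 1.0 / (1.0 + math.exp(-z))

p = score_sentiment("The Red Hat keynote was beautiful & awesome")
print("positive" if p > 0.5 else "negative")  # → positive
```

A web app like the one in the demo simply sends each tweet to the scoring endpoint and renders the returned probability as a positive or negative label.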

It was great to participate in the Red Hat Summit. We enjoyed demonstrating how H2O.ai and Red Hat are working together to democratize AI for the enterprise.


Vinod Iyengar, VP of Products

Vinod Iyengar is the Vice President of Product at H2O.ai. He leads a team charged with product management and product development across the platform. Vinod has worked for H2O.ai since 2015. In his time with the company, he has worked as the VP of marketing & technical alliances, and VP of customer success & product. Vinod received his bachelor’s degree in engineering from the University of Mumbai and his master’s degree in quantitative analysis from the University of Cincinnati College of Business.