November 22nd, 2021

Amazon Redshift Integration for H2O.ai Model Scoring

Category: Data Science, H2O AI Cloud

At H2O.ai, we consistently work with our partners on innovative ways to use models in production, and we are excited to demonstrate our Amazon Redshift integration for model scoring.

Amazon Redshift is a very popular data warehouse on AWS. We wanted to expand on the existing capabilities of using data from Redshift to train a model on the H2O AI Cloud, our comprehensive automated machine learning platform. Now, once a model is trained, Redshift can use it for inferencing (scoring) with standard SQL.

Calling the model using SQL is a very convenient way to create inferences that can be stored in Redshift. Because the model operates like a SQL function, it is easy to include it in a SQL query that uses live data rather than the more common extract, score, and upload approach.

Any application that uses Redshift to access data can now get real-time predictions by leveraging current data at scoring time. Once a model is created using the H2O AI Cloud, the following installation steps enable that model to be used in Redshift:

 

  1. Download the mojo.zip file (Download Scoring Pipeline > Download Mojo Scoring Pipeline)
  2. Unzip mojo.zip and, in the unzipped directory, find the file pipeline.mojo. This is the model and the only file required.
  3. Download and follow the AWS SageMaker integration steps outlined here.
  4. Generate the Redshift SQL for this model. With all the files in the same directory, use the jar downloaded in Step 3 and specify the model to use and the type of artifact to generate (Redshift-SQL). This command generates the Redshift function for this specific model and the SQL that can be used to call the model for inference. The result of this command is a file (pipeline.mojo.Redshift-sql); a sketch of its possible contents is shown below, after step 5.

Notice how the function name (h2oscore_pipeline) contains the name of the model. This is because each model can have a different number of parameters and types attributed to it. If we had specified a model called churn.mojo in the generation step, the function name would be h2oscore_churn.

  5. Now paste the generated SQL into an SQL tool and execute it.

The SAGEMAKER endpoint and IAM_ROLE values need to be specified for your account; these would have been created in Step 3.
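
As a rough illustration, here is a minimal sketch of what the generated pipeline.mojo.Redshift-sql file might contain, assuming the Redshift ML bring-your-own-model (remote SageMaker inference) syntax; the model name, argument types, endpoint name, and IAM role ARN below are placeholders, not the actual generated output:

    -- Hypothetical sketch of pipeline.mojo.Redshift-sql; substitute the SageMaker
    -- endpoint and IAM role created in Step 3, and the argument types of your model.
    CREATE MODEL h2o_pipeline
    FUNCTION h2oscore_pipeline(varchar, varchar, int, decimal(10,2))
    RETURNS varchar
    SAGEMAKER '<sagemaker-endpoint-name>'
    IAM_ROLE '<iam-role-arn>';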

The last part of the output shows an example SQL SELECT statement for the specific model. This shows how to call the model and which columns will be passed from the specified table. The placeholder <table-name> should be changed to the table in Redshift that you would like to use for inferencing.
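
For example, the generated SELECT could look roughly like the following; the key and feature columns shown here are illustrative and will match your model's inputs:

    -- Illustrative only: replace <table-name> with your Redshift table and the
    -- column list with the columns the model expects.
    SELECT customer_id,
           h2oscore_pipeline(state, plan_type, total_day_calls, total_day_charge) AS prediction
    FROM <table-name>;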

Now you can execute the SQL with any SQL editor that can connect to Redshift. In this example, I used the AWS Query Editor.

 

One way to capture the results is to create a table with a SELECT statement. This leaves the original table unchanged and writes the scoring results to a new table. Notice that I included a key (customer id) so the results can be joined back to the original rows.
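
A minimal sketch of that approach, using hypothetical table and column names, looks like this:

    -- CREATE TABLE AS SELECT: writes predictions to a new table, leaving the
    -- source table unchanged; customer_id is kept as a join key.
    CREATE TABLE churn_predictions AS
    SELECT customer_id,
           h2oscore_pipeline(state, plan_type, total_day_calls, total_day_charge) AS prediction
    FROM customer_usage;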

This new functionality enables scoring to be invoked from Redshift. It saves effort for the operations team because data no longer has to be selected, exported for scoring, and then reloaded, reducing the time it takes to operationalize the model.

Because the model can be called using SQL, any application that uses Redshift can now get predictions. This further increases the value of the model's output throughout the organization, as predictions can use current data rather than being created days or weeks in advance.

This new functionality opens up the possibility of real-time scoring directly in Redshift! If you aren’t a current H2O.ai user, you can sign up to try the H2O AI Cloud for free today!


About the Author

Eric Gudgion

Eric is a Senior Principal Solutions Architect who is passionate about performance and scalability. Eric’s role enables him to help customers adopt H2O.ai within their enterprises.
