H2O.ai Demo Center

These self-service demo videos summarize the key features and functions of the H2O AI Cloud, H2O.ai's end-to-end platform for managing your entire AI lifecycle journey.

 

In this video, you will:

  • Be introduced to the H2O AI Cloud and understand its wide range of user personas, deployment options, and automated services.

  • See how to access the H2O AI Cloud App Store and launch instances of uploaded Wave applications.

  • See how to use the H2O Command Line Interface (CLI) and access the platform using Python APIs.

 

In this video, you will:

  • Be introduced to the Label Genie Wave application capable of using AI to help annotate images and text.

  • See an example of using pre-trained "zero-shot" models to assign a positive or negative sentiment to input text examples without any task-specific training.

  • Be introduced to the AI Engine called Hydrogen Torch that can automatically train deep learning models for image, video, text and audio use cases.

  • See how to use Hydrogen Torch to visualize and interact with your image, video, text or audio data.

  • See how to build deep learning neural networks using existing state-of-the-art architectures and use transfer learning to calibrate them to your data and use case without writing any code.

  • See how to evaluate the performance of Hydrogen Torch models across a spectrum of confidence levels and interpret the results using advanced NLP techniques.

  • See how a custom Wave application can be built for speech recognition through a demo of the VoxPad: Speech-To-Text Wave application.
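
The zero-shot sentiment step above can be sketched in a few lines. This illustration uses the open-source Hugging Face `transformers` library rather than Label Genie or Hydrogen Torch themselves (which ship their own models inside the platform); the input text and labels are hypothetical:

```python
LABELS = ["positive", "negative"]

def pick_label(scores: dict) -> str:
    """Return the candidate label with the highest score."""
    return max(scores, key=scores.get)

try:
    # Not an H2O package; shown only to illustrate zero-shot classification.
    from transformers import pipeline

    clf = pipeline("zero-shot-classification")
    out = clf("The support team resolved my issue quickly.", candidate_labels=LABELS)
    # The pipeline returns parallel "labels" and "scores" lists, best first.
    print(pick_label(dict(zip(out["labels"], out["scores"]))))
except Exception:
    # transformers missing, or no model available offline.
    print("demo label:", pick_label({"positive": 0.9, "negative": 0.1}))
```

The key property demonstrated is that the candidate labels are supplied at inference time, so no labeled training data is needed to get a first sentiment signal.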

In this video, you will:

  • Be introduced to the Label Genie Wave application capable of using AI to help annotate images and text.

  • See how to use pre-trained models to predict labels for Object Detection, Classification, Regression and Named Entity Recognition use cases.

  • Be introduced to the AI Engine called Hydrogen Torch that can automatically train deep learning models for image, video, text and audio use cases.

  • Be able to build deep learning neural networks using existing state-of-the-art architectures and use transfer learning to calibrate them to your data and use case without writing any code.

  • Be able to evaluate the performance of Hydrogen Torch models across a spectrum of confidence levels and interpret the results using advanced computer vision techniques such as Grad-CAM.

  • See how a custom Wave application can be built for computer vision through a demo of the Real-Time Video Scoring Wave application.

In this video, you will:

  • Be introduced to the AI Engine called Document AI that can combine Optical Character Recognition (OCR), Natural Language Processing (NLP) and Computer Vision models for Intelligent Character Recognition (ICR) and other document use cases.

  • See how to use Document AI to apply both commonly used OCR techniques, as well as additional techniques that have been developed specifically for Document AI by several Kaggle Grandmaster data scientists.

  • See how to create labels and annotate a Document Set.

  • See how to train machine learning models to perform page classification and token classification on your labeled Document Set.

  • See how to use a custom post-processor to extract exactly what is required from scoring a new Document.

  • See how to leverage Kubernetes auto-scaling and other configurations to score large sets of documents in a parallel fashion using the H2O Bulk Scorer.
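
A custom post-processor like the one described above typically walks the token-classification output and keeps only the fields of interest. The sketch below is library-free and uses a hypothetical BIO label scheme and field names, not Document AI's actual post-processor interface:

```python
def extract_fields(tokens, labels, wanted=("invoice_number", "total")):
    """Collapse BIO-style token labels into field values, keeping only wanted fields."""
    fields, current_key, current_tokens = {}, None, []
    for token, label in zip(tokens, labels):
        if label.startswith("B-"):                  # a new entity begins
            if current_key in wanted:
                fields[current_key] = " ".join(current_tokens)
            current_key, current_tokens = label[2:], [token]
        elif label.startswith("I-") and current_key == label[2:]:
            current_tokens.append(token)            # the entity continues
        else:                                       # "O" or a mismatched label ends it
            if current_key in wanted:
                fields[current_key] = " ".join(current_tokens)
            current_key, current_tokens = None, []
    if current_key in wanted:                       # flush a trailing entity
        fields[current_key] = " ".join(current_tokens)
    return fields

tokens = ["Invoice", "No.", "12345", "Total", "$", "99.50"]
labels = ["O", "O", "B-invoice_number", "O", "B-total", "I-total"]
print(extract_fields(tokens, labels))  # → {'invoice_number': '12345', 'total': '$ 99.50'}
```

The same collapse-and-filter pattern applies whatever the label scheme: the model emits per-token labels, and the post-processor decides which spans become structured output.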

In this video, you will:

  • Be introduced to Enterprise Steam, an easy and fast way to manage AI Engines like Driverless AI and H2O-3 on Kubernetes.
    NOTE: Enterprise Steam has recently been replaced by the AI Engine Manager on H2O AI Cloud.

  • Understand a few ways to load data into Driverless AI from various locations including external databases and H2O Drive.

  • See how to quickly profile your data set's characteristics, both visually and in tabular form, as well as review data quality and completeness.

  • See how to use the Predict Wizard to run guided experiments to solve the machine learning problem related to your use case.

  • See how to visualize the complete machine learning pipeline from automatic feature engineering, to individual model structures, to ensembles of predictions from many models.

  • See how to retrieve automatically generated documentation for all assumptions, insights, and results from an experiment.

  • See how to interpret your machine learning pipeline using both global and local reason codes, and run an analysis looking to test for signs of bias in the model's performance.
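
The same Driverless AI workflow shown in the video can also be driven from Python with the official `driverlessai` client. A minimal sketch, assuming a reachable instance; the address, credentials, file name, and target column are all hypothetical, and the knob values are illustrative:

```python
def experiment_settings(target: str, task: str = "classification") -> dict:
    """Settings handed to the Driverless AI experiment (values are illustrative)."""
    return {"target_column": target, "task": task,
            "accuracy": 5, "time": 3, "interpretability": 6}

try:
    import driverlessai  # official Driverless AI Python client

    dai = driverlessai.Client(
        address="https://dai.example.h2o.ai",   # hypothetical instance URL
        username="user",
        password="password",
    )
    ds = dai.datasets.create(data="train.csv", data_source="upload")
    cfg = experiment_settings("default_payment_next_month")
    experiment = dai.experiments.create(train_dataset=ds, **cfg)
    print(experiment.name)
except Exception:
    # driverlessai missing, or no instance reachable from this machine.
    print("settings only:", experiment_settings("target"))
```

The accuracy/time/interpretability knobs mirror the dials shown in the Driverless AI UI, so a notebook run and a UI run of the same experiment stay comparable.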

In this video, you will:

  • See how to create a project in the AI Cloud and share it with your colleagues and collaborators.

  • See how to access Machine Learning Operations (MLOps) in the AI Cloud.

  • See how to add your experiment to MLOps, register it to the Model Registry, and create a deployment of this model as a REST endpoint.

  • See how to launch advanced real-time deployments that return predictions along with feature-specific Shapley values, for both raw features and the automatically engineered features identified by the AI Engine.

  • See how to monitor your deployment in terms of the number of predictions returned, the average latency, and drift metrics for each feature in the model.
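
Once a model is deployed as a REST endpoint, any HTTP client can score against it. A stdlib-only sketch: the endpoint URL is a placeholder, and the request-body shape is an assumption based on the sample request MLOps shows for a deployment, so check your deployment's own sample cURL before relying on it:

```python
import json
from urllib import request

def build_score_request(fields, rows):
    """JSON body for an MLOps scoring endpoint (shape assumed; verify
    against the deployment's sample request in the MLOps UI)."""
    return json.dumps({"fields": fields, "rows": rows}).encode("utf-8")

# Hypothetical endpoint, as copied from the deployment's overview page.
ENDPOINT = "https://model.example.h2o.ai/<deployment-id>/model/score"

body = build_score_request(["age", "income"], [["42", "55000"]])
req = request.Request(ENDPOINT, data=body,
                      headers={"Content-Type": "application/json"})
# response = request.urlopen(req)  # enable only against a live deployment
print(body.decode())
```

Because the endpoint is plain REST, the same call works from cURL, a scheduled job, or a downstream application without any H2O library installed.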

In this video, you will:

  • Be introduced to the eScorer service available in the H2O AI Cloud.

  • See how to deploy machine learning pipelines in advanced deployment patterns for batch scoring and in-database scoring.

  • See how to interact with the MLOps Model Registry and automatically generate the deployment code necessary based on the desired deployment strategy.

  • See how to optionally monitor deployments even outside of the AI Cloud.

In this video, you will:

  • Be introduced to the AI Notebook Wave Application available in the H2O AI Cloud.

  • See an end-to-end example of using a Jupyter Notebook to programmatically train advanced machine learning pipelines and put them into production.

  • See an example of the Sonar Python package which provides responsible and interpretable results for H2O models as well as third-party models such as sklearn.
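
As one way to train a pipeline programmatically from a notebook, the open-source `h2o` package's AutoML can be driven in a few lines. A hedged sketch — the training file and target column are hypothetical, and the H2O-specific calls are wrapped so the cell degrades gracefully where no H2O-3 cluster (or Java runtime) is available:

```python
def automl_settings(target: str, max_models: int = 10) -> dict:
    """Experiment settings handed to H2OAutoML (values are illustrative)."""
    return {"y": target, "max_models": max_models, "seed": 1}

try:
    import h2o
    from h2o.automl import H2OAutoML

    h2o.init()                               # start or attach to an H2O-3 cluster
    train = h2o.import_file("train.csv")     # hypothetical training file
    cfg = automl_settings("label")
    aml = H2OAutoML(max_models=cfg["max_models"], seed=cfg["seed"])
    aml.train(y=cfg["y"], training_frame=train)
    print(aml.leaderboard.head())            # ranked models from the run
except Exception:
    # h2o missing, no Java runtime, or no training file in this environment.
    print("settings only:", automl_settings("label"))
```

Keeping the settings in a plain dict makes the notebook run reproducible: the same configuration can be logged, versioned, and replayed alongside the resulting model.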

In this video, you will:

  • See an example of using auto-complete snippets for Wave application templates in common text editors like VS Code and PyCharm.

  • See how to deploy your Wave application locally on your laptop.

  • See how to use the H2O Command Line Interface (CLI) to bundle your application and deploy it to the AI Cloud platform.

  • See how to launch an instance of your Wave application from both the AI Cloud Homepage as well as the App Store.
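
The smallest Wave application of the kind shown in the video looks roughly like this; the route and card contents are hypothetical, and the H2O-specific imports are guarded so the sketch stays readable even without the `h2o_wave` package installed:

```python
ROUTE = "/hello"  # the route this app registers under

try:
    from h2o_wave import Q, app, main, ui  # noqa: F401  (main is the ASGI entry point)

    @app(ROUTE)
    async def serve(q: Q) -> None:
        # Render a single markdown card, then flush the page to the browser.
        q.page["greeting"] = ui.markdown_card(
            box="1 1 3 1",
            title="Hello, Wave",
            content="A minimal Wave app, ready to bundle for the AI Cloud.",
        )
        await q.page.save()
except Exception:
    print("h2o_wave not installed; this sketch only shows the app shape")
```

Saved as `app.py`, this runs locally with `wave run app` during development; bundling and deploying to the platform then go through the H2O CLI's bundle subcommands (e.g. `h2o bundle` and `h2o bundle deploy` — check your CLI version's help for the exact names).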