This course builds on the Level 1 overview with a deeper look at Large Language Models and practical GenAI workflows.
Learn how to work with RAG techniques, fine-tune models, prepare datasets, and evaluate performance using tools like Enterprise h2oGPTe, LLM DataStudio, EvalGPT, and the GenAI AppStore.
Led by Kaggle Grandmaster Sanyam Bhutani, the course includes Python labs, research-based materials, and guided practice using H2O.ai tools across the GenAI ecosystem.
What you'll learn
- Large Language Model Fundamentals
Build understanding of how LLMs work and their role in enterprise AI applications.
- RAG Implementation Techniques
Apply practical retrieval-augmented generation methods using Enterprise h2oGPTe.
- Fine-Tuning with LLM DataStudio
Configure and train language models using H2O's specialized fine-tuning platform.
- Dataset Preparation Best Practices
Structure and prepare data effectively for training and evaluating language models.
- Model Evaluation Methodologies
Use H2O.ai EvalGPT and assessment frameworks to measure model performance and quality.
- H2O GenAI Platform Navigation
Work with GenAI AppStore, H2O.ai Wave, and integrated ecosystem tools for end-to-end workflows.


Course Playlist on YouTube

Mastering GenAI LLMs: Hands-On Training Guide
Welcome to our hands-on GenAI LLM training! Dive into the entire life cycle of Large Language Models (LLMs) with practical exercises.

Understanding the Foundations of Large Language Models
In this video, we will dive into the basics of the large language model (LLM) pipeline. We'll explore how these models can do more than just predict ...

Practical RAG Techniques: Interacting with Enterprise h2oGPTe
Learn to summarize documents, create LinkedIn posts, and uncover insights with Enterprise h2oGPTe and Python Notebooks.
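At its core, the RAG pattern in this lab retrieves the most relevant document chunk and stitches it into the prompt before generation. As a rough illustration, here is a toy in-memory retriever in plain Python; it is a sketch of the retrieve-then-prompt idea only, not the Enterprise h2oGPTe client API:

```python
# Toy retrieve-then-generate sketch: a bag-of-words retriever picks the
# most relevant chunk, which is stitched into the LLM prompt.
# (Illustrative only -- the lab uses Enterprise h2oGPTe, not this code.)
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str]) -> str:
    """Return the chunk whose word counts best match the query."""
    q = Counter(query.lower().split())
    return max(chunks, key=lambda c: cosine(q, Counter(c.lower().split())))

def build_prompt(query: str, chunks: list[str]) -> str:
    """Ground the question in the retrieved context before calling an LLM."""
    context = retrieve(query, chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

chunks = [
    "H2O.ai Wave is a framework for building realtime AI apps in Python.",
    "Retrieval-augmented generation grounds LLM answers in your documents.",
]
print(build_prompt("What is retrieval-augmented generation?", chunks))
```

Production systems replace the bag-of-words scorer with dense embeddings and a vector store, but the retrieve-then-prompt loop has the same shape.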

GenAI AppStore: Your Gateway to Innovative Solutions
Discover the potential of GenAI apps in this brief video.

A Comprehensive Guide to Fine-Tuning Language Models
Lab Three introduces fine-tuning language models. You'll learn to fine-tune both small and large models, beginning with a simple model using Huggin...
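Conceptually, fine-tuning means nudging pretrained weights with gradient steps on task-specific examples. The lab does this with Hugging Face models; the toy one-parameter "model" below is only a self-contained sketch of that loop, not the lab's actual training code:

```python
# Conceptual sketch of fine-tuning: take pretrained weights and apply
# gradient-descent steps on task data. A one-parameter linear "model"
# (y_hat = weight * x) keeps the loop self-contained and runnable.
def fine_tune(weight: float, data: list[tuple[float, float]],
              lr: float = 0.1, epochs: int = 50) -> float:
    """Minimize squared error of y_hat = weight * x over the task data."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (weight * x - y) * x   # d/dw of (w*x - y)^2
            weight -= lr * grad               # gradient descent step
    return weight

pretrained = 0.5                      # stand-in for pretrained weights
task_data = [(1.0, 2.0), (2.0, 4.0)]  # task behavior we want: y = 2x
tuned = fine_tune(pretrained, task_data)
print(round(tuned, 3))  # converges to 2.0
```

Real fine-tuning runs the same loop over millions of parameters with a cross-entropy loss on tokenized text, which is what the lab's framework code handles for you.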

Mastering Dataset Preparation: Techniques and Best Practices
In this fourth lab, we'll focus on dataset preparation for downstream NLP tasks. We'll explore various techniques programmatically in Python, using...
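A typical preparation step is converting raw records into a consistent prompt/response schema with basic cleaning. The sketch below uses a common instruction-tuning JSONL convention; the field names are an illustrative assumption, not LLM DataStudio's exact export format:

```python
# Convert raw Q&A pairs into instruction-tuning JSONL records.
# Field names ("instruction", "output") are a common convention, assumed here.
import json

def to_jsonl(pairs: list[tuple[str, str]]) -> str:
    """Serialize (question, answer) pairs, dropping blanks and duplicates."""
    seen, lines = set(), []
    for q, a in pairs:
        q, a = q.strip(), a.strip()
        if not q or not a or q in seen:   # basic cleaning: dedupe, drop blanks
            continue
        seen.add(q)
        lines.append(json.dumps({"instruction": q, "output": a}))
    return "\n".join(lines)

raw = [
    ("What is an LLM?", "A large language model trained on text."),
    ("What is an LLM?", "a duplicate question -- dropped"),
    ("  ", "a blank question -- dropped"),
]
print(to_jsonl(raw))
```

The same shape scales up: deduplication, normalization, and a fixed schema are what make a corpus usable for both training and evaluation.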

Mastering LLM Evaluation: Metrics and Methodologies
In this final lab, you will focus on evaluating large language models (LLMs) programmatically. You will learn to compare LLMs using methods like bl...
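Overlap metrics such as BLEU score a model's output against a reference by counting shared n-grams. The simplified unigram-precision sketch below shows the core idea; full BLEU adds higher-order n-grams and a brevity penalty, and in practice you would use an established implementation rather than this toy:

```python
# Simplified unigram precision: fraction of candidate tokens that also
# appear in the reference, clipped by reference counts. Full BLEU extends
# this with higher-order n-grams and a brevity penalty.
from collections import Counter

def unigram_precision(candidate: str, reference: str) -> float:
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    clipped = sum(min(n, ref[tok]) for tok, n in cand.items())
    return clipped / max(sum(cand.values()), 1)

reference = "the model answers questions about the report"
outputs = {
    "model_a": "the model answers questions about the report",
    "model_b": "a chatbot that talks",
}
scores = {name: unigram_precision(out, reference) for name, out in outputs.items()}
print(scores)  # model_a: 1.0 (exact match), model_b: 0.0 (no overlap)
```

Ranking candidate models by such scores over a held-out set is the programmatic comparison workflow this lab builds toward.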