MEETUP | MOUNTAIN VIEW, CA
Benchmarking Large Language Models: A Hands-On Meetup
FEBRUARY 22, 2024 | 6:00-8:00 PM PST
Join us for an interactive meetup focused on benchmarking Large Language Models (LLMs) using H2O.ai's cutting-edge tool, Eval Studio. This meetup is designed for data scientists, engineers, and GPT/LLM enthusiasts who want to explore the latest advancements in LLM evaluation and take their skills to the next level.
During this meetup, we'll delve into the world of LLM benchmarking, discussing the challenges and opportunities of evaluating these powerful models. You'll learn how to design and implement benchmarks using Eval Studio and discover how its dual-mode functionality supports both manual and automated evaluation.
Our experts will guide you through hands-on exercises, demonstrating how to use Eval Studio to evaluate LLMs on tasks such as question answering, conversation, and retrieval-augmented generation (RAG). You'll also learn how to collect and store results along with relevant metadata, enabling you to track and debug model performance over time.
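To give a flavor of that last point, here is a minimal Python sketch of the general pattern of logging evaluation results with metadata so runs can be compared over time. It is an illustration only, not Eval Studio's actual API; the function and field names are hypothetical.

    import json
    from datetime import datetime, timezone

    def record_result(path: str, model: str, task: str,
                      metric: str, score: float) -> None:
        """Append one evaluation result, plus metadata, to a JSON-lines log."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model": model,    # which LLM was evaluated
            "task": task,      # e.g. "question-answering" or "rag"
            "metric": metric,  # e.g. "answer-correctness"
            "score": score,
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    # Example: log one RAG evaluation run (hypothetical model name and score).
    record_result("eval_log.jsonl", model="my-llm", task="rag",
                  metric="answer-correctness", score=0.87)

Keeping the timestamp, model, task, and metric alongside each score is what makes it possible to debug regressions later rather than comparing bare numbers.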
This meetup provides a unique opportunity to connect with like-minded professionals, share knowledge, and learn from each other's experiences. You'll leave with a deeper understanding of LLM benchmarking and the skills to use Eval Studio effectively in your work.
Don't miss this chance to enhance your understanding of Large Language Models and their capabilities. Join us for an engaging and informative meetup that will help you stay ahead of the curve in the rapidly evolving field of AI.
Event Location
H2O.ai