H2O GenAI World Conference, San Francisco

Keynote

Sri Ambati, CEO & Founder, H2O.ai

AI-generated transcript

 

00:09

Welcome, San Francisco. It's great to be here. This is where we started most of our journey of the last, what, 12 years now. So thank you to the community for being here bright and early. Time is the only non-renewable resource.

 

00:29

It's something we've been spending as if there were no tomorrow. But first, gratitude to everyone: the community, the customers, and the makers at H2O. Today we are here to talk about the GenAI world.

 

00:49

Let's not forget that BC is before COVID. Almost all the data that we had before COVID is more or less irrelevant. But the path for H2O has been grounded deeply in customer obsession and a maker culture.

 

01:13

And today you'll see some of it on full display, as well as our continued mission for AI for good. We now have more than 10% of the world's Kaggle Grandmasters helping the world and helping H2O continue to innovate.

 

01:34

And thank you to the community, past, present, and future. All makers have been part of this incredible journey. The H2O movement owes a lot to the community and the customers. It's only appropriate that we respect that.

 

01:54

Open source is really about freedom, and that freedom is not just about being free. And GenAI owes so much to open source. Hardly nine months ago, the world was suddenly taken by storm by LLMs that were not available or accessible to everyone.

 

02:16

And suddenly, I don't know how many of you still remember the first time llama.cpp or Alpaca-LoRA hit their stride. And suddenly, you had a race in open source to democratize generative AI.

 

02:33

H2O joined forces with the community and the ecosystem, and there are a few good stories to talk about today. We started with the mission that every company needs its own GPT. I think it's been validated by most of the industry today.

 

02:52

We also think that every community needs its own GPT, its own LLMs, demonstrably ones that are working for you in your midst. We would love our customers to make data and AI first-class assets.

 

03:09

So they can actually be powering your future, not just being cost centers but actually making profit for you. This slide has lived much longer than expected, and its time has come. Storytelling is why generative AI is so powerful.

 

03:30

Well, while as data scientists we have always believed in being true to the data, stories rule boardrooms. Stories are what we grew up with. I think that's truly why the narrative behind generative AI has dominated.

 

03:48

If storytelling is so powerful, the prompt is your IP. Data used to be the moat; we used to have a lot of other moats, but the prompt has become true IP. I want to roll out a big picture of how the ecosystem is coming together at H2O.

 

04:06

You need ETL for your LLMs, and that's where Data Studio and our Label Genie come together. You need LLM Studio to fine-tune your LLMs, or at times use RAG, and today we'll talk about when to use what.

 

04:20

Prompt Studio comes in next: how do you tune your prompts? How do you show the before and after of the transformations, and teach LLMs to actually figure out the best prompts to extract data from your documents?

 

04:36

The rise of My GPT: every person will have different LLMs, some personal, some professional, and some universal. Evaluation becomes really important; when you have generation going into overdrive, curation becomes very important.

 

04:58

How do you deploy these LLMs in a cost-efficient fashion, in a way that inference doesn't put a big hole in your pocket? And then how do you integrate back with H2O's landmark innovations in AutoML, Hydrogen Torch, and Document AI?

 

05:19

And then there's powering your generative AI applications, regular applications, with a real voice: the GenAI app stores. You're going to see the entire gamut of innovation, some of it in different talks today.

 

05:32

Hopefully, we'll be able to show some of them. I want to thank several members of the generative AI open ecosystem: starting with the "Attention Is All You Need" paper, to PyTorch, to projects like DeepSpeed and CUDA that power most of this at the lowest layers, to Llama 2, Falcon, and Mistral. Many of them broke into the open source foreground and have helped build a truly open ecosystem that can challenge what otherwise would have been a capital-intensive tech giants' game.

 

06:17

In evaluations: Ragas and EvalGPT.ai, which we will show some of today; the LLM evals; different embeddings, obviously. And to call out a few names out there: of course Eric Wang, who kicked off Alpaca-LoRA, and Yann LeCun, who has continued to push open source, and Dr.

 

06:40

Ebtesam Almazrouei, who powered Falcon. There are many more, and I missed quite a few. We do love most of your projects; keep powering open source out there. I want to jump into a quick show-and-tell from here, and then quickly invite a sampling of the roster of H2O makers.

 

07:05

So, I was asking a fun question: how many AI companies are in San Francisco? I'm sure the answer is very large. You can probably imagine a bunch of these answers here. More than 1,000 is the right answer.

 

07:34

I'm sure you're here kicking off the event; there's a bunch of really fun talks today. Our belief has been that with generative AI, regular GPTs should have RAG associated with them directly.

 

07:59

So you don't necessarily have to look for RAG separately from the main GPT, and I think the rest of the world agrees with us at this point. So, H2O GPT: the team put together some really powerful innovation to demonstrate today.

 

08:24

And this one is actually bringing more of the RAG. One of our customers, Commonwealth Bank of Australia, published their half-yearly investor relations document, in which they essentially showcase how they have been using AI over the last few years.

 

08:51

And we've been partnered very deeply with them. I think we want to see how our RAG brings that to life. Here you go. So, a bunch of them have started. I'm going to go pick up a collection that I launched earlier, a conversation.

 

09:18

Behind the scenes, you're basically seeing a vector database: the embeddings for the document have been created and stored, and you're pulling out relevant information from the embeddings that were built.

 

09:40

And then an LLM is truly giving it a narrative. We'll talk about this in some detail in several talks today, including a talk from the community project LlamaIndex. I think the power starts to show when you can highlight almost all the core pieces, not just one. One of the things we found is that it's actually able to pull up information from slides that are in the reference section of the actual deck.
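The retrieval step described above can be sketched in miniature. This is a toy illustration, not H2O GPT's actual pipeline: a plain list stands in for the vector database, and bag-of-words counts stand in for learned embeddings.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a learned embedding model: bag-of-words counts.
    # A real RAG stack stores learned vectors in a vector database.
    return Counter(w.strip(".,?!") for w in text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks, question, k=1):
    # Rank document chunks by similarity to the question; keep the top k.
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]

chunks = [
    "AI has reduced call volumes at the bank contact centres.",
    "The half-yearly dividend was announced in August.",
]
context = retrieve(chunks, "How has the bank used AI?")
# The retrieved chunk is then handed to the LLM as grounding context.
```

The LLM never sees the whole document, only the retrieved chunks, which is what makes the approach scale to long investor-relations decks.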

 

10:27

Thank you. Now, there is a bunch of system prompts. Obviously, you can pull in different models, not just open source; you can also try GPT-4, GPT-3.5, different variations on the theme. Now, a portion that is quite fascinating is that you can essentially ask it to cross-check one model's performance against other models.

 

11:08

Let's go to a similar question here. We think that RAG is essentially going to be the back end for a large set of applications that will be powered in the future. And that's where our idea of connecting it to a real API with API keys comes in.

 

11:38

Powering a rich set of applications on top is going to be super important. Here's the beginning of that, and also a rich set of applications, such as meeting summaries or translations, that will come.

 

11:57

And there's this ability to power your API with RAG and LLMs. RAG essentially gives you a very powerful way to constrain your LLMs to a context, so they don't go outside the range of what you would otherwise expect; it prevents hallucinations.
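One common way to apply that constraint is in the prompt itself. The template below is a hypothetical sketch, not H2O's actual system prompt: it tells the model to answer only from the retrieved chunks and to refuse otherwise.

```python
def grounded_prompt(context_chunks, question):
    # Hypothetical grounding template: the model is instructed to answer
    # only from the retrieved context, which is how RAG keeps an LLM in
    # range and reduces hallucination.
    context = "\n".join(f"- {c}" for c in context_chunks)
    return (
        "Answer using ONLY the context below. If the answer is not in "
        'the context, reply "I don\'t know."\n\n'
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = grounded_prompt(
    ["Operating income grew 4% in the half-year."],
    "How much did operating income grow?",
)
```

A RAG API would assemble this string server-side on every call, so applications built on top only ever send the question.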

 

12:17

I think what we see is a world where the RAG API powers most of your applications. And today we are also talking about the GenAI App Store, which now has a public app store. Michelle has a very good talk on that later today.

 

12:38

Let's see if I can pull up some more of the app store for your view. Many of you are users of H2O AI Cloud; I think you can truly experience our innovation from the center of that same place, running on any cloud. Again, you're not beholden to one cloud.

 

13:10

You can run on different platforms, on-prem, in your virtual private cloud, or on any cloud, giving you flexibility and a hybrid nature. Let me continue with a couple more slides here and then bring the team on board.

 

13:33

Evaluation-led LLM development. A few really powerful ideas have emerged on how you evaluate your LLMs. You want to evaluate both factual accuracy and the quality of the response. The team has put some of them to work ourselves.

 

14:04

Here is an Elo ranking comparing different LLMs, using GPT-4 as one of the judges, so you can at least get results comparable to the best-in-class LLMs out there. Ragas is the other one that hit the mainstream recently, and our team essentially brought it to life: we use RAG benchmarks to see, across similar documents, how different LLMs perform in terms of accuracy.
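The Elo idea works the same way it does in chess: each pairwise comparison, judged here by a model such as GPT-4, nudges both ratings. A minimal sketch of the update; the model names and K-factor are illustrative:

```python
def elo_update(r_winner, r_loser, k=32):
    # Standard Elo update for a decisive pairwise result: the winner's
    # expected score is computed from the rating gap, then both ratings
    # move by the same amount in opposite directions.
    expected = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected)
    return r_winner + delta, r_loser - delta

# Hypothetical judged battles (winner listed first), e.g. with GPT-4
# deciding which model's answer is better on the same prompt.
ratings = {"model_a": 1000.0, "model_b": 1000.0}
for winner, loser in [("model_a", "model_b"), ("model_a", "model_b")]:
    ratings[winner], ratings[loser] = elo_update(
        ratings[winner], ratings[loser]
    )
```

Because the update is zero-sum, the leaderboard stays comparable as more models and more judged battles are added.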

 

14:41

As you can see, GPT-4 is still number one, but Llama 2-powered RAG is not far behind. Again, a ton of work went on behind this, whether it's reading the PDF, breaking it up, or chunking it the right way to get to that level of accuracy.

 

15:03

But in general, long context will definitely help. We believe that evaluations will become the heart of LLM development, so evaluation as a service will probably be super important.

 

15:21

It's still prohibitively expensive for many customers to do AI inference. vLLM is a project we depend on, after TGI. Capturing good feedback, whether for reinforcement learning or for data, will be super important and powerful, but I would still say the latency and cost of running inference are very high.
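The cost concern is easy to make concrete with back-of-the-envelope arithmetic. The prices below are hypothetical placeholders, not vLLM, TGI, or any vendor's actual pricing:

```python
def inference_cost(prompt_tokens, completion_tokens,
                   usd_per_1k_in, usd_per_1k_out):
    # Per-call cost: input and output tokens are typically priced
    # separately, and RAG inflates the input side with retrieved context.
    return (prompt_tokens / 1000) * usd_per_1k_in \
         + (completion_tokens / 1000) * usd_per_1k_out

# A RAG call with a large retrieved context and a short answer, at
# assumed prices of $0.03/1K input and $0.06/1K output tokens:
cost = inference_cost(6000, 500, 0.03, 0.06)   # roughly $0.21 per call
# At 100,000 such calls a day that is roughly $21,000/day, which is why
# serving stacks like vLLM focus so hard on throughput and latency.
```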

 

15:51

When generation is abundant, curation becomes valuable. Can I use AI to curate AI? You saw some examples of that earlier, with cross-checking. What is the moat in this new world? I think the customer is your only moat. Being obsessed with the customer experience is going to be super powerful, and co-creating with your customer is going to be at the heart of the team.

 

16:18

We are showcasing some of the GenAI apps here, but I think powering your applications with this new narrative, with this ability to truly bring LLMs to life in your applications, is where we're going to see a lot of true innovation.

 

16:42

You can absolutely use not just core LLMs but also apps being built with GenAI, whether through prompts or sketches such as these. Again, the ability to build a rich set of applications can essentially transform your day-to-day.

 

17:14

Most of our customers have projects going in every department, in every group they're working with, essentially driving more and more GenAI applications. Building app stores and rich applications is going to be at the heart of generative AI.

 

17:37

If return-to-office has been a big challenge, excitement around building generative AI apps has really given a boost to employee engagement across the board. I want to call out the incredible innovation power of our KGMs, our Kaggle Grandmasters.

 

18:02

They won the Kaggle LLM Science Exam competition; we're going to talk about that in a brief moment. They have been the brainchild behind several of our innovations, all the way from Driverless AI to Document AI.

 

18:21

And they have really managed to co-create with the algorithms team and the core technology team to make AI a team sport. A lot of what we do is truly powered by that incredible teamwork, from domain expertise all the way to business applications, bringing the power of algorithms and engineering together.

 

18:48

Some of the basic use cases we are seeing at customers are continuing to improve customer experience, contact centers, document processing, marketing, generating content, and generating code. I'm pretty sure most of you have seen some of these, like procurement RFPs.

 

19:05

Anywhere there is a supply chain of people reading documents, AI for documents has come to life with RAG. And it's only a matter of time before it becomes multimodal and includes speech, so you can truly bring meaning to your conversations with your customers.

 

19:25

Extraction of content from documents, labeling, translation. And most recently we're seeing a lot around data: how do you have a conversation with your table of data and your BI? These are going to come to life as app stores.

 

19:44

And we see a world of hundreds of use cases coming to life, connecting it all back to the AI services. This is a deployment at one of our power customers, AT&T, where almost all the innovations from the LLM world and the pre-LLM world come together.

 

20:06

We think that customer and community love is the greatest force. And most of our data scientists are going to rise up into strategy. So if there's a data scientist at the table here, you can see them four or five years from now making most of the decisions on the planet.

 

20:25

Obviously, equity and inclusion are super important and with great power comes great responsibility. But AI will truly bring abundance to the world. There is abundance of time because you can do more with your time. 

 

20:42

Abundance of space: you're going to see a lot more exploration globally, both outer space and inner space. There is abundance of matter and energy. So AI is truly going to be that power law that brings us to the next level of innovation.

 

21:02

And of course, responsible AI, which is a key topic of the day, is here; I'm going to bring a lot of that to the fore. Then people ask: what do we do after this? Is AI going to make us obsolete? All these questions pop up.

 

21:17

Viruses, wars, and superstitions, or fake news as they would call it today: we thought they were behind us, but they are all abundantly around us. So there are a lot of real problems to solve, and AI can make a difference.

 

21:33

And I would rather focus on the problems AI can solve. These are not superhuman powers, but they are definitely powers that can be used well. AI can make a difference, and we can make a difference with AI. AI for good is a key theme for H2O, and there are so many problems we can solve.

 

21:53

And we need the community's help to power most of these changes. We have deployed AI in the past to help predict hurricanes, and there's AI to transform health: how do you reduce the cost of care so we can reach more people for less?

 

22:16

And it's our generation's revolution to truly democratize things that were only accessible to the world's richest or most powerful. We are busy being born and busy dying.

 

22:33

And it is with great humility that I take the stage, and thank you for having me here. I would say the biggest force we've seen is our ability to bring ourselves to the fore of change.

 

22:55

Because change doesn't happen without deep love and passion. It's truly an honor to lead such an innovative movement with the power of our community. With that, I want to bring on board the makers behind the project and ask them how they have done it.

 

23:19

So please welcome to the stage.