H2O GenAI Day Training Singapore

Opening Keynote, Singapore 2023 


Speaker Bio

Sri Ambati | Founder & Chief Executive Officer

Sri Ambati is the founder and CEO of A product visionary who has assembled world-class teams throughout his career, Sri founded in 2012 with a mission to democratize AI for anyone, anywhere, creating a movement of the world’s top data scientists, physicists, academics and technologists at more than 20,000 organizations worldwide. Sri also regularly partners with global business leaders to fund AI projects designed to solve compelling business, environmental and societal challenges. Most recently, Sri led the O2 for India initiative, sourcing O2 concentrators for more than 200 public health organizations in Tier 2 cities and rural communities in India during the Delta wave of the COVID-19 pandemic, helping to save thousands of lives. His strong “AI for Good” ethos for the responsible and fair use of AI to make the world a better place drives’s business model and corporate direction.

A sought-after speaker and thought-leader, Sri has presented at industry events including Ai4, Money2020, Red Hat Summit and more, is a frequent university guest speaker, and has been featured in publications including The Wall Street Journal, CNBC, IDG and the World Economic Forum and has been named a Datanami Person to Watch.

Before founding, Sri co-founded Platfora, a big data analytics company (acquired by Workday), and was director of engineering at DataStax and Azul Systems. His academic background includes sabbaticals focused on theoretical neuroscience at Stanford University and U.C. Berkeley, and he holds a master’s degree in Math and Computer Science from the University of Memphis.

Read the Full Transcript



What a fantastic kickoff. Again, a big hand to Agus. Let me walk you through H2O's journey and then through some of our products as well. One thing we've always believed is that time is your only non-renewable resource.



Your time, no matter what happens, cannot come back to you. That's the ethos we built the entire company around. Of course, this should be fun. We need to always improve the fun factor of your journey.



Make sure that's always there. We're super grateful for the audience, for the community, for the overall movement, for every maker, past, present and future, who has worked on H2O. This is our first event here after COVID, and BC is before COVID.



The world has changed dramatically, and of course we get super excited by events, almost a theater of high-speed Silicon Valley stories. But the biggest event of the past century is how the world truly came to a standstill under COVID. If you zoom out, war, viruses and fake news or superstition are still the biggest threats to our evolution and our success



as a civilization. So while there is AI doom talk, there is a lot AI can do to reduce some of these problems. Data, as we know it, has changed: a lot less traffic, a lot more work from home, a lot more work from Bali, or work from wherever you want to work from.



So, a lot more decentralization. Instead of uberification, makers everywhere are able to demand and get what they want, and make a bigger impact, even as individuals. is home to about 2 million community members who use our products every day.



Home to 30-plus Kaggle Grandmasters. We have a few in the audience here, but globally we have some incredible talent powering the innovation behind We are obsessed with our customers. What an incredible way to start off the morning, with a customer.



On stage, customers are part of the journey. They are co-creating with us every day. They are sharpening our ideas. They are the muse. They are the reason for actual innovation. Real innovation that transforms comes from that empathy for customers, deep empathy for customers, walking in their shoes and understanding what's really impacting them.



So as you think about Gen AI: Gen AI is a very creative process. Make sure you are understanding your customer with a real sense of obsession. We've actually made customers part of our board, customers part of our investments.



Every touch point we can get with the customer, we are absolutely craving. And of course, AI for good. We started the company because we wanted to fight cancer with math. When my mom had breast cancer, that triggered the germ of an idea.



How can I use math to fight complex problems in the world? AI is math, and for us, AI for good is actually what we do. It's not just "don't be evil"; we want to actually do good. And that influences almost everything we do.



We get super pumped about this, and I think the next talk will touch on it: one of our banking customers, Commonwealth Bank of Australia, open-sourced a model to look for signs of financial and domestic abuse in banking messaging.



Or fighting wildfires. We announced AQI, Air Quality Index, challenges, bringing the Kaggle community, the global data science community, to fight meaningful problems. There are still a lot of problems to be solved in the world.



And I think AI for good is a real theme for us; it's actually what we built. The community came first; the company came later. So we are super, super thrilled to honor all of the many folks who have been behind us.



We are paying homage to all those folks, not just recent ones; math as we know it is far older than a century, passed on over the generations. So H2O represents a small blip in that long history of math that's been floating in our midst.

So we are just one of the many bridges to the real destination in the journey. We've been super fortunate to serve Fortune 500 and Fortune 2000 customers. And Singapore has actually been one of our fastest growing regions in the last 12 to 24 months.



We've been embraced by your largest telcos here, the agencies, the banks. And so we are here to build a very grassroots community. Some of you are participating in the training today; be assured that we are also looking to build a great team here.



And this is not a one-off, one-and-done. This is an incredible space that brings East and West together, and you could bring incredible innovation from this region. So we are looking to build teams, build customers.



SingTel embraced us most recently on our GPT work. So we are super keen to walk you through some of those examples today. Open source is really about freedom, not just free. It's about freedom for the makers.



And freedom obviously creates innovation. Many of our peers will use open source and compete with us, if you will. But that competition actually unlocks incredible innovation and power for the end users.



And so everybody can essentially build on that large ecosystem if you use the power of open source. So every time you make something, you have a choice. Remember, last night we were demoing Eval Studio to Agus.



And the first question he asked was: is it open source or not? These are the kinds of questions that continuously come up every time you create something: is it open source, can it become widespread?



Can it drive adoption, can it democratize AI and Gen AI? And that's what we found with Gen AI: it wasn't open, even though it was named OpenAI; it wasn't really open source. So that's what we picked up at the beginning of this year, and in less than nine months you see incredibly powerful open-source large language models.



We think that every company needs its own GPT, and when we said that six months ago it was kind of a lone voice; now everybody assumes they're going to have one. Every community needs its own GPT as well.



We can go one step further: every organization, every person will need multiple LLMs, some for their personal, professional and universal thinking, LLMs that capture their thinking. This is just the beginning, the tip of the iceberg of how many large language models will be built in the world.



Every purpose needs a voice, and that voice is what an LLM will accentuate. It will democratize your voice globally. And that's what we are hoping to create: a room full of dreamers who can transform and take us from where we are to where we need to go.



We're still in the very early days of AI. AI is the infrastructure. AI is our world.

[Song plays] "Look who we are, we are the dreamers, we make it happen, 'cause we believe it... If you can visualize it, you can make it happen."

Obviously, we want our customers to make data and AI first-class assets. AI is not a cost center; it is going to unlock profits. Transforming your data into a profit center is what AI can actually make happen. This was already happening; this is the postcard from the future, for companies like Google, which is across the highway from us, or Facebook and others. They have been monetizing data. Every company, every organization, every person, every community, every tribe can start utilizing their languages and data to actually monetize them in a very democratized way, in a more plural, a lot less centralized way, closing the gap between the data haves and data have-nots, the algo-haves and algo-have-nots.



This is why LLMs are so powerful: data scientists have historically been focused on data and less on storytelling. And when we go higher up the stack, we mostly see storytelling. Hopefully, when we are all at the point where we are talking to our grandchildren and great-grandchildren, we are only telling stories, right?



And I think LLMs really unlock stories from your data. That's really powerful, because data scientists have had the hard task; when Agus ended with his entertainment quip, it actually meant that telling stories is very hard.



So LLMs now give you that power to tell stories. What is intellectual property in Gen AI? Prompt is your IP. The questions you're asking on a generative AI platform are actually triggering the actual response.



That prompt is your IP. And when you share that prompt with the rest of the world, you are, in some sense, open-sourcing your prompt. That's why it's super important to have personalized local LLMs or organization-specific on-prem LLMs.



Because as you're asking those prompts of LLMs, those prompts are actually making the centralized platforms better, and localized platforms are not getting that advantage. Data labeling, data curation and prompts, prompt bases, prompt tuning.



These are going to be super core. We have several products and platforms in that space. Let me walk you through the product suite for LLMs. So we have LLM Data Studio; LLM Data Studio was made in Singapore.



We have a team here. What is Data Studio doing? It's doing the ETL for your data, taking your documents and making sure you have all the question-answer pairs for feeding to an LLM so you can fine-tune it, make it better.



Labeling: many of you must have heard of the incredible amount of human power that was behind RLHF and various labeling techniques. We have a platform called Label Genie. It's one of our fastest growing platforms.



It uses transformers, smaller models, smaller transformers essentially. Hydrogen Torch, Label Genie: these are powerful engines that are already there in the H2O ecosystem. This is the place where we see a lot of work going in.



We're going to end up spending a lot of time in the data ETL for LLMs. Then there's LLM Studio. This is an open-source platform that we released circa May. It's probably one of the fastest growing platforms we have released.



It's had about 20,000 installations, with more than several thousand concurrent users every day. Then you have vector databases and embeddings; you'll see some of that. Prompt Studio: how do you fine-tune your prompt?



So you can essentially deliver the right extraction and also the right kind of content and outputs. Context: how do you set the context? Then, my GPT: how do you take an open-source GPT and make it yours?



And of course the evaluations; we talked about how eval is all you need, so Eval Studio. And then, how do you connect that back to the predictive piece? The agents allow you to connect a generative AI platform to a predictive AI platform.



And that's where the magic happens, where you can bring in your traditional machine learning methods and engines. Many of you have probably heard of H2O's Driverless AI and open-source H2O. How do you bring those in sync with your generative AI roadmap, powering Gen AI app stores?



So on our website, we have opened a public app store, a Gen AI app store, filled with rich, simple applications. I'm going to demonstrate most of this for you right now; you're going to get a sampling of that today.



And I want to thank the very rich generative AI open-source ecosystem that has helped us build much of this. Of course, this is only a splash of it, of the ones that survive every week.



Some of this is already dated; we've added newer things even since last week. I think Gorilla is missing there, the Gorilla open functions. But we're bringing in some of the best. Alpaca-LoRA: how many of you have heard of Alpaca-LoRA?



When they triggered that open-source spark, most of us built an LLM with llama.cpp on our computers, and that triggered the open-source models: Falcon, Llama 2, Mistral, most of these models. And of course, this goes all the way down the stack.



CUDA, obviously. This is a sampling of folks; there are several folks, Yann LeCun and others, who have tried to push the boundaries there. But these are still early days; there's a lot more open source to build out there.



The ETL for LLMs: you'll see some of that today. How do you take a document and break it down? Labeling as a service: we're beginning to see a need for building labeling teams and labeling



as a science process. Evaluations: how do you start using evals more effectively? I think you'll probably see public ELO scores, ELO rankings for evals of both public and private models, sometimes using GPT-4 as a curator of different models.
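The ELO-style ranking mentioned here works like chess ratings: each pairwise comparison between two models' answers (judged by a human or by GPT-4 as curator) nudges their scores toward the observed outcome. A minimal sketch, where the starting ratings and K-factor are illustrative assumptions, not any particular product's implementation:

```python
def expected(r_a, r_b):
    """Expected win probability of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a, r_b, score_a, k=32.0):
    """Return new ratings after a comparison; score_a is 1 win, 0 loss, 0.5 tie."""
    e_a = expected(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * ((1 - score_a) - (1 - e_a))

# Two hypothetical models start even; the larger one wins two judged comparisons.
ratings = {"model-small": 1000.0, "model-large": 1000.0}
for _ in range(2):
    ratings["model-large"], ratings["model-small"] = update(
        ratings["model-large"], ratings["model-small"], 1.0)
```

Because updates are zero-sum, the leaderboard stays comparable as new models join: each only needs head-to-head judgments, not a fixed benchmark.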



Eval Studio is hot off the press; the team is building RAGAS into it: how do you evaluate RAG across different dimensions? The H2O team has started to build



real first-level expertise by trying to win contests. You've probably seen the Kaggle science exam; the H2O team won the Kaggle science exam. What we're doing is learning from what they did to win that contest and productizing it, because a science exam is a metaphor for a contact center exam, right?



Or an exam for your model for passing a bank teller exam or a procurement contract exam. So evaluations become how you build models and how you fine-tune your RAG all the way. Today you'll probably see some sampling of how to fine-tune your own.



A very small model ended up winning this exam using Wikipedia data. You're beginning to see this very, very early. This is our GPTe platform.



Let's walk through how you would go about building a collection and adding a document. Let's take a sample here. One of our largest customers is actually from this region: Commonwealth Bank of Australia, out of Sydney.



They have been using H2O for a very long time, for over five years, and have written up in their annual presentation how they've been building H2O ML-based models. H2O is one of their strong partners.



We are one of their strongest partners, with different apps that they've been building. So what we're going to ask our enterprise platform now is: how is CBA using AI? You can probably see it's beginning to create embeddings.



It's downloaded the document and is building chunks, breaking the document down chunk by chunk, looking at the different pictures in it, and so on and so forth. Again, the speed of this is proportional to the compute you can throw at the problem.
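The chunking step being described can be sketched as a simple character-window splitter. The sizes and the overlap strategy here are illustrative assumptions; production pipelines typically split on sentence or layout boundaries instead:

```python
def chunk_text(text, chunk_size=500, overlap=100):
    """Split text into overlapping character windows.

    The overlap keeps context that would otherwise be cut
    at a chunk border, so a fact spanning two windows is
    still fully contained in at least one chunk.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + chunk_size])
    return chunks
```

Each chunk is then embedded and indexed, which is why ingestion speed scales with the compute you can throw at it: the chunks are independent and embed in parallel.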



Let's see if I have a collection ready already... [Demo pause] The open-source platform we have, some of you have seen. This is essentially a platform.



This is running a benchmark against all the public models. You can select a whole set of these models and run it very simply. Again, the first thing most customers want to know is: how does it compare with GPT?



Can I reduce the cost of my exposure to GPT? Is the small model better than a large model? This interface is public open source. You can fork it, immediately take it to your office and quickly showcase it.



One of the use cases we were going to look at: it is able to take videos and audio; this is the Whisper integration. You can essentially pipe a regular YouTube video into the platform.



So you can try that. Let me see, okay, we have a collection that's done. So let's get to business here: "List 10 ways H2O AI is helping CBA." Again, the question is pulling up embeddings. Embeddings are all you need.



We talked about that earlier: picking up the embeddings, then capturing the response with this context, and from the context creating the answers. So the LLM is using the context to produce the answer for us.
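The retrieval flow described here, embed the question, pull the closest chunks, hand them to the LLM as context, can be sketched with a toy bag-of-words "embedding". The word-count vectors and the sample chunks are stand-in assumptions; real systems use a neural encoder and a vector database:

```python
import re
from collections import Counter
from math import sqrt

def embed(text):
    """Toy 'embedding': lowercase word counts (real systems use a neural encoder)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, chunks, k=2):
    """Rank chunks by similarity to the question; the top-k become the context."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Hypothetical chunks from an ingested document
chunks = [
    "CBA uses machine learning models for fraud detection.",
    "The cafeteria menu changes every Tuesday.",
    "H2O partners with CBA on AI for customer protection.",
]
context = retrieve("How is CBA using AI?", chunks, k=2)
# The retrieved context is then placed into the LLM prompt to ground the answer,
# which is also what makes source highlighting possible: each answer traces
# back to the chunks that were retrieved.
```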



And one of the things that users like is how it's able to show where the answer is coming from, highlighting the actual source as a reference. Reference accuracy becomes just as important.



Again, you have a way to change your system prompt, change different LLMs. The same concept of plurality: whether you want a small model, a large model, GPT-4, or open-source models. We were just looking at the open-source model.



You can change the system prompt. I haven't tried it, but it would be interesting to speak in Singlish. "Maybe you are a Singlish bot." Let's update it. Let's ask a question on what the impact is, the impact on the bank.



We haven't tried it, but it's always worth trying something new on the fly. I guess there is no big impact from Singlish itself. Probably part of the course in the second half of the day is to actually fine-tune the model to speak Singlish.



I don't want to reveal too much. But again, note the ability to read tables and documents. Obviously, there are lots of open-source models at work behind the scenes. So, I'm giving you a scoop of the platform.



Going from here, the next step is how you start building app stores. So, the Gen AI App Store. A large part of our apps, as you can see, are co-created with customers, whether it's legal GPT or procurement, or looking at earnings calls and building summaries from earnings calls.



On a daily basis a lot of change happens on the earnings calls; then there are customer-specific models. Let me launch another powerful use of Gen AI, which we see in building apps itself. What we made public a few weeks ago at our San Francisco conference is a public app store.



This is accessible to all of you, and you're able to use a freemium edition of GPTe, but also build and upload your own apps to it. Just using your Gmail account, you should be able to access our Gen AI App Store.



Here's one of the apps, obviously powered by a lot of earnings calls behind it. You can see how Verizon is doing in terms of its business. You can build custom apps on top of your regular RAG.



When we showed RAG, one of the things I failed to show is that it's actually an API. You're looking at the front end of a really powerful API. So getting an API key and then building apps around it, around your collections, that's how you're going to reach a wide audience.



For every collection you build, you can essentially build an app around it, and that gives you the ability to start building a rich set of apps on your base platform.



This is another powerful use of Gen AI: co-pilots. Many of you are using Code Llama. This one is actually about how you build apps. How do you use AI to do AI? How do you use AI to build Gen AI apps?



Again, we have an incredible audience here. Shivam co-created this app, came up with the earliest idea and then powered through building this one. Wave is a low-code framework that we built that is open source.



So this is generating Wave code for a visual application, and then you can take that application and start building on it, improving the various pieces inside the app. Again, a very, very powerful, simple framework.



There's a rich set of applications for AI that are emergent from just simple ideas of RAG plus LLMs inside. Everywhere you have logic, you could use an LLM to actually create the logic for you. In the past, data was the code.



Now, an LLM can be your code creator. So RAG-powered Data Studios, LLM Studios for fine-tuning: these are all open source, very powerful techniques out there that you can essentially use.



I think there was one other really powerful application we're going to showcase. Let me just see if I can get it. We were actually using YouTube: your Prime Minister had a nice speech, with highlights, I think.



Let's see if I can pull that up. Yeah, I think that's the one. You've probably played with many of these magic tricks, but I think the real power of AI is going to be in its applications, especially Gen AI.



Let's ingest this one. It's a smallish video, so it should be relatively fast. Sorry about that. So there was a nice video that came out recently, with a bunch of takeaways.



I'm sure you're familiar with that. So let's ask what the highlights in the video are. And again, all of this is API-programmable, so you can absolutely



start building a rich set of applications. Especially if you don't have time for daily news, you get a very strong output from this. And this was just a popular video that we uploaded.



Just hit the ingest button and you're off to the races. LLM Data Studio is another; we're going to talk about how to ingest data, how to bring in Q&A pairs, and I think this is the public CNN/Daily Mail dataset.



So you can essentially start bringing in profanity checkers, quality checks, checks on the length of the response. Again, a very powerful toolchain, which handles a lot of the core grunt work in the ETL for LLMs.
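The checks being described (profanity, quality, response length) might look like this in a data-curation pipeline. The blocklist, thresholds, and function names are hypothetical illustrations, not LLM Data Studio's actual API:

```python
# Hypothetical sketch of checks run over generated Q&A pairs before fine-tuning.
BLOCKLIST = {"damn"}  # placeholder profanity list; real lists are much larger

def passes_checks(question, answer, min_len=10, max_len=500):
    """Keep a Q&A pair only if it is clean and reasonably sized."""
    words = set((question + " " + answer).lower().split())
    if words & BLOCKLIST:
        return False                          # profanity check
    if not question.rstrip().endswith("?"):
        return False                          # malformed question
    return min_len <= len(answer) <= max_len  # response-length check

pairs = [
    ("What does the ETL step produce?", "Question-answer pairs for fine-tuning."),
    ("Bad sample", "too short"),
]
clean = [p for p in pairs if passes_checks(*p)]
```

The point is that each filter is cheap and mechanical, so it can run over millions of generated pairs, leaving humans to review only the survivors.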



So Data Studio is another very, very powerful tool on your side. I think we will find a lot of these components of how to build good LLMs; those toolchains are going to become more and more powerful than ever before.



We looked at Eval Studio at length. But again, eval is going to be at the heart of it; almost all of what we are doing is going to need a lot of evaluations. As Agus gave a very strong prompt, evaluation is going to be the heart of what we do.



Continuous evaluation is going to happen. This eval framework is integrated into GitHub Actions, so you can actually connect it into your framework and bring in continuous nightly evaluation.
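A nightly-evaluation hookup like the one described could be wired into GitHub Actions roughly like this. The workflow name, script, threshold flag, and secret below are assumptions for illustration, not the actual integration:

```yaml
# Hypothetical nightly-eval workflow (file: .github/workflows/nightly-evals.yml)
name: nightly-llm-evals
on:
  schedule:
    - cron: "0 2 * * *"   # every night at 02:00 UTC
jobs:
  evaluate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      # Fail the run (and alert the team) if eval scores regress below a bar
      - run: python run_evals.py --suite regression --fail-below 0.85
        env:
          EVAL_API_KEY: ${{ secrets.EVAL_API_KEY }}
```

The design choice is the same as CI for code: models and prompts change nightly, so their quality gets the same automated regression gate as a test suite.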



Then, the cost of AI and LLM ops: inference costs for LLMs are still very high. That's why small models are cheaper, because they can actually fit in smaller GPUs, if they can do the job. If they can't do the job, of course you need to use a large language model.



So inference costs are still very, very high for our customers. Can I use synthetic data? That's another cost, the labeling cost; the time to get good data is still high. We found that synthetic data is actually a reasonable alternative to real data.



And how do you capture feedback? We use vLLM, for example, to run things faster, but 5 milliseconds of latency is sometimes too high. So the cost is not just in dollars and cents; it's also speed.



When generation is abundant, curation becomes very valuable. So we need to curate AI with AI, because the scale of LLMs is so large that you end up having to use AI to curate AI. We called for AI to do AI years ago; we built Driverless AI and our earlier products as a way to do AI.



I think we are beginning to see a widespread version of it. What is your real moat? Customer is the only moat. In fact, technology is changing so rapidly in Gen AI that proximity to your customer is going to define how long your ideas will live.



Co-creation with the customer, building products with our customers: understanding contract gap analysis for procurement customers, contact centers, really co-creating with the customer on their problems.



We need to go to the feature story; we talked about story earlier. So how do you go from feature store to feature story? Then, building teams that are cross-disciplinary: the best algorithms person sitting next to the best data engineer, but with very deep domain knowledge.



That's how you can actually build powerful Gen AI applications. The design for AI is well-formed: it is conversations. So the design has actually been quite well captured; what remains is really building valuable applications.



When people say teamwork, they forget that at every edge of it, there is conflict. And conflict is a feature, not a bug. There is conflict; there are two different ideas, probably both of very good quality.



And if you can bring them together, you actually make great magic. I think it's about bringing the teams together, especially in a post-COVID world that is more broken up. Teamwork makes the dream work.



Our Kaggle Grandmasters are global. We actually have one of the strongest contingents in Asia. The top-ranking, top-10-ranked folks are out of India. Shivam is here; Chunming, who is local, is here today as well.



And we have folks in Japan and in Australia as well. But I think Asia still has a long way to go to reach the world's top ranks. Of course, we have had the number one from China.



But for the rest of Southeast Asia, there's a lot more investment to be done. We talked about a lot of these products earlier, and the science exam team ended up using most of this toolchain to win.



I asked Agus about top RAG and Gen AI use cases. Customer experience is the number one we see across all our customers. Coca-Cola is using H2O GPT for customer experience: how do you manage all their customers?



Document processing; generative marketing, marketing content; code content, Code Llama. You can use Code Llama's power. Procurement RFPs: how do you manage contracts, how do you manage inventory; portfolio recommendations.



We see a lot of investment banks, for example, trying to use earnings calls and start summarizing them, as we were looking at in the demo earlier. Building custom GPTs instead of fine-tuning your model. Meeting summaries: many of you are probably already beginning to see some team summaries, but how do you custom-build a meeting summary for a different purpose, for someone who didn't attend the meeting versus someone who has already taken deep notes, with the action items?



Again, given that our API is very programmable, much of this has actually been built in a simple Jupyter notebook. Multi-modal, especially as you do OCR, images and speech LLMs: these are all going to be very powerful next steps for us, and we're beginning to put some investment in. On Ask Data, one of our team members is working on auto-generating SQL from natural language, so you can do powerful BI using AI.
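Natural-language-to-SQL, as mentioned, is usually done by prompting an LLM with the table schema; a toy pattern-based stand-in shows the shape of the task. The patterns and table names below are illustrative assumptions, not the Ask Data implementation:

```python
import re

# Toy sketch: map natural-language questions to SQL via templates.
# A real system would instead prompt an LLM with the database schema.
PATTERNS = [
    (re.compile(r"how many (\w+)", re.I), "SELECT COUNT(*) FROM {0};"),
    (re.compile(r"(?:show|list) all (\w+)", re.I), "SELECT * FROM {0};"),
]

def nl_to_sql(question):
    """Return SQL for a recognized question shape, else None."""
    for pattern, template in PATTERNS:
        m = pattern.search(question)
        if m:
            return template.format(m.group(1).lower())
    return None  # a real pipeline would fall back to an LLM here

print(nl_to_sql("How many customers signed up last week?"))
# → SELECT COUNT(*) FROM customers;
```

The template approach is brittle, which is exactly why LLMs are attractive here: they generalize across phrasings the patterns never anticipated, while the SQL output stays verifiable against the schema.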



All of these come together in a rich app store for us, and how you build really rich apps from base machine learning methods is going to be at the heart of this engine. We think that customer and community love is still the greatest force in nature, and that's been a driving force for us. What happens to data scientists?



That's a question I get often. Data scientists are some of the most incredible brainpower on the planet, and using them to drive strategy is going to be a place where we will see a lot more: the emergence of strategy science.



Every Fortune 500 company will be led by a data scientist, an AI scientist or a machine learning engineer in the next five years. Or, I call it vice versa: there probably won't be a Fortune 500 company, you won't be in the Fortune 500, if your CEO doesn't understand AI deeply.

So the folks around the hall here, around the team here, in the audience and worldwide, the folks who are going to be hands-on with Gen AI and AI, are going to be driving strategy for all of the largest banks, largest telcos, largest companies and conglomerates in the world.



So our vision is that by 2030, teams of 10 or 20 people will build trillion-dollar companies with AI. This is still just the tip of the iceberg. That's why we say AI is actually going to power all the abundance in the world.



A hundred-trillion-dollar GDP ahead is where AI is going to take us. It will assure abundance. So we are tech-positive on this: abundance of time on your hands, so you can focus on the things that matter; abundance of space, both inner space in our minds and outer space, by exploring the planets beyond where we are; and abundance in how we use matter better, how we use energy better.

These are just the early days of AI. So as you think about embarking on this journey, this is just early. As much as H2O has spent a good dozen years coming here, we still think of this as day one, day zero. That abundance is right around the corner, so get your hands wet with H2O, with AI, and the future is ahead. Someone compared AI with climate change; I totally disagree. Compare AI with water: it will be abundant, it will go everywhere on the planet, and you won't notice it because it will be embedded into everything; you will only know when it's missing. And with great power comes great responsibility, so I second the idea of building models more carefully so that you can evaluate them continuously.

As I was mentioning, there were viruses and wars a thousand years ago; in our midst there are superstitions, there is fake news, there are so many problems. The problems of this world are growing very fast, and AI is the superpower that can unleash the ability to do good and make a difference. AI can make a difference, and with you, we all can make a difference. It's really important that we use AI to solve the important problems of our time, whether it's climate change, or endangered species: we support Wild Me, one of our partners that builds sensors for endangered species. Or supply chains: the supply chain disruption of the last five years has been so dramatic that it has actually been defining how we live as humans.



I think these are complex-system problems that can be solved with physics, with math, with AI, with data. And I think that's what we have at our fingertips. Wildfires, bushfires: we ran a whole contest on that, and we welcome the community to come join some of these contests and actually be part of the change, right? You should be proud of that. 



Problem solving. AI for water, AI to democratize health: these are the problems of our time. And education: AI has power in helping with the difficulty of learning, right? A lot of our peers on the planet don't have the same learning opportunities. 



AI can be a powerful, fine-tuned tutor for them. It can explain things at the level a five-year-old or a ten-year-old wants to learn at, right? Question, answer, question. If you can come up with a question-answer model, that's RAG. 



But question, answer, question is going to be the next phase. We can fine-tune models for RAG, and fine-tune models for question, answer, question, so you can trigger the thinking in our students. Education is going to be completely changed with AI, with Gen AI. 



And we are working closely with the community on that. We would obviously love to encourage some of this, because doing good is not necessarily at odds with making profit. 



We've found that every little thing we've given has come back a millionfold to us. Almost all of these projects are our teams, both the Kaggle Grandmasters and others, working with the community, with our customers, and sometimes with cancer tumor boards and nonprofits. 



And I think what we found is that giving is actually the greatest way to make wealth. O2 for India, for example, is one of the projects we did a few years ago, taking oxygen concentrators to about 200 small hospitals by beating the supply chain. 



AI to predict hurricanes, obviously very topical for us, and democratizing health. We think that every generation has its revolution to perform, and revolution has love inside it, right? 



When you feel positive and compassionate about the planet, then AI is your co-creator, your co-founder, to bring that change. We're all busy being born and busy dying. So every moment we're here, we've lost one more moment, right? 



Some of our cells survive and some of them die every minute. So we are busy being born and busy dying, and I think it's important to understand that. When I created this slide, it took me ten days to a month to even stop building it, because "silent" is actually an anagram of "listen". 



And I think there's a lot of noise in the AI space. The best ideas don't have to be the loudest ones. The silent ones are bringing change globally, everywhere. And some of you are the soft-spoken leaders who are yet to be discovered. 



And I think investing in yourself, and having the courage to be not just individual but also universal in your thoughts, is going to be super important. And there is a continuous path from the individual to the universal. 



You can absolutely do anything you set your mind to. The mind is one of the greatest learning machines the world has. Whenever in doubt, triple down on yourself: not double down, triple down, because courage is a very scarce resource. 



Time is non-renewable, but hope and courage are very, very renewable. And to bring change, this is the single biggest resource that you need. This world is yours, right? I think that's the statement I would want to leave you with. 



Whose world is this? This world is yours. And the last thing I was going to leave you with is, of course, that Singapore is a nation-state, a nation city, and every nation needs to be an AI nation. 



And it's going to be built on the shoulders and thinking of the folks here in this room, folks who are beginning to join that AI revolution. I will second what Agus said: humility is endless. 



And that is what T.S. Eliot said. And, "you can be in my dream if I can be in yours." I'm so excited that the world actually took us up on it and built an incredibly rich AI movement. 



And I have a corollary to that, which is: you can be in my selfie if I can be in yours. I don't think I got all of you yet, so I'm going to take one here. Thank you. Most people who know me know that selfies are actually part of our movement 



we built. But last but not least: extreme gratitude. Gratitude is also endless; it's one of those things that has no bottom to it. With that, we wish you well. Thank you. 


