H2O Gen AI World Conference, San Francisco

Building LLM Solutions with Open Source & Closed Source Solutions in a Coherent Manner


Speaker Bio

Sandeep Singh is a visionary leader in applied AI and computer vision, driving innovation in Silicon Valley's mapping industry. As a Head of Applied AI/Computer Vision, he specializes in developing state-of-the-art technology for capturing and analyzing satellite imagery, visual, and location data. With expertise in computer vision algorithms, machine learning, and applied ethics, Sandeep pioneers solutions that optimize mapping and navigation software, removing inefficiencies in logistics and mapping solutions.

His hands-on technical leadership spans diverse AI domains, from computer vision to NLP/NLU and anomaly detection. Sandeep's detailed knowledge, innovative mindset, and proven ability to lead large-scale AI efforts have fueled hyper-growth and a culture of cutting-edge innovation. Affiliated with organizations like the Forbes Technology Council, the University of San Francisco, and Georgia Tech's ML@GT, Sandeep mentors engineers globally, shaping the future of applied AI and computer vision.

Read the Full Transcript



Imagine a place where technological advancement is happening faster than products can be built. We can call this phenomenon cultural lag, where the rate of innovation in technology itself is much, much higher than in the utilities consuming those technologies.



And that future is basically now, where we have an enormous amount of technical innovation happening. Hello everyone, my name is Sandeep Singh. As you have seen from my very generous introduction, I'm going to talk about how you can use these latest and greatest technologies to build something meaningful for you without being a heck of a ninja coder.



So, as this talk is largely about building LLM products using open source technologies, my talk will be a little different. It's basically about making you aware that you can pick and choose, and that you can decide for yourself how to get things done faster and in a more effective manner.



Because this domain is changing much faster than we can imagine. Two months feel like a year in this domain because of the excessive amount of research and innovation happening, and I'm going to explain how you can make meaningful decisions for yourself and build products faster.



When I say faster, it means a matter of a few days rather than weeks, and we are going to talk about it. All right, here is the approximate agenda: we'll talk about the ecosystem of open source LLM tools and the significance of blending the open and closed phenomena in this domain.



And we'll talk about the benefits of one versus the other, which integration strategies you should choose, what it means to integrate these technologies into your product, a few case studies from the company I'm currently engaged with, and a few facts that will probably surprise you about how easy it is.



A little bit about me: of course, we already talked about it, so I'll skip over it. Okay, the purpose of this slide is basically to show you both sides of the landscape, open and closed source LLM solutions.

There are a gazillion options available. When I say gazillions, don't take it literally. It's more to give you the idea that there are more options available than you can probably imagine. These are a few of them.



If you want to, take a picture of it. Basically, it's just to show you that there are a lot of options in open source as well as in closed source, vendor-specific solutions. And we're going to talk about how to leverage each side of this landscape to build your products effectively.



Because nowadays I've seen that people have a, perhaps valid, affinity for either one side or the other. But in my experience, with what we have done, we built an extremely effective support bot in a matter of hours.



When I say hours, it's less than 8 hours, and it was production grade. And we used the things I'm going to talk about. So it will probably give you the idea that you don't have to be much of a coder or a researcher to build these solutions for your company.



Let's talk about it. This slide is mainly for the sake of completeness. Basically, it covers the main benefits of open source LLMs: why you would choose open source over closed source. And of course, one of the main reasons is that you can deploy the whole thing on your own premises.



So your data, or your customers' data, never leaves your own network. A lot of the time this is driven by compliance. For example, healthcare and finance have strict guidelines: your data cannot flow out of your network.



In those cases, open source is a good candidate. And there's cost saving: you don't have to spend too much on API calls to vendors like OpenAI or Anthropic or similar companies. And there's no external dependency.



What does that mean? Suppose you start using, say, Anthropic's Claude, and after a few months they come up with something that doesn't suit you. You wouldn't be able to switch very easily if it's highly embedded in your ecosystem.



But if it's open source, you can decide for yourself whether you want to replace it with another open source model. You have higher decision-making power. And other examples are things like code transparency.



You know exactly what code is running for your business logic, and you have higher control over it in the open source case. And there's language model customization: one of the biggest challenges in this domain is domain adaptation.



What that means is that these generalized models are very good at saying generalized things about a particular domain. For example, if you want to talk about cancer, they will understand the generic terms.



But suppose you are a doctor or physician doing some fundamental research to extend the field; if you try to talk to the LLM about that, it won't be very effective. So what you want, basically, is for it to understand the deeper meaning of medical terminology.



And for that, you probably have to get more data and overfit so that the model understands your medical domain much better than the generally available APIs do. That's one need where open source models do much better than closed source, off-the-shelf, vendor-specific models.



And of course, open source has communities like this one. Don't underestimate them or take them for granted: mountains are really being moved by open source communities like this. And thanks to H2O, and to Arno, Jonathan, and Sri, for building communities like this and conducting conferences like these, because you can get very complex things done in a matter of weeks that would otherwise take significantly more time.



And of course, it fosters innovation, because when you are part of an open source community, you get to talk to a lot of folks who are doing wonderful stuff. And of course, it's a boon for startups. We are going to talk about specific use cases for why it matters a lot for startups.



Open source LLMs are much more useful than closed source, especially from the cost perspective. On the other hand, we have closed source LLMs, where we can do similar tasks, but with different types of tools.

They have different benefits at hand, and one of them is support and reliability. For example, if you are building something really mission-critical, and you don't have the opportunity to put your head down and do fundamental research when you hit an issue, then in cases like that, closed source LLMs are probably a much better option.



Because the responsibility for doing fundamental research and extending the field is not on you. It's on the research house behind the closed source LLM, like OpenAI, Google's PaLM, or Anthropic's Claude, or anything like that.



And nowadays, I'm happy to be able to say that all closed source APIs are also providing some sort of business customization, fine-tuning, and instruction-tuning support. That is a great thing, which was not there, say, four months ago.



It was not there before; it's freshly provided functionality, and it's a great selling point. Of course, security, data privacy, and performance are much better, because they scale elastically as per your need.



You don't have to worry about LLMOps, as a previous speaker was saying. And nowadays they let you integrate with your proprietary systems. It's not as effective as fine-tuning, but it's still helpful.



And the next point is guardrails. I will take a pause here, because this is probably the most important reason why you should choose closed source LLMs. Suppose you're deploying these chatbots in some sensitive domain.



For example, the kids' education domain, or healthcare. All of these LLMs are stochastic machines, meaning they are not fully deterministic. Depending on their temperature and other sampling parameters, they will do a different thing if asked the same question again and again.
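That stochasticity can be made concrete with a toy sketch of temperature-scaled sampling. This is an illustrative model of how decoding works in general, not any vendor's actual code; the logits are made up:

```python
import math
import random

def sample_token(logits, temperature, rng=None):
    """Sample a token index from logits; lower temperature -> less randomness."""
    rng = rng or random.Random(0)
    if temperature <= 1e-6:
        # Greedy decoding: temperature ~ 0 always picks the highest-logit token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                # subtract max for stability
    weights = [math.exp(s - m) for s in scaled]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

logits = [2.0, 1.0, 0.5]
print(sample_token(logits, 0.0))   # temperature 0: deterministic, always index 0
```

At high temperature the same call keeps returning different indices, which is exactly the "same question, different answer" behavior guardrails have to contain.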



So that's a good selling point for closed source LLMs. And of course, they do continuous development, because they pump in a lot of research money. And they are responsible for making it commercially viable for you if you use the closed source solutions.



We have talked about both sides, but let's see what you, as a user of these solutions, have at hand. You have a number of options available and the possibility of exploiting more than one solution.



It means you don't have to be locked in to just closed source or just open source. You can mix and match, which is very much what we have done in our solution, and what many companies are doing to build more robust applications.



And again, you can compare each LLM, closed source or open source, for your own purpose, on your own task, not just on the benchmarks. Because benchmarks are benchmarks: they are research-oriented entities, and they have a different way of measuring things than what you might want.



And they have different baselines for a particular domain adaptation. The amount of fine-tuning needed is not the same for different models, so you can see which one needs less fine-tuning and choose that, or decide for yourself.



And there's the possibility of using a specific LLM solution for a specific task in a pipeline. For example, if you have 10 tasks, you might decide that this task will be done by a closed source LLM and other tasks by an open source LLM.
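A per-task pipeline like that can be sketched in a few lines. Everything here is hypothetical: the two backend functions are stubs standing in for a vendor API call and a self-hosted model, and the task names are invented for illustration:

```python
def closed_source_llm(prompt: str) -> str:
    return f"[closed] {prompt}"        # stub for a paid vendor endpoint

def open_source_llm(prompt: str) -> str:
    return f"[open] {prompt}"          # stub for a self-hosted model

# Each pipeline step names a task and the backend chosen for it.
PIPELINE = [
    ("interpret_request", closed_source_llm),  # mission-critical step
    ("draft_answer", open_source_llm),         # tolerant step
    ("polish_wording", open_source_llm),       # tolerant step
]

def run_pipeline(user_input: str) -> str:
    text = user_input
    for task, backend in PIPELINE:
        text = backend(f"{task}: {text}")
    return text

print(run_pipeline("where do I park?"))
```

The point of the structure is that swapping a backend for one task is a one-line change, which is what keeps you from being locked in to either side.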



And it will depend on what sort of ensemble you want to create. A lot of options are available, and the ability to pick and choose is there. So you are really at a good time, where you can build a great LLM app.



I just want to do a quick show of hands: how many of you have built at least one prompt-engineered app or any type of LLM app? OK, we have a few. Thanks. And the rest of you are at just the right conference.



Because this is the place where you can build mind-boggling, innovative apps without writing a single line of code. And the app will be fully browser compatible. It will have all the bells and whistles, and you can do many amazing things without writing a single line of code, just by using all the defaults.



And I'm quickly going to rush through this. The first thing you have to do: break your LLM initiative into LLM tasks. I'm just pausing for a second so you can marinate in that thought. It means you just decide; for example, you want to build an e-commerce site, or you want to generate SQL from natural-language statements.



Just break your initiative into LLM tasks. And then categorize all your tasks by which is really mission-critical versus which has tolerance for stochasticity and a lack of deterministic behavior.



And that will guide you in choosing which LLM to use for which task and how to build your pipeline. Of course, the less tolerant tasks are candidates for proprietary, off-the-shelf solutions, where you need higher quality, more deterministic behavior, and more control over the temperature parameter and so on.
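That decision rule can be written down directly. This is a toy encoding of the heuristic just described, with made-up task names, not a prescription:

```python
def choose_backend(mission_critical: bool, tolerates_stochasticity: bool) -> str:
    """Low tolerance for stochastic output -> proprietary off-the-shelf model;
    high tolerance -> open source candidate."""
    if mission_critical or not tolerates_stochasticity:
        return "closed-source"
    return "open-source"

tasks = [
    ("generate SQL from natural language", True, False),
    ("paraphrase a support question", False, True),
    ("summarize a dashboard", False, True),
]
for name, critical, tolerant in tasks:
    print(f"{name} -> {choose_backend(critical, tolerant)}")
```

Running it over your own task list is the categorization exercise from the slide: the output is a first-draft assignment you then refine by actually comparing models on each task.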



And the more tolerant tasks are candidates for open source with little or no fine-tuning, where you can get by even if the quality is not bang on. So, we are basically a mapping company.



We are much more accurate than Google Maps or Apple Maps, no offense, for apartments. It's a full-blown thing that tells you exactly where to park, then after parking where to walk, where to walk inside the apartment complex, which entrance to use, and so on.



And Google Maps and other maps don't have that. How have we built it? We have integrated a lot of innovative computer vision, machine learning, and optimization algorithms. But an LLM is at the center of the consumption engine for the users.



A lot of the time, what users ask gets interpreted by the LLM engine, and we then switch it to a computer vision task, a search task, or some sort of linear optimization task. And we have used a combination of various closed source as well as prompt-engineered third-party LLM solutions.



And that is where the mind has to see all these LLM solutions and the set of options. It's like being in a shopping mall, trying to find what will work for you. And don't go in with the mindset that you have to have either ChatGPT or LLaMA or Vicuna.



And that's a bias I see all the time: people are either too enthusiastic about open source, or they have a more corporate feel, with an affinity for subscription-based models. So that's where we need to innovate more.



And here's what we do. Those who come from a Kaggle background will know there is something called an ensemble, where a lot of weak learners work together and eventually form a stronger learner.



And that's what I have done here. What I do is take the responses from a lot of LLMs and consume them in open source LLMs, for which I've not done much fine-tuning for my domain.



And then I take those responses and send them to the closed source LLMs, where millions of dollars have been spent by OpenAI to achieve higher quality, to reprocess the responses from the open source solutions.



And then I use those results in my app. So what happens? Maybe I make 30 calls to my self-hosted open source LLMs, but I make just one call to the highly costly one. As a matter of fact, it's not that pricey.
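The shape of that ensemble can be sketched as follows. Both LLM functions are stubs, not real APIs; the point is purely the call accounting, many cheap candidate-generating calls and a single expensive consolidation call:

```python
def open_source_llm(prompt: str, seed: int) -> str:
    # Stand-in for a self-hosted model; the seed mimics run-to-run variation.
    return f"candidate {seed}: {prompt}"

def closed_source_llm(prompt: str) -> str:
    # Stand-in for the one higher-quality (and costlier) vendor call.
    return "consolidated:\n" + prompt

def ensemble_answer(question: str, n_candidates: int = 30) -> str:
    # Many cheap calls generate candidate responses ...
    candidates = [open_source_llm(question, s) for s in range(n_candidates)]
    # ... and exactly one costly call reprocesses them all.
    return closed_source_llm("\n".join(candidates))

print(ensemble_answer("How do I mark an address undeliverable?", n_candidates=3))
```

The design mirrors a Kaggle-style ensemble: the open source models are the weak learners, and the single closed source pass plays the role of the meta-learner that produces the stronger combined answer.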



Yesterday they reduced the price, which is good news, of course. So what I'm trying to convey is that you really have to mix and match; don't just put your head down on one option. And we use it for an automated support bot that answers a lot of the questions our customers have, and it generates SQL, insights from dashboards, automated email consumption and response, order creation, and so on. And all of this is done in a pretty lightweight manner, where we have not been blocked on doing a lot of research.



We basically build solutions one after another by using a combination of open source and closed source solutions. All right. Okay, data enrichment is one of the use cases we have, and this is very interesting.



I'll give you an example. We have apps used by a lot of delivery drivers, and they ask a lot of questions. I'll come to that, but basically, unless you're a big corporate entity, your training data is limited in quantity, quality, and variety.



And that's where you can use LLMs to enhance all three dimensions. For example, delivery drivers using our app ask, "How do I mark an address not deliverable in the app?" And they need some steps: go there, touch this, enter here, save, and so on; some sort of instructional question.



But the same question can be asked in more than one way. So what I do is send it to, say, ChatGPT: "ask the above question in 20 different ways." And you get questions like these, all of them asking exactly the same thing.



And what that means is that my one candidate answer, the steps to mark an address undeliverable, is the answer to all of these questions. So you have 20 times more sample data just by sending one prompt to ChatGPT.
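The bookkeeping of that enrichment step looks like this. `paraphrase_llm` is a stub for the real "ask the above question in 20 different ways" prompt; here it just fabricates numbered variants so the fan-out is visible:

```python
def paraphrase_llm(question: str, n: int) -> list:
    # Stand-in for one ChatGPT call returning n paraphrases of the question.
    return [f"(variant {i}) {question}" for i in range(n)]

def enrich(question: str, answer: str, n: int = 20) -> list:
    # Every paraphrase shares the one candidate answer, so one (q, a) pair
    # becomes n training samples.
    return [(q, answer) for q in paraphrase_llm(question, n)]

samples = enrich(
    "How do I mark an address not deliverable in the app?",
    "Open the stop, tap edit, choose 'not deliverable', and save.",
)
print(len(samples))   # 20 samples from one original pair
```

One prompt, twenty samples: that is the quantity-and-variety multiplier being described.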



It means you can increase the quantity and variety of your data just by making one call to ChatGPT. And that's where data enrichment comes into play. Another one: for example, suppose you have a superset of tasks, like tasks 1 through 6 here, but for a particular request you may only need to do two or three of them.



But some human intuition is involved there. So basically, you can make one call to ChatGPT and ask, out of these tasks, which ones are needed for this request, and in which order. And it will let you know.



And you can choose those two or three tasks and make the calls to your open source LLM to solve whatever you have at hand. So what I'm trying to say is that the dynamic selection of what to do can be automatically delegated through calls to ChatGPT.
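The planner pattern can be sketched like this. Both LLM calls are stubs: `planner_llm` stands in for the closed source "which tasks, in which order?" call (a real one would parse the model's reply), and `open_source_llm` for the per-task workers; the task names are placeholders:

```python
TASK_SUPERSET = ["task1", "task2", "task3", "task4", "task5", "task6"]

def planner_llm(request: str) -> list:
    # Stand-in for asking ChatGPT which subset of TASK_SUPERSET this
    # request needs, and in what order.
    return ["task2", "task5"] if "address" in request else ["task1"]

def open_source_llm(task: str, text: str) -> str:
    # Stand-in for running one selected task on a self-hosted model.
    return f"{task}({text})"

def handle(request: str) -> str:
    # Keep only tasks we actually know, in the order the planner chose.
    plan = [t for t in planner_llm(request) if t in TASK_SUPERSET]
    text = request
    for task in plan:
        text = open_source_llm(task, text)
    return text

print(handle("update this address"))   # task5(task2(update this address))
```

One planning call up front, then only the chosen subset of tasks runs: that is the dynamic behavior being described.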



And this is what we do: we have highly dynamic behavior where we use these things. And of course, there are other use cases where you can build an ensemble and a pipeline of the tasks at hand. OK, this is probably the most useful slide you're going to see in this presentation.



I'm already out of time, so I'll keep it short. Who here has not tried LLM Studio? OK, so the good thing is a lot of you have tried it. What I'm trying to say is that H2O LLM Studio is a revolution. And how did I realize it? Thanks to Arno and Jonathan and their team, I had a demo in their office. LLM Studio is like Eclipse for Java or PyCharm for Python.



It fosters innovation, and you can create amazing LLM apps without writing a single line of code. And the quality of the solution will probably be good enough for you. For example, the amount of time you have spent listening to this talk is enough to build a simplistic solution to a real-world problem.



In less than 20 minutes, you can probably fine-tune, if you have the data ready, and deploy it straight to Hugging Face. And among the benefits you see on screen, it has support for low-rank adaptation.
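A quick back-of-the-envelope calculation shows why low-rank adaptation (LoRA) makes that 20-minute fine-tune plausible: instead of updating a full d x k weight matrix, you train two small matrices A (d x r) and B (r x k) with rank r much smaller than d and k. The dimensions below are illustrative, not tied to any particular model:

```python
def full_update_params(d: int, k: int) -> int:
    # Trainable weights if you updated the whole d x k matrix.
    return d * k

def lora_params(d: int, k: int, r: int) -> int:
    # Trainable weights for the low-rank pair A (d x r) and B (r x k).
    return r * (d + k)

d, k, r = 4096, 4096, 8
print(full_update_params(d, k))                          # 16777216
print(lora_params(d, k, r))                              # 65536
print(full_update_params(d, k) // lora_params(d, k, r))  # 256x fewer weights
```

Training 256 times fewer weights per adapted matrix is what turns fine-tuning from a research project into something a UI can drive with sensible defaults.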



And it has a very intuitive UI. And the best part, thanks to the H2O team, is that most of the hyperparameter defaults work significantly well. You don't have to touch them; just scroll past them. And sometimes, if you don't know a parameter, just go and Google it.



And that will probably tell you whether to increase or decrease it and how it will affect your training. So one piece of advice I have for all of you: go and try LLM Studio on something like a toy example.



And it will change your perspective. Because those of us who have not created such apps think these things have to be done by people with a very research-oriented or developer mindset. And this whole phenomenon is like the Gutenberg printing press.



When books first started being printed, people thought only the very educated, elite, and religious people needed to learn to read and write. But we know everybody should learn to read and write. And that is what's happening with LLM apps.



Everybody in the room should have the mindset that they can create something useful for their purpose. Because in the future, the way we interact with information is going to change fundamentally.



In fact, it's changing at this very moment, as we speak. So go give it a try. Here is a sneak peek; this is how it looks. It's not a scary, AWS-like, cumbersome UI. It's very intuitive, and we have an amazing team supporting it.



And as I was saying, you could have created one fancy LLM application in the same amount of time; I'm talking less than 20 minutes. OK, and that's it. I hope I passed on some enthusiasm. So go and give it a try.



Thank you.