
ON DEMAND

Natural Language Processing (NLP) with H2O Hydrogen Torch

H2O Hydrogen Torch is democratizing AI, allowing all data scientists, from the novice to the expert, to build state-of-the-art deep learning models without code. It unlocks value from unstructured data to help teams understand it at scale and provides a powerful engine to solve complex problems in natural language processing (NLP), computer vision (CV) and audio analysis areas. In this webinar we will focus on NLP use cases. We'll provide an overview of H2O Hydrogen Torch, cover NLP use cases, and demonstrate how to rapidly build an NLP model.

3 Main Learning Points

  • Understand the capabilities of H2O Hydrogen Torch as a no-code tool for deep learning, integrated into H2O AI Cloud

  • Learn how to train NLP models at different user experience levels

  • Assess model accuracy and tune model hyperparameters

Read Transcript

 

So today I'm going to talk about one of the new H2O products called H2O Hydrogen Torch. This is our new AI engine, focused on deep learning tasks. Today's session will focus on NLP, but as I'll mention later, the tool isn't limited to NLP tasks; it has much more functionality than that.

But first, let me start with a few words about what H2O Hydrogen Torch is. What are its goals? How does it work? And what can you expect from it? First of all, it is a tool for applying deep learning to your data. It will help you unlock the value of the unstructured data in your company or your problem. We're focusing on unstructured data because it is the best field of application for deep learning. With H2O Hydrogen Torch, you will be able to build state-of-the-art deep learning models and apply deep learning to innovate.

I want to emphasize that this is a tool that lets you use your own data to apply deep learning to your problems, which is an improvement over using pre-trained models or pre-trained, exposed APIs, because you can tailor the model to your particular problem at hand. So even if there's no API that can solve the problem you're currently struggling with, with H2O Hydrogen Torch you will still be able to solve it using state-of-the-art deep learning techniques, transfer learning, and many more techniques we'll talk about later today.

 

And last but not least, with H2O Hydrogen Torch you don't have to share your data with any external company. This is a tool you use in your own environment with your own data. You own the data you have, you own the model you get, and the whole process stays secured within your own environment, provided your IT security requirements are met.

 

Why did we develop H2O Hydrogen Torch in the first place? I know we touched on it a little a week ago during our first session, but I want to go a bit deeper here. As you well know, most of the data generated today is unstructured: texts, videos, images, audio files. A lot of companies, small and big, have very large datasets of unstructured data which are quite difficult to work with. And there is definitely a gap between the need to analyze this data and the talent required to do so. According to external research, there is only limited availability of people with the skills to tackle AI problems with deep learning and solve business problems using unstructured data. So H2O Hydrogen Torch is our way to democratize deep learning, to bring the power of deep learning to a much broader audience. We have quite a big team of senior data scientists and Kaggle Grandmasters behind the scenes of Hydrogen Torch. These are people with a lot of applied experience in deep learning and its different applications. The goal of this tool is to nicely package this experience: to cut away the difficulties so that novice users or junior data scientists can start right away with Hydrogen Torch and deliver deep learning models, without needing to gain all the practical and theoretical experience, learn all the details, and, more importantly, write a lot of PyTorch or TensorFlow code to get things done. So the goal of this tool is to democratize deep learning and bring it to companies that lack the qualified resources to start deploying deep learning projects on their own.

Deep learning is quite a large field today. As I mentioned, we're going to be focusing on NLP, but I want to emphasize that, as of today, Hydrogen Torch supports much more than just textual data and NLP tasks. We already have support for images and video, so computer vision tasks, and audio tasks as well. Those who have used Driverless AI or scikit-learn tools to create machine learning models know that these are typically classification and regression tasks. Depending on the type of prediction we want to make, whether we want to predict a number or a category, we talk in terms of either classification or regression, just two types of tasks. But in the case of deep learning and unstructured data, it gets more complicated.

 

So depending on what we have as the input and what we have as the output, there are quite a lot of different tasks out there, different problem types as they're called. Classification and regression are usually just the beginning: the more complicated it gets, the more specific a problem type you might need to solve your own problem, whether it is text based or computer vision based. And we cover quite a large variety of these problem types. Today, we'll go through the problem types we have for text.

 

So we'll start with the regression and classification use cases, but we'll also talk about token classification, span prediction, sequence to sequence, and metric learning. And at the end, I'm going to mention that we also have quite a variety of computer vision use cases available for you in Hydrogen Torch.

 

So what makes Hydrogen Torch special, and what are the key differences from other tools out there? First of all, it's a no-code framework. As I mentioned at the beginning, we designed this tool so that junior data scientists can use it right away. For those who don't have much experience in deep learning, it can be a way to deliver deep learning models quite quickly and, which I think is equally important, to learn along the way while doing so. So even though you can start with a quick training of a deep learning model for your task, you can still learn by doing, following our documentation, tutorials, and guidelines: learn more about deep learning techniques, learn more about NLP, and get better in applied deep learning overall using H2O Hydrogen Torch. That will allow you to tune your model better to your needs, be that accuracy or efficiency, and get a model that better fits your business use case.

We cover quite a lot of problem types, as I mentioned, but I want to emphasize that it's not an exhaustive list; it's going to grow quarter over quarter. In the last release we added support for audio classification and regression, and we're not going to stop there. The best practices of Kaggle Grandmasters stand behind the tool, not only in the effort to simplify it and make it available for all levels of data science expertise, but also behind all the functionality that is currently there and that will appear, by means of competing on Kaggle, following the research, and trying it out in a competitive environment. This way we make sure that the best applied techniques out there are implemented in Hydrogen Torch, and every quarter, with every new release, we try to add more new and more efficient things and improve the existing ones. Model tuning: you will be able to tune the models you fit there. So not only can you get a quick win by running the default model and getting very good performance, you can also maximize performance with the tuning routines we have there, which I will show you a little later today. And if getting the maximum accuracy is the goal for you, you will be able to achieve it.

 

Finally, the deployment: we took care of that too. All the models you develop in Hydrogen Torch can be deployed either within the H2O AI platform, directly to our MLOps tool, or packaged and transferred to the deployment environment of your choice. For junior data scientists, we provide the no-code tool, so you can start right away, gain some experience, and master a bit of deep learning theory and practice using the tool. Even if you're not a data scientist at all, you can step into a junior data scientist's shoes, learn a little, and try to fit a deep learning model with it. But even if you're an experienced deep learning engineer, you will find the whole variety of techniques we've implemented and exposed through the UI.

 

So you will be able to get the best performance out of the model using Hydrogen Torch. Let's now switch from the slides to the live demo. I'd like to go through a few examples of how Hydrogen Torch looks and works, and what models you can build with it.

 

How would you work with the models? How would you analyze them, and what other functionality do we provide in H2O Hydrogen Torch? Here I'm running an instance in the cloud, H2O Hydrogen Torch version 1.1. We're using H2O Wave as the technology to build the UI, so those who are using other H2O tools might be very familiar with that by now, and it should be easy for you to navigate. The workflow of H2O Hydrogen Torch is very straightforward: as with Driverless AI, you import the dataset, create an experiment, inspect the results, pick the model you would like to use in production, and deploy it. We also ship publicly available documentation for Hydrogen Torch, which I'm going to mention a few times today because it is very detailed, with videos reviewing the product and tutorials for beginners explaining all the details: how datasets are uploaded, how models are trained, how to interpret the parameters and what they mean, how to work with the tool, how to interpret the results, and how to export and deploy the models. So everything I'm going to talk about, and much more, is described in the documentation in a very user-friendly format.

Now let's jump into examples. I'm showing you an instance with several demo datasets uploaded into Hydrogen Torch, and several demo experiments, which we have pre-arranged for you to explore: what the experiments look like, what the outputs look like, and what the expected result of running an experiment is. So you can browse through them to see the examples before you start working with the tool. But before I go through a few of the examples we have, let's just jump in and start an experiment. Here we have an example dataset. It's a public dataset with a set of short texts, which are queries, basically people asking questions, and a rating manually assigned to each query to estimate how well it is formed. Why is this a task? Because frequently, when you're working on NLP tasks related to query management, you want to figure out which queries are properly shaped, in terms of grammar and in terms of stating a question properly. The good ones are the ones you would like to work with, and the bad ones are probably not something you want to focus on much.

 

So maybe building a model that recognizes how well shaped a question is might actually help you clean the data, or maybe improve the use case you're working on. But for now, in this example, we're just going to use it as a demo where we build a deep learning model that tries to predict how well the question is shaped, based on this dataset. The input is going to be the text column, and the output is going to be this rating, which is defined in the range between zero and one. Here we have a couple of example pairs; they're quite straightforward. So let's jump into experiment creation: I click Create Experiment.

 

And the first thing I want to start with is the experience level, at the very top. As we've talked about before, the tool is designed for a range of experience levels from novice to master. So let's assume I'm a novice and I'd like to get as simple an experience as possible training a new deep learning model for my well-formed-queries task. This task is called text regression: based on a text, we're trying to predict a numerical value. I specify the experiment's name; I have a generated name here I'm happy with. I have a training data frame, the one I pre-uploaded to the tool. We need some validation strategy; typically it's just k-fold cross-validation, and I can pick any fold. I pick the default one, I'm fine with that. I don't have a test data frame, I only have a training data frame, so I'm not going to use that functionality to measure the performance of the model on a separate dataset.
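As a quick illustration of the k-fold idea just mentioned, here is a small scikit-learn sketch, for explanation only and not Hydrogen Torch code; the toy texts are made up. The training data is split into k folds, and one fold at a time is held out to measure the model's accuracy on texts it has not seen:

```python
# Illustrative k-fold split (scikit-learn), not Hydrogen Torch internals
from sklearn.model_selection import KFold

texts = ["query one", "query two", "query three", "query four", "query five"]  # toy data
kfold = KFold(n_splits=5, shuffle=True, random_state=42)

for fold, (train_idx, valid_idx) in enumerate(kfold.split(texts)):
    # Each fold trains on 4 queries and validates on the held-out one
    print(f"fold {fold}: train on {list(train_idx)}, validate on {list(valid_idx)}")
```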

 

For now, I want to predict the rating and I want to use the text column, so all the default settings are fine. As a novice, I'm exposed to only three hyperparameters. The first is the metric, which defines how the accuracy of the model is assessed: mean absolute error. I'm happy with that, let's keep the default. Next is the number of epochs, how long we want to train the model for; the default is two. I have a model I ran before for two epochs, so let's increase it to four and see what happens. And then the backbone: NLP tasks usually require a pre-trained model, a model that was trained on a large corpus of text. Some of the basic ones are trained on the whole of Wikipedia, but some are more specific, trained on financial data or medical data, and some of these models are larger or smaller. Here we provide a relatively short list of pre-selected backbones for you to choose from, but this field is actually a free-text field. If you look at the documentation, or the hint over here, you can go to the Hugging Face library, which contains thousands of models, and pick the one you need for your particular use case. It might be language specific, it might be domain specific, it might be a specific size of model, so you might want to go for a very large model or a very small one. You can just type the name into the free-text field over here, and Hydrogen Torch will download the model for you and start training with the backbone of your choice, which can go far beyond just this pre-selected list.
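To give a feel for what a backbone name refers to, here is a hedged sketch using the Hugging Face transformers library; this illustrates pulling a model by its Hub identifier and is not Hydrogen Torch's internal code. The second backbone name is just an example of a language-specific choice:

```python
# Any model on the Hugging Face Hub can be pulled by its identifier;
# Hydrogen Torch handles this download for you when you type the name.
from transformers import AutoTokenizer, AutoModel

backbone = "bert-base-uncased"  # example; the demo keeps the default uncased BERT model
# backbone = "dccuchile/bert-base-spanish-wwm-uncased"  # e.g. a language-specific choice

tokenizer = AutoTokenizer.from_pretrained(backbone)
model = AutoModel.from_pretrained(backbone)

# Tokenize a sample query and run it through the encoder
inputs = tokenizer("Is Katy Perry married?", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden_size)
```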

At the moment I'm happy with the default backbone choice, the uncased BERT model, I'm fine with that. Let's just start the experiment. I'm starting it, it's queued here, and it's going to take around ten minutes to run and finish. While it's running, let me open an experiment I've already run on this dataset for text regression, just to show you what it looks like when one is already done. When I go to a finished experiment, the first thing I see is the chart of how it progressed over time. This one ran for just two epochs, so we have two records of the validation loss and the validation mean absolute error metric, and we have a more detailed graph of how the training process developed over time, with training batch losses and training learning rates.

 

For the experiment we're running now, these charts update live; here we have a static picture after an experiment has finished. We have metrics over here, so even though we're focusing on mean absolute error, we can still check all the typical metrics: root mean squared error, R squared, and so forth. We also have a configuration section over here, which might be interesting, containing all the parameters we've set. We actually set only three, but there are lots of technical ones, and lots of parameters that are hidden from a novice user. Later today I will open the master experience level and show you all the parameters exposed there, which is quite a long list, as you can see. And after the experiment has finished, besides looking at the progress, whether the model converged, and the absolute values of the metric we're aiming at, we also show some prediction insights, which show how the model worked on particular examples.

 

Here we have three sections: random examples, best examples, and worst examples. Let's pick a random one: "Is Katy Perry married?" It was assessed as 0.8, relatively high, and we predicted 0.86, so the error is 0.06, the value of the loss for this sample. We weren't far from the actual label, so it looks reasonable. Looking at a few other examples usually gives you a flavor of how the model works and whether it can capture these examples well enough. We can look at the best examples; they're typically perfect.
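As a small aside, the mean absolute error reported here is just the average of the absolute differences between labels and predictions; a tiny sketch with the 0.8/0.86 pair from this example plus two made-up pairs:

```python
# Mean absolute error on hypothetical label/prediction pairs
import numpy as np

y_true = np.array([0.80, 0.30, 0.95])  # first pair from the example, rest made up
y_pred = np.array([0.86, 0.25, 0.90])
print(np.mean(np.abs(y_true - y_pred)))  # MAE over the three samples
```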

 

But most interesting is looking at the examples that were the most challenging for the model. As we see here, it frequently happens that you find a specific subset of the data the model fails to work well on, or you might actually question the labels themselves. This one scored zero, but we predicted it as a proper question. That's really a question about the labeling; I'm not exactly sure the label is correct, and the model might actually be working better than the manual annotation we have for this sample.

So this is an example of a finished model. After it finishes, we can do all the expected actions with the model, starting with predicting on a new dataset. We can download logs and technical information, and we can download the out-of-fold predictions the model made on the training dataset. So, for instance, if you want to do some external analysis of the model's performance, supply the model's predictions to some optimization routine, or push them to an external tool, you can just download them from here and use them the way you need. And we provide two more options over here to deploy the model.

 

One is packaging the model for H2O MLOps, so you can deploy directly to an H2O tool that exposes the model as a REST API; you will be able to send REST API calls to this model from any tool that wants to consume it. Or we provide an even more flexible option over here to download it as a scoring pipeline, which gives you a package with all the dependencies included. That means that in order to run this model, you just need a Python environment on any machine, virtual machine, or deployment service you want to run it on. With this option, you're pretty much able to deploy it on any virtual machine out there; you just need an operating system and Python installed.
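As an illustration of what consuming such a REST endpoint could look like, here is a hedged sketch using the requests library. The URL and payload schema below are hypothetical placeholders, not the actual H2O MLOps request format; check the MLOps documentation for the real schema:

```python
# Hypothetical example of calling a deployed model's REST endpoint
import requests

ENDPOINT = "https://your-mlops-instance.example.com/model/score"  # placeholder URL

payload = {"fields": ["text"], "rows": [["Is Katy Perry married?"]]}  # assumed schema
response = requests.post(ENDPOINT, json=payload, timeout=30)
response.raise_for_status()
print(response.json())  # predicted rating for the submitted query
```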

 

Let's look at the experiment we're running. It's halfway through, so it has run two epochs already. If I refresh it, we'll see that it's updating live, and it's going to progress for two more epochs. One other thing I want to emphasize here is that we can observe the metric as well as the prediction insights live. For the model we have after two epochs, we already have some predictions and insights over here. So even though it's still in progress, we can figure out certain things about the model along the way and either stop the experiment, change a specific setting we're not happy with, or just make sure that the progress is reasonable and we only need to wait for the experiment to finish to get a model we would like to deploy. This was an example of a text regression task, but as I mentioned, we have a few more NLP use cases.

 

Another classic example is text classification. Here we have a demo dataset of Amazon reviews where we are trying to classify whether a review of a product was positive or not, so it's simple binary classification. We have a model that also ran for two epochs, with an AUROC of, to be accurate, 0.9859. Very high, the model is performing very well. And here we see specific metrics for binary classification: even though the target metric was ROC AUC, we also have accuracy, precision, recall, F1 and F2 scores, and a confusion matrix at the bottom. So you can play around to see the true and false positives and false negatives, and check how the confusion matrix changes if we change the threshold for classifying the predicted reviews into positive and negative. The same goes for prediction insights: we can go here to see some random examples and make sure that the model works as expected.
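For reference, here is a small scikit-learn sketch, not Hydrogen Torch code, of the binary-classification metrics shown on this page; the labels and probabilities are made up. It also shows why the confusion matrix moves when you change the threshold:

```python
# Illustrative binary-classification metrics on hypothetical predictions
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix, f1_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                          # made-up labels
y_prob = np.array([0.92, 0.10, 0.81, 0.65, 0.40, 0.07, 0.55, 0.48])  # made-up scores

print("AUROC:", roc_auc_score(y_true, y_prob))

# The confusion matrix depends on the classification threshold,
# which is why the UI lets you move it and watch precision/recall change.
for threshold in (0.3, 0.5, 0.7):
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(threshold, (tn, fp, fn, tp), "F1:", round(f1_score(y_true, y_pred), 3))
```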

 

Here we see some probably simple examples. If a review starts with "wonderful", it's probably a positive one, which is the case, and we predicted it well. We can also check some of the worst, more challenging examples. For instance, over here, "excellent recipes": it's a little bit confusing, because if you go through this review it looks like a positive one, but it was labeled negative.

And vice versa for the next one: "something's not going to work", but it was labeled positive, even though the model predicted it as negative. So this all points in the direction that the model actually works better than the true labels suggest. All the examples we're looking at right now seem to be simply mislabeled, so the actual accuracy of this model might be even higher than we report. And all the examples where the predicted probability is far away from the true label might be data points we would like to either reconsider or drop completely from the data. Let's move on to some more complicated problem types we have here in Hydrogen Torch. We've talked about text regression and text classification.

 

Let's now look at text token classification. This is a task where we predict tokens, words or phrases, and classify them into categories. A very typical example is shown here, where the categories are names: names of organizations, or other specific names such as products, locations, and so forth. What we're given to train on for this task is short texts where someone has pre-labeled some of the words into these categories. In this example, about France handing a suspect over to Spain, France is a location and Spain is a location, not an organization name. And we build a model that will correctly classify each word into these categories.

 

Here we have random examples, and all of them are basically perfect, so let's have a look at the worst ones. These are more complicated texts where the model makes some mistakes, but we still see they are not that terrible. Take "Commission regulation": this is a specific class we would like the model to predict, and we see that the predicted label is correct only for "Commission" but not for "regulation", so there are a couple of errors over here. "EC" is correctly classified as an organization, and so forth. With this task we're not predicting just a single output; we're predicting a sequence of outputs across multiple potential classes into which we want to classify the words or tokens of the text. This simple example is pretty much named entity recognition, but the problem type has wider applications.
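To illustrate the kind of per-token output such a model produces, here is a hedged sketch using a publicly available Hugging Face token-classification pipeline; the model name is just an example and is not the backbone trained in the demo:

```python
# Illustrative named entity recognition with a public pretrained model
from transformers import pipeline

ner = pipeline("token-classification",
               model="dslim/bert-base-NER",        # example pretrained NER model
               aggregation_strategy="simple")      # merge word pieces into entities

for entity in ner("France sent the suspect back to Spain."):
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
# Expected: France -> LOC, Spain -> LOC
```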

 

So depending on the use case, you might want to recognize certain names from, say, insurance claims documents, where you want to check specific names of people, names of doctors, maybe names of drugs in medical claims. You might want to extract the names of the drugs from the claim to match them, for instance, against the insurance policy to see if those drugs are covered, to automate this process. That might be the basis of a use case of automatically repaying health insurance claims when you have a good match between drug names in the documents and drug names in the insurance policy. Here we have a couple of specific metrics for such a task; the F1 score is quite high, and we saw from the random examples that the model usually performs quite well. As we saw in the previous classification example, such models can sometimes perform very close to the performance of humans labeling these texts. Moving on to the next problem type, let's look at text metric learning. In this task we're given a set of texts, some of which are duplicates, and we want to teach a deep learning model to recognize duplicated texts, not in terms of exact wording but rather in terms of meaning.

We're using a dataset of, I think, Ubuntu-related questions from a forum, with the aim of finding duplicated questions so we can remove the duplicates, or return the answer that was already given to the question before. What the model is supposed to do is recognize that this exact question, in terms of its meaning and content, was asked before and is already in the database. And here, for each random example:

 

To assess the performance of the model, we show the top three other questions from the dataset according to the model's predictions, that is, the three questions the model assessed as most similar to the original one. The original one is "Is there an easy way to limit user bandwidth usage?" The top match is "Can we create a bandwidth limit for all users?", which looks quite good. The estimated similarity is 0.79, which is quite high, and indeed, according to our manually labeled pairs, this is a duplicate, which looks right as we see it. And we can look at a few more examples.
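The idea behind ranking candidate duplicates can be sketched with text embeddings and cosine similarity. This example uses the sentence-transformers library purely for illustration; the model name and the candidate texts are assumptions, not the demo's setup:

```python
# Illustrative duplicate-question ranking with text embeddings
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model

query = "Is there an easy way to limit user bandwidth usage?"
candidates = [
    "Can we create a bandwidth limit for all users?",
    "How do I change my desktop wallpaper in Windows 7?",
    "What is the best way to back up my files?",
]

query_emb = model.encode(query, convert_to_tensor=True)
cand_emb = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(query_emb, cand_emb)[0]  # cosine similarity to each candidate

# Rank candidates from most to least similar
for text, score in sorted(zip(candidates, scores), key=lambda pair: -pair[1]):
    print(round(float(score), 3), text)
```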

 

For instance, here's something related to Windows 7, and here we have a worse match: the closest question based on our model's assessment is this one, with a lower similarity estimate, and it's not a match. So we can see that we could even set a specific threshold to decide when a question is a duplicate and when it is not. That can help us clean up duplicated questions, but it also has quite a broad range of applications, including, for instance, FAQ sections on your website: if someone asks a question, you want to find the similar one in the FAQ and reply with that.

 

For platforms like Quora, that might be exactly this: finding duplicated questions and either removing them or pointing the author to an existing question that looks very similar to the one they're asking. Now let's go to some, in my opinion, even more complicated examples. The second-to-last one is sequence to sequence. That's an NLP problem type where we have a text as the input and a text as the output. A very typical application is text summarization. In this example, we have a set of CNN articles; as you can see, these are quite long articles, but we want to get a summary of each in just a few sentences. So we have a pre-labeled dataset where, for each article, we have a couple of sentences summarizing its contents, and the model is trained to do exactly that: it consumes the input text and generates a new text which is expected to summarize the contents of the article. We see that it shrinks the size, and according to the metrics it works quite well; if you read the summaries, they're quite meaningful. We're not going to go through the articles, they're quite long, but I want to emphasize that this particular application of NLP with H2O Hydrogen Torch is quite impressive in that the model generates text that not only reads as human-readable text but also captures the contents of a larger article.
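As a hedged illustration of the sequence-to-sequence summarization setup, not the model trained in the demo, here is a sketch using a public Hugging Face summarization pipeline; the model name and the article text are examples:

```python
# Illustrative abstractive summarization with a public pretrained model
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

article = (
    "Long CNN-style article text goes here. It can run to many paragraphs, and the "
    "model is asked to compress it into a few sentences that preserve the key facts."
)
summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])  # generated summary text
```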

 

There are quite a lot of applications you can think of for such models, starting from extracting summaries from texts. For instance, you might need texts summarized in some short form, say for better search, or to provide a description of certain things to management, or maybe to simplify more domain-specific texts for non-experts; that can also be an interesting application.

And before we finish, I want to show you one more problem type we have, which is text span prediction, along with the very typical use case the problem type is usually named after: question answering. In this use case, we actually have two textual inputs, a question and the context, and we expect the model to find the answer to the question in the context. So, for instance, over here, that's still a perfect answer, but let's see how it works: "What shortages were caused by the blockades?", and we have a description of a certain historical event. The proper answer is "petrol and food", and the predicted answers, in order of confidence, are "petrol and food", which is exactly right.

 

There's even "petrol and food shortages because...", the explanation extracted from the text after it. So the model is able to consume the meaning of the question, analyze the context, and find the proper answer.
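Here is a hedged sketch of the same span-prediction idea, framed as extractive question answering with a public Hugging Face pipeline; the model name and the paraphrased context are illustrative assumptions, not the demo's data:

```python
# Illustrative extractive question answering with a public pretrained model
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = ("The blockades caused severe shortages of petrol and food across the "
           "region during the crisis.")
question = "What shortages were caused by the blockades?"

answer = qa(question=question, context=context)
print(answer["answer"], round(answer["score"], 3))  # expected span: "petrol and food"
```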

 

Let me jump to that and show you a couple of the worst examples. For instance, "Prior to what year were the reports used to assess sea level rise?" The correct answer is "pre 1993", but the model picked just "1993". Sometimes the model takes a slightly shorter or longer span than the proper label. Still, it is a very impressive use case to see how well NLP deep learning models can understand text and even answer questions. There's also quite a variety of applications for such a model, starting from basic question answering tailored to your content: say, if you have a website with a kind of Q&A interaction with users, you might use that. But you can also find applications such as extracting certain information from texts. I've already mentioned insurance claims as an example: you might have insurance claims of a particular type that come with documents that are hard to analyze with typical methods.

 

With classical statistical methods falling short, you might instead query these documents by asking questions, and depending on the question, you will get different pieces of the text if they are there. That will allow you to analyze your unstructured insurance claims data and maybe even extract features for further modeling. Say you're running an actuarial model: you're probably not using the full textual data from the claims, but you can extract certain important pieces by applying a question answering model to these texts. Before we finish the live demo part, let me show you what the interface looks like for a more experienced data scientist using Hydrogen Torch. The experiment we started at the beginning has finished; we see there was a little bit of irregularity in the validation metric, but it dropped. If we go back, we can see that it is actually even better, more accurate, than the one we looked at before. The demo experiment ran for two epochs; we ran this one for more epochs and it got a lower validation error. If we want to run another experiment, we can start it from one that has already finished, so here I'm starting a new experiment from the previous one. It has four epochs as I specified, or I think it's five.

And if I've gained a little bit of experience and confidence, I can go to a higher experience level; let me open the last one, master. This exposes all the hyperparameters available in Hydrogen Torch for you to tune. Just as a side note, the parameters depend on the problem type, of course, so for different NLP use cases they might be a little different, and for computer vision it's going to be quite a different set of hyperparameters. So let's check what you're able to tune if you're an experienced deep learning data scientist, or if you want to get the most accuracy for your NLP use case. I'm going to skip some of the smaller and some of the more complex ones, but let's go through the major ones in more detail. First is lowercase, whether we lowercase the text or not. Then max length: as we know, BERT models only use a subsequence of the text, a limited number of tokens.

 

The default is 228, but you can increase or decrease it depending on the size of your data, the size of your model, and the size of your GPU; you can play with this parameter to handle longer texts, or, if you don't need it, as in some of the cases I showed where the texts are small, it isn't actually important. There are some more advanced settings like gradient checkpointing or setting an intermediate dropout, which the model doesn't have by default. Or you can choose a pooling approach: by default we use the classification token of the BERT model, but you can apply pooling, which changes the structure of the model a little bit. For the loss function, you specify the metric, but we also have a variety of loss functions for the gradient descent method to optimize. And of course we have a bunch of optimizers over here, with learning rate and learning rate scheduling.

 

We saw that by default it's a cosine scheduler, but you can play around; we have a bunch of those, with warm-up, weight decay, and so forth. There is gradient clipping, for when gradients grow too large, which can happen for certain data types, especially when labels are extremely skewed. There is gradient accumulation, when you want to average gradients across multiple batches. And there are evaluation epochs: if evaluation takes quite a long time because the model or the dataset is large, you can have the model evaluated more or less frequently than every epoch, and so forth. One more thing I want to emphasize is the built-in multi-GPU support. On this machine I have only a single GPU, but I can use as many GPUs as are attached to the machine, and Hydrogen Torch will use all of them to train the model: it will distribute the data across the GPUs and handle the synchronization, so the training speed-up will be close to linear. If you have four GPUs, you can use all four to fit a single model and it will run almost four times faster, or you can use one GPU for this model and the other three to solve another problem; all the experiments will run in parallel on independent GPUs and therefore won't interfere with each other. Last but not least, if you're not sure, for instance, which backbone is right for you and you have a couple of choices you would like to consider, you can run a grid search over here, so I switch it on. Now, for most of the parameters, I can choose multiple values, be it the backbone or, say, the pooling approach. When I click run experiments, a grid search is started: Hydrogen Torch will trigger multiple experiments for me, and after I'm back from my coffee I can just compare them and pick the best one for production use.
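To make two of these master-level settings concrete, here is a minimal, generic PyTorch sketch, not Hydrogen Torch internals; the layer sizes, data, and constants are made up. It shows gradient accumulation over several batches and gradient clipping before the optimizer step, with a cosine learning-rate schedule:

```python
import torch

ACCUMULATION_STEPS = 4   # average gradients over 4 batches before updating weights
MAX_GRAD_NORM = 1.0      # clip gradients when they grow too large

# Dummy stand-ins: a linear head on 768-dim text features and random batches
model = torch.nn.Linear(768, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=25)
loader = [(torch.randn(8, 768), torch.rand(8, 1)) for _ in range(100)]

for step, (features, target) in enumerate(loader):
    loss = torch.nn.functional.l1_loss(model(features), target)  # MAE-style loss
    (loss / ACCUMULATION_STEPS).backward()          # accumulate scaled gradients

    if (step + 1) % ACCUMULATION_STEPS == 0:
        torch.nn.utils.clip_grad_norm_(model.parameters(), MAX_GRAD_NORM)
        optimizer.step()                            # one weight update per 4 batches
        scheduler.step()
        optimizer.zero_grad()
```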

Jumping back to the documentation: everything I described, and much more detail, you can find there. If we go to the experiment settings for text regression, the one I was showing before, every single setting is described with an explanation of how it affects the performance of the model. So that's the way for you to learn more about particular settings and their values and get better as a deep learning data scientist. Now, let me jump back to the slides. Here we have a second quick poll for you; let me hand it back over to Blair.

Alright, so we're going to do another poll here: how relevant is natural language processing to your day to day? We'll give everyone a few seconds to get some answers in. So far it looks like we have a little bit of a divide. Alright, looks like most people have answered. So there's a bit of a divide across all sides, from using NLP models all the way to not using them.

Alright, I see that we have quite a few people who have a need for NLP but struggle either with building models or delivering them to production. H2O Hydrogen Torch might be a very good fit for you for exactly these two purposes. What we designed it for is to help you build the models on your data in your environment, and to deploy them either to your environment again or to MLOps, so you won't have to struggle with maintaining a DevOps tool and you will have a REST API almost right away. Before we finish, let me give you a quick recap of what we've discussed today. Let's quickly go through the NLP problem types we've covered. Here I want to emphasize that the use cases I've shown today are just examples of how you can apply each problem type; there is an amazing number of potential use cases you can solve by applying the same techniques to a different dataset. A few more points, starting with classification.

 

We support multi-label datasets: if you have multiple labels you want to predict at the same time, that's supported out of the box. In the demo we looked at a single target, whether a review was positive or not, but you can have multiple targets at the same time. For all NLP models, multilingual and domain-specific support is there, because we support almost all Hugging Face models; you can get them by just typing the name into the backbone field. That allows you to use language-specific models: if you have texts in Spanish or German, or even a mixture, say a mixture of Spanish and English documents, you can download and use a multilingual model that treats both Spanish and English texts equally. As for domain-specific models, there are lots of domain-specific pre-trained models out there, models that were not trained on general texts like Wikipedia but rather on financial or medical documents, and using these backbones can give you a significant boost in model accuracy if your problem is very domain specific.

 

For token classification, other examples beyond what we've seen today might be extracting names of drugs, or extracting personal information from documents. If you want to anonymize a document in order to share it with a third party, for instance, you might want to extract all the names, locations, social security numbers, emails, and so forth. You can set that up as a token classification task with custom entities, the ones you find in your documents and want to either use or remove, and for insurance or anonymization use cases such models can work quite well. Span prediction: the typical use case is building a question answering system; other examples would be finding relevant information in medical transcripts or in insurance claims, as I mentioned, and whatever your current problem is, span prediction might be applicable to it. Sequence to sequence allows the model to generate new text; we had an example of summarization and talked about simplification as another example, but there are far more applications out there than only that. Metric learning: we talked about finding duplicated questions for FAQs. Another example is detecting fake reviews; for instance, on websites with movie or book reviews, you might see a spike of negative or positive reviews out of nowhere, and you might detect these spikes as groups of fake ones by running such a model, clustering them together, and basically just removing them.

And the last slide for me today is just a reminder that NLP is only one piece of the functionality that H2O Hydrogen Torch provides. We have a lot of use cases for image and video data, and use cases for audio using spectrograms. These also include more complicated examples than just classification and regression. We also have metric learning for images, for when you want to find duplicated images, for instance on your website if you're selling items; here at the bottom we have examples of the same bicycle in four different pictures, and the model was able to recognize it. We have examples that detect objects in images, obviously, but also do instance segmentation, segmenting images and finding, for instance, specific types of clothes for retailer applications, or cars, or anything else. And last but not least, we're adding support for explainability for our deep learning models; here you see an example of a Grad-CAM application for images, where we show which areas of the image drove the model's predictions for this example. Let me switch to the questions I have; I think we have a couple of those. But before that, thank you very much for your attention.