
H2O WORLD: INDIA 2023
 

Democratized AI Using H2O: Talk by AT&T at H2O World India



 

 

Mark Austin:

 

I'm Mark and this is Prince, and we're going to tag team this a bit. Okay. So first of all, we have some AT&T folks, I think, in the crowd. Can you all stand up or just say, "Hi everybody"? So this is AT&T Bangalore. We've got a good showing in here. Thanks, y'all, for coming. Okay, so we're going to tag team this. And as you see, we have a common theme with H2O and Sri about democratizing AI. Now, one thing that I really never thought about is actually doing it together. There's a big benefit there. You think of democratization, you almost think, "I want to do it myself," but when you do it together, you actually make everybody better. Okay? So we want to give some examples of that today.

 

AT&T history with AI

 

Now, before I go into this, Andy talked a little about this, Sri talked a little about this: AT&T has had a long history in AI, right? So 1950, the first computer program to play chess. It wasn't Big Blue; we actually had it first there. In 1955, along with IBM, we coined the term artificial intelligence. We're all familiar with Unix, C++, and S, which became R, in the 1970s. And then the first neural networks, which Yann LeCun did at Bell Labs in the 1980s and 90s. And then in the 2000s, AT&T won the Netflix Prize, a million-dollar hackathon for data science, if you remember that. As of 2020, I think that's the last count out, we were number six in AI patents. And at AT&T we have over 500, I think it's around 560, machine learning algorithms in production, okay? One more thing that you'll see today, and Prince is going to talk about it, is that we're co-creating a feature store. So if you haven't heard about this, you really want to get this. This is what makes it better together in creating AI. Okay?

 

What Does Democratizing Data and AI Mean?

 

All right. So what do we mean by democratizing data and AI? When we think about it, it's really three things: empowering more people to do data and AI better and faster to create value for the customer. Okay? And at AT&T, in the Chief Data Office, we have about 800 data scientists and developers that work all across the company, okay? Whether it's fraud, whether it's network, whether it's sales, marketing, all across the company.

 

Search is Better With Generative AI

 

Now, here's how we think about the life cycle of data and AI, okay? And you think of this as a circle, right? It comes back to where you started. So it starts off with finding and getting data, okay? Then you engineer the data, you create amazing features to predict something, and then you create AI, you deploy it and serve it, you monitor and govern it. And if you think about this, depending on who you talk to, a lot of the upfront time is in that first piece, okay? 60 to 80% of the time is in: where is this data at? Who has this data? Where can I find this? Okay? And then you have to clean it. So we're putting a lot of effort there, and this is where generative AI actually helps, okay? So it's not only searching for it, it's telling me the answer, telling me everything about it.

 

Make a chat session about it. Okay? So we're doing a lot in that particular area. Now, in that regard, a feature store, and Prince is going to talk about this, I love the word store for two reasons. It's a store for storing something: I'm putting my amazing features there. It's also a store for shopping, okay? I go to the store and say, "Oh, that group had this feature. I can use that to help my churn model, my fraud model." And you borrow it: you go to the store, you check it out, and you try it. So fast, okay? Now, wouldn't it be cool if you could combine that with generative AI? And what do I mean by that? Okay, so we mean doing this. You want to search for something: where is this data?

 

Remember, finding data is a hard problem. Where is this data? That's the Google-type thing: I get a list, here are all the top feature sets. But you really want to ask a question, okay? That's generative AI. So that's asking a question: tell me the top feature sets for churn, okay? For wireless churn, for postpaid customers. And it just gives me what a given group has already worked on. Okay? So putting these together, this is where better technology comes together: the search piece, the extractive piece, and the generative AI piece. So let's look at how this might look in the feature store. Okay? All right, so here we're just typing a question: what data set names do you know about subscribers? Okay, so this is actually hitting the feature store and saying, tell me the data sets. Now, just for purposes of a demo, I just put one in there. Okay? So there's this feature set called dabbler_orian. And then let's just do: show me the number of entries per month for this data set. Okay? Now here's the key piece, okay? It takes the English, it translates it to a query, sends it off to the feature store, and comes back. And it gives you the answer there. Okay?
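
To make the mechanics concrete, here is a minimal sketch of that English-to-query loop. The LLM call is stubbed out, and the subscribers table, its columns, and the prompt format are all illustrative stand-ins, not the actual feature store schema:

```python
import sqlite3

SCHEMA = "subscribers(subscriber_id TEXT, month TEXT, churned INTEGER)"

def ask_llm(prompt: str) -> str:
    # Stub: a real implementation would send the prompt to an LLM.
    # We return the SQL such a model might produce for the demo question.
    return "SELECT month, COUNT(*) AS entries FROM subscribers GROUP BY month"

def answer(question: str, conn: sqlite3.Connection):
    prompt = f"Schema: {SCHEMA}\nWrite one SQL query that answers: {question}"
    sql = ask_llm(prompt)                      # English -> SQL
    return sql, conn.execute(sql).fetchall()   # run it against the store

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subscribers (subscriber_id TEXT, month TEXT, churned INTEGER)")
conn.executemany("INSERT INTO subscribers VALUES (?, ?, ?)",
                 [("a", "2023-01", 0), ("b", "2023-01", 1), ("c", "2023-02", 0)])
print(answer("Show me the number of entries per month for this data set.", conn))
```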

 

Titanic Dataset

 

So let's try something else here. I want to try the Sri method with a real demo, okay? Many of you might be familiar with the Titanic dataset. Has anybody started off with the Titanic dataset? Who survived? Can you predict survivors on the Titanic, okay? So here's the dataset we all know and love: passenger, whether they survived, what class they were in (first class, second class, third class), names, sex, etc. And the goal is to predict survivors. Okay? So we loaded this up in the feature store. We did some simple stuff: what percent of customers survived in this data set? Okay, so 61% didn't survive, 38% did survive. And then you get a little more difficult: show the number and percentages. Okay? So now you see the numbers. This is the training set, and I'm just getting more complicated, right? Show the number and percentage of customers survived/not survived by sex in this data set. Okay? So now I have female and male. Females had a much better chance of survival. They put them in the boats, probably, okay? But notice that I'm getting more and more complex as we go down here. And this one here: show the number and percentage of customers who survived or did not survive by class, by sex, in this data set, and sort it by class and then by sex. Okay? Let's take this. Let's just try something here. See if this works.

 

Okay. So instead of by sex, let's do it by age. Now, let's see, does it even know? Can I do it in buckets of 10? Okay. See if it can do that. All right, there you go. Okay. So here we see, it grouped them 0 to 9, 10 to 19, up to 80+. There's an 80-year-old female that survived. Okay? Isn't that cool? Look at that query. Could you write that query that fast? That's the way we did it, right? And we did it all just with a statement. Isn't that amazing? All right, so let's go back and I'm going to hand it to Prince here. Let's talk more about the feature store itself. Okay? All right, go ahead, Prince.
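
For reference, the "buckets of 10" grouping from the demo can be sketched in a few lines of pandas. The column names follow the standard Titanic dataset, but the rows below are made up; the real demo ran against the full training set in the feature store:

```python
import pandas as pd

df = pd.DataFrame({
    "age":      [2, 25, 34, 47, 80, 19, 63, 8],
    "survived": [1,  0,  1,  0,  1,  1,  0, 0],
})
df["age_bucket"] = (df["age"] // 10) * 10          # 0-9 -> 0, 10-19 -> 10, ...
summary = df.groupby("age_bucket")["survived"].agg(["count", "mean"])
summary["pct_survived"] = (summary["mean"] * 100).round(1)
print(summary[["count", "pct_survived"]])
```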

 

Feature Stores Are Better Together For Engineering Data

 

Prince Paulraj:

 

Thank you, Mark. Hello everyone. So I'm going to talk more about the feature store. Why? Like Mark said, 60 to 80 percent of a data scientist's time is spent on engineering the features, right? And we heard it from Sri, and then Andy and Andrew, everybody talks about the democratization of AI, right? So you don't want features isolated and stored on your own machines or somewhere on a server. We want all those features to be available in one place where you can share them and really democratize across different teams in the organization, or even across the organization. So that's the important use of the feature store.

 

So just to give you an illustration: here you see things are siloed in development, in departments. And actually, when you're doing data science training, if I want to go and look for new features, there is no way I can look them up today, right? All I can do is reinvent the wheel. I have to go back and create those same features. But what if Mark has those cool features and he published them in a feature store? I'm going to shop around for those features in the store, right? So I'm not going to rewrite everything. With the help of the genie, it makes it much, much easier. I can just type it out in simple English and it's going to show me the query. It's not only showing me the query; it's also telling me which feature set and what features are there, right?

 

So it's very simple when you have the feature store in that fashion, and it's also going to help you from both an online and an offline standpoint. When I talk about offline: with the H2O and AT&T feature store, using a command line interface, you can actually create features. At the same time, you might want it in the form of an API for real-time scoring, and I'm going to talk about the fraud use case, because we need to score transactions in milliseconds, okay? Millions of transactions per day at AT&T's volume, and we've got to score everything in milliseconds, sometimes in less than 50 milliseconds. So in the same feature store, the features are there offline, and now they're available through an API where we can access them very quickly. And in a similar fashion, you can see the metadata. The metadata is so important because, like we talked about, these features are machine learning assets. It's so important that we maintain good metadata about them. More than 50 metadata points are stored about each and every feature, and that's also sitting in the same feature store. Now it's really available for us either during training time or at inference and scoring time.
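
A hedged sketch of that online retrieval pattern, lookup by primary key under a tight latency budget, is below. The endpoint URL, path, and payload shape are hypothetical stand-ins, not the H2O Feature Store's actual API:

```python
import requests

# Hypothetical endpoint; substitute your feature store's actual online API.
FEATURE_STORE_URL = "https://feature-store.example.com/api/v1/online"

def get_online_features(feature_set: str, key: dict, timeout_s: float = 0.05) -> dict:
    """Fetch serving-time features by primary key within a ~50 ms budget."""
    resp = requests.post(
        f"{FEATURE_STORE_URL}/{feature_set}/retrieve",
        json={"key": key},
        timeout=timeout_s,   # fail fast rather than blow the scoring SLA
    )
    resp.raise_for_status()
    return resp.json()["features"]

# features = get_online_features("customer_health", {"customer_id": "555-0100"})
```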

 

One Feature Store No Matter Where You Work

 

This is another beauty of the H2O and AT&T feature store. Because in enterprises, if you look at medium and large enterprises, there are multiple data pipelines running in the company, right? Somebody might use Databricks, somebody might use Flex, somebody might use a plain Jupyter pipeline. It could be any pipeline in the company, but I don't need to be in the same pipeline if I want to consume features someone else created in some other pipeline. For example, I created the features in a Databricks pipeline, I saw that they're valuable features, and I published them into the feature store. Now I can be in a Snowflake pipeline where I can consume the same features. I don't need to recompute, right? So we can just avoid all those things: duplicating the data, duplicating the features, or reinventing the wheel.
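
As a toy illustration of that publish-once, consume-anywhere decoupling, a shared registry can stand in for the feature store. Real stores persist the data plus the 50+ metadata points per feature mentioned earlier; the point here is only that the producer and consumer pipelines never need to meet:

```python
import pandas as pd

registry: dict = {}   # stand-in for the feature store

def publish(name: str, features: pd.DataFrame) -> None:
    registry[name] = features            # e.g. called from a Databricks job

def consume(name: str) -> pd.DataFrame:
    return registry[name]                # e.g. called from a Snowflake job

publish("account_activity",
        pd.DataFrame({"customer_id": [1, 2], "logins_30d": [4, 0]}))
training_df = consume("account_activity")   # reused, never recomputed
print(training_df)
```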

 

It really gives you great productivity when you work across multiple pipelines. And the features are available for training and also for scoring. One classic example: you might have created these training features in a Databricks pipeline, but maybe you are scoring as a UDF in Snowflake, right? H2O provides the UDF; you can use the features through the UDF to actually score in Snowflake. That's another one. So it's very agnostic to whatever technology or stack you have. All you care about is the real asset: the value of the features and the machine learning. And I'll tell you, this is one of the great examples. Mark actually did this in just a couple of hours. A small story here.

 

Model Improvement Example

 

We were just sitting with Mark in a meeting and he was going through our machine learning models. He's our VP of data science, and he was reviewing all those things. We had one model, a customer churn model, with 66 percent accuracy, right? And then Mark challenged us: let's go into the feature store and shop around and see, there might be some features. And it all happened in just two hours. We went and looked in the feature store. We looked at some features related to the account activity of our customers. We added them. Boom, we got 77 percent accuracy in just a matter of two hours. We got an 11-point accuracy increase. Otherwise, just imagine: if you wanted to create that sort of accuracy lift, you would need to go find those features, do the feature engineering, and get them into the feature store yourself.
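
The experiment Prince describes can be sketched as a before-and-after comparison: train on the existing features, join in a feature set "shopped" from the store, and retrain. Everything below, data, features, and labels, is synthetic, so the numbers are illustrative, not AT&T's:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
base = pd.DataFrame({"tenure": rng.integers(1, 60, n)})           # existing features
shopped = pd.DataFrame({"logins_30d": rng.integers(0, 30, n)})    # from the store
y = ((shopped["logins_30d"] < 5) | (base["tenure"] < 6)).astype(int)  # synthetic churn label

def fit_and_score(X: pd.DataFrame) -> float:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    return accuracy_score(y_te, model.predict(X_te))

print("baseline accuracy:    ", round(fit_and_score(base), 3))
print("with shopped features:", round(fit_and_score(base.join(shopped)), 3))
```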

 

So that's one of the greatest achievements, I would say, using a feature store. And you see that better-together model improvements always have a great impact, right? In the same theme that Mark was talking about, better together: when your features are all together, it's really better, right? You see feature set one, highlighted in blue. That was the feature importance for predicting churn when we had 66 percent accuracy. But in feature set two, the red ones are the new features that we shopped for in the feature store. You can see the top feature in the first feature set, it was almost 90 percent, more than 90 percent-ish. Now it's in the fourth spot, contributing only 25 percent in the second feature set. So that shows the value of really democratizing these features, keeping them in one place. One data scientist creates them, another data scientist can consume them. Okay? So with that, I will hand it back to Mark for the next part, all right?

 

Mark Austin:

 

So you will be better together if you share features in a feature store. I can't tell you how many times, once we put the feature store in place, we've taken a model, sometimes a model we've been working on for a year or two, and we go to the feature store, we shop, and we improve it the next day, between meetings. Okay? So it's an amazing thing to do better together there. All right? So we're walking around this circle. We started with finding data, getting data, engineering data.

 

Pinnacle Co-Opetitions

 

Now let's talk about the models themselves. With the models themselves, you will be better together if you cooperate, okay? Or you compete. So we put both of those words together; we call it co-opetition, okay? The way that we do this is much like Kaggle. Anything important that's going to production goes into Pinnacle, into a co-opetition or competition type of thing, and we invite data scientists.

 

Remember, we have a lot at AT&T. We invite them in: okay, here it is, here's what we're trying to predict, here's the data set, go after it. Okay? And it's not unheard of that we'll have 70 or 80 data scientists actually doing this type of thing. And the results are tremendous. So far we've done 266 use cases, 266 competitions. Since we've done this, we've got about 3,000 users. Remember, it's not just the data science community; about 3,000 users have signed up for this. They're not all competing, but a large number of them are, and we've had 5,811 submissions: somebody trying to compete, putting a model on the leaderboard. So that's what's happening there. Now, let me show you what this actually looks like. Okay? So I'm going to show you a particular competition.

 

So this is one that's predicting churn. The final result was 153% better, and it was an ensemble model. Ensemble, by definition, is combining. Okay? Now, if we go down the list, all of these are on a leaderboard, okay? And this one here, you can't see it, but that is a robot that did that one, okay? It's an AT&T robot, AutoML, and there's the Driverless AI one. Driverless AI was 136% better. So out of the box, versus the baseline model, whatever we started with, an amazing improvement there. Now, somebody called this one baseline; it's not the real baseline. That one's 97% better. The real baseline is 113 slots down; that's where it started. Okay? Now, we don't get every one this good, it's just an illustration, but we've not done a single competition where we didn't improve the baseline. The average improvement is 29%. Okay? All right, so now I'm going to hand it back to Prince. Let's go back to the feature store. You've built the model, you've created it, now you want to deploy it.
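
A small sketch of why ensembling on top of the leaderboard tends to win: averaging the entries' predicted probabilities is a simple blend that usually matches or beats the best single model. The three models here are generic stand-ins for competitors' submissions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Three "leaderboard entries"; in a real co-opetition these would be the
# competitors' submitted models.
entries = [LogisticRegression(max_iter=1000),
           RandomForestClassifier(random_state=0),
           GradientBoostingClassifier(random_state=0)]
probs = [m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1] for m in entries]

for m, p in zip(entries, probs):
    print(f"{type(m).__name__:28s} AUC = {roc_auc_score(y_te, p):.4f}")
print(f"{'mean ensemble':28s} AUC = {roc_auc_score(y_te, np.mean(probs, axis=0)):.4f}")
```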

 

Deploying the Model

 

Prince Paulraj:

 

Thanks, Mark. So like Mark mentioned, we are better together from the feature store standpoint, better together with APIs, right? The reason I'm saying that is these features can be available at inference time. Remember, I said I want to score these models in milliseconds, in a real-time manner, right? So how do I do that? The H2O feature store is already integrated, out of the box, with API functionality. So you publish the features into the feature store, and immediately they're available as an API. All you need to do is call the API with some sort of primary key or composite primary key. Let's say I'm trying to score a customer, right? I can just use that customer number as an input. So let me show you a demo of it.

 

So it's not only an API; you see that there is a chatbot, right? So now I'm just getting in there and I'm querying about a particular feature set, which we use internally to understand the health of a customer, to understand the reputation of the customer. I'm typing in, I'm going through my authentication and authorization. Then it's asking, "Hey, how can I help you today?" The feature store is asking, right? And this could be any business user. So they're saying, "Hey, I want to check the propensity to churn of a particular customer." It's a fake number we're typing in for the customer, and the moment we provide that customer telephone number, the model is scored from the feature store, and look at that, the score has come back, right? Isn't it awesome that it's happening in real time? You're not going anywhere else.

 

You are in the same feature store. And now think about democratization, right? If a business user really wants to use this functionality, wants to understand the churn propensity for a number, they just type it in here and immediately get the score. Because at that point in time, it's actually scoring the model, looking up the features in the feature store, and coming back with an answer, right? It shows the score associated with that. So that's the power of this feature store. It's always better together when you put your features in and make them available at inference time as an API. That makes life so much easier, not only for a developer or a data scientist, but also for a citizen data scientist or citizen developer, whatever you call it from the business standpoint.
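
Behind the chat demo, the flow is roughly: resolve the customer number to its serving-time features, score the churn model, return the propensity. The sketch below stubs both the online store and the model weights; none of it reflects AT&T's actual models:

```python
import math

# Stub for the feature store's online lookup, keyed by customer number; the
# feature values and model weights are made up for illustration.
ONLINE_STORE = {
    "555-0100": {"logins_30d": 1, "tenure_months": 3, "late_payments": 2},
}
WEIGHTS = {"logins_30d": -0.15, "tenure_months": -0.05, "late_payments": 0.6}

def churn_propensity(customer_number: str) -> float:
    feats = ONLINE_STORE[customer_number]             # online feature retrieval
    z = sum(WEIGHTS[k] * v for k, v in feats.items())
    return 1 / (1 + math.exp(-z))                     # logistic churn score

print(f"propensity to churn: {churn_propensity('555-0100'):.2f}")
```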

 

Monitor Data, Features, ML Models, and Processes

 

And the next one I want to talk about: in the same circle, Mark talked about find and get data, then we talked about engineering data, then we talked about Pinnacle and how great it was to get that sort of model accuracy lift when you create AI. Then I talked about serving AI. So now: monitor and govern AI. At AT&T, we do check each and every machine learning model that we develop, right? Like Andy mentioned, we want to be building responsible AI across all the models that we have at AT&T. And one more piece, if I stretch a little: monitoring those features and models. Whatever you deploy in production, monitoring it is equally as important as how you create the AI or compute the features in the feature store.

 

The reason is, data is the key, right? And data changes; it's never a static value. So what if the upstream system changes the data values? You derived a feature out of them, it came to the feature store, and it's available at runtime. And what if the data changed? You get data drift, and data drift gives you model drift. So we always need to monitor those elements that go into a model, so that we can also monitor how the model is performing. So at AT&T we monitor those models from various aspects. And we use a product called Watchtower. Again, it was developed inside AT&T, like Pinnacle. And you can see that we collect the data, the features, the models, and we observe them each and every time, and we correlate them.
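
One common way to quantify the data drift Prince describes is the population stability index (PSI), comparing a feature's live distribution against its training distribution. This is a generic sketch with rule-of-thumb thresholds, not Watchtower's actual internals:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline and a live sample."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip both samples into the baseline's range so edge bins catch outliers.
    e = np.histogram(np.clip(expected, cuts[0], cuts[-1]), cuts)[0] / len(expected)
    a = np.histogram(np.clip(actual, cuts[0], cuts[-1]), cuts)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

train = np.random.default_rng(0).normal(0.0, 1.0, 10_000)  # training-time feature
live = np.random.default_rng(1).normal(0.4, 1.0, 10_000)   # shifted live feature
print(f"PSI = {psi(train, live):.3f}")  # > 0.2 is a common retrain trigger
```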

 

We look at the feature importance, which model is getting impacted, all those sorts of things. All the deep correlation happens, we create those conditions, and then, based on the conditions, we take a mitigation. The mitigation could be: go and check the data. Or it could be: data scientist, go and retrain your model. Or maybe it's environment related: go and bounce the machine, right? It could be anything, but we take care of it through Watchtower, and we monitor the entire data and AI environment.

 

Combating Fraud Using ML/AI

 

And now, I'm looking at the time, so I'm just going to talk about one use case right here. The use case I'm going to talk about is fraud. Like for any telecommunications company in the world, it's a billion-dollar problem for us, right? The fraudsters are coming up with new schemes every day. It's almost like we have to build the AI so the good AI is fighting against the bad AI, right?

 

So we've got to do this as a daily job to protect our customers. Remember, at AT&T we are really obsessed about our customers, so we've got to protect them, right? From anything: robocalling, or gaming the customer through social engineering fraud, whatever. We've got to protect them. So how are we doing it? Okay? This is putting it all together to combat fraud. All the circles that we talked about, you can see them all in one slide here. Right from engineering data: you can see that we work across all our omnichannels, whether retail store or online or customer care, and then any form of electronic transaction, right? In e-transactions, there are multiple things happening: you're buying a phone, adding a line, or upgrading a line. It's not a simple API call, right? I mean, you all know that in an enterprise there are multiple checks happening.

 

So whether it's a login, add to cart, checkout, payment, or shipment, anything happens, then we are creating those AIs and deploying them, in the form of more than 50 machine learning models available today, to combat fraud at AT&T, right? I'm not going to call out each and every feature that we use, but the point here is, you can look at the technology stack. If you are engineering features, we are using a feature store, right? The one we co-created with H2O. And if you are creating any AI model, all those features are available in Pinnacle, and the new features come out of Driverless AI. We put them in Pinnacle and we look at how well the model is doing. If the model is really doing a good job, then we deploy it through MLOps, and then we also serve the features through the serving layer, through the feature store.
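
The event-scoring fan-out on that purchase path can be pictured as models subscribing to event types, with any flag stopping the transaction for review. The event names, thresholds, and toy "models" (simple callables) below are invented for illustration:

```python
FRAUD_MODELS = {
    "login":    [lambda e: e["failed_attempts"] > 5],
    "checkout": [lambda e: e["cart_value"] > 3000 and e["account_age_days"] < 2],
}

def flag_event(event_type: str, event: dict) -> bool:
    """True if any fraud model subscribed to this event type flags it."""
    return any(model(event) for model in FRAUD_MODELS.get(event_type, []))

print(flag_event("checkout", {"cart_value": 4200, "account_age_days": 1}))  # True
```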

 

Like I explained and demoed, we get that millisecond sort of response. And through this journey, we monitor through Watchtower. And this is the last slide. For fraud, we are showing you the last three years of the journey, what we have done. Like any company doing fraud detection and prevention, whatever you talk about, it always starts with rules. And the rules did a good job, if you think about it. But over a period of time, the rules get duplicated and they overlap. And people forget what rules they created, right? There are five rules for one condition. All those challenges come. So we started the machine learning journey in fraud. We just put up five models in a given year.

 

And significant fraud stops happened, even though fraud attempts were going up. When you look at the fraud events being stopped, it gave great results. That motivated us, and AI and machine learning are not new to AT&T, as Mark showed you. So we went after it, right? So today we have more than 50 machine learning models, and look at the way we are combating fraud, stopping the fraud events. We don't control the phone price, right? And at the same time, we don't control the fraudsters, right? They are coming up with new schemes every day, a new pattern. They're trying to defraud us, but we have to take care of our customers. The only way we can do it is using the previous slide that I talked about: using all these technologies together and combating the fraud. That's what we are doing at AT&T.

 

Summary

 

Mark Austin:

 

So let's wrap it up. Okay? So here's the journey, okay? Instead of a circle, we have it top to bottom here. And in every single piece of the journey, there's a better-together story, okay? So to find data, we love putting chat in front of it. Find data with the feature store and generative AI; we think that's huge. So that's a technology better together. Prince showed the example of going from weeks, sometimes months, to hours to improve a model. If you contribute to the feature store, you're sharing it, and you get better results there. Create AI: this is where we started. We did these hackathons. We almost had a hackathon per week; it was like a regular thing, okay? 28 to 29% better is on average what we see there.

 

Definitely better together. We love the AutoML that gets you out of the box. You need to beat the robot, right? We always win because we always ensemble on top of it. And then deploy AI: that's serving APIs, better together there. And the last part, Prince didn't mention this, but Watchtower watches more than the AI. It watches everything around it too, okay? So it's better together when you think about the whole process, because many times we're one piece of the puzzle there. Okay? So overall, we love the partnership with H2O. We think we're better together in terms of technology, and we thank you for helping us on this journey.

 

Prince Paulraj:

 

One last thing I just want to mention over here: like Andy mentioned, we are stationed here in Bangalore, and we are hiring the best talent from the market, right? I know there are a lot of colleagues from AT&T here, and you guys are all the foundation for us. But we want more talent from the market. We want the best people from the market. And we are not only looking at professional data scientists. Look at the stack that I'm showing here: data engineers, full stack engineers, DevOps engineers, automation engineers, including technical managers and associate directors. So we are looking for so many people here. So please reach out to me and be part of this great journey with AT&T and H2O. Thank you.

 

Sri Satish Ambati:

 

Hats off to Prince and Mark Austin. I used to joke that wherever Prince goes, the kings gather around the Prince, right? So, like Mark said, this is an incredible boon for the community, to be able to spend this time. It's a privilege to be together. And I remember the first meetup we ever did in Texas, in Dallas, in Plano, was in Mark's conference room at AT&T. And I think since 2015 we've just learned so much from this partnership, and it has made us better, and of course, hopefully, our customers too. Better together. Better together. Thank you. Thanks a lot.