7 Key Elements of an Enterprise AI Strategy
Patrick Moran: Hello and welcome, everybody. Thank you for joining us today. My name is Patrick Moran. I’m on the marketing team here at H2O.ai. I’d love to start off by introducing our speakers for today. Mike Gualtieri is Vice President and Principal Analyst at Forrester Research, where he covers artificial intelligence and related data technologies. And Ingrid Burton is a business leader from H2O.ai. They’re going to be talking about the seven key elements of an enterprise AI strategy. Before I hand it over to Mike and Ingrid, I’d like to go over a few webinar logistics. Please feel free to send us your questions throughout the session via the questions tab in your console. We’ll be happy to answer them towards the end of the webinar. This webinar is being recorded, and a copy of the webinar and slides will be available after the presentation is over. Now, without further ado, I’d like to hand it over to Ingrid.
Ingrid Burton: Thank you, Patrick and thank you everyone for joining us today. We’re very excited about this topic, and we are very excited to have Mike Gualtieri from Forrester Research here. He’s an expert in this topic. Before we begin and I hand it over to Mike, I just want to say a few words about H2O.ai. For those of you that don’t know us very well, we are the open source leader in AI and machine learning. Both the open source and commercial versions of our product are being used by over 18,000 organizations around the world and by hundreds of thousands of data scientists on a daily basis. We have a huge following in the data science community. We are seeing what’s now happening with AI and machine learning, and how people are looking at it in terms of making a difference in their company or in their business.
We are looking at it from the standpoint of understanding how they can get a competitive advantage or a competitive edge. We’re seeing that around the world right now. And I thought what we would do is have Mike talk about the seven key elements that you need to think about as you embark upon an AI strategy or an AI journey. With that, I’d like to hand it over to Mike, who’s going to give you a number of insights, because he talks to a lot of clients out there as well. So Mike, thank you for joining us.
Mike Gualtieri: Thank you, Ingrid. And welcome everyone. My name is Mike Gualtieri, and I’m a Principal Analyst at Forrester, where I cover AI and data-related technologies. What we’re concerned with here is enterprise AI. What does that actually mean, and what are the use cases? Generally, there’s a lot being said about AI – apocalyptic visions and things like that. But our analysis is that AI, like many technologies, is actually going to have a net positive effect – not only on enterprises but on society as well. There are certainly challenges, and we’ll go over those. But it’s enterprises that are actually going to use AI to make things like health care more effective, transportation safer, processes more efficient, and customer experiences more personalized – and to enable many, many more use cases. So that’s you – you and your enterprise are going to make that happen as well.
There are a few very high-level use cases. People are talking about augmenting intelligence with AI, and that is certainly one of them: AI technologies can be used to augment employee intelligence, make employees smarter, and help them make better decisions. AI can also be used to automate intelligence to improve the efficiency of operations. We’re also seeing it being used to acquire environmental intelligence and competitive intelligence – to detect richer signals from the environment – which is more of a business intelligence focus. We’re seeing it being used to hyper-personalize customer experiences; this is how the Internet giants operate every day. And we’re seeing it used to accelerate and create new products that are based upon artificial intelligence.
So when we think of enterprise AI, those are the generic use cases. Now what I’d like to do is get into the strategy elements. These are the seven key elements of an enterprise AI strategy. The first one is to set the right expectations. The expectations that you set with your executive leader – or, if you are an executive leader, with the board – have to be the right ones. Otherwise, you’ll be set up for failure. People might expect too much, but you also don’t want them to expect too little. Now, at Forrester, we describe two types of AI: pure AI and pragmatic AI. Pure AI is the sci-fi stuff that strives to imitate humans – to imitate our very generalized intelligence. We’re really not much closer to that.
There’s research out there that shows that maybe in 125 years, we’ll get there. Maybe in 50 years we’ll have elements of it, but pure AI like you see in sci-fi – that’s not what we’re talking about. Pragmatic AI is what’s realistic. That’s what we can do today. That’s what companies are doing today. Think of an iconic example – IBM Watson beat the human Jeopardy champions. That’s very narrowly focused. It doesn’t seem narrow – Jeopardy seems like you need to know a lot of things – but it’s one particular application. Humans are fantastic at filtering information and then drawing connections through our filtering process. Computers are great at analyzing enormous amounts of data to find those connections. So it’s pragmatic AI that you want to set expectations around.
The next strategy element is to make sure you’re discussing AI in terms of concrete technologies. In our view, AI isn’t one particular technology; rather, it’s comprised of one or more building-block technologies. And we say one or more very carefully, because it doesn’t have to be all of these things, right? If you think of a self-driving car, well, that’s robotics with deep learning and maybe some knowledge engineering. Front and center are the data technologies: AI is data-driven and uses machine learning or deep learning. Those are the key technologies that we think most companies are pursuing and should pursue in their strategy. And when you think about machine learning from a business standpoint, the way that translates to business value is the ability to make a prediction about something, to make some kind of a decision, or to identify context, right? Translating it down to the level of machine learning – making a prediction, making a decision that augments intelligence in your business, and those other elements I mentioned – that sets the right expectations. So we should focus on pragmatic AI, which is powered more specifically by machine learning.
Ingrid Burton: That’s an excellent thing for us to be focused on, because I think a lot of people, just like you said, think about the scary part of AI – when actually AI is for good. And machine learning is really about getting to predictions, like you said. The way I think about it when I talk to business leaders at various companies is: you have a business problem you’re trying to solve, and you’re trying to predict an outcome. So pragmatic AI is about predicting outcomes and results, and doing it in a much smarter way with all the data we have. So excellent insight there, and an excellent strategy to get started with.
Mike Gualtieri: Yes, the worst thing you can have in an enterprise strategy is for people not to be on the same page – to be continuously arguing about what it is and what it isn’t. And you’re right, at its core it translates down to making some probabilistic prediction, right? With Google ads, an ad gets served based upon a predictive model; a Netflix recommendation is based upon a whole number of signals and factors in those models – or a customer that’s churning. So it’s about getting people on the same page and understanding what is realistic. You don’t want someone thinking that you’re going to build some sentient, intelligent robot that’s going to become the CEO of the company; we’re not there yet.
Ingrid Burton: No, we’re far, far from that. I always tell people that we’re far, far from the Terminator world – very far away from that world, in fact. We’re really just trying to solve big problems that businesses are looking at – customer churn, or, if you’re in healthcare, a better outcome for a patient through predictive medicine and so forth. In insurance, it’s better car insurance based on your driving patterns and predicting what those will be. I characterize this as algorithms to answers, or problems to predictions.
Mike Gualtieri: Yes, and that’s an excellent segue into our next strategy element, which is about the use cases. The key word here is multiple. It’s tempting, when you’re using any new technology, to decide on and prioritize one particular project to do a proof of concept. And it’s not a terrible idea to do that, but you have to put it in perspective. You need to choose multiple use cases – obviously, high-ROI use cases. And the reason is that for machine learning to work, it’s very dependent upon great, rich data, and you may not have all of the rich data that you need for a given use case.
Enterprises that are successful pick not just one use case; they’ll pick multiple use cases and run those in parallel. Because the thing about machine learning is that you have to analyze the data with algorithms first to know if you actually have the signal in the data. Now, the good news is that there are tons of use cases. People always ask me, “Well, what are the use cases?” I ask them, “How many business processes do you have? How many customer experiences do you have?” Because in each and every one of those, there are probably multiple use cases. It’s not that hard, actually, to surface those use cases, especially if you bring this down to a prediction or a decision or an identification, right? If you whiteboard a business process, at each step you ask yourself, “Is there something I can predict here that would make this more successful? Is there a prediction I could make that lets me route this to a different business process? Is there a prediction I could make in this customer experience that could create a more personalized experience?” It’s easy – you’ll come up with dozens of use cases. The good news here is that enterprise decision makers see that potential in machine learning. They see that it’s applicable in both front- and back-office applications.
Here are a few examples that demonstrate the scope of possible use cases. Companies are using it to predict supply chain issues. Predict is a key word here, because if you can predict the likelihood that an issue is going to happen, you can probably do something about it to stop it. Companies are using it in IoT scenarios to prevent operational shutdowns by predicting outages that are likely to happen – or even, during storms, using machine learning models to position crews based upon weather models and the age of the infrastructure, so those crews are where they’re likely to be needed the most.
It’s being used by airlines to customize their catering. One airline reduced its costs by 12% by using machine learning models to predict the right catering split based upon the passengers, the locations, and the destination. Of course, it’s being used in commerce all the time. I would say that Amazon is a machine learning company. If you look at Jeff Bezos’ letters to shareholders, not one letter goes by where he doesn’t talk about the importance of machine learning to hyper-personalize customer experiences. We’re seeing it being used to personalize customer service and to allow self-service through the use of chatbots. It’s also being used to provide a very personalized experience – for example, augmenting an employee’s intelligence so that they’re better able to speak to existing clients. And it’s being used in in-store experiences to offer real-time targeted offers based upon who the person is, what their shopping history is, and other factors that are occurring that day.
When you go to deep learning, which is a form of machine learning, it’s being used in image detection and medical imaging, and it can diagnose some diseases more accurately and faster than doctors. We’re also seeing insurance companies use it to automatically assess damage and repair costs. I could go on and on. But the important thing here is that there are many use cases in your enterprise. What we’ve experienced when we talk to companies is that they’re confused; they don’t know where to start. But the way to know where to start is to look at your existing business processes and your existing customer experiences. Those use cases are there – just be sure that you pick multiple use cases. It’s a little bit like being a venture capitalist: a venture capitalist does an analysis and may invest in 12 companies, knowing statistically that three, four, or – if they’re lucky – half will be successful. Machine learning requires a similar investment approach, and that has implications for strategy as well: for how it’s budgeted and how you proceed with the projects.
Ingrid Burton: Mike, I think that’s just excellent. What I would add to the very interesting use cases you presented is, first of all, that machine learning and AI can be used extensively across every industry. We see use cases in manufacturing, in banking, capital markets, insurance, health care, retail, telcos – you name it, you can find a use case. It’s really as simple as, “What business problem am I trying to solve, where I need to predict a better outcome? What results am I trying to get back?” Trying multiple projects is a great idea. Pick a few. Pick one part of your business – whether it’s your sales and marketing forecasting, where there are use cases right there, or supply chain optimization. We’re seeing a number of companies in every single industry doing some amazing use cases and getting great results.
I think what you’re saying is to try a lot of them. Sit down and work through some business problems, and you’ll find out you have a lot of data that you’ve been gathering for years that you can now get real answers from.
Mike Gualtieri: And do multiple projects at once, too. Sometimes we talk about it as Agile AI, right? Because when you think of an agile process, it’s a very experimental process. I don’t like the term “fail fast”, but it’s sort of applicable, right? You want to understand if a use case is going to pan out. The good news is that in data science, that happens very quickly. You get the data, and the algorithms do the work for you. You can find out and vet these use cases even quicker if you’re using AutoML, which is one of the things that we’ll talk about later.
We just alluded to the fact that it’s all about the data. Every part of an AI strategy is data-driven. That means you’re going to have to have access to all of that data, and that doesn’t necessarily mean some huge data project. It could just be permission to get data, right? We hear a lot of data scientists say, “Oh, I’m spending 60, 70% of my time getting the data.” Well, you know what, when you start actually interviewing them and ask what’s behind that, they’ll say, “Well, I have to email someone to see what application this data exists in, and then someone else to get permission to use it.” It becomes an email challenge rather than an actual data acquisition challenge.
It’s not necessarily a technical problem, although it helps that organizations have created data lakes. The good news is that organizations have so many applications. I talked to one financial services company and asked how many applications they have, and they said “3,000.” Can you believe that? That’s because of acquisitions and things like that, but all of those applications can potentially generate valuable data for these use cases. So that’s the great news about enterprises: they have a lot of rich data. And if you look at some of those sources of data, the Internet giants don’t have this data. Enterprises have this data, and this is why enterprises are going to be successful with AI, and why they’re going to make a difference in the world – because they have the data. But you have to provide access to it. One of the reasons we suggest multiple use cases is the old adage, true for computer science and just as true for data science: garbage in, garbage out. If you don’t have the right data, a use case might not pan out, or it might be hard to get that data – so you have to pursue multiple use cases. The data infrastructure also has to meet the requirements of machine learning. When the algorithms train a model, they’re analyzing a lot of data, and you never do just one run – it’s very, very iterative – so the data infrastructure has to be in place to support that. The good news is that most enterprises have either a data lake or a data warehouse: a place where that data can gather and where the machine learning platform they use can run against that data. There’s certainly cloud infrastructure as well, which makes it easy to be very elastic.
Ingrid Burton: So Mike, I was going to say that we’ve seen companies for the last five-plus years going through a data or digital transformation and getting to a data-driven world. Now is the time for enterprises to take those data lakes and data warehouses and really produce actionable results, right? This is where they’re going to be able to say, “Look, all that work I did to put that data lake together, or that cluster, or to put it in the cloud, is now something I can use to predict a future outcome.” It all begins with the data, and we now have ways to extract value from that data that, up until a few years ago, were really hard to come by.
Mike Gualtieri: It’s a happy coincidence that companies were creating data lakes primarily for business intelligence applications – the happy coincidence being that the same data can be used for machine learning as well. If companies were very interested in AI but still had to think about which data platforms they would use, that would be adding another step. But when we look at large enterprises, more than 70% of large companies have done some sort of data lake project. They’ve gone through that journey, so they’re ready to go.
So you’ve got your use cases, and you understand that you need ready access to data and permission to use it. Now you have to do the data science. Now you have to build the models. One key strategy element is identifying the tools you’ll use to do that. Whether it’s the data science teams or the computer science teams, what tools are they going to use?
We think most companies can benefit from AutoML – automated machine learning. A lot of companies, and people in general, say data scientists are expensive. They aren’t really expensive – they’re inefficient. If you look at data science teams, they’re about 15 years behind where application development teams are in terms of tools. But that’s now changing through AutoML. What AutoML does is automate key aspects of the machine learning model-building life cycle.
You can see the life cycle here – you start by getting data, then move to data prep and feature engineering, followed by a process of running multiple algorithms, evaluating those models, and putting them into production. This process hasn’t changed much over the years. And a lot of the tools that data science teams use are straight programming – coding, whether it’s in Python, R, or something else. Some of these steps can be automated. In fact, many data scientists do automate them; it’s just that they write their own code to do it, and that’s a big investment. But now we have AutoML tools – platforms that actually automate this for you.
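To make that concrete, here is a minimal sketch of an AutoML run using H2O’s open source Python API (H2OAutoML, not the Driverless AI product Ingrid mentions later); the CSV file, column names, and settings are illustrative placeholders rather than a real project.

```python
# A minimal AutoML sketch using H2O's open source Python API.
# The CSV path and column names are placeholders, not a real dataset.
import h2o
from h2o.automl import H2OAutoML

h2o.init()  # start or connect to a local H2O cluster

# Load data and split it into training and test frames
frame = h2o.import_file("customer_churn.csv")
train, test = frame.split_frame(ratios=[0.8], seed=42)

target = "churned"                        # column we want to predict
features = [c for c in train.columns if c != target]
train[target] = train[target].asfactor()  # treat the target as categorical

# AutoML trains and cross-validates many models, then ranks them
aml = H2OAutoML(max_models=20, max_runtime_secs=600, seed=42)
aml.train(x=features, y=target, training_frame=train)

print(aml.leaderboard.head())             # models ranked by the chosen metric
predictions = aml.leader.predict(test)    # score new data with the best model
```

The leaderboard at the end is the automated equivalent of the “run many algorithms, evaluate them, and rank them” step described above.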
At Forrester, we break the market for machine learning tools into three segments. One is multimodal, which has a wide breadth of tools – drag and drop, some coding, and some AutoML capabilities. Then there are the notebook-based tools that are code-first, and then AutoML, which is specifically designed for automation. Now, we believe that a large enterprise probably has a need for each one of these approaches, and it’s very akin to software development. Large enterprises do have .NET programmers. They do have C# and Java programmers, despite certain tools that are out there, because sometimes you need to code. But the lion’s share of the work can be automated or semi-automated. So when you’re looking at tools, we think AutoML has come of age. These tools compress the time to model because they automate very time-consuming aspects of feature engineering, algorithm selection, evaluation, and, in some cases, deployment as well.
Ingrid Burton: Here’s my commercial plug – we have an AutoML platform called Driverless AI. It’s not about autonomous driving; it’s about self-driving AI – automatic machine learning, just as Mike said, that does it all in a single platform for data scientists. It does some of the heavy lifting that data scientists have been doing, such as feature engineering, and it also provides interpretability, algorithm and model selection, and training – training continues throughout the process, as you just saw. The platform also helps you interpret the results, visualize the results, and produce a scored model – a pipeline – that you can deploy across various applications and edge points as well. If there are folks out there who want to try it out, have your data scientists, data engineers, or business analysts give it a go with a 21-day free trial. It’s really built for data scientists by data scientists, and we’d love to have you try it.
Mike Gualtieri: One of the things that I find fascinating about AutoML – and I alluded to this – is that there were many data scientists who did do automation, but they coded it themselves. What I find fascinating is how H2O has hired Kaggle grandmasters and drawn on their insights about how to create an automatic machine learning platform.
Ingrid Burton: Exactly. We have hired a number of Kaggle grandmasters – there are only about 100 Kaggle grandmasters. These are expert data scientists who have earned that title through competitions; they’ve risen to the top. We have a number of them at the company – about seven or eight at this point – who are contributing to this product. They’re using all their tips and techniques to say, “We know what you need to do when you’ve got this kind of data set, or this kind of algorithm,” and they’ve injected that knowledge into the product. So you don’t have to hire a Kaggle grandmaster – we’ve provided a platform that gives you the tips and techniques that they have. We want to give you those same tools. Even expert data scientists can benefit from something like AutoML, because it gives them results faster, so they can focus on the next problem they’re trying to solve. They are the whizzes who really are solving business problems for you in your enterprise, and you want them to solve as many of those problems as possible. So democratizing data science is really important. It’s giving everybody a level playing field in terms of the tool sets. As Mike said, we need to get people caught up and into this new world of AI and machine learning so they can get the same results as the next guy or the next company over. Everybody who wants to be competitive moving forward is going to be using some form or fashion of AI and machine learning.
Mike Gualtieri: Just one more comment on this: when you’re trying to build a model, there are dozens, hundreds, sometimes even thousands of choices to make – the features you’re going to use, the features you’re going to create, the algorithms you’re going to use, and how you’re going to tune those algorithms. So there are potentially dozens, hundreds, if not thousands of iterations. Why not configure the likely runs, have a machine do them, and then rank-order the models based upon the evaluation methods that you picked? So it’s good stuff. I think there are a lot of exciting things happening in the tooling. I don’t think it would have happened, though, if enterprises weren’t so engaged and interested in AI. That demand sort of created the market.
So that brings us to our next strategy element, which is to make sure that this isn’t just data science. This speaks to another challenge that we continually hear from data science teams: they’ve created a model that they think is wonderful – it can predict quality defects, it can predict a machine’s failure, it can predict which upsell a customer will accept – but then it isn’t deployed, or it’s very, very difficult to deploy. And one of the reasons for that is that a model that gets created impacts a business process. If you don’t think upfront about how it’s going to impact that business process, then you’re going to have to do that work after the model has been built, and that could potentially add weeks or months to the process.
The same is true of how an application is designed – the user experience. How is this actually going to surface in an application? In this world of AI, and especially machine learning, we often just focus on the model, and that’s it – we’re done. But a comprehensive AI strategy has to address how the model will manifest itself in a business process and in those applications. The strategy should prescribe that the different teams get brought in early on, so the data science team can go figure out whether they can build a model, and the application and design teams can figure out how that model will be used in the process.
Ingrid Burton: I totally agree with you, Mike. Where we see success with customers that use H2O.ai is when they have the entire team involved. AI is a team sport, and that is super important. It’s understanding that there’s a business owner, there’s probably a data engineer who might sit in IT, there’s a data scientist, and there’s an end-user customer as well that you’re trying to serve. You have to think through not only the implications of using AI to streamline processes or to solve business problems; there’s also a cultural element, where people have to be aware that these projects are underway. It’s giving them extra intelligence and insights that they may not have had earlier, because the data unearths so many different things for them. So when we suggest this, we really mean involving every stakeholder in the organization in order to be successful. Otherwise, if it’s just a data scientist by him or herself, they’ve got a big uphill battle – perhaps with a business leader who says, “I don’t believe that – I’m going with my gut feel.” And we hear that sometimes. So bring them in early, address the cultural side of things, and make sure you think about all aspects of the business problem you’re trying to solve and how AI will impact it.
Mike Gualtieri: Our next strategy element is to practice responsible AI. AI is fundamentally different from application code, or from code in general. It’s probabilistic and data-driven – the data leads to the model, which makes the prediction. So it’s fundamentally different. AI solutions are a little bit like us: they’re not perfect. They’re probabilistic, right? When you create a model to make a prediction, the reason we say prediction is because it’s probabilistic. It may be 97% correct, or 98%, or maybe 87% correct. So it’s really important in an AI strategy to lay out how AI is going to work and what the acceptable risks are. First and foremost is being able to explain the models. Explainability is a big field of research, but there are also some pragmatic methods and techniques available in machine learning tools now.
Business leaders rightly get nervous about agreeing to a black box that has millions of dollars’ worth of implications, so include explainability in your strategy. Part of that has to do with auditability and re-creating your work. Explainability and re-creating your work are absolutely essential in regulated environments, but that doesn’t mean they’re not important in non-regulated environments – they’re important for the health of the organization. You want processes and tools that allow you to recreate the work, and that really means bringing a software development discipline to data science. As I said earlier, data science is very much a discipline of scientists, and they don’t necessarily have all of the tools and processes in place to recreate those models. So make sure there are processes in place so that if someone leaves or someone new comes in, you can absolutely recreate the work behind that model.
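To make per-prediction explanation concrete, here is a minimal sketch using the open source SHAP library on a gradient-boosted model; the synthetic data, feature names, and model choice are made up for illustration, and this shows one common technique rather than any particular vendor’s interpretability feature.

```python
# Minimal sketch: per-prediction explanations with SHAP on a tree model.
# The data, feature names, and target rule are synthetic placeholders.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb

# Fabricated example data: three features and a binary target
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 3)),
                 columns=["income", "utilization", "tenure_months"])
y = (X["utilization"] - 0.5 * X["tenure_months"] > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)    # fast, exact explanations for tree models
shap_values = explainer.shap_values(X)   # one contribution per feature per row

# Rank the features by how much they pushed the first prediction up or down
reasons = pd.Series(shap_values[0], index=X.columns)
print(reasons.reindex(reasons.abs().sort_values(ascending=False).index))
```

The ranked contributions for a single row are one way to produce the kind of “reason codes” that regulated industries ask for: which features pushed this particular prediction up or down, and by how much.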
Ingrid Burton: So with responsible AI – or explainable AI, really – first of all, in regulated environments like insurance, healthcare, and financial services, there are regulations that say you have to describe your model. You have to reveal how you got those results, and there are technologies that allow you to do that. We have that in Driverless AI with machine learning interpretability. But as an organization you have to guard against bias in your model. You have to be able to explain the model, you need to be able to document it, and you need reason codes coming out of your model, so that you can trust your model – trust the AI. And just like any other business process, you must do some double-checking to make sure that you’re constantly guarding against things like bias and outliers that can skew the results. You constantly have to be on guard for that. It’s all part of the AI journey that you’re embarking upon, or are already in the middle of, and it’s a key part of taking AI forward. Any other comments on that, Mike?
Mike Gualtieri: I think the other comment is that this is a very big topic, because it also gets into ethical considerations in terms of how models are used, and things like that. I’ve talked to very large enterprises that have a couple of people on their legal team, for example, looking into the ethics of AI and what that means for the company. So there’s an impact on a company’s brand in terms of how they use it as well. But it’s not that different from any other technology that any company was going to use. This concept even came up with mobile devices – how do we use mobile responsibly? AI is different, but it’s not as different as people might think.
Here’s one way to do that, and it’s our next strategy element: stay in charge. What that means is mitigating the risks – using human expertise to limit the risk of a model, and using the model to limit the risk of human decision-makers. Our belief, and what we’ve seen, is that AI is smartest when it’s driven by both humans and machines. Staying in control means you don’t have to do what the model tells you to do. The example I give is a model that makes a loan decision – it predicts whether we should make the loan or not. Maybe an executive at the company says, “I’m nervous. What if it decides to make a million-dollar loan?” Guess what – you can just put program logic in that says if the model approves a $1 million loan, don’t do it; route it to a human. It’s actually quite easy to mitigate many of the risks, or to govern the edge cases that make business leaders nervous. That technology is generally called knowledge engineering, but these are essentially rules that can govern and even improve how a model works.
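To illustrate the kind of guardrail Mike describes, here is a minimal sketch that wraps a model’s probabilistic output in plain business rules; the function names, threshold, and confidence cutoff are hypothetical, not part of any specific product.

```python
# Minimal sketch: simple business rules ("knowledge engineering") wrapped
# around a model's loan decision. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class LoanDecision:
    approve: bool
    route_to_human: bool
    reason: str

HUMAN_REVIEW_THRESHOLD = 1_000_000  # any loan this large goes to a person

def decide_loan(amount: float, approval_probability: float) -> LoanDecision:
    """Combine a model's probabilistic output with hard business rules."""
    if amount >= HUMAN_REVIEW_THRESHOLD:
        # The rule overrides the model: large loans always get human review.
        return LoanDecision(False, True, "amount exceeds human-review threshold")
    if approval_probability >= 0.8:
        return LoanDecision(True, False, "model confidence above 0.8")
    return LoanDecision(False, True, "model confidence too low; needs review")

# Example: the model is confident, but the rule still routes it to a human.
print(decide_loan(amount=1_200_000, approval_probability=0.97))
```

The rule fires before the model’s confidence is even consulted, which is exactly the “route it to a human” behavior an executive can sign off on.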
Ingrid Burton: I totally agree, Mike. You have to have the human in the loop. Every use case still needs human eyeballs and intelligence to make it successful. It’s something we constantly advise – it’s back to garbage in, garbage out; you need a human at the beginning and you need a human at the end, right? You can guard against things going wrong, but you have to be constantly vigilant. As you said with your Jeopardy example, we make connections that a machine never can, right? Like the $1 million loan example – a machine might just say, “This person’s good for a $1 million loan, maybe on a credit card,” and we’d say, “That’s crazy.” You’d never do that, right? Maybe just a few people in the world could carry that, but not most of us. There are a lot of things you can guard against, but as Mike said, it’s a journey. It’s this whole end-to-end process, and it’s kind of cyclical; you constantly need humans – teams of humans – to be part of the whole experience.
Mike Gualtieri: It goes back to the strategy element of making AI a team sport. When you bring in the business stakeholders, the application development professionals, and the application designers, you want to surface any concerns or challenges early. You don’t want the data science team to deliver a model and then have everyone say, “Oh, what about this? What about that?”
Ingrid Burton: Right. You need them in the loop. We see this again with Driverless AI and some of the key challenges that enterprises face today – AutoML helps with the talent side of it, where it augments your data science efforts and really amplifies and accelerates them as well, saving you time, which results in ROI, of course. Then there’s trust, and being able to explain the model.
Mike Gualtieri: Those are our strategy elements. We see AI as the fastest-growing workload on the planet, and we say that across all dimensions – infrastructure, development, and applications. So we hope these strategy elements will be helpful to your business. Thank you. Now Ingrid will open it up for Q&A.
Ingrid Burton: We’ve got a couple of questions that came in over the course of this discussion, so I’ll read them off. I think they’re referring to Driverless AI: does it run multiple models at a time? Does it just suggest the best one, or does it only show the best at the end? Yes, you can pick algorithms throughout the process. There are a lot of dials for data scientists, who can look at the algorithms and override what the tool is suggesting. It can suggest things, so it’s iterative. The second question was, “How do you classify natural language processing? Is this considered pragmatic AI?”
Mike Gualtieri: Yes – it is what it says it is. It takes natural language, the way we speak, and it also extends to the way we write, so it is definitely a part of pragmatic AI. There are all kinds of things you can do to extract entities, topics, and sentiment – all kinds of analysis. So I definitely characterize it as pragmatic AI. Often, it’s done in combination with structured data: you might analyze natural language to extract features from that text, which you then use in a structured machine learning environment as well. The other thing to consider here is computer vision. When we lay out the technologies, machine learning and deep learning are the broadest possible categories, but there’s also computer vision with deep learning models, and audio models as well. We do consider NLP to be a pragmatic AI technology.
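To show what combining natural language with structured data can look like in practice, here is a minimal sketch using scikit-learn, where TF-IDF features extracted from a text column are fed into the same model as a numeric column; the tiny dataset and column names are invented for illustration.

```python
# Minimal sketch: combining text-derived (NLP) features with structured data.
# The column names and rows are hypothetical placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

data = pd.DataFrame({
    "complaint_text": ["late delivery", "billing error", "great service", "item broken"],
    "account_age_months": [3, 48, 24, 7],
    "will_churn": [1, 1, 0, 1],
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "complaint_text"),        # NLP-derived features
    ("structured", "passthrough", ["account_age_months"]),  # numeric feature as-is
])

model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(data[["complaint_text", "account_age_months"]], data["will_churn"])

print(model.predict(pd.DataFrame({"complaint_text": ["package never arrived"],
                                  "account_age_months": [5]})))
```

The text column supplies the language-derived features, while the numeric column rides along untouched – one simple version of feeding NLP output into a structured machine learning setting.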
Ingrid Burton: I think these seven suggestions on embarking on an AI strategy for your enterprise are just top notch. Mike, any final thoughts?
Mike Gualtieri: Right now, we see that more than half of enterprises are implementing AI. We just did a projection of where we think we’ll be in the next two to five years, and we see that continuing to accelerate. About three years from now, we think more than 95% of enterprises will report that they are implementing some form of AI. So this field is going to keep taking off, in our opinion.
Ingrid Burton: Yes, I would agree. I think it’s just a great time to be in data and data science, and to really be using that to drive better outcomes. It’s an excellent time to be looking at these new technologies that are out there and available to you. I want to thank everyone for coming today – whether you’re listening to the replay or listening live, thank you again for attending our webinar. Mike, thank you so much for your interest and your great insights.
Speaker Bios
Ingrid Burton: Ingrid Burton is CMO at H2O.ai, the open source leader in AI. She has several decades of experience leading global marketing teams to build brands, create demand, and engage and grow communities. She also serves as an independent director on the Aerohive board. Prior to H2O.ai she was CMO at Hortonworks, where she drove a brand and marketing transformation, and created ecosystem programs that positioned the company for growth. At SAP she co-created the Cloud strategy, led SAP HANA and Analytics marketing, and drove developer outreach.
She also served as CMO at Silver Spring Networks and Plantronics after spending almost 20 years at Sun Microsystems, where she was head of Sun marketing, led Java marketing to build out a thriving Java developer community, championed and led open source initiatives, and drove various product and strategic initiatives. A developer early in her career, Ingrid holds a BA in Math with a concentration in Computer Science from San Jose State University.
Mike Gualtieri: Mike’s research focuses on software technologies, platforms, and practices that enable technology professionals to deliver digital transformations that lead to prescient digital experiences and breakthrough operational efficiency. His key technology coverage areas are AI, machine learning, deep learning, AI chips and systems, digital decisions, streaming analytics, prescriptive analytics, big data analytical platforms and tools (Hadoop/Spark/Flink; translytical databases), optimization, and emerging technologies that make software faster and smarter. Mike is also a leading expert on the intersection of business strategy, artificial intelligence, and innovation.
Mike provides technology vendors with actionable, fine-tuned advisory sessions on strategy, messaging, competitive analysis, buyer-persona analysis, market trends, and product road maps for the areas he directly covers and adjacent areas that wish to launch into new markets or use new technologies. Mike is a recipient of the Forrester Courage Award for making bold calls that inspire leaders and guide great business and technology decisions.