Accelerating AI Adoptions with Partners

This session was held by Vladimir Brenner, Partner Account Manager, Disruptors & AI, Intel AI at the Dive into H2O: London training on June 17, 2019.

Please find the slides here:


Read the Full Transcript

Vlad Brenner: Hello, everyone. Thank you for coming. My name is Vlad Brenner and I’m with Intel. I want to talk to you today a little bit about how we accelerate AI adoption with our partners. There are three key topics we’re going to cover: the business imperatives, so really, what problem statements are we trying to solve within the industry; what Intel does in the AI space as a company, and also as a catalyst for the industry; and at the end, the collaboration we have between Intel and H2O.

Let’s start with the business imperatives. AI is a term that’s been around a while. The first definitions of it came out around the 1940s, but the enthusiasm started to build up very recently. And we at Intel attribute it to three key factors. Number one is obviously Moore’s law: the availability and cost of compute, the performance you can buy for a low price, and that really helps to drive the agenda forward. Secondly, it’s really about the availability of open source models and algorithms for people to go and develop on. And finally, and probably most importantly, it’s all about the data. The sheer volume of data we currently have out there is ginormous. And by 2020, we expect another 50 billion devices and 200 billion sensors to join the network, and they will continue to generate masses and masses of data.

You can see that by 2020, the average internet user is only going to generate about one and a half gigabytes of internet traffic, while the smart factory at the same time is generating one petabyte. So right now, a lot of this data is out there, but it’s not really utilized. It’s not really converted into insight and not really adding as much value as it can, and it’s our job collectively here to convert it into value for organizations: to improve their revenue streams, to optimize their costs, and hopefully make the world a better place to be.
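Taking the figures quoted above at face value, the gap between the two is easy to put in perspective with some quick arithmetic (assuming decimal units, i.e. 1 GB = 10^9 bytes and 1 PB = 10^15 bytes):

```python
# Rough scale comparison of the figures quoted in the talk.
# Assumption: decimal units (1 GB = 1e9 bytes, 1 PB = 1e15 bytes).
GB = 10**9
PB = 10**15

user_traffic = 1.5 * GB   # projected traffic of an average internet user
factory_traffic = 1 * PB  # projected data generated by a smart factory

ratio = factory_traffic / user_traffic
print(f"One smart factory generates roughly {ratio:,.0f}x "
      f"the traffic of an average internet user.")
```

In other words, a single smart factory generates on the order of hundreds of thousands of times more data than an individual user, which is why so much of it goes unutilized today.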

The second thing is that a lot of people talk about artificial intelligence, but as you start to peel the onion and really try to understand what’s happening there, the reality is that only about 46% of CIOs out there have plans to adopt AI, and only 4% of them have actually started to do something about it. So we really need to drive that conversation on, really drive it into double-digit percentages, and it’s really down to us to make it work.

Now, to talk a little bit about Intel. Obviously, you know Intel mainly for our hardware: our CPUs, FPGAs, SSDs, and all the other silicon out there. And ultimately, it still holds true that we are the provider of that hardware, from multipurpose solutions to purpose-built ones. But it’s also now a lot about the tools. How do we accelerate software development? How do we develop libraries for developers to take the most advantage of the hardware they use? And then it’s ultimately about the solutions and the partner ecosystem we work with, and how we facilitate AI in the industries where there is a need for it. It really comes down to those three things: hardware, tools, and solutions working in tandem, rather than just the hardware itself.

And if we just talk about hardware for a moment, you have almost end-to-end availability of products at Intel, from multipurpose to purpose-built, from endpoint to edge to server, spanning sensors, vehicles, desktop and mobility, server appliances, and everything else in between. So we are really proud of being able to provide that comprehensive stack to the community and to the ecosystem so that you can go and run your AI workloads on it.

But it doesn’t end here, because this is what we already have or what’s going to be released in the near-term future. Intel has obviously been working on a lot more futuristic stuff for many years now. It’s neuromorphic computing, a complete breakthrough innovation in that area. We already have a 49-qubit quantum computing test chip in the works, and there are more things coming in the future. So by taking Intel hardware, it’s not about today, it’s about two to three generations out, and sometimes about the brand new types of products that you are taking with you.

And again, hardware is critical, but the tools are equally important. It’s the toolkits such as OpenVINO, the Intel Movidius SDK, and Deep Learning Studio, which allow app developers to perform, in the case of OpenVINO, better visual inference. It’s the libraries for the most advanced distributions, like the Intel Distribution for Python. It’s the deep learning frameworks, whether it’s Caffe or TensorFlow. It’s all being optimized now to work best on Intel hardware, to show the best performance output you can possibly have.
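As a concrete illustration of what "optimized for Intel hardware" can mean in practice, a common first step with Intel-optimized builds of frameworks like TensorFlow (with MKL-DNN) is tuning the Intel OpenMP runtime via environment variables. The variable names below are the standard Intel OpenMP / MKL settings; the specific values are illustrative, not tuned recommendations for any particular machine:

```python
# Minimal sketch: Intel OpenMP / MKL runtime tuning for an
# Intel-optimized framework build. Values here are illustrative.
import os

# One OpenMP worker thread per logical core.
os.environ["OMP_NUM_THREADS"] = str(os.cpu_count() or 1)

# How long (ms) threads spin-wait after a parallel region before sleeping.
os.environ["KMP_BLOCKTIME"] = "1"

# Pin OpenMP threads to cores to reduce migration and cache thrashing.
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"

# These must be set before the framework is imported, e.g.:
# import tensorflow as tf
```

The key detail is that these variables are read once, when the optimized runtime initializes, so they have to be set before the framework import rather than after.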

It’s the foundational stuff, DAAL, MKL-DNN, all of these things our teams work on to make sure they are the most optimized and that you really reap the benefits as you use them. And solutions: it doesn’t end with the hardware and the toolkits we provide, it’s also about developing an ecosystem, a community of like-minded people who really want to bring the goodness of AI out there into the market. It’s about the Intel AI Builders program, a program for those who want to engage with us, a one-stop shop to find systems, software, and solutions using Intel AI technologies. If you go there today, you’ll find our great collaboration with H2O as an example. And there are reference solutions that give you a head start, so you’re not starting completely from scratch, but from a good starting point.

And then there are a couple of things I want to talk to you about as well. It’s about the performance of the CPUs gen on gen, and how it changes. Why is it that we now talk a lot more about using CPUs for machine learning and deep learning applications, whilst just a few years ago we would only talk about GPUs? The reality of the situation is that a few years ago, our standard product, the Intel Xeon processor, wasn’t necessarily optimized for machine learning workloads. It was more general purpose, and therefore, many years ago, it wouldn’t compare perfectly well. But if you look gen on gen, our team has done a lot of optimizations to make sure that the Xeon actually reaps the full benefit.

And you can see upwards of 277x improvement in performance, so at this point in time, we can confidently say that across many applications, whether you talk about machine learning, whether you talk about inference, and even in some cases training, the CPU is already sufficient for you to do that work without the need for any incremental hardware.

And now finally, I want to touch real quick on our collaboration with H2O. Those of you who have been in California may have seen this banner, which we put up on the way from Silicon Valley to the San Francisco airport. It really shows our commitment to collaborating with companies like H2O, who are trying to bring about this democratization of AI, and to working together with them to make it possible out there in the market.

And the collaboration we have is obviously, you can think about it as a better-together story with project Blue Danube, where we bring our extensive tech IP leadership, comprehensive partner ecosystem, and end-to-end analytics and AI solutions, and combine it with H2O’s open source AI leadership, their commitment to democratizing AI, the tremendous AI momentum they’ve developed out there, and their being driven by customer value.

And if you want to look real quick at what our joint stack looks like, it’s pretty straightforward. It’s built on the Intel platforms, whether it’s Xeon Scalable, SSDs, or Ethernet adapters. We put our optimized libraries and frameworks on top, depending on the application. We then put on top some of the leading machine learning platforms: Driverless AI, open source H2O, Sparkling Water. And then we verticalize and make it performance-efficient for specific use cases. So it’s really this perfect combination of hardware, the toolkit, and the machine learning platform of H2O that makes it so special, I guess.

At this point, all I can tell you is, if you’re not very familiar with what we do at Intel, please feel free to go to, it’s a great repository of various information. Obviously go if you haven’t been there yet, although I doubt you haven’t. Or just go visit the AI Builders website to look at some of the reference solutions, including the one we have with H2O.