Instead of just being hype, the Internet of Things (IoT) is now becoming a reality. Gartner forecasts that 6.4 billion connected devices will be in use worldwide in 2016, with 5.5 million new devices coming online every day. These devices range from wearables, to sensors in vehicles that can detect surrounding obstacles, to sensors in pipelines that detect their own wear and tear. Huge volumes of data are collected from these connected devices, and yet companies struggle to get optimal business and IT outcomes from that data.
Why is this the case?
• Rule-based data models limit insights. Industry experts have a wealth of domain knowledge that is manually encoded into business rules, which in turn drive the data models. Many current IoT practices simply run large volumes of data through these rule-based models, but the business insights are limited to what the rules allow. Machine Learning/Artificial Intelligence can find new patterns in stored data without human intervention. These new patterns can be applied to data models, generating new insights for better business results.
• Analytics in the backend data center delay insights. In current IoT practice, data is collected and analyzed in the backend data center (e.g. an OLAP/MPP database, Hadoop, etc.). Data models are typically large and hard to deploy at the edge, because IoT edge devices have limited computing resources. The trade-off is that large amounts of data travel long distances, unfiltered and unanalyzed, until the backend systems have the bandwidth to process them. This defeats the purpose of real-time business insight, not to mention the high cost of transferring and ingesting the data in the backend data center.
• Lack of security measures at the edge reduces the accuracy of insights. Current IoT practice also secures only the backend, while security threats can be injected from edge devices. Data can be altered and viruses can be injected while data is in transit. How accurate can the insights be when data integrity is not preserved?
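The difference between rule-based and pattern-based models can be sketched in a few lines of plain Python. This is an illustration only, not H2O code: the sensor readings, the hard-coded threshold, and the mean-plus-three-standard-deviations "model" are all made-up assumptions standing in for an expert's rule and a learned model, respectively.

```python
import statistics

# Hypothetical vibration readings from a pipeline sensor during
# normal operation (made-up data for illustration).
normal_readings = [0.9, 1.1, 1.0, 1.2, 0.8, 1.05, 0.95, 1.15]

# Rule-based approach: a domain expert hard-codes a threshold once,
# and insights are limited to what that rule allows.
def rule_based_alert(reading, threshold=2.0):
    return reading > threshold

# Pattern-based approach: the threshold is learned from historical
# data (mean + 3 standard deviations of normal behaviour) and can be
# re-learned whenever new data arrives.
def learn_threshold(history, k=3.0):
    return statistics.mean(history) + k * statistics.pstdev(history)

learned = learn_threshold(normal_readings)

# A reading of 1.8 slips past the expert's fixed rule but is flagged
# by the threshold learned from the data itself.
print(rule_based_alert(1.8))  # False
print(1.8 > learned)          # True
```

The point of the sketch is the control flow, not the statistics: the expert's rule is frozen at design time, while the learned threshold moves as the data does.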
The good news is that H2O can help with:
• Pattern-based models. H2O detects patterns in the data with distributed Machine Learning algorithms, instead of depending on pre-established rules. In many use cases, H2O's AI engine has found dozens more patterns than humans were able to discover. Patterns can also change over time, and H2O models can be continuously retrained to yield more and better insights.
• Fast and easy model deployment with a small footprint. The H2O Open Source Machine Learning Platform creates small-footprint data models that can score events and make predictions in nanoseconds. The models are Java-based and can be deployed anywhere with a JVM, or even as a web service, so they can easily run at the IoT edge to yield real-time business and IT insights.
• Enabling security measures at the edge. AI is particularly adept at finding and establishing patterns, especially when it’s fed huge amounts of data. Security loopholes and threats take on new forms all the time. H2O models can easily adapt as data show new patterns of security threats. Deploying these adaptive models at the edge means that threats can be blocked early on, before they’re able to cause damage throughout the system.
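The deployment pattern described above — score locally, forward only what matters, retrain as patterns drift — can be sketched as follows. In a real H2O deployment the scorer would be an exported Java model running on the device's JVM; here a simple learned threshold plays that role so the sketch is runnable anywhere, and the readings, window size, and threshold rule are all illustrative assumptions.

```python
from collections import deque

class EdgeScorer:
    """Toy stand-in for a trained model deployed at the IoT edge."""

    def __init__(self, window=100):
        self.history = deque(maxlen=window)  # recent normal readings
        self.threshold = float("inf")        # no model until trained

    def retrain(self):
        # Re-fit the "model" on the most recent window of data,
        # so it adapts as the underlying pattern drifts.
        if self.history:
            mean = sum(self.history) / len(self.history)
            self.threshold = 2 * mean

    def score(self, reading):
        # Score locally; normal readings stay on the device and
        # feed the next retraining cycle.
        is_anomaly = reading > self.threshold
        if not is_anomaly:
            self.history.append(reading)
        return is_anomaly

scorer = EdgeScorer()
scorer.history.extend([1.0, 1.1, 0.9, 1.0])  # bootstrap with normal data
scorer.retrain()

readings = [1.0, 1.2, 5.0, 0.95]
# Only flagged events leave the device for the backend,
# instead of the full unfiltered stream.
forwarded = [r for r in readings if scorer.score(r)]
print(forwarded)  # [5.0]
```

The same structure applies whether the flagged event is a failing part or a suspected security threat: the decision is made at the edge, before the data travels anywhere.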
There are many advantages to enabling analytics at the IoT edge, and H2O can be crucial in this endeavor. Many industry experts are already moving in this direction. What are you waiting for?