
BLOG

Climbing the AI and ML Maturity Model Curve


By Karthik Guruswamy | November 19, 2019


AI/ML Maturity Model Curve/Steps

AI/ML maturity models are published and updated periodically by many vendors. The end goal is almost always the same: effect transformation, automate processes in a short period, and make AI the DNA/core of the business.

One of the biggest challenges for businesses today is to clearly define what success looks like when their organization is fully AI-driven. If businesses don’t have a sense of what the short-term or long-term goals of AI transformation look like, attaining them will remain elusive. In this blog post, I will list a few common scenarios folks often run into and show how techniques like automatic machine learning can bring these initiatives to fruition.

Problem Definition — How can AI help first?

The call to advance AI/ML initiatives is often fraught with confusing messaging, and there are a lot of issues to tackle. There are two parts to this: technology and culture. This blog focuses mostly on the technology aspect of AI adoption.

One of the ways business sponsors explore AI initiatives is to start with a small pilot with very clearly defined business outcomes, see how it can be deployed in production, and build from there toward other short-term and long-term objectives.

  • How can AI save X million dollars in customer churn? Show me the model insights for a 90% accurate model.
  • Show me that you can give me a 5–10% lift in marketing ROI and how to integrate with my application.

Photo by Andrew Neel on Unsplash

While the above looks like an easy enough problem statement, it usually leads to more questions from business folks, data scientists, data engineers, and DevOps:

  • Ugh — we don’t have labeled data in the organization ready to do the pilot for this use case.
  • What misclassification rate can we tolerate when calculating the target savings?
  • What is the easiest way to put the model in production to measure lift?
  • Can I see the key drivers in the churn and implicit rules hidden in the data to socialize with peers?
  • What “other” data do I need to get to make my model better?
  • How can I trust the model, and what happens if the data changes over time?
  • How do I know the model is not biased against a protected group?
  • Can a BI analyst or citizen data scientist build some models quickly?
  • How can I integrate domain/institutional knowledge into the prediction problem?
  • Will this run on the cloud/on-prem?
  • How easy is it to train and socialize the results across multiple lines of business?
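The misclassification question above is worth making concrete, because it directly gates the savings target. Here is a hypothetical back-of-envelope sketch: all dollar figures, counts, and the recall/precision values are invented assumptions for illustration, not benchmarks from any real deployment.

```python
# Toy model of how misclassification eats into a churn-savings target.
# Every number here is an invented assumption for illustration.

def expected_savings(n_churners, recall, precision,
                     value_per_save, cost_per_contact):
    """Net savings from a churn model with the given recall/precision."""
    true_positives = n_churners * recall          # churners we actually catch
    contacted = true_positives / precision        # total customers we contact
    gross = true_positives * value_per_save       # value of retained churners
    cost = contacted * cost_per_contact           # retention offers sent out
    return gross - cost

# Assumed: 10,000 churners, $500 retained value each, $20 offer per contact
savings = expected_savings(10_000, recall=0.80, precision=0.60,
                           value_per_save=500, cost_per_contact=20)
print(f"${savings:,.0f}")  # → $3,733,333
```

The same function makes it easy to see how quickly the target erodes as precision drops, which is exactly the trade-off the tolerance question is asking about.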

Not surprisingly, the answers to all of these questions are sprinkled across various stages of the AI maturity model curve. A thoughtfully designed AI/ML solution should offer easy answers to the above for every single use case, so each one can get off the ground and into production: significant productivity across the organization, and a more predictable path to realizing a use case.

Expert Systems Redux — with Machine Learning

From Wikipedia — ‘Expert Systems’

In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of artificial intelligence (AI) software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.


Photo by Marc Mueller on Unsplash

AI expert systems started with IF-THEN rules, which are still the state of affairs in a lot of business processes. Even some “modern” compliance systems use lexicon rules around text data, for example. While IF-THEN rules are hand-coded from domain expertise, they generate a lot of misclassifications (false positives and false negatives), and they are painfully erroneous and cumbersome to measure and maintain.
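To make the misclassification problem concrete, here is a minimal sketch of a hand-coded churn rule evaluated against labeled outcomes. The rule, the thresholds, and the tiny customer dataset are all invented for this illustration:

```python
# Toy illustration of why hand-coded IF-THEN rules accumulate
# misclassifications. Rule, thresholds, and data are all invented.

customers = [
    # (months_inactive, support_tickets, actually_churned)
    (5, 3, True), (6, 0, True), (2, 1, True),
    (4, 1, False), (0, 0, False), (5, 0, False), (2, 2, False),
]

def rule_flags_churn(months_inactive, support_tickets):
    # Hand-written "expert" rule: inactive > 3 months OR more than 2 tickets
    return months_inactive > 3 or support_tickets > 2

false_pos = sum(1 for m, t, churned in customers
                if rule_flags_churn(m, t) and not churned)      # loyal, flagged
false_neg = sum(1 for m, t, churned in customers
                if not rule_flags_churn(m, t) and churned)      # churned, missed
print(false_pos, false_neg)  # → 2 1
```

Even on seven records the fixed thresholds both over-flag loyal customers and miss a churner; at scale, tuning those thresholds by hand is exactly the maintenance burden described above.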

For this reason, AI/machine learning algorithms have, over the last several years, been replacing IF-THEN rule systems, creating more accurate predictions, lowering the operational cost of systems, and delivering a significant lift in business outcomes.

What about Automatic Machine Learning?

In the past, implementing machine learning for a problem was a series of steps done by coding in Python, R, Scala, etc. A data scientist starts with descriptive statistics of the dataset, cleanses it, creates features or derived columns, runs a bunch of algorithms, checks for overfitting, tunes the models, and so on. This typically requires a lot of iterations, different libraries, and repeated visualization of the data, and can sometimes seem never-ending.

With the right talent, solving a use case manually may take 4 to 6 weeks, or even a few months, to achieve the level of accuracy and model robustness the business problem requires.

Automatic machine learning, on the other hand, removes the rote iteration from building high-quality models. It can automate algorithm selection, tuning, and cross-validation (to avoid overfitting). Some tools, like H2O’s Driverless AI, go well beyond basic automatic machine learning: automatic visualization, automatic feature engineering, code generation for production, and even a Word document (Auto-Doc) telling the “story of the winning model”, all without the user writing a single line of code. So it’s not really a black box as to what happens under the hood. It also means just a few hours to days to get a model to production, as opposed to weeks or months. The tool can even produce artifacts for machine learning interpretability, allows the business to check for adverse impact across protected groups, and supports sensitivity analysis to examine model robustness under different conditions, which is useful for regulatory compliance.
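At its core, the loop being automated is “try several algorithms, cross-validate each, keep the best.” The sketch below shows that loop using scikit-learn and one of its bundled datasets, purely for illustration; it is not Driverless AI’s implementation, which layers feature engineering, visualization, and documentation on top of this kind of search:

```python
# Minimal sketch of the core loop that automatic machine learning automates:
# algorithm selection + cross-validation. Illustrative only; not Driverless AI.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # stand-in binary-outcome dataset

candidates = {
    "logreg": LogisticRegression(max_iter=5000),
    "rf": RandomForestClassifier(n_estimators=200, random_state=0),
    "gbm": GradientBoostingClassifier(random_state=0),
}

# 5-fold cross-validation guards against overfitting; AUC is a typical
# scorer for problems like churn prediction
scores = {name: cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

A full AutoML system runs many more candidates and tuning iterations than this, and also searches over engineered features, but the structure of the search is the same.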

If AI/ML model building is automated, then what do data scientists do?

Data scientists and business stakeholders are always in charge. They can be specific about the features, algorithms, and scorers that go into a model; in other words, they can constrain models and set acceptable boundaries on how complex or simple the final models should be. Driverless AI’s automated machine learning and feature engineering work within those boundaries, set by the data scientist and business folks, if that’s what is required. Data scientists can even bring their own algorithms, features, or scorers to the tool (Bring Your Own Recipe) and have them compete with what the tool offers in a tournament setting.
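The “bring your own scorer” idea can be sketched in a few lines. Below, a hypothetical business-specific cost metric replaces a stock accuracy score in cross-validation; the dollar figures are invented assumptions, scikit-learn stands in for the tool’s recipe mechanism, and the dataset is again a bundled stand-in:

```python
# Hedged sketch of "bring your own scorer": scoring models by an invented
# business cost metric instead of accuracy. Illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, make_scorer
from sklearn.model_selection import cross_val_score

def net_value(y_true, y_pred):
    # Custom scorer: each caught positive is worth $500 (assumed);
    # each false alarm costs a $20 retention offer (assumed).
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp * 500 - fp * 20

scorer = make_scorer(net_value)  # wrap it so model selection can use it
X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000)
score = cross_val_score(model, X, y, cv=5, scoring=scorer).mean()
print(score)
```

Because the scorer expresses value in dollars rather than accuracy points, the tournament between candidate models is settled on the terms the business actually cares about.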

So how does this all come together? 

Deploying sophisticated AI/ML model-building tools like Driverless AI can bring huge productivity gains, which in themselves could be transformational: taking tens of projects to completion, and integrating them with applications, using the same resources that previously delivered one. Add an agile organizational structure, process, and culture change to facilitate such automation, and a business has a huge opportunity to reach the top of the AI maturity model curve in a very short amount of time.

Thanks for reading!



Karthik Guruswamy

Karthik is a Principal Pre-sales Solutions Architect with H2O. In his role, Karthik works with customers to define, architect, and deploy H2O’s AI solutions in production to bring AI/ML initiatives to fruition. Karthik is a “business first” data scientist. His expertise and passion have always been around building game-changing solutions, using an eclectic combination of algorithms drawn from different domains. He has published 50+ blogs on “all things data science” on the LinkedIn, Forbes, and Medium publishing platforms over the years for a business audience, and speaks at vendor data science conferences. He also holds multiple patents around desktop virtualization and ad networks, and was a co-founding member of two startups in Silicon Valley.