November 30th, 2022

A Brief Overview of AI Governance for Responsible Machine Learning Systems

Category: AI Governance, Machine Learning, Responsible AI

Our paper “A Brief Overview of AI Governance for Responsible Machine Learning Systems” was recently accepted to the Trustworthy and Socially Responsible Machine Learning (TSRML) workshop at NeurIPS 2022 (New Orleans). In this paper, we discuss the framework and value of AI Governance for organizations of all sizes, across all industries and domains.

Our paper is publicly available on arXiv: Gill, N., Mathur, A., and Conde, M. V. (2022). A Brief Overview of AI Governance for Responsible Machine Learning Systems.

Introduction

Organizations are leveraging artificial intelligence (AI) to solve many challenges. However, AI technologies can pose significant risks. To prevent and mitigate those risks, organizations must turn to AI Governance: a framework designed to oversee the responsible use of AI.

AI Adoption & Problems within Industry

AI Adoption

  • Larger companies have had the advantage of adopting AI early, largely because of the resources at their disposal.
  • Smaller companies can now take advantage of AI through more affordable means, e.g., cloud computing.
  • AI adoption is on an upward trend and will continue to grow.
  • Unfortunately, AI has both pros and cons, and the cons are sometimes not accounted for.

Problems within Industry

  • Lack of risk management
  • AI technology is moving too fast
  • Government intervention is lacking
  • Lack of AI adoption maturity 

Manage AI Risk with AI Governance

What is AI Governance (AIG)?

AI Governance is a framework for operationalizing responsible artificial intelligence at organizations. This framework:

  • Encourages organizations to curate and use bias-free data
  • Requires consideration of societal and end-user impact
  • Helps teams produce unbiased models
  • Enforces controls on model progression through deployment stages (a minimal sketch of such a control follows below)
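
To make the last point concrete, here is a minimal Python sketch of a stage-gating control. All names here (ModelRecord, REQUIRED_APPROVALS, the stage and approval labels) are hypothetical illustrations, not part of the paper or of any H2O.ai product.

```python
from dataclasses import dataclass, field

# Hypothetical deployment stages and approval gates, for illustration only.
STAGES = ["development", "validation", "staging", "production"]

# Checks that must be signed off before a model may leave each stage.
REQUIRED_APPROVALS = {
    "development": {"data_review", "bias_review"},
    "validation": {"accuracy_signoff", "security_signoff"},
    "staging": {"business_signoff"},
}

@dataclass
class ModelRecord:
    name: str
    stage: str = "development"
    approvals: set = field(default_factory=set)

def promote(model: ModelRecord) -> ModelRecord:
    """Advance a model one stage, but only if every required approval exists."""
    if model.stage == STAGES[-1]:
        raise ValueError(f"'{model.name}' is already in production")
    missing = REQUIRED_APPROVALS[model.stage] - model.approvals
    if missing:
        raise PermissionError(
            f"Cannot promote '{model.name}' from {model.stage}: missing {sorted(missing)}"
        )
    model.stage = STAGES[STAGES.index(model.stage) + 1]
    model.approvals = set()  # approvals must be re-collected at each stage
    return model

model = ModelRecord("credit_risk_v2")
model.approvals.update({"data_review", "bias_review"})
promote(model)
print(model.stage)  # validation
```

The value of a gate like this is that no single practitioner can push a model to production; every transition leaves an auditable record of which approvals were collected.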

Benefits of AI Governance

Alignment and Clarity

  • Awareness of and alignment on industry, international, regional, local, and organizational policies.

Thoughtfulness and Accountability

  • Put deliberate effort into justifying the business case for AI projects.
  • Make a conscious effort to consider end-user experience, adversarial impacts, and public safety and privacy.

Consistency and Organizational Adoption

  • A consistent way of developing and collaborating on AI projects.
  • This consistency leads to increased tracking and transparency for projects.

Process, Communication, and Tools

  • A complete understanding of the steps required to move an AI project to production and start realizing business value.

Trust and Public Perception

  • Build AI projects more thoughtfully.
  • Thoughtfully built projects earn trust among customers and end users, and therefore a positive public perception.

Stages of a Governed AI Life Cycle

Organizational Planning

  • Comprehensive understanding of regulations, laws, and policies among all team members.
  • Resources and help available for team members who encounter challenges.
  • Clear process to assist team members.

Use Case Planning

  • Establish business value, technology stack, and model usage.
  • The group of people involved includes subject matter experts; data scientists, analysts, annotators, and ML engineers; IT professionals; and finance departments.

AI Development

  • Development of a machine learning model, spanning data handling and analysis, modeling, explanation generation, bias detection, accuracy and efficacy analysis, security and robustness checks, model lineage, validation, and documentation (a bias-check sketch follows below).
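
As one concrete example of a bias-detection step, the sketch below computes the adverse impact ratio, the statistic behind the "four-fifths rule" commonly used in fair-lending and hiring reviews. It is a minimal illustration with toy data, not the specific methodology of the paper.

```python
import numpy as np

def adverse_impact_ratio(y_pred, group):
    """Ratio of favorable-outcome rates for a protected group vs. a reference
    group; values below ~0.8 are a common red flag (the four-fifths rule)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_protected = y_pred[group == 1].mean()
    rate_reference = y_pred[group == 0].mean()
    return rate_protected / rate_reference

# Toy decisions: 1 = favorable outcome (e.g., loan approved).
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1, 1, 0])
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])  # 1 = protected group

print(adverse_impact_ratio(y_pred, group))  # ~0.75 -> below 0.8, flag for review
```

In a governed life cycle, a value below the threshold would not silently pass validation; it would be documented and routed to the reviewers named in the organization's policies.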

AI Operationalization

  • Deployment of the machine learning model to production, which requires review and approval workflows, monitoring and alerts, decision-making processes, and incident response plans (a monitoring sketch follows below).
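
Monitoring is the part of this stage that lends itself most readily to automation. The sketch below uses the population stability index (PSI), a common drift statistic, to decide when an alert should fire. The 0.2 threshold is a widely used rule of thumb, and everything here is an illustrative assumption rather than a prescribed H2O.ai workflow.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between reference scores (e.g., validation-time) and live scores.
    Rule of thumb: PSI < 0.1 stable, 0.1-0.2 moderate shift, > 0.2 major drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live scores
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) for empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
reference_scores = rng.normal(0.5, 0.1, 10_000)  # scores at validation time
live_scores = rng.normal(0.6, 0.1, 10_000)       # production scores have shifted

psi = population_stability_index(reference_scores, live_scores)
if psi > 0.2:
    print(f"ALERT: score drift detected (PSI={psi:.2f}); trigger incident response")
```

Under a governed life cycle, such an alert would route to the model's named owner and trigger the incident response plan rather than simply printing to a console.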

Conclusion

AI systems are used today to make life-altering decisions about employment, bail, parole, and lending, and the scope of decisions delegated to AI systems seems likely to expand in the future. The pervasiveness of AI across many fields will not slow down anytime soon, and organizations will want to keep up with such applications. However, they must be cognizant of the risks that come with AI and must have guidelines for how they approach AI applications. By establishing a framework for AI Governance, organizations can harness AI for their use cases while avoiding risks and keeping mitigation plans in place, which is paramount.

About the Authors

Navdeep Gill

Navdeep is an Engineering Manager at H2O.ai where he leads a team of researchers and engineers working on various facets of Responsible AI. He also leads science and product efforts around explainable AI, ethical AI, model governance, model debugging, interpretable machine learning, and the security of machine learning. Navdeep previously focused on GPU-accelerated machine learning, automated machine learning, and the core H2O-3 platform at H2O.ai.

Prior to joining H2O.ai, Navdeep worked as a Senior Data Scientist at Cisco and as a Data Scientist at FICO. Before that, he was a research assistant in several neuroscience labs at California State University, East Bay; the Smith-Kettlewell Eye Research Institute; the University of California, San Francisco; and the University of California, Berkeley.

Navdeep graduated from California State University, East Bay with an M.S. in statistics (with an emphasis on computational statistics), a B.S. in statistics, and a B.A. in psychology with a minor in mathematics.

Abhishek Mathur

Abhishek is part of the product management team at H2O.ai. He has extensive experience building machine learning and deep learning products at various startups and F100 companies. Outside of work, Abhishek enjoys teaching and coaching budding product managers. He has always been fascinated by new technologies, including blockchain/crypto/NFTs, AR/VR/metaverse, quantum computing, and more. When he's not thinking about product management or technology, Abhishek enjoys sipping on craft beers and single malt scotch.

Marcos V. Conde

Marcos V. Conde is a Ph.D. Researcher in Artificial Intelligence and Computer Vision at the University of Würzburg. Marcos is a Data Scientist at H2O.ai and a Kaggle grandmaster. His research interests include image processing, computational photography, and machine learning applied to medicine and natural sciences.
