What is AI governance?

AI governance is a self-defined framework that outlines an organization’s use of AI. This framework includes legal compliance requirements, the organization’s strategies, and the supporting processes and policies.

The goal of AI governance is to create AI that is ethical, explainable, transparent, and responsible, closing the gaps between accountability and ethics that exist in software. AI governance has become an important practice in response to AI’s explosive growth.

AI governance can define many areas:

  • Data quality

  • Data autonomy

  • Data control

  • Data access

  • AI safety

  • Appropriate and inappropriate AI automation

  • Legal structures

  • Institutional structures

  • Moral questions

  • Ethical questions

  • Justice

Data and algorithms heavily influence the tools we rely on in everyday life. AI governance steers that influence and empowers users to better understand and control how their AI software is used.

Does AI create bias?

AI has been shown to create bias, which is why AI governance is so important. An AI system’s output is only as good as its data inputs: biased data produces biased results.

Left unchecked, machine learning biases have been observed to profile people racially and financially, return incorrect information to users, misread sentiment, and even deliver results in inappropriate ways.
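
To make the idea concrete, one widely used bias check is the disparate impact ratio, sometimes called the “four-fifths rule.” The sketch below is a generic illustration in Python, not H2O code; the column names and data are hypothetical.

```python
# Minimal sketch of a disparate impact check (the "four-fifths rule").
# Illustrative only: the "group" and "approved" columns are hypothetical.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    rate_protected = df.loc[df[group_col] == protected, outcome_col].mean()
    rate_reference = df.loc[df[group_col] == reference, outcome_col].mean()
    return rate_protected / rate_reference

# Example: binary approval decisions from a model, split by applicant group.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})
ratio = disparate_impact(decisions, "group", "approved",
                         protected="B", reference="A")
# A ratio below roughly 0.8 is a conventional red flag for adverse impact.
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.75 = 0.33
```

A check like this is only a starting point; governance practices typically pair it with human review of the underlying data and the decision context.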

Why do we need AI governance?

AI and machine learning algorithms can now be found in a majority of sectors that influence our everyday lives. AI systems are used to improve our healthcare, financial systems, education, shipping, logistics, public safety, entertainment, and even farming and agriculture.

The wide exposure alone should be reason enough for all of us to demand visibility and control mechanisms, or “governance” for the systems that influence much of our lives.

Developing and enabling AI governance will help establish the best decision-making practices for situations that rely on AI. That governance will help ensure that decisions are just and do not interfere with basic human rights.


How do you mitigate AI risks?

The best way to mitigate AI risks is to adopt AI governance practices and tools. Those tools need to provide visibility into, and controls over, the systems that employ AI algorithms. Just as a manufacturing process contains quality-control systems to ensure that end products meet standards, an AI governance system provides visibility and ensures that controls are in place to enforce governance principles.


What is the black box problem?

The black box problem comes from the way AI can naturally work in obscurity. AI models take an input, apply rules, and produce an output. But the obscurity (the black box) between input and output can cause trust issues: What made the algorithm decide that? What information was ignored? How can we be sure that the output is accurate or valid?


AI governance creates the windows that allow us to see and understand the inner workings of the AI “black box,” and it helps create the external control mechanisms to alter how AI produces its output.
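
One generic way to open such a window is a post-hoc explanation technique like permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. The sketch below uses scikit-learn and synthetic data purely for illustration; it is not H2O’s explainability tooling.

```python
# Sketch: probing a black-box model with permutation importance.
# Illustrative only; assumes scikit-learn and synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, mean_drop in enumerate(result.importances_mean):
    print(f"feature {i}: importance {mean_drop:.3f}")
```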


Who should be responsible for AI governance?

A committee should ultimately be responsible for enacting and enforcing AI governance. No single person or role should bear that responsibility alone; every team member plays an important part.

  • CEO or top management - Responsible for the AI governance charter and for assigning responsibility within the organization.

  • Board members, audit committee - Responsible for controlling and auditing the data.

  • General Counsel - Responsible for assessing the legal risks of the data and practices.

  • CFO - Responsible for assessing the financial risks of the data and practices.

  • CDO - Responsible for the regular maintenance and assessment of new AI governance rules and enforcement practices.

  • Other leadership members - Responsible for ensuring general understanding of, and compliance with, established AI governance practices.

H2O.ai Enterprise-wide AI scale and governance

H2O.ai allows organizations to operate AI across departments with trust and confidence. H2O AI Cloud offers a system of record for every AI project in your organization, enabling organizational scale while reducing risk. The platform provides the most comprehensive set of explainable AI capabilities, ensuring models are accurate, robust, and understood by both data scientists and business users, improving governance and simplifying compliance. H2O AI Cloud supports almost every type of model and automatically monitors accuracy and detects bias across the data science lifecycle, including stored features, model training, and model operations.

What do AI ethicists do?

An AI ethicist ensures trust and transparency by guarding against bias and unintended consequences. An AI ethicist’s primary responsibility is to improve AI engineering practices by ensuring that the design, development, and implementation of AI systems include ethical, social, and political safeguards.

KPMG identified the AI Ethicist as one of the “Top 5 AI hires companies need to succeed in 2019.” The need for this role continues to be apparent, as evidenced by the increasing variety of titles, including:

  • AI Ethicist

  • Chief AI Ethics Officer

  • Chief Trust Officer

  • Ethical AI Lead

  • Ethics Analyst, AI Ethics and Governance

  • Trust and Safety Policy Advisor

This list of AI ethics roles will, and should, grow as AI becomes more integrated into the systems we use.


H2O and AI Governance

H2O MLOps is a complete system for the deployment, management, and governance of models in production, with seamless integration with H2O Driverless AI and H2O open source for model experimentation and training.

Production model governance

Production environments require particular security and controls to ensure that software is not tampered with or accidentally corrupted. Production operators receive training in production procedures, and production controls ensure compliance through rigorous auditing of access, changes, and events. H2O MLOps includes everything an operations team needs to govern models in production, including a model repository with complete version control and management, plus access control and logging for legal and regulatory compliance.
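
To illustrate the version-control and audit-logging ideas in the paragraph above, here is a deliberately simplified sketch. H2O MLOps provides these as managed capabilities; every name in this toy example is invented for illustration.

```python
# Toy sketch of a model registry with immutable versions and an audit trail.
# Purely conceptual; all class and field names here are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

class ModelRegistry:
    def __init__(self):
        self.versions = {}   # model name -> list of version records
        self.audit_log = []  # append-only event log for compliance review

    def _log(self, actor: str, action: str, detail: str) -> None:
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "detail": detail,
        })

    def register(self, actor: str, name: str, artifact: bytes) -> int:
        """Store a new version; the content hash guards against tampering."""
        record = {
            "version": len(self.versions.get(name, [])) + 1,
            "sha256": hashlib.sha256(artifact).hexdigest(),
        }
        self.versions.setdefault(name, []).append(record)
        self._log(actor, "register", f"{name} v{record['version']}")
        return record["version"]

registry = ModelRegistry()
registry.register("alice", "credit-risk-model", b"<serialized model bytes>")
print(json.dumps(registry.audit_log, indent=2))  # who did what, and when
```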

H2O MLOps also gives IT teams control over production models and environments to ensure security and manage risk and compliance based on IT and corporate governance practices.

Read the H2O MLOps product brief


Responsible machine learning

The following snippets are taken from the eBook, Responsible Machine Learning.

“The decision to move into the world of ML is not a simple undertaking and smart leaders can be left asking, “how can I mitigate the risks for my organization?” Luckily, there are mature model governance practices crafted by government agencies and private companies that your organization can use to get started. This section will highlight some of the governance structures and processes your organization can employ to ensure fairness, accountability, and transparency in your ML functions. This discussion is split into three major sections: model monitoring, model documentation, and organizational concerns. We’ll wrap up the discussion of model governance with some brief advice for practitioners looking for just the bare bones needed to get started on basic model governance.”

AI governance and compliance

“Aligning your ML systems with leading compliance guidance such as the EU GDPR, the Equal Credit Opportunity Act (ECOA), or the US Federal Reserve’s SR 11-7 guidance on model governance.”

AI governance in machine learning workflows

“Despite its long-term promise, ML is likely overhyped today just like other forms of AI have been in the past (see, for example, the first and second AI winters). Hype, cavalier attitudes, and lax regulatory oversight in the US have led to sloppy ML system implementations that frequently cause discrimination and privacy harms. Yet, we know that, at its core, ML is software.

To help avoid failures in the future, all the documentation, testing, managing, and monitoring that organizations do with their existing software assets should be done with their ML projects, too. And that’s just the beginning.

Organizations also have to consider the specific risks for ML: discrimination, privacy harms, security vulnerabilities, drift toward failure, and unstable results.

After introducing these primary drivers of AI incidents and proposing some lower-level process solutions, this chapter touches on the emergent issues of legal liability and compliance. We then offer higher-level risk mitigation proposals related to model governance, AI incident response plans, organizational ML principles, and corporate social responsibility (CSR).

While this chapter focuses on ways organizations can update their processes to better address special risk considerations for ML, remember that ML needs basic software governance as well.”

Get the full eBook

AI Deployment and Governance

H2O AI Cloud offers complete capabilities to deploy, monitor, test, explain, challenge, and experiment with real-time models in production. H2O’s MLOps technology enables users to watch in real time how data and predictions are changing, as well as monitor alerts and risk flags as they occur.
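
As an illustration of one common drift signal such monitoring can rely on, the sketch below computes the population stability index (PSI) between training-time scores and live scores and raises an alert past a conventional threshold. This is a generic technique, not H2O’s implementation; the data and threshold are illustrative.

```python
# Sketch: drift detection with the population stability index (PSI).
# Illustrative only; the 0.2 threshold is a common rule of thumb, not H2O's.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, buckets: int = 10) -> float:
    """PSI over quantile buckets of the expected (baseline) distribution."""
    edges = np.quantile(expected, np.linspace(0, 1, buckets + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # catch out-of-range live values
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)    # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # model scores at training time
live = rng.normal(0.5, 1.0, 10_000)      # live scores, drifted upward
score = psi(baseline, live)
print(f"PSI = {score:.3f} -> {'ALERT' if score > 0.2 else 'ok'}")
```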