

Building an AI Aware Organization


By H2O.ai Team | October 26, 2020


Responsible AI is paramount when we think about models that impact humans, either directly or indirectly. All the models that make decisions about people, whether about creditworthiness, insurance claims, or HR functions, as well as systems such as self-driving cars, have a huge impact on human lives.

We recently hosted James Orton, Parul Pandey, and Sudalai Rajkumar for a virtual panel to discuss the different stages of AI maturity in an organization, the components required to build an AI-aware organization, and the interpretability methods and algorithmic fairness of AI systems. The following is a brief overview of the discussion:

“As a foundation – comply with any regulation that is active in your jurisdiction”  

In a commercial setting, one could argue that the main benefit of responsible AI is risk management: avoiding the reputational damage of getting it wrong. With increasing adoption, countries and global agencies are tightening the rules around using and implementing AI models, and a huge pipeline of further regulation is on the way in this space.

Trust is another big pillar of being responsible with your AI solutions. Trust can be internal: do the people you work with, your stakeholders, trust the solutions you’re building? It can also be external: do your customers trust the way you’re using AI to make decisions about them?

Robustness is another aspect that is becoming more and more important, particularly ensuring that models hold up against attacks, such as people covering road signs to confuse self-driving cars.

“AI is just a tool”  

Parul emphasized that AI and ML algorithms are just tools: how they perform depends on many factors, including the training data and any inherent bias in it, which means that your output is only as good as your input.

In a recent credit line dispute raised by a couple, gender was not even being considered as an input criterion by the issuing bank, yet the algorithm was learning and inferring gender from the buying patterns of the two individuals.

Working with unstructured data, SRK explained the need for Responsible AI using a hate speech detection algorithm. He demonstrated that the mere mention of someone being straight or a member of the LGBTQ community would get flagged as hate speech by the algorithm, thereby marginalizing those individuals even further.

James explained using an example he witnessed in the HR space, where a model was biased against women even though gender was not used as a variable. He described the issue of proxies as massive, because even the most subtle cues can be picked up by the algorithm.

This points to the fact that algorithms need to be audited: before models are put into production, organizations have to ensure they are rigorously tested for any bias that might have crept in by means of proxies.
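To make that auditing step concrete, here is a minimal sketch of one common pre-deployment check, the adverse impact ratio (the “four-fifths rule”). The data, group labels, and threshold are illustrative assumptions, not a specific panelist’s method; note that the check runs on model outcomes, so it can surface proxy-driven bias even when the protected attribute is never a model input.

```python
# A minimal bias-audit sketch, assuming NumPy. The adverse impact ratio is
# the rate of favorable outcomes for a protected group divided by the rate
# for the reference group; values below ~0.8 are a common warning sign.
import numpy as np

def adverse_impact_ratio(y_pred, group, protected, reference):
    protected_rate = y_pred[group == protected].mean()
    reference_rate = y_pred[group == reference].mean()
    return protected_rate / reference_rate

# Hypothetical approval decisions (1 = approved) and group labels.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

air = adverse_impact_ratio(y_pred, group, protected="b", reference="a")
print(f"adverse impact ratio: {air:.2f}")
if air < 0.8:  # rule-of-thumb threshold, not a legal standard
    print("warning: potential disparate impact; investigate proxies")
```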

“Responsible AI should be imbibed into the core philosophy of your company”

James answered the question of whether Responsible AI is a tedious luxury with another question. He said, “There’s a small cost of upfront investment in doing things the right way, and aligning to responsible AI, versus the potentially large cost of getting it wrong. Are you prepared to take that risk of the AI failure that you may see in a few months to come from that model?” He emphasized setting the right expectations for stakeholders about what can, cannot, and should not be done.

Parul added that Responsible AI is not only for data scientists: it should be imbibed into the core philosophy of your company, not something that can be plugged in at the end of the pipeline once the model is about to go into production. The entire organization needs to be sensitized to the importance of ethics and responsible AI, because it is not only the need of the hour, it is the right thing to do.

“Husky or a wolf?” 

On being asked about the trade-off between predictive power and explainability, Parul cited a paper written by the creators of LIME, “Why Should I Trust You?”. In it, the authors show an algorithm classifying a husky as a wolf, not because of any features of the animal, but because of the snow in the background. Such a mistake might be harmless when it results in the wrong YouTube recommendation, but it can prove disastrous when one’s money or health is involved.
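For readers who want to see this in practice, here is a minimal sketch using the open-source lime package with scikit-learn. The paper’s famous example used images, but tabular data shows the same idea: ask which inputs actually drive an individual prediction. The dataset and model here are illustrative assumptions, not the panelists’ setup.

```python
# A minimal LIME sketch, assuming scikit-learn and the "lime" package.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain a single prediction: which features pushed the model toward it?
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

If the top-weighted features look like background noise (the snow rather than the animal), that is your cue to revisit the training data before trusting the model.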

With the advent of cutting-edge platforms such as H2O Driverless AI, the trade-off between accuracy and interpretability ranges from minimal to almost none. Even so, our panelists recommend going the extra mile and comparing accuracy and interpretability for every use case you develop, as in the sketch below. It will help you understand the business value you can bring by deploying a comprehensible model to production.
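As a rough illustration of that comparison, here is a minimal sketch using scikit-learn; the dataset, models, and metric are illustrative assumptions rather than a prescription.

```python
# Compare an interpretable model against a black box on the same task,
# assuming scikit-learn. If the gap is small, the comprehensible model
# may be the better candidate for production.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

models = {
    "interpretable (logistic regression)": LogisticRegression(max_iter=5000),
    "black box (gradient boosting)": GradientBoostingClassifier(),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.3f}")
```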

“Data has a lot of power” 

SRK stressed the importance of having policies that govern ethics around data procurement, data privacy, and security, as these are very sensitive topics that can completely derail an organization’s brand and the trust it enjoys in the markets.

A key consideration that is often overlooked: given the data and the use case, should the model be built at all? We have to consider which data to include in a model and which to exclude, including proxies for that data. Organizations have to ensure that no personally identifiable information remains in the dataset, as none of it should be required or used in any business context.
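One way to operationalize that, sketched below with pandas, is a simple pre-training PII screen. The column deny-list and the email pattern are hypothetical; a real pipeline should rely on a vetted PII scanner and a maintained policy, not this toy check.

```python
# A toy PII screen, assuming pandas. Drops deny-listed columns and flags
# free-text columns that still look like they contain email addresses.
import re
import pandas as pd

PII_COLUMNS = {"name", "email", "phone", "ssn", "address"}  # assumed deny-list
EMAIL_RE = re.compile(r"[^@\s]+@[^@\s]+\.[A-Za-z]{2,}")

def strip_pii(df: pd.DataFrame) -> pd.DataFrame:
    # Drop columns whose names appear on the deny-list.
    df = df.drop(columns=[c for c in df.columns if c.lower() in PII_COLUMNS])
    # Flag text columns that still appear to contain email addresses.
    for col in df.select_dtypes(include="object"):
        if df[col].astype(str).str.contains(EMAIL_RE).any():
            raise ValueError(f"possible PII left in column {col!r}")
    return df
```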

And as users, we also have to make sure we understand the information we’re putting out into the world. As we see every day, fake or potentially harmful content is consumed faster than genuine, useful content. As practitioners, we need to understand the limits, and the trade-offs we are willing to accept.

“Always have humans in the loop” 

AI and machine learning are very powerful tools with the ability to solve a lot of problems, but like any other tool, they have to be used responsibly. As data scientists, we should make sure we’re using them in a way that is responsible and serves social good. The most important part is having humans in the loop: AI, automation, and all these tools are given to humans so that they can make better use of them. This is something to always keep in mind; humans are required to be in the loop.

You can watch the entire discussion here.

 

Read more about our Responsible AI efforts here.

Our latest eBook, Responsible Machine Learning, outlines a set of actionable best practices for people, processes, and technology that can enable organizations to innovate with ML in a responsible manner. You can get your free copy here.


H2O.ai Team

At H2O.ai, democratizing AI isn’t just an idea. It’s a movement. And that means that it requires action. We started out as a group of like-minded individuals in the open-source community, collectively driven by the idea that there should be freedom around the creation and use of AI.

Today we have evolved into a global company built by people from a variety of different backgrounds and skill sets, all driven to be part of something greater than ourselves. Our partnerships now extend beyond the open-source community to include business customers, academia, and non-profit organizations.