Since we began our journey making tools for explainable AI (XAI) in late 2016, we've learned many lessons, often the hard way. Through headlines, we've seen others grapple with the difficulties of deploying AI systems too.
AI can affect people, in various and harmful ways. One of the most significant lessons we've learned is that there's more to being responsible with AI than the explainability of ML models, or even technology in general.
There's a lot to consider when mitigating the wide variety of risks presented by new AI systems. We claim that responsible AI combines XAI, interpretable machine learning (ML) models, data and machine learning security, discrimination testing and remediation, human-centered software interfaces, and lawfulness and compliance. It's been tough to hold our evolving responsible AI software to this high bar, but we continue to make progress toward these lofty goals.
To break down what we've learned a bit more, here are the basics of how we think responsible AI tools are most effective:
Seemingly well-built systems can cause big problems when their results are presented to users as a final, opaque, and unappealable outcome. As is already mandated in the US consumer finance vertical, use responsible AI tools to tell your users how your AI system makes decisions and let them appeal those decisions when the system is inevitably wrong. Better still, allow users to fully engage with AI systems, through appropriate graphical interfaces, to satisfy their basic human curiosity about how these impactful technologies work.
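To make the decision-explanation idea concrete, here is a minimal sketch of how per-decision reason codes could be generated with XGBoost's built-in SHAP contributions and surfaced to a user alongside an appeal option. The data and feature names are invented for illustration; this is a sketch, not our production tooling.

```python
import numpy as np
import xgboost as xgb

# Toy credit-style data; feature names are invented for illustration only.
rng = np.random.default_rng(0)
feature_names = ["debt_to_income", "utilization", "months_on_file"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2] > 0).astype(int)

dtrain = xgb.DMatrix(X, label=y, feature_names=feature_names)
model = xgb.train({"objective": "binary:logistic", "max_depth": 3},
                  dtrain, num_boost_round=50)

# Per-decision SHAP contributions; the last column is the bias (intercept) term.
applicant = xgb.DMatrix(X[:1], feature_names=feature_names)
contribs = model.predict(applicant, pred_contribs=True)[0]

# Rank features by how strongly they pushed this particular decision --
# these are the reason codes a user could see next to an "appeal" button.
reasons = sorted(zip(feature_names, contribs[:-1]),
                 key=lambda kv: abs(kv[1]), reverse=True)
for name, value in reasons:
    print(f"{name}: {value:+.3f}")
```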
Users will probably never understand all the details of a contemporary AI system, but the engineers and data scientists who design and build the system should. This means building interpretable and explainable AI systems that enable exhaustive testing and monitoring.
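As one small illustration of what interpretability plus testing can look like in practice, the sketch below constrains a gradient boosting model to behave monotonically in its inputs, which makes its behavior easier to reason about, and then runs a basic adverse impact ratio check across a synthetic demographic group. The features, the group flag, and the four-fifths threshold are illustrative assumptions, not a complete testing regime.

```python
import numpy as np
from xgboost import XGBClassifier

# Synthetic data: two inputs and an invented protected-group flag.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))            # e.g. income, debt (illustrative)
group = rng.integers(0, 2, size=1000)     # synthetic demographic marker
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Monotone constraints keep the model's behavior plausible and testable:
# predicted approval can only rise with income and fall with debt.
model = XGBClassifier(n_estimators=100, max_depth=3,
                      monotone_constraints="(1,-1)")
model.fit(X, y)

# Basic discrimination test: adverse impact ratio of favorable outcomes,
# flagged against the common four-fifths rule of thumb.
preds = model.predict(X)
rate_a = preds[group == 0].mean()
rate_b = preds[group == 1].mean()
air = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Adverse impact ratio: {air:.2f}" + (" -- investigate" if air < 0.8 else ""))
```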
Think about it like this: I don't understand the structural engineering of the I-395 bridge in Washington, DC, but along with millions of other people, I put my trust in the engineers who designed and built it. AI will likely be as important to us as bridges one day. Let's start respecting the risks of AI sooner rather than later.
One of the best controls for AI systems is human oversight and review. This is why major financial firms have chief model risk officers and multiple lines of human validation, beyond data science and engineering teams, for their AI systems. Much like enabling appeals for consumers, AI systems need interfaces that empower business leaders, attorneys, and compliance personnel to evaluate the business value, reputational risk, litigation exposure, and lawfulness of an AI system. Well-meaning executives and oversight personnel can't be accountable if they can't get the necessary information about AI systems. So, AI systems must enable human-readable documentation or other appropriate executive review interfaces.
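Such documentation doesn't have to start out elaborate. The sketch below assembles a handful of model facts into a human-readable markdown summary that oversight personnel could review; every field and value here is an invented placeholder, not a required schema.

```python
# Minimal sketch: collect key model facts into a human-readable document
# for non-technical reviewers. All fields and values are placeholders.
model_card = {
    "Model name": "credit_line_increase_v2",
    "Owner": "Data science team",
    "Intended use": "Rank existing customers for credit line increases",
    "Training data": "Internal account data, 2019-2021 (illustrative)",
    "Known risks": "Disparate impact, drift in spending behavior",
    "Appeal process": "Adverse decisions reviewed by a human analyst on request",
}

lines = ["# Model documentation", ""]
lines += [f"- **{field}:** {value}" for field, value in model_card.items()]

with open("model_card.md", "w", encoding="utf-8") as f:
    f.write("\n".join(lines) + "\n")
```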
If you've found this post helpful, I'll be speaking about these topics next week. I'm flattered to be included in a panel about "Tools for a More Responsible AI" with Orange Silicon Valley on October 14th at 8:30am PT. You can register here.