
Responsible AI Panel at H2O World Sydney
AI, and algorithms in general, play a critical role in shaping outcomes that affect people across sectors such as education, social welfare, and healthcare. While regulating and assessing the impact of technology on society is not new, the scale and speed at which AI is being adopted and rolled out have prompted renewed calls for how this technology can be better regulated.

 

The three major groups that have a role to play in regulation are technology providers; government and regulators; and civil society, including civil unions and society at large. In order to minimize harm, technology providers can self-regulate, the government can regulate, or all the major stakeholders can opt to co-regulate. Arriving at a consensus between these three groups is a non-trivial task.


Speakers:

  • Jasmin Craufurd-Hill, Vice President, ARPI

  • Edward Santow, Professor, UTS

  • Angela Kim, Ambassador, Women in AI AUS 

  • Leanne Ho, CEO, Economic Justice Australia

  • Dan Jerymn, Chief Decision Scientist, Commonwealth Bank

Read the Full Transcript

 

Jasmin Craufurd-Hill:

 

We're going to start out with a couple of questions, but we've also got Slido. I know that questions are going to come up as we're in discussion, and I can see them both on the board and here. I'll try to weave some of those in as well, so that we don't get to the end of a really good discussion and no one's had a chance to have a question asked. So if something comes up and you've got a burning question, pop it into Slido, and I'll try to put it into the mix.

 

Regulation of AI

 

So to start out talking about Responsible AI, one of the things that comes up frequently is regulation. What are the rules of the road that we're going to be working with? At one end, there is a perspective that we're in the Wild West, the digital Wild West where technology is far outpacing regulation.

 

The advancement is, again, beyond the intent of the original regulation. At the other end of the spectrum is the view that poor regulation rolled out reactively tends to stifle innovation, and the speed of innovation, at a time when we're after competitive advantage. We're working in a global sphere where that regulation might not necessarily be fit for an international realm. So I wanted to open it up; that's a very large remit for discussion. What happens if we don't regulate AI effectively? What are the harms we might be looking at? Are our existing models of regulation and legislation enough? Is there a change needed? Because what we're able to do, what looks like it's in our near future, and how this will be employed, not just the technology but how it's employed, is not currently covered sufficiently. So what's the risk of under-regulation or no regulation at all, and what's the opportunity? Did you want to start off by any chance?

 

Edward Santow:

 

Yeah, sure, and it's great to be here. Maybe I'll start with two terrible confessions. The first is that I'm a lawyer. I was a human first, but I am a lawyer, and there is a natural tendency among lawyers to be like people with hammers: you think, well, every problem has a regulatory solution. The second is, when I first started this big piece of work in my previous role as Human Rights Commissioner, I started with a hypothesis, which turned out to be completely wrong. The hypothesis was, "We need a whole heap of new legislation in this area because it's a digital Wild West." When we looked at it more closely, what we came to discover was that in fact there is a vast amount of law that already exists.

 

It feels like a digital Wild West, however, because the law is not being rigorously and effectively applied. Right? That's a very different problem from saying there's just no law at all. So I say this, and here's my third confession, as a former regulator. Put together all of the major federal regulators, and we all had this terrible confession session as a group. We said, "Well, we've dropped the ball." Like, in the area of responsibility that I had in anti-discrimination law, if it was a human bank manager, obviously not working for CBA, but a human bank manager who was out there just refusing to give home loans to women, I would have been all over that person, right? There's no question about that. We would've been really alive to that problem.

 

But if the problem is a result of an algorithm misfiring, and this is not a completely hypothetical example, right? This is a genuine problem, especially in the home loan space, but in a range of other financial services and products as well. We start to question it. We start to go, "Oh gosh, does the law apply? Does it really apply? Is this an ethical issue?" I'm here to say it: the law does apply. It needs to be applied, right? So I'm choosing my grammar quite carefully there. It is just as unlawful to discriminate against someone using an abacus and a human making the decisions as it would be using the most sophisticated form of deep neural net, or anything in between. The outcome is what really matters. So that's the starting point for me. There are some gaps in the law, I acknowledge that, but the bigger part of it is making sure that the regulatory ecosystem works.

 

Angela Kim:

 

To add a little note: surprisingly, on this topic, Ed and I really argue and enjoy discussing it over coffee as if it's a life or death matter. So this is really close to our hearts, and I totally agree with Ed's point that the law is there. It's not that we have to create a new law; the thing is, it comes back to diversity and inclusion. What we really need to do is be more thoughtful before implementing the models, before implementing the policies. We have to ensure that the team, the committee, or the working group we are creating is diverse and inclusive enough. Which means it's not just a tick-the-box exercise; rather, you have to really thoughtfully consider: have we considered all the different cohorts of people, different age groups, and people with different profiles? So that we are really careful about thinking, after this law is implemented, is it going to do any harm to a particular cohort of people? After the model is developed, is it going to harm any cohort or segment of people because we haven't carefully thought about it? So I'm bringing empathy back to AI and technology, and that's why it's so important to implement the right laws and policy making, to really democratize AI and deliver AI for good.

 

Leanne Ho:

 

Yeah, that's a great segue to me. I would also like to confess that I am a lawyer and have no expertise whatsoever in this technology or data, and need to learn from you guys. But the reason I'm here today is really to give you one of the best examples of a failure, one, to follow the laws that already exist, and two, to provide that higher-level human rights and ethical framework for the development and rollout of this kind of technology, especially when you're talking about the most vulnerable people in the community. So for our international audience, I am just going to give a 101 on Robodebt. I've been brought here today because my organization gave evidence to the Robodebt Royal Commission on Friday, so I've got all of this fresh in my mind. The government hatched this scheme back in 2014-15.

 

To save money, they would take data from the tax office about what people had earned over the entire year, average that data, and compare it with what people reported as their income fortnight by fortnight. Now, this is as good a comparison as saying that the average rainfall across the entire year for all of Australia tells you how much rain there is on any particular day. You're data scientists; I think you can work out that those two data sets do not match. Now, it's been misunderstood as a problem with the algorithm, a problem with technology, a problem with rollout. When we take a step back, what we see is that it was immoral before it was illegal, and that we need those human rights and ethical frameworks to help us think, before we even start designing any of these systems: What is the impact on those most vulnerable? How do we train our people to develop that technology in a way that's not going to harm them?
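
The mismatch Leanne describes can be made concrete with a small worked example. The sketch below is purely illustrative: the payment rules, thresholds, and dollar figures are hypothetical rather than the actual Centrelink formulas, but it shows how smearing an annual income evenly across 26 fortnights can manufacture a "debt" for fortnights in which a person was paid correctly.

```python
# Illustrative sketch only: hypothetical payment rules and dollar figures,
# not the actual Centrelink formulas, showing why income averaging misfires
# for people whose earnings are uneven across the year.

FORTNIGHTS_PER_YEAR = 26
INCOME_FREE_AREA = 300.0   # hypothetical fortnightly income-free threshold ($)
TAPER_RATE = 0.5           # hypothetical: payment reduced 50c per $ over the threshold
MAX_PAYMENT = 600.0        # hypothetical maximum fortnightly payment ($)

def fortnightly_entitlement(income: float) -> float:
    """Payment owed for one fortnight under the hypothetical taper rules."""
    reduction = max(0.0, income - INCOME_FREE_AREA) * TAPER_RATE
    return max(0.0, MAX_PAYMENT - reduction)

# Someone unemployed for half the year (on payments, correctly reporting $0)
# and working the other half (off payments, earning $2,000 a fortnight).
on_payment_fortnights = [0.0] * 13
working_fortnights = [2000.0] * 13

# What they were correctly paid, assessed fortnight by fortnight.
amount_paid = sum(fortnightly_entitlement(f) for f in on_payment_fortnights)

# The averaging approach: smear the annual tax-office total evenly across
# all 26 fortnights, then reassess the fortnights they were on payments.
annual_income = sum(on_payment_fortnights) + sum(working_fortnights)     # $26,000
averaged_income = annual_income / FORTNIGHTS_PER_YEAR                    # $1,000 every fortnight
reassessed = fortnightly_entitlement(averaged_income) * len(on_payment_fortnights)

print(f"Paid under fortnightly assessment:  ${amount_paid:,.0f}")        # $7,800
print(f"Reassessed from the annual average: ${reassessed:,.0f}")         # $3,250
print(f"Phantom 'debt' raised:              ${amount_paid - reassessed:,.0f}")  # $4,550
```

Assessed fortnight by fortnight, this hypothetical person was correctly paid $7,800; reassessed from the annual average, the same rules yield only $3,250, so the averaging approach raises a $4,550 "debt" that was never actually owed.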

 

Dan Jerymn:

 

Yeah, a great point. And I should say, I'm thankful to Ed and Leanne for mentioning in advance that they're lawyers. Normally, as a banker, I've got the highest bar to clear to get the audience on side.

 

At least we're starting from a level playing field. But it was great to hear those comments; actually, we talk about these things a lot. One of the things that struck us when we were thinking about AI, and how it was clearly going to become a very important part of the way that we serve our customers, is that we wanted to make sure we got ahead of the kinds of potential problems that might come about. Some of those are well publicized, around explainability: can you understand what it is that you're doing? And, as a knock-on effect, unintended bias. So we started to think about some of the control frameworks we might need, and we wanted to work with regulators, industry bodies, and academia to try to think about this problem from multiple different angles.

 

One of the examples that I remember from very early on: we were thinking about a set of ethical AI principles, and one of those is around transparency. There was a consensus at the time that one of the things we should make absolutely clear is that for any AI algorithm we use, we should have absolute transparency. Anybody should be able to request a copy of how you did it, so you can be very clear and transparent about how you've done that. And in principle I think that makes sense, right? That sounds reasonable to most people. But immediately we started to think about the practical implications in a use case like fraud, where a logical extension of that would be that we would have to publish, step by step, almost a manual for committing fraud: here's precisely how we're thinking about using AI to keep people safe. Have at it.

 

And so I think the thing that comes home to us more and more, every single time you have a conversation, and why panels like this are so great, is that I know a little bit about the AI technology, though I feel a little less confident about my expertise today while there are some Kaggle Grandmasters in the room. But I don't know about the law to anywhere near the extent that these folks do. And perhaps these folks don't know much about vulnerable customer groups and various other aspects of society, who are going to be really fundamentally affected by the way we set policy. So the most important thing I think we can do is continue to have these types of discussions, keep evolving, and keep challenging ourselves to think about stress-test cases around the edges. Because AI is going to evolve, and we have to too.

 

Jasmin Craufurd-Hill:

 

I think it's a really nice point. Just leveraging off both your points, Leanne, Dan, around that potential, not just for a digital divide. We always think of the digital divide as have or have not, but actually the gray in the middle is that it can take already disadvantaged groups and exacerbate, accelerate, those trends in terms of disadvantage. Also, thinking of Robodebt, of who it was specifically targeted at and who it was inflicted on, there's a chance not just for explainable algorithms and the AI, ML, and DL that we're building, but a chance for education. Is there a really big piece that's needed around education, around the application and employment of AI, to help us start getting ahead of the curve?

 

Risk of AI

 

For me, think of these examples. We've seen the Apple Card with credit limits that differed on a gendered basis. We've seen COMPAS employed in sentencing, and large-scale facial recognition, with some wrongful arrests in Detroit that have occurred based on these technologies. As we start to see more rollout, we've seen facial recognition start to come out in Australia, and we're seeing algorithms being employed in our day-to-day. Is there a risk that we're going to see people impacted in different ways and not understand how to counter it?

 

Edward Santow:

 

Yeah, it's not a risk. It's a certainty. It's happening right now. So I think your question is really about education. And I guess the starting point, which is good news for people like me, is that we don't need to turn all of Australia into PhD-level data scientists. But what we do need to do is acknowledge the fact that, increasingly, AI is going to be integrated into almost every aspect of our lives. And so what that means is, to choose an analogy, right? We're a country where cars are still basically ubiquitous. So pretty much everyone needs to learn something about cars, right? If you run into a car, it's going to do more harm to you than you are to it. So there's just basic information there that we make sure everybody knows. If you're driving a car, you need to know a bit more.

 

Like, you'll know the difference between the accelerator and the brake, for example, that sort of thing. If you're not doing anything more than that, that's all you need to know; data literacy would be the analogy. But if you are going to start using AI in decision making, even if you're on the strategic side rather than the technical side, you need to go a step further. In fact, it would be completely unprofessional not to, right? You need to understand where some of the weak points are, where you might need to get advice if it goes wrong. And then, like a motor mechanic, if you are actually building the car <laugh>, if you are building a new technical system, then you need to have a much, much deeper knowledge, and your knowledge must span beyond the technical to understand a bit about the legal, the ethical, and the social implications.

 

So that's quite a big task, right? The dirty secret of AI is that it fails far more often than it succeeds, right? In the real world. I'm not talking about the laboratory; I mean in the real world. And a big problem is people like me, people who are on the strategic side, who think that they're buying magic in a box. They're not. We need to upskill so that we can engage with it more effectively. So the sorts of changes we're talking about to our education system start at school, right? It really does start that fundamentally. But it also means, for people who are actually in organizations doing real work in this space, there's an upskilling that's absolutely crucial.

 

Education with AI

 

Angela Kim:

 

Yeah. So speaking of upskilling, it seems like I'm piggybacking on Ed's point. I work with Women in AI, which is headquartered in Paris. We have 140 chapters globally, and I work with the APAC region and EMEA, and also the US, Mexico, and Canada. I'm very fortunate to work very closely with the AI for the EU consortium, because Women in AI is part of AI for the EU. So I learn from the best practice, and also from the people really pioneering this, to the point of saying we have to educate students, young people, from primary school level. We had an event in Dubai two weeks ago, and the AI Minister from Dubai was insisting that we need to really empower females, girls from primary school, and they're planning to roll out an education program around responsible AI from next year, 2023.

 

And that's how seriously that part of the world is taking it. I can also see a lot of good movement in Australia, because I've been talking to some tech providers and banking institutions as well, and they're very serious about how we can really educate future Australians, the young people who are in primary school as well as high school, so that they learn that it's not just the coding that is most important; rather, how that coding will impact people, community, and society is more important. So the students can really understand that before they become scientists, they become really good humans as well, and they care about the people around them. So that's what this education program really looks like. 2023 is going to be a really big year for the whole world, including the project I'm working on with the German Technology University at the moment.

 

We are trying to create a workshop program for data scientists, engineers, and also senior decision makers, including C-suite leaders: how to understand what needs to be looked at and what that means for the practical business decision-making process. So ongoing education is provided, so that people are aware that having a really good conversation is important, and bringing all the different members into the team before you start any great project is important as well. So I think this direction looks quite positive, and I'm looking forward to seeing all these great projects in 2023. Fantastic.

 

Disadvantaged Individuals and AI

 

Leanne Ho:

 

I really love hearing you set out the different levels that are required for this education, and it's so encouraging to hear the developments in rolling out that education. But I wanted to make the point that this skill building actually takes resources. What we see in the social security area is a really disadvantaged, vulnerable cohort, and the investment in that skill is just not there. I often say that Robodebt would not have happened if it had been a similar data-matching automation scheme directed at taxpayers. If it was about tax, and about wealthy people being targeted with a dodgy algorithm, it wouldn't have happened. So when we're talking about those most vulnerable, we really need to make sure that, in a sense, the resourcing is turned on its head. You have to be more careful, more skilled, and more understanding of what you're doing when you are applying this kind of technology to those most vulnerable.

 

Dan Jerymn:

 

I think that's right. And it's about responsibility and accountability as well. I have to congratulate Angela for the work done on the Women in AI program. I think Australia's doing a fantastic job here. We were privileged to have one of our own, I'm not sure if Anna is in the audience, Dr. Anna Leon from our team, presenting the transaction abuse piece, which I realize is actually happening right now, so I'm plugging another stream. Don't go, stay here, this is the cool one. But she's fantastic. And I think about the journey at CommBank: a few years ago, I think the female ratio in the team was somewhere around 13%. It's around 40% now. Still some more work to do, but diversity is bigger than that.

 

Actually, it's fundamental that all sectors of society are not only considered, but are actually part of the process as we develop these things, from all sorts of angles. I think Ed made the point early on, perhaps in the previous question around regulation, that the law is already fit for purpose to a large extent in some of the things that are fundamentally important around the use of AI. And that's true for us. You think about a bank: it's been running on models for a very long time. It's absolutely the responsibility of anybody doing that to be able to understand and explain what they're doing, and to be sure that it's not discriminating unfairly. And so, as we apply these things, that accountability, for the people asking for these solutions, implementing them, and looking at the results of them, exists with everybody, and certainly, more than anybody else, with the person who's responsible for the actual solution itself. So I think it takes a range of different stakeholders to think about those things from every single angle, but the accountability has to reside with the people who are implementing these solutions, to make sure they don't do the things Leanne just talked about.

 

Incentivizing Good Behavior

 

Jasmin Craufurd-Hill:

 

I think it's a really interesting point as well, linking into how we incentivize. So again, I'll tap into some Slido: how do we incentivize good behavior? When we think about making sure that the AI we're employing and building is fair, that we're dealing with elements of bias, accuracy, drift, and so on, how do we best incentivize good behaviors? But also, how do we make sure that we've got that explainability? What's the role of explainable AI, machine learning interpretability, and so on? And also these elements around who's in the team: is there an aspect of that we need in order to make sure we're incentivizing good behaviors?

 

Dan Jerymn:

 

I'll have a crack. In terms of the way I think about the incentives: AI, for us at CommBank, has been about a four- or five-year journey at least to date, and obviously modeling existed for a long time prior to that. We were very careful about the steps we would take to make sure that our existing governance framework, which, as you can imagine, is extremely robust, caters for any implementation of AI as we're doing things. I was actually speaking to a very senior person in the standards authority on Friday, and normally I'm getting very excited about whatever cool thing it is we're doing, but I was really excited because I'd had a meeting that morning about our governance, for the first time probably in my life ever.

 

I've been excited about governance, where I was explaining how we'd adapted it to think about things like self-learning models, and how you can do things that are right by customers but still be very safe and transparent about them within a well-governed framework. But one of the things that comes back to your question, around how you incentivize and how you get people on this journey, is what you can do with it if you get that right. So you think about things like fraud, or the scams that Matt Comyn was talking about earlier. Our people are looking at some terrible things happening to the most vulnerable people in society, and people want to help and do things about that.

 

And if you have a very good view of how you can implement AI in a safe and transparent way that everybody is comfortable with, you can achieve so much more for people who really need your help. So for us, it's about unlocking the potential of AI to help. The transaction abuse example I talked about earlier was an example of AI being implemented to mitigate something that we hadn't even realized was happening previously. And suddenly there's a defense against it, which would've been almost impossible absent AI. And that's why we have to get the regulation right, so that we're able to implement those things safely.

 

Edward Santow:

 

I think I have a slightly different view. Not that I disagree with anything you said, but I do think it's really important to acknowledge we're at this crucial turning-point moment. Up until now, you could incentivize good behavior in the development and use of AI really with one lever, which was that it's reputationally going to be beneficial. In other words, your position in the market vis-a-vis customers, and perhaps to a lesser extent government, will be stronger if you can make a really compelling case that you are doing the right thing. So that's still there. But I think two other incentive points have perhaps come over the top and are trumping it. The first goes back to the first theme that we talked about, which is regulation. As I said a moment ago, a whole bunch of regulators, the regulatory ecosystem as a whole, was a little bit asleep at the wheel.

 

That is changing really quickly. We're starting to see, even here in Australia, globally significant cases where regulators, plaintiff lawyers, and others are starting to take these matters to court. We've seen it with Robodebt; we've seen it with the Trivago case, which is a case that's being cited all around the world. It is really starting to change: those theoretical risks in terms of the law are now real risks. And then the third one is commercial, right? To return to the example that I alluded to a moment ago, and this came up in the run-up to your question about bias, unfairness, and discrimination: as a human rights lawyer, I care about that primarily because if you deny someone a home loan on the basis that they're a woman, or because of their skin color, their gender, their disability status, or whatever it happens to be, that is a human rights violation.

 

That should be reason enough to care about it. But it's also important to acknowledge that you've lost a good customer, right? If you fail to offer a loan to someone who would be able to pay it back, but you've just got this inbuilt prejudice in your system, then you're making a bad commercial decision. So that's something where, as we see a new level of maturity in companies using technology, particularly data science, we actually, I think, need to confront it more directly.

 

ESG With Responsible AI

 

Angela Kim:

 

One good example of how we can really incentivize the businesses or groups of people that are using responsible AI: I think we need to mention how to bring ESG elements together with responsible AI. I'm pleased to see that in 2022, this year, so many businesses were truly interested in how to use AI and data to promote ESG and to mitigate ESG risk, and also to actually meet the metrics and the right pillars under the ESG framework. It's quite interesting how many businesses have been interested in how we can really use AI, automation, a data repository, and single, traceable data, all these different elements, to actually deliver the ESG framework.

 

The reason is that with ESG, a lot of businesses are finding that much of the data is outsourced and the reporting is disparate, and they see the need to consolidate it, bring it in house, and try to create a single source of truth, because businesses are really encouraged to embed the right ESG metrics in the business decision-making process. So the aim is to operationalize the ESG metrics into the business decision-making process, so that we can really ensure that, in the future, the outcome around ESG is delivered from the design stage. So we are talking about the responsible AI component meeting ESG: how to bring these two areas together, how to uncover the risks we can prevent beforehand, and how to encourage the data scientists and engineers to actually work on these very relevant topics.

 

Leanne Ho:

 

I'd like to add to the reputational risk, the legal risk, the commercial risk, and the positive potential. Something that I've seen throughout the entire Robodebt Royal Commission is the impact on the public servants and the staff of organizations that have to operate uses of AI ranging from less than ideal through to horrific. Where it's not being done well, where it's being done unethically, you basically decimate your own organization.

 

Collection of Data and Consent

 

Jasmin Craufurd-Hill:

 

Just to riff off that point about reputation, and your organization, with the data element: we've seen so many cybersecurity breaches and data breaches recently, where people might previously have been quite accepting. We're hearing lovely accounts of people now not wanting to sign up in a cafe, just giving up their data in order to be able to place an order. Is there a risk to social license and privacy from elements that are not necessarily AI-specific, like cybersecurity breaches, that can impact our space in terms of our thinking around data? There's this lovely line that data is the new oil, and the counter is, "Well, data's the new asbestos. It looks shiny, you want to put it in everything," but in the end it actually bites you from the inside over time. Is there a new way of thinking required around consent, making sure we actually have effective consent, true consent, around data collection and storage, but also around disposing of our data? Should there be a lifetime limit on some of the data that we're collecting? I expect a broad range of perspectives on that.

 

Leanne Ho:

 

I mean, I've got a good example from the social security space: facial recognition technology. On the one hand, with the floods that we've been seeing recently, it's been enabling Services Australia to make emergency payments really quickly. And when we are looking at people like refugees who don't have their papers, or Aboriginal people who don't have birth certificates, those people who have challenges with proof of identification, it's really helpful. But at the same time, when you look at those groups, are we really getting their consent if they're so desperate for a payment in order to live that they need to consent to facial recognition technology or the whole online social security system? That's not really consent.

 

Edward Santow:

 

I mean, our privacy law was developed primarily 40 years ago, in an era in which no one really anticipated the technical capacity to harvest people's personal information on the industrial scale we're seeing now, and then to do stuff with it, right? Again, in an extraordinarily quick way, using modern computing, machine learning techniques, and so on. So we've got a problem, because, in theory, privacy, by the way, is the only human right which can be turned off simply by this notion of consent. And so, as Leanne has pointed out, if consent is not real, if it's not free, informed, and prior, then surely you're engaging in some kind of fiction. And so that's why I do think we need a new settlement here.

 

We don't want to make it too difficult for technology to be used in ways that are beneficial. But equally, we can't continue to accept what is a fiction: that when you scroll through thousands of words of legalese and click "I consent," you've actually read and engaged with it, or you've had some meaningful interaction with the organization. Because of course, if you consent to 99% of it but there's one really important provision that you disagree with, there's no mechanism to do anything with that. So it's not consent; whatever it is, it's not consent. It's just accessing the service.

 

Angela Kim:

 

Obviously it links back to what was mentioned before: data and tech literacy programs. Of course they're needed for the knowledge workers within a business and for students in the curriculum, but also for citizens in the community. I was really impressed when I attended the Brooklyn Client Center, a community data literacy program that was run by Harvard University, a couple of years ago. The program runs regularly, like a lunch-and-learn for citizens. Academics play this role of trying to really empower and upskill the citizens, many of them seniors, who are learning about data literacy and tech literacy. That really empowers them, so that when they consent, they know what they're consenting to, and they can also guide their grandchildren as well. Because with all this technology, we are talking about the generations before the iPhone and after the iPhone.

 

It's not just technology people's domain anymore; it's everyone's. And the effort is needed, because universities can play a new role, libraries can play a new role, and a lot of communities can run different workshops, webinars, and seminars, and try to use very simple language so that a layperson can understand what it means for them. So it's all needed, and it seems like a lot of good movement is happening. Again, this is quite positive, but there's still a role we can all play in this area, definitely.

 

Jasmin Craufurd-Hill:

 

Yeah. I'm just thinking of education as well, with exams. Are you really able to consent if the alternative is not being able to sit your university exams, when you're actually being filmed, recorded, and analyzed? So I think it's a really interesting question, and this is actually already here; it's been exacerbated or accelerated by the pandemic. One last thing I wanted to touch on, on a positive note.

 

Positive Use Cases

 

We've been covering some of the doom and gloom, but what are the positive use cases? I'm keeping in mind that I did get a notification this morning, picked up by Commonwealth Bank, that someone was trying to defraud my account. So thank you, Commonwealth AI, for the alert. It was a good one; I don't gamble that much. But what are those examples? When we think of things like personalized medicine, or opportunities and new horizons in space and communications, or the creative spaces, we haven't really touched on creative spaces as well. What are those opportunities to do some good, for AI to be for good, not just the doom and gloom that we sometimes focus on?

 

Dan Jerymn:

 

Maybe I'll take that from a CommBank perspective. I mean, thank you.

 

Jasmin Craufurd-Hill:

 

Sorry.

 

Dan Jerymn:

 

The interesting thing about that, and I think it's very germane to this conversation, is that with fraud and with alerts like that, you are always weighing up what's best for the customer, which may not be exactly what the customer wants. So for example, what would be an appropriate amount of intervention that you would be happy with for the purpose of potentially stopping a fraud? I remember I had a gentleman in an audience once who said we had stopped something that was a genuine transaction. And I said, "Well, I'm really sorry that happened, but it may well be the case that there were a thousand other people for whom we prevented fraud as a result of that one customer being inconvenienced just marginally." And there are choices to be made around those sorts of things all the time.

 

We talked this morning about a lot of examples where AI can really make a difference in a way that couldn't possibly have happened previously. Things like our Benefits Finder solution, which returned half a billion dollars to customers in its first year. Natural disasters, and our ability to get in front of these emerging events and help people when they really need it. There are huge opportunities there. But again, one of the things that strikes us a lot is making sure that you don't just do what you think is the right thing; you engage, you measure it, and you work out whether it is or not. I can give you a real example of this from my earlier days at CommBank, back around the time when ATM fees started to be removed and there were still some private ATMs where you'd get charged a little bit of money to take money out of the ATM.

 

And so we thought, "Oh, it would be great to let customers just a little trial experiment on small number of customers, let them know for the very high users of the pay service, did you know in the last month or so you spent a hundred dollars that you didn't need to access in your money. Here's a map of where the ATMs are and we're trying to help you. Though we can help these people." A lot of segments of customers did reduce, but there was a segment in particular, and it tended to be the ones who were most financially vulnerable who actually increased the rate at which they were drawing out money they paid for. And there's a human behavioral thing about that where they just didn't like being told what to do. And that's why you were very nice about not making fun of my title for most people when I joined the stage as decision scientist. But that's because you have behavioral scientists and experimental people within the team too. It's not enough to just know to predict things that would happen. You have to think about the human impact of that as well and you have to think about things in a more rounded way. But certainly very excited about what AI can do for good.

 

Leanne Ho:

 

With every example that I think of, there's a crossroads: the AI can either be used for good or be used for bad, for lack of better terminology. And with Robodebt, I want to make sure I make clear that I think integrity in the social security system, or any place where taxpayers' money is being used, is really important. So AI, when put to good use to work out where there are problems in the social security system, I want that. The problem with Robodebt was that the wrong sets of data were being used, that the algorithm wasn't right, and that there was insufficient understanding of, or disregard for, its impact on vulnerable people before it was rolled out.

 

A recent example that we've had is of refugees who, because of a delay in the processing of their temporary visas, were, from an immigration perspective, seen as still having their visa in train, and so as being eligible for a social security payment. But on the social security side, there was an automatic cutting of their payment because their visa had expired. I could see where you'd have a positive use of automation there, either for the two systems to be able to talk to each other, or for those two pieces of data to be matched, to work out which refugees have fallen into that gap between their visa eligibility and their payment eligibility. And there's a conscious decision, as you say, Dan, not to put resources into automation for that purpose, and instead just let those really vulnerable people languish without payment. So there's always a good way and a harmful way to go.

 

Key Takeaways

 

Jasmin Craufurd-Hill:

 

I'm conscious that we are running down the clock. We've covered a lot: everything from regulation to consent to ESG coming into the mix as well. Again, how do we get around the bias? How do we make sure that we end up using AI for good, back to the title, Responsible AI? What would be your key takeaways? A 30-second quip, or an area you'd like everyone to be reminded to think about as we leave. What would you like everyone to walk away thinking about?

 

Angela Kim:

 

Well, I'd like to share that AI for good is very important, and it actually starts from yourself. It's not only the community leaders or the key decision makers who are making a big impact; it starts from you. What you do as a data analyst, data scientist, or data engineer is very important, because it has a huge impact. Always try to work collaboratively, which means if you find something really useful, make sure you share it with the people around you, because they will really appreciate it and we can build on top of each other, so not everyone has to start from scratch. And yeah, I think that's what I'd like to share today.

 

Edward Santow:

 

I think there's a lot to be excited about. I'm a technology enthusiast and specifically an AI enthusiast. Think about where we've come from to where we are today: the bank manager, not the CBA bank manager, but the typical bank manager from 40 or 50 years ago, using a combination of their own personal prejudice, heuristics, and a little bit of data to make home loan decisions. Now it's so much more sophisticated, but we're still at the first generation of AI. People like Professor Cripps and others are doing amazing work in this space to take this to the next level, to remove some of the problems of explainability, algorithmic bias, and so on. That's where I think we have to be laser focused, so that we're not self-satisfied about where we are now, but can really smooth off some of the really rough edges we've got with first-generation AI.

 

Leanne Ho:

 

I think my point is going to be a little bit selfish, for want of a better word: civil society has to be part of this journey. If we're going to hold governments to account, if we're going to make sure that problems with AI get the attention they deserve, we need to make sure that civil society is well resourced, both to have the training to understand what is going on, and thanks, Ed, for doing some work in that space, and then to be the whistleblower when things go wrong.

 

Dan Jerymn:

 

Awesome. Cool. My 30-second takeaway, and I have to hold myself to time because I'm very bad at this, is to engage with as many people as possible. Everyone I deal with, this great panel, great people, and there are great people everywhere in this auditorium and beyond, engages in really good faith. I think there's a fantastic opportunity, particularly in Australia, to really lead here. The one thing that I learn over and over again is that no one person within that ecosystem knows everything. So engage with smart people, challenge your ideas, and be humble, open, and vulnerable about that. And I think together we'll get to a really good place.

 

Jasmin Craufurd-Hill:

 

Thanks. Great, that is a great place to finish. So if you can, please join me in thanking the panel.