Enabling AI at Scale: Governance as Competitive Advantage with David Hardoon

David Hardoon explains how integrating AI governance into existing frameworks accelerates innovation while building essential trust.

Fresh out of the studio, David Hardoon, Global Head of AI Enablement at Standard Chartered Bank, joins us in a conversation to explore how financial institutions can adopt AI responsibly at scale. He shares his unique journey from academia to government to global banking, reflecting on his fascination with human behavior that originally drew him to artificial intelligence. David explains how his time at Singapore's Monetary Authority shaped the groundbreaking FAIR principles, emphasizing how proper AI governance actually accelerates rather than inhibits innovation. He highlights real-world implementations from autonomous cash reconciliation agents to transaction monitoring systems, showcasing how banks are transforming operations while maintaining strict regulatory compliance. Addressing the biggest misconceptions about AI governance, he emphasizes the importance of integrating AI frameworks into existing structures rather than creating entirely new bureaucracies, while advocating for use-case-based approaches that build essential trust. Closing the conversation, David shares his philosophy that AI success ultimately depends on understanding human behavior and asks the fundamental question every organization should consider: "Why are we doing this?"


"[Question: So what was the biggest misconception for most business leaders usually when it comes to operationalizing AI governance?]
Based on my interactions and conversations, now suddenly they think they have to erect a whole set of new committees, that they have to have these new programs. You almost hear a sigh from the room. Like, oh, we have now this whole additional compliance cost because we have to do all these new things. The reason I see that as a bit of a misconception, because building on everything that was just said earlier, you already have compliance, you already have committees, you already have governance. It's an integration of that because otherwise guess what's gonna happen? We all know that this is the next thing around the corner that's gonna pop up, whatever it's gonna be called. Are you gonna have to set up a whole new committee just because of that? Then the next thing, another one." - David Hardoon

Profile: David Hardoon, Global Head of AI Enablement, Standard Chartered Bank (Personal Site, LinkedIn)

Here is the edited transcript of our conversation:

Bernard Leong: Welcome to Analyse Asia, the premier podcast dedicated to dissecting the pulse of business, technology and media in Asia. I'm Bernard Leong, and today we are diving into the critical question of how financial institutions can adopt AI responsibly at scale. With me today is David Hardoon, global head of AI Enablement at Standard Chartered Bank, a distinguished leader with over two decades of experience spanning financial institutions, academia, startups, and regulators.

David played a pioneering role in shaping Singapore's national AI strategy for the financial sector. During his time at the Monetary Authority of Singapore, he was instrumental in developing the FAIR principles for ethical AI use in finance. He combines technical expertise with strategic insight, making him one of the most influential voices in advocating responsible AI adoption globally.

So David, welcome to the show.

David Hardoon: Thank you very much, Bernard. You're being far too kind. I simply have had opportunities and privilege to support this space that I love, so I'm very privileged in that particular aspect.

Bernard Leong: So David, you had a career spanning academia, startups, regulatory bodies, and now global banking. What drew you first to data and AI?

David Hardoon: What drew me first to data and AI? My usual response goes back to a story of when I was 16 in detention, which is true, because that's actually what ultimately drew me to the world of programming and Pascal. But in a broader sense, what drew me was trying to better understand human behavior. This may sound a bit weird - like, what do you mean? But if you go back to the origin of artificial intelligence, its roots are psychology, its roots are neuroscience, its roots are really unlocking, unraveling the mystery of us. That's what drove me. Combining the two things - the behavioural psychology, the human-nature aspect of things, and programming - came with the realization that, wait, we can actually program a computer to do this interesting task, like a baby or young child: you present to them pictures of a car and a motorbike, and even without fully understanding the context of what that thing is, they can learn to distinguish it. Honestly, I was hooked. That was the beginning of the journey.

Bernard Leong: So what are the formative experiences that actually shaped how you now approach AI and analytics, given the importance of data?

David Hardoon: I would bucket it into three different pillars. They are continuously evolving because, as we all know, data is continuously evolving. As I mentioned, the very starting point was that I was just genuinely and morbidly curious about this. Wait, we can learn, we can actually program. I wanted to go into it. That resulted in going to the more theoretical side, where I was interested in the fundamental question of "How do you learn? What is the thing that drives it?" For the audience and for the people listening in who are familiar with the different types and the variety of methodologies, what interested me - not that I'm promoting or recommending my thesis, it's pretty antiquated by now - was the world of semantic models. The premise behind semantic models is capturing information and knowledge. The idea I got fascinated by was that if you can capture knowledge, then the learning mechanism becomes secondary.

So that was one pillar that then took me to that next dimension. Some may argue my first step into the "dark side": "How do you make this real? How do you actually start implementing it and result in applications, in programs beyond just a piece of paper?"

You find that there is a delta, there is a difference. I used to have interesting debates with my supervisor who was a theoretical mathematician. You had a theory but it didn't quite work in practice. I was like, "What do you mean it didn't work in practice?" Well, there are other factors that we need to consider.

Then finally the last stretch of the period is not just theory but application and operationalization. You can say, "David, okay, you're splitting hairs." I would push back and say, "No, there is a delta. Because what takes theory to be an application has considerations, but what takes an application to be operationalised also has very specific considerations."

An example from many years ago - I won't name the person or the bank - I was asked to go talk about optimization. Of course, I'm happy to go talk about optimization. Let's just say I got a good scolding. That was my starting point, when I realized it's actually not about optimization. It's not about the algorithm. It's not about the application. It was about: how do I use it? How does it make a difference? How does it get integrated into my day-to-day? So in a rough sense, those are the three buckets as I see them.

Bernard Leong: One interesting thing, even for me - because we were around in the UK at the same time, when I was working on machine learning and the algorithms - is that I was sitting right next to the Human Genome Project. The development of algorithms may be important, but what we were tasked to do was to further the data analysis and the annotation of the human genome. So a lot of that focus actually forced us to work at the production level. That transition is interesting. But maybe just help me understand: when you transitioned from academia to regulatory roles, an experience I never had, how did it influence your thinking about responsible AI?

David Hardoon: Oh, absolutely. I can say without a shadow of a doubt, to give them full credit, that I learned, and have absolutely appreciated to this day, the world of governance and regulation. If you had asked David prior to working for the regulator, candidly, I wouldn't even have thought about it. It wouldn't even be there as a concept, other than like, yeah, there are guidelines. Going into it, I realized two things, because I used to get asked this question: "Oh, David, don't you think that governance, compliance, regulation, whichever form you want to call it, is a brake or an inhibitor for innovation and adoption of new technologies?"

That really got me thinking because as you rightfully put, I'm not a regulator. I've come into the world of regulations. What I've realized is that we need governance. Not because like, oh, it's the right thing to do. No, we need governance in order to accelerate our adoption and usage of technology. Let me give you perhaps a silly driving analogy. If you know the road, if you know the rules of the road, if you know the different speed limits at every single point, if you know where the traffic lights are, if you know where the speed cameras are, well guess what? You can drive fast and you can enjoy it.

It's when you don't know where things are that you have the people who stop, stop, stop, stop and slow down, and you start yelling under your breath, why are you going 30 when you're supposed to be going 50? You know why? That's exactly the point. What you find, a lot of times, is that ironically, because there's this aversion to the word governance and people trying not to get involved with it, that's what results in things slowing down, in having all of these start, stop, start, stop, start, stop.

By having the proper governance, having the transparency of it and having the conversation like I had the privilege of having, you mentioned FAIR as one of the examples, okay. What is it that we're trying to achieve? What is the control or the bad thing that we're trying to prevent? Okay. Coming from technology, coming from the research, this is how we can go about achieving it. So you see now it becomes a constructive conversation rather than Bernard, this is what you need to do. You go like, hey, doesn't make any sense.

Bernard Leong: So given that you have talked about FAIR, let's start with the AI governance and implementation piece first. You were instrumental in developing the FAIR principles at MAS. Paul Cobban, formerly of DBS and a mentor of mine, was also part of that committee. Can you share the origin story of the framework?

David Hardoon: I'm giggling because it's a bit of an unusual origin. Part of the mandate I was given at MAS was actually supporting and developing the industry with respect to its adoption of AI. I like going back to what I believe in: first principles and root cause. When you looked at the industry and spoke with people individually, everyone said, we do AI, but then when you really opened up the box, it's like, is it in production? Is it being used? Maybe, maybe not.

So I said, you know what? Rather than sitting in an ivory tower, presuming that I know, or we know, something, let's have a conversation. It was an interesting conversation because I took the compliance officers from the various local banks to lunch. They were very shaken. They're like, wait, why is MAS taking us to lunch? It was because I really wanted to have an open conversation about the challenges they were facing, what was holding them back or not - or whether it was an interpretation of what they thought the intention was.

Then I had a separate lunch and conversation with a variety of titles - CIO, CDO, heads of AI. Same conversation: what's stopping you, what's the inhibition? And it led to what I mentioned earlier, that realization that having good governance is in fact an accelerant to adoption.

That's exactly when the penny dropped. It was: you are telling me that compliance is stopping you. Let me take that at face value. I speak to compliance and it's like, well, this is our understanding. You realize that every hand was trying to do the right thing, but no one was properly clapping. Actually, what was needed was to put something out to the market, to the audience, to the industry: these are our expectations, this is what we believe should be done. Notice though, to the frustration of some at the beginning, we didn't say how in the FAIR principles. It's not prescriptive. It's providing the objective that is trying to be met.

Let me give you one specific example from this whole debate of "Oh, can we use sensitive attributes?" You have different restrictions coming with different things. We didn't say you can or you can't. What we said is you need to justify it. Why do you need to justify it? Because there needs to be an underlying intent behind it, to make sure there's the right reason and it's aligned with the organizational objectives you're seeking. So it was a focus on: what is your intention, what is it that you're trying to achieve? Then saying, well, now it's up to you. That resulted in the ability to have internal clarity. These are the things we're trying to achieve and this is why we're trying to achieve them. It could be a debate. Then the people downstream, in terms of implementation, go like, ah, we have clarity with respect to the speed limit.

It's not about you saying to me, no, but David, why is it 50 kilometers an hour, not 70? Sorry. It is. This is what we decided. Okay, fine. Then I will drive 50 and I can get to my point, rather than trying to fight the law and going 70 and then getting arrested or going 30. You see my point? It's creating that clarity. So the outcome was the realization that clarity on governance was needed, and that's effectively what FAIR is.

Bernard Leong: So you've articulated the lessons learned from applying these governance frameworks across organizations from the government to corporate. The question then is, how do the FAIR principles actually help financial institutions balance innovation and ethical considerations of AI?

I remember when I was talking to one of the prominent banks when I was still in AWS, one of the things that we did was a very detailed pilot program where we literally had to do all the data masking, privacy protection and making sure that everything works and even deploying it from on-premises into cloud with everything encrypted. So how do those lessons translate into the real world?

David Hardoon: Let me try and give you a few examples, a few analogies. But let me start off by saying, remember what I said at the very beginning: it's first principles. Why do I keep going back to that? I keep emphasizing it because we have an understandable tendency, especially when it comes to new technology, to go like, oh, everything is new, it's just been invented, it's basically a blank canvas, we have to start from the ground up.

Whereas I personally take a slightly different approach by saying, well, no, especially when it comes to governance, when it comes to responsibility, when it comes to intent. That shouldn't change the objective. What may change is the approach in achieving it. Now by saying this, don't get me wrong, of course there'll be an introduction of new concerns, new risks, a hundred percent. But fundamentally, is the intent the same?

Let me give you an example. Even prior to using AI, there'll be these massive deliberations on, oh, discrimination by AI. Discrimination by AI. Can you use it in a loan algorithm, or can we use it in HR? Can we use it in this? What it effectively comes down to is decomposing the problem by saying, number one: AI aside, how do you deal with these concerns today?

If the answer to that is, ah, we don't, well, there you go. You need to have an approach, a mechanism, whether it's part of a committee, whether it's part of a principle, a conduct, culture. Not for me to say, to have that. That's number one.

Number two, if you do have it, then now it's the question of, okay, how do we incorporate the world of AI? You see how that aligns, because they say, okay, yeah, we acknowledge we have, let's say, a principle with respect to fair - this will sound like an oxymoron, but a fair discrimination approach. But now, because we acknowledge that AI may amplify the potential risk coming from it, we need a more resilient manner of identifying it.

Well, guess what? I need to know. Let me use an example: let's say the risk is gender discrimination. I need to know your gender in order to assure that I am taking a fair approach with respect to that. So you see, it now creates a mechanism that provides that unification, rather than taking the approach of like, oh, no, no, no, no, we can't do it. We can't use it.

Let me give you the famous reference of what happened with the Apple Card and Goldman Sachs, where there was a big hoo-ha and they said that, from a regulatory point of view, they weren't allowed to use gender information. But what you find is that when you start using the more sophisticated types of techniques, not using an attribute explicitly within a model doesn't mean it's not there. So you potentially need to use it in order to identify the risks that are occurring. That's one particular example.
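
A minimal sketch of the kind of check David is describing, assuming an illustrative loan-decision audit where gender has been recorded solely so outcomes can be compared (all names, records, and the tolerance threshold below are hypothetical, not anything from Standard Chartered or MAS):

```python
# Hypothetical fairness audit: compare approval rates across a sensitive attribute.
# The attribute is not a model feature here; it is only used to measure whether
# the model's decisions differ materially between groups (proxy discrimination).

decisions = [
    {"gender": "F", "approved": True},
    {"gender": "F", "approved": False},
    {"gender": "F", "approved": True},
    {"gender": "M", "approved": True},
    {"gender": "M", "approved": True},
    {"gender": "M", "approved": True},
]

def approval_rate(records, group):
    subset = [r for r in records if r["gender"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

rate_f = approval_rate(decisions, "F")
rate_m = approval_rate(decisions, "M")
gap = abs(rate_f - rate_m)

# Each organization would set its own tolerance; 0.2 here is purely illustrative.
print(f"approval rate F={rate_f:.2f}, M={rate_m:.2f}, gap={gap:.2f}")
if gap > 0.2:
    print("Gap exceeds tolerance - investigate for proxy discrimination.")
```

The point mirrors the Apple Card episode: the attribute may need to be held precisely so the gap can be measured, even if the model never sees it as an input.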

The other one is the ability to represent the application within an organization. I still remember this. I had a conversation with two organizations - we had this little tripartite debate - where one reviewed the potential application of AI within their HR practices. It wasn't a question of can we or can we not do it; obviously you can. They decided, we don't want to, because of our approach, our conduct, whatever it may be. We are actively choosing not to leverage these capabilities, for right or for wrong. Not an issue, because it's aligned with a certain manifesto that they have.

Another organization said, well, we have chosen to use AI and we are cognizant of that potential discrimination, but actually it's okay because we're specifically looking at, let's say, a program which is designated for women, so it's not relevant for men, et cetera.

So you see, it forced us to think about the intent. That's what I meant by saying we get distracted by technology. Like, for example, what happened now with Gen AI? Oh, we need to worry about Gen AI governance, Gen AI governance. I coyly came back by saying, well, it's not that it's not introducing new risks - let us take the beautiful world of hallucinations, where we had that conversation about how hallucination may be a bug for some and a feature for others, as you said.

But actually, again, first principles. What happens when you have an intern that does something weird? Something wrong. You see my point? How do you handle that? I remember a very good lawyer friend of mine was slightly up in arms about the fact that, oh, Gen AI is competing with lawyers. He said, but all Gen AI is doing is putting words together in a way that seems reasonable and logical. I, in a slightly characteristic fashion, cheekily looked at him and said, well, isn't that what you as a lawyer do as well? He said, "Touche, David, touche." But you see my point. It's effectively forcing people to go back to first principles. Don't reinvent the wheel. These are things which we must address and understand first, and then apply methodologies to either address them or, in certain cases, prevent them.

Bernard Leong: So what was the biggest misconception for most business leaders usually when it comes to operationalizing AI governance then?

David Hardoon: Oh wow. To me, based on my interactions and conversations, it's that now suddenly they think they have to erect a whole set of new committees, that they have to have these new programs. You almost hear a sigh from the room. Like, oh, we now have this whole additional compliance cost because we have to do all these new things.

The reason I see that as a bit of a misconception, because building on everything that was just said earlier, you already have compliance, you already have committees, you already have governance. It's an integration of that because otherwise guess what's gonna happen? We all know that this is the next thing around the corner that's gonna pop up, whatever it's gonna be called. Are you gonna have to set up a whole new committee just because of that? Then the next thing, another one.

We have to be pragmatic.

Bernard Leong: I just want to switch the conversation and get into AI within financial institutions, or the corporate setting. Given your corporate experience, what is currently the mental model for AI adoption within financial institutions, given the field is changing practically every week? Just for interest's sake, we sit in a couple of WhatsApp groups with all the AI builders, and with the speed of what's coming across, everybody is like, oh my gosh, what's gonna happen next week?

David Hardoon: You're absolutely right. I think this is where, well, I think everyone's struggling. I'm happy to get pushback and hear, 'no, no, David, we've cracked the code, it's solved for us.' But if you look at it from an enterprise point of view, across the industry, I think it's fair to say everyone is struggling, exactly to your point, because it is just moving so damn fast. There's still a mental shift that needs to happen from the SDLC, the software development life cycle, to, okay, how do we now set up an environment and a platform that allows us this kind of elasticity? Candidly, that's still hard - obviously not in the tech world, which had the luxury of setting the foundations. If I may be fair, it's a bit tricky. It's a bit tricky when you already have a monolith, when you already have a lot of legacy systems, when you have a lot of technical debt.

But in that same vein, I think that is exactly where the shift has moved. It's not just about the use cases and applications, because, let's just be transparent, I think there are many out there - it's about how you create this operationalization environment. By environment, by the way, I'm referring to the full stack: not just infrastructure, not just platform, not just API gateway capabilities, not just people, but something that allows us to go, okay, we want to do this use case, this use case, this use case. Oh, hold on a second, there's been an advancement, now it's GPT version 4.5. Okay, how do we plug that in? So it gives you this modularity within a governed and safe mechanism of review. I would say that's the mindset, that's the goal. But as with everything, the devil's in the detail and it's not always that easy to do.

Bernard Leong: But now, for banking environments, because of the way financial institutions are regulated, one big issue is data leakage. I see it with companies that I advise. I just want to hear your practitioner's point of view. How do you think about it when, say, you want to operationalize an AI model within an environment that's so highly secure, robust, and resilient? The requirements are extremely strict. What's the thinking from your perspective when you try to make that kind of engagement with the bank itself?

David Hardoon: So that's actually a very interesting point. I'll be candid, I think it's also an evolution in my own thinking. I don't want to sound like a parrot, but it's actually going back to first principles. Specifically for a financial institution - obviously a regulated entity - risk is nothing new, because that's finance. You estimate risk, you price risk, and you engage. Okay, let's focus on the top line, on the services side, for a second; I'll get back to the core of your question, but I just want to set the premise right. Banks do very risky things. We're buying debts. We're buying potential futures. We're making bets, we're doing pretty damn risky stuff that can go wrong. But, and this is a very important but, we know how to estimate it. We know how to review it. We know how to create frameworks around it. We know how to go, yes, I know what risk I'm exposed to, I know how to mitigate it, and I know what to do when it happens, if it happens. That's why regulators go, yeah, go ahead, please do your work.

I don't want to be unkind, and this is an evolving thought process, but I don't think we apply that same type of discipline and maturity on the back side of the house. I notice, exactly like you said, it's like, oh no, a hundred percent security. I remember I was at a conference once and we were talking about data leakage in terms of de-anonymization of data. I said, if you legally require the inability to de-anonymize data, then don't do it at all, because 0.00001 is still a risk. So we take this extreme view. Which, as a side point, is ironic, because I actually think that increases the risk. But that's a different conversation.

So my recent thinking is, no, no, no. We need to take the same mindset that we have in the front office to the back office. So it's not about zero risk. Step one is understanding the risk. Do we understand it? Then you suddenly realize not all risks are the same. Number two is, well, what is it we're really trying to mitigate? Number three, what is our risk appetite? I'm sure you've heard the term risk-based approach - it says risk-based approach, but then when you look at it, it's like, yeah, but you're trying to achieve zero risk; that's not risk-based. So you get my point.

So then, ultimately, obviously we want to eliminate, we want to mitigate leakage of data. There needs to be a pragmatic approach towards it. Ironically, I actually think that will make it more resilient and safer. It doesn't mean that bad things don't happen, because we live in the real world, but it means it actually will be safer, because we're cognizant of it, and when things go wrong, we know what to do, calmly. What do they always tell you when there's a fire, or when there's an earthquake? They always tell you: stay calm. It's when you overreact that things go wrong. Same thing. We can calmly know how to deal with it and how to mitigate it. That's the lens that I'm applying.

Bernard Leong: I'm quite curious, right? For example, if you think about large language models within the AI space today, one of the questions is open-source versus closed-source models, right? Enterprises need certainty. So one of the things that, say, Anthropic does is provide the model through a cloud provider, whether it's Azure, GCP, or AWS. Then there is a policy saying that whatever you do with it there, there's no data leakage into those environments, right? So there's one layer of complexity that you need to deal with.

Then there is a second layer, where the CEO has gotten productivity gains from using ChatGPT and decided that, hey, you know what, everybody should turn on their copilots, only they didn't realize they forgot to turn on the security switch. Everybody gets to read the CEO's calendar and the CEO's emails. How would you apply a first-principles approach to risk in each one of these situations? Or is it something even tighter, where you say, okay, this is a specific enablement use case, this is a development environment use case, and this is something else - maybe a customer service use case at some point?

David Hardoon: I think the honest answer is - because look, I'm very happy to learn from others - that we're still in the evolving process. One thing I forgot to mention earlier, which people should not underestimate, is trust. I'll give you an example. When you go to a doctor, do doctors always get it right? No. Sometimes they get it wrong, and sometimes pretty dramatically wrong. But we fundamentally have a mechanism of trust.

The problem that we have - I'm segueing this into your question here - is that with all the excitement of the world of AI, what we lack is trust. We simply did not have the time to build it. That's the thing: there are certain things that we can't skip in that process. I know this sounds very emotional, but we're human.

Bernard Leong: You are absolutely correct on that, because Payne, who came on the show in a previous episode, made exactly the same argument about trust. Whether AI applications take off is not a question of the technology, because the technology is so commoditized; the question is how much trust you can build with the user.

David Hardoon: So in that vein, the approach that I still prefer to take is a use-case-based approach, because what it allows you to do is start having that elasticity. So for example, you may say to me, hey David, you know what? This productivity gain - let's take, say, the meeting-minutes copilot. It is awesome. I'm an activist. It's like, okay, fantastic. How about we do it for your team? You see my point? Because then if something goes wrong, okay, at least it was in your team.

Bernard Leong: Yeah, you remind me of when I was doing digital transformation. I had an agreement with the CIO. I said, hey, you know what? We are gonna start with Slack. The CIO said, well, your team can try it first and then let me know when we can bring it over to everyone.

David Hardoon: Going back to the point about frameworks: I almost see the whole process of AI as something that ultimately results in an assembly line. The reason I do that is because it allows for some extent of consistency and replicability - both in terms of the development of AI, but then also from a more strategic point of view. Basically, the strategy of AI as part of that assembly.

Exactly. Just because we're doing it for, let's say, your department, or for your particular use case in banking - let's say treasury, liquidity management, or a particular bancassurance engagement, whatever - it doesn't mean that I can't scale it up rapidly. But it at least creates these deliberate, conscious checkpoints to see, okay, are we on the right track? Then to backtrack - I'll give you a perfect example - as we're doing it, we may find, just like in the HR case at that particular institution: I'm sorry Bernard, we're not going to switch on copilot. I'm sorry MAS, we're not gonna switch on copilot for you, or we're not gonna do this use case for you. The reason we're not doing it for you, but we're doing it for them, is because of A, B, C, D. It's about setting the boundaries.

I was having this wonderful conversation with someone, full credit to them, and we were talking again about speed limits, and they said, oh, you know, the speed limit is, let's say, 70 kilometers per hour, but we are only going at 40. I said, but hold on a second, just because the national speed limit is 70 kilometers per hour doesn't mean you have to go 70. Fast or slow, maybe as an organization - not right or wrong - we're saying we don't want to go 70, we want to go 40. Or there are certain parts of the organization, because they're driving these massive semi-trailers, where it's like, no, even if you go at 70, it's really gonna be so risky. Whereas a small little MG? You know what, go for it. You can go at 70, even 71. You see my point? It's about contextualization, and this goes back to the premise and mindset from the SDLC, in how we used to do tech and how we built and rolled out.

AI has fundamentally - I'm trying to think of the right anecdote - but it has fundamentally changed the mindset: it's now contextual to the level of the application. It is contextual. It depends. So going in thinking, oh, we're just gonna have this one ring to rule them all - I'm sorry, but my personal point of view is it will fail. That goes back to FAIR. It's one ring in terms of spirit: Bernard, I want to make sure you do not cause harm. How you take the approach to not do harm, I'm sorry, but that's up to you, because it depends and it's contextual to you.

Now you also see that internationally - no right or wrong - what China's doing is not right or wrong, what the US is doing is not right or wrong, what Europe is doing is not right or wrong. It's contextual to them, because they all want to achieve the same goal of doing no harm.

Bernard Leong: So if you think about practical examples of AI implementations, why don't you talk about good, interesting examples you have seen out there, or implementations that illustrate some form of value? I always get this question from a lot of CEOs, and sometimes some of them are really doing it. I'm also sometimes quite embarrassed to say, hey, actually, your AI implementation is pretty interesting, I would like to learn more.

David Hardoon: This is what I meant - to be candidly fair, there's some really exciting stuff out there. I've recently been learning a bit more on the operational side of the house. For example, you have this in-production use case where you literally have fully autonomous agents doing cash reconciliation. Almost think of it as self-healing. When errors occur, usually you would have a person who comes in to do some reconciliation or correction. Now you literally have agents, which have a bank ID and a bank email, that are able to do these micro tasks as part of a larger reconciliation. That is absolutely mind-blowing, seeing it in operation, all the way to your front line.

Finance is about identifying products, it's about hyper-personalization - it's just doing it. So the one which actually got me excited, ironically, was bancassurance. Why did it get me excited? Because no one likes to talk about mortality, and it's a very heavy product. Yet they used AI to really be able to identify the cohorts and people who need it - because you're planning a family, you're going to your next stage of the journey. Just like food apps saying, oh, do you want to get this and have a discount? Same thing. Bringing it to life and seeing the actual impact too.

Then there's the area which I do carry from my ex-life at the regulator: financial crime. I have seen some applications which candidly blow things out of the water. Let me give you two concrete examples. Transaction monitoring. Today, our solutions - I'm not throwing mud, because it is what it is - run at 97% false positives. So for almost every transaction alert that is flagged, the transaction is actually okay. That's a lot of noise. Imagine you're dealing with a system that has a 97% ratio of noise to actual correctness. I've seen AI overlays that have been able to drastically reduce that, by up to 50%.
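
To put the numbers David quotes into context, here is a back-of-the-envelope sketch. The alert volume is invented for illustration; only the roughly 97% false-positive rate and the up-to-50% reduction come from the conversation:

```python
# Illustrative arithmetic for transaction-monitoring alert volumes.
alerts = 10_000                 # hypothetical alerts raised per month
fp_rate = 0.97                  # ~97% of alerts are false positives (as quoted)
false_positives = alerts * fp_rate
true_positives = alerts - false_positives

# An AI overlay that cuts false positives by up to 50% (as quoted).
reduced_false_positives = false_positives * 0.5
remaining_alerts = true_positives + reduced_false_positives

precision_before = true_positives / alerts
precision_after = true_positives / remaining_alerts

print(f"alerts to review: {alerts:.0f} -> {remaining_alerts:.0f}")
print(f"precision: {precision_before:.1%} -> {precision_after:.1%}")
# alerts to review: 10000 -> 5150; precision: 3.0% -> 5.8%
```

Even with the genuine hits untouched, halving the noise roughly halves the review workload, which is where the customer-experience and operational gains David describes next come from.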

Now, don't just think of it from a banking operational-efficiency point of view. Think of it from a customer service and enjoyment level - when you're overseas and you're trying to buy dinner for a friend or your wife, and your card gets declined because it was incorrectly flagged. Now suddenly it's a lot more fluid, all the way to preemptively mitigating the risk. Not to jump to the world of cybersecurity, but as those who deal with the world of cybersecurity know, with zero-day attacks the infiltration of the system usually happens, on average, about 130 to 180 days before the actual attack. It just sits there. So you use these agents to identify these vulnerabilities before the incident occurs.

So you see, it's shifting the mindset and the ability of operations from a reactive mode to: things are running, we're correcting, we're adjusting, we're seeing satisfaction. Then finally, really getting down to providing the level of service that is needed.

Then a final point that I'll mention on this: we're currently in a very interesting situation whereby we have no choice, because we as consumers, from individuals to corporates, are using these tools. Like you just mentioned, you've got GPT on your phone. So you literally go to your bank branch and it's like, okay, how can you not do that? Here, let me help you. The level of expectation has gone up. I can tell you candidly, it's now a competitive advantage, because the provider that can do it will just naturally get the audience - because, yeah, that's the level of service, that's the level of engagement that I expect, since I can do it at home.

Bernard Leong: I have a question: if that's the case, and you're building a team in order to help other teams become AI-enabled, what kind of skills or roles have actually become critical? I get this question all the time. Somebody sends all their C-suite people to come in and talk, and after being blown away by the AI, the first question they ask me is, how do I get that downstream?

I say, I think there are some things that do not change: change management, your company culture, how you get people to adopt technology. That doesn't change, and it has to be approached the same way as you've done it before.

David Hardoon: You see me nodding - a hundred percent. Let me break my response into two parts. Let me talk about the people, and then let me talk about something you just said right now, which I really want to emphasize, because I feel that sometimes, because this world is seen as very technology-driven, we actually forget about it. Not deliberately, I think, but we just forget about it.

So the first one is back to my point that I try to see everything as a supply chain. Same thing here. AI is not one thing, it's a supply chain, so you have to have everything, all the way from your infrastructure. People: whether you're a cloud advocate or you have your own, you do need to have the people that understand the underlying architecture of this stuff, because at the end of the day, everything runs on top of it.

Two, the world of Gen AI, where, in fact, the word AI can be a bit misleading. A lot of full-stack software development is necessary, also from an integration point of view. Then of course, as you go up the supply chain - look, I've seen various names for these roles, from AI translators to the business onwards. I've recently defaulted back to just the term product owner, AI product owner, to keep it simple. But what it simply is, is a profile that knows enough about the technology to be dangerous, but understands and can communicate with the business, and vice versa.

Look, I used to joke and say that if you can find a data scientist - I used to say this in the past - that can understand and do the world of data science, code, implement, all of that, and at the same time stand in front of the business and coherently explain it simply in their terms, that is a unicorn. Lock them up in the room and don't let them leave, because that is very rare. Usually you'll find that people are very good at different stages along that horizontal.

So that's necessary. Then to your second part, which you were touching on: I know that I come from this world, so it's easy for me to say, but the tech is easy - it's the people, it's the process.

For example, remember we just spoke about the agents for reconciliation, et cetera. How do you not just change today's process - how do you fundamentally reimagine it? How do you reward new managers versus existing managers? What are their KPIs? If you go all the way to the extreme, let's say we have organizations whereby the way you get promoted, Bernard, is when you've gone from managing one person to managing a hundred people. Okay? I get promoted because of how many people I'm managing, but now suddenly I'm still managing one person, yet I'm also managing 90 bots.

So there's this massive shift - 90 agents, exactly. When you have this, it's almost like an organizational talent necessity: how do I do that?

I'll give you one final example. I remember I was in front of an audience not too long ago, maybe a month or two months ago. We were talking about the culture of innovation, and I said, okay, let's talk about the culture of innovation. I looked at them and I said, do you reward your staff today for failure? They just stared at me and said, well, David, what do you mean, do we reward them for failure? I said, no, literally - I'm being as literal as possible - do you reward them for failure? Now, obviously I'm saying it slightly provocatively.

I said, okay, let me double-click. Now, when I say failure, I'm obviously not talking about maliciously or negligently causing issues. Obviously not - that's a fireable offense. But let's say I come to you - let's role-play for a second. You report to me and I say, Bernard, I'm gonna give you a mandate. I want you to roll these use cases of AI out across the business, blah, blah, blah. You go ahead and do it. Well, guess what? Remember what I just said? It's contextual. Just because you have all the pieces of the puzzle doesn't mean it will work, right? Maybe they contradict each other in a certain manner, maybe the level of gain is not what we envisaged. Life happens.

Do I then still reward you? Do I give you a healthy bonus? Because you've done a damn good job in trying to do that, yet it did not succeed. I will err on the side of saying, probably not.

Bernard Leong: I see.

David Hardoon: That's something that has to change, if the intention is innovation - this is a personal point of view. It's about the intent with which we're trying to move. We can never neglect or forget about the human element. The irony for me - remember my starting point? What drove me into the world of AI is understanding humans, it's understanding us. If there's one thing I've discovered over the years of doing more and more with AI, it is that it's about us, and yet we sometimes forget it.

Bernard Leong: I once asked my class, because I teach generative AI at the university. I asked everyone, because I did this study about hallucination - they know which part of the neural network is actually turned on when the hallucination shows up - so I asked everyone, should we turn it off, if we know where it really is? And everybody was like 50-50 on it.

Then I said, well, there are actually studies in psychology and other disciplines, right, that show that experts have the highest tendency to hallucinate. You can think of it very simply, right? A medical specialist specializing in cancer rarely encounters something that is not in the textbook.

David Hardoon: Well, Bernard, I'm willing to go out on a limb and bet lunch - which I'm happy to have anyway - that almost all, if not most, leaders and innovators, at the point of inception, were challenged by people around them saying, you're hallucinating. Ones that just pop into my mind: saying electric vehicles will be a common commodity, going to outer space, et cetera, et cetera. At the point of inception of the idea, people look and go, you're hallucinating. That's not gonna happen.

Bernard Leong: Yeah. That is the whole point, right? Yet now you have to impose such strong standards onto the AI that you're implementing. Don't you find it very ironic?

David Hardoon: This is why hallucination, for some, is a feature and not a bug. Therefore, it's really important to be self-aware as to what it is that you're trying to achieve. I know it sounds like a complete tangent, but talking about talent, I was at a university round table and we were talking about the necessity of teaching students more about AI governance to help promote a culture of innovation and AI.

I thought about it for a second. I said, actually, you know something, obviously conceptually one cannot disagree. But what we need to teach is philosophy. What we need to encourage is reading more Greek mythology, because if anything, we need more thinkers. If anything, we need to ask more questions, because if anything, AI now facilitates that.

My best friend from literally pre-kindergarten called me up in a frenzy and said, ah, you see, I told you you're gonna be the reason for the end of the world. She's a teacher, and it's like, this Gen AI stuff is basically gonna get me fired.

I said, no - if anything, it's challenging you to teach more. We keep on saying that, oh, it's critical thinking that is important for the future. Well, guess what? That's exactly what you now have time to focus on, because the model answers and the easy stuff...

Bernard Leong: It doesn't work anymore.

David Hardoon: Let the agent handle it.

Bernard Leong: I have one more question that's quite important, and I think this is a business question that a lot of business owners like to ask me: how do you measure your ROI or value from AI initiatives beyond cost savings?

David Hardoon: Oh, crikey. That's a fun chestnut, that one. Okay, let's start with this: it's very important to understand the team's raison d'être, its reason for being. Why do I say that? Because if the team's purpose is innovation, there is no such measurement. In fact, with anything like that I would say you're deluding yourself, because with innovation, your revenue, your potential quantifiable benefit, is a downstream impact.

Look at the world of AI. This is stuff that's been happening since the 1940s, 1950s. Hello. So the KPIs, the way of measurement, need to reflect that. However, for the rest, which is really about adoption, about implementation, about operationalization - I do believe in, and this may sound a bit unfriendly, but I do believe in enforcing quantitative and, for simplicity, financial monitoring.

Now, let's say I build a solution that helps a frontline team sell more to customers. Obviously they're the P&L holders, I'm just in the background, but I will still spend time initially to compare, to look: okay, what was your baseline, let's say, for the last three months, and what is it now, using these capabilities - to basically be able to report that delta. But then, of course, it gets normalized.
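
A minimal sketch of the baseline-versus-delta comparison David describes, assuming hypothetical monthly revenue figures for a frontline team before and after the capability is rolled out (all numbers are invented for illustration):

```python
# Hypothetical uplift measurement: trailing three-month baseline vs. post-rollout.
baseline_months = [1.20, 1.25, 1.15]      # revenue (in $m) before the AI capability
post_rollout_months = [1.35, 1.40, 1.38]  # revenue (in $m) after the AI capability

baseline = sum(baseline_months) / len(baseline_months)
post = sum(post_rollout_months) / len(post_rollout_months)

delta = post - baseline
uplift_pct = delta / baseline

# The delta is what gets reported back to the P&L holder; over time it gets
# normalised into the new baseline, as David notes.
print(f"baseline ${baseline:.2f}m, post-rollout ${post:.2f}m, "
      f"delta ${delta:.2f}m ({uplift_pct:.1%} uplift)")
```

The same before-and-after framing carries over to the cost-saving and risk metrics he mentions next, with the appropriate quantity swapped in for revenue.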

Similarly, on the cost-saving side, I'll look. So I would look at it on both fronts and create a mixed bag of the impact from cost savings as well as the revenue impact on your top line. Then also risk, by the way - we talked about, let's say, reducing the number of false positives. I would also measure that, or mule detection, in terms of financial gains: how much money have I helped protect? The reason is that it's easier to interpret, and then, to be fair to the organization, it becomes easier to identify, actually, you know something, that area just doesn't make sense - yes, we can do this, but candidly, no. It helps prioritize.

I just want to make one last point about cost saves. Yes, we do it. I do it. I've always done it. But AI, at the end of the day, is really about knowledge. It's about insight. I remember I had a - not a debate, because it was more my boss at the time giving me a scolding, so it was a one-sided debate - where he said, well David, you've rolled out this initiative for operations, and we claim that, in this particular case, we were able to reduce the number of times you needed to call a customer in order to result in a collection. So let's say previously I had to make five calls to collect $1; now I only have to make two calls to collect that $1. So yeah, cost saving - but it'd be like, but David, you still have the same number of people.

You see, this is where it sometimes gets hairy. I'm saying, well, look, number one, I can't control how the people are used, because I'm ultimately a provider of a capability, and that doesn't detract from the outcome of the capability - that it results in less need. But number two, the reality today is that this will probably result in more insight about how the consumers are behaving.

You find that you're assigning the people to work on that additional insight - ways of creating more processes, or creating more services. My point is that we shouldn't underestimate the value-add created by knowledge, if you see where I'm going with this.

Bernard Leong: Yeah. No, I get that point. So the point is not the cost savings, it's how you're going to use that knowledge to create the value.

David Hardoon: Exactly, exactly. It's not a net-net zero kind of scenario - you shave here, but you gain here, so it comes together.

Bernard Leong: So this comes to the final question, right? How do you see the role of AI evolving over the next few years within financial services? Is it going to be pretty gradual, or is it going to be very fast-tracked, similar to a lot of other industries that are trying to work out what they want to do with AI?

David Hardoon: Well, I can tell you what my hope is. My hope is it'll be fast-tracked, because look, everyone's investing. I think you need to be living underneath a rock that's underneath another rock to discredit the tangible value that has been provided. This is actually a fun time to live in, because before it was more difficult to explain, to visualize, to demonstrate the value of using these knowledge-based systems.

Now it's just so easy. It's the way we operate, the way we do things in our private lives. So I would love to see that accelerating. Now, realistically, of course, there'll be your movers and shakers and your followers, and that's how it always will be, because candidly, it goes back to the earlier points that we discussed about existing environments. Your technical debt, your legacy - for some, these will be existential questions, because you may look at your house and go, wow, I can renovate it, but it's gonna cost me 10 times more than just tearing it down and building a new one. But I can't really tear it down, because I have people living inside, you see?

So it's questions that go beyond technology and AI and empowering skillsets.

Bernard Leong: So David, many thanks for coming on the show. Before we close, I have two closing questions. The first one, which I call the Azure question: on a personal level, how do you use AI tools or platforms to enhance your leadership, decision making, or productivity?

David Hardoon: Okay, number one - I think it was last year or something, I got a tick on X, so I got access to Grok. I use Grok extensively. My preferred approach is that I don't have Grok write for me; I write, but I have Grok essentially help me in terms of grammar, in terms of recommendations. What's also nice is that it actually looks at my historical style - it's not just about those types of things. It goes like, well, David, this has been your style, and here are some changes. So it's actually helping me as part of my evolution.

Secondly, and I'm not ashamed to say it, I like to test ideas. So one prompt, which by the way I recommend to everyone, is to literally end with: ask me questions as part of this conversation. So it's not just one-way - no, no, seriously, because a lot of people understandably go, oh, what's this? Or how about that? I remember I had conversations about foreign currency exchange, geopolitics, Taiwan, wines. It's fascinating, because it goes back to my underlying premise, my first principle: it's about knowledge. It's about how do I use it to improve me. Back to your point about teams, also - how to engage, how to...

Bernard Leong: I'm really surprised you are not using Anthropic's Claude, because it is a default for them.

David Hardoon: It goes back to ease of access. It goes back to ease of access. It's like - one of the reasons there were antitrust lawsuits and all that, not to go down that path - when you get an Android phone, it already comes with a Google browser. It's just there. It's not that I wouldn't use Claude if I had it - I would - it just happens to be what I'm using.

Bernard Leong: I decided to be a professional user, because enterprise-wise I'm using it to help companies do their financial automation. But of all the three, it's my most preferred when it comes to deep thinking. I agree with you, though - Grok is really good. You could solve theoretical physics problems with it.

David Hardoon: But there's some stuff. It's not really...

Bernard Leong: But the math part for some reason is better between Sonnet 3.5 and ChatGPT o3.

David Hardoon: So maybe as we're tailing off, just to add something which is actually gonna be quite interesting - and I really mean it from a behavioural perspective - is how this evolves. Once upon a time it's like, you have, what, Disney Plus, Hulu, Netflix, and it's just different content. You basically have, let's say, multiple subscriptions, or you say, no, no, I'm just gonna stick with Netflix.

But now you don't just have different - let's simplify it and call them agents. Their capacity is different. Their knowledge is sometimes even a bit different. The way they interact with you is slightly different. So I'm wondering whether you'll find people subscribing to one rather than the other, whether they almost - I'm stretching reality here - evolve slightly differently in the way they're thinking and the way they're doing things.

Or, on the contrary, you'll find aggregators saying, you know what? I am now giving you the ability to pose your question, or have your dialogue, or search for information, as a super node that basically gets it from...

Bernard Leong: Yes, there is an app for that.

David Hardoon: There you go. Because think about this, like why do companies, why do people arrange panel discussions? Why do you have a survey? Why do you speak to multiple experts? Because you're like, okay. Let's get a few views and let's challenge one another. Let's do that. So that's gonna be really interesting. I think that's gonna be a very interesting development moving forward.

Bernard Leong: My final short question, what's the one question that you wish more people would ask you about AI?

David Hardoon: Why? Literally just why - in everything, go to the why. Why are we doing it? It's literally: why? That to me is the biggest one.

Bernard Leong: Do you have an answer to that?

David Hardoon: I do. It depends.

Bernard Leong: David, many thanks for coming on the show. How does my audience find you?

David Hardoon: Oh LinkedIn. I got a website. Was it davidroihardoon.com? But please, I've succumbed to the visibility of the worldwide web.

Bernard Leong: [Laughs] You can definitely find this podcast on YouTube, Spotify, and LinkedIn. Of course we are going to continue talking. So David, many thanks for coming on the show and I shall also say thank you and we'll talk soon. Okay.

David Hardoon: Very good. Cheers. Bye-bye.

Podcast Information: Bernard Leong (@bernardleong, LinkedIn) hosts and produces the show. Proper credits for the intro and end music: "Energetic Sports Drive". The episode is mixed & edited in both video and audio format by G. Thomas Craig (@gthomascraig, LinkedIn). Here are the links to watch or listen to our podcast.
