Exploring AI Risk and Strategic Opportunities in Insurance
Ep.49
In this episode of the Reinventing Finance Podcast, Nikolaus Sühr had the pleasure of talking with Simon Torrance, Founder of AI Risk, a startup focused on reducing the risks of AI innovation and adoption: helping leaders understand the risks of developing and deploying AI across their organisations and connecting them to the best solutions.
A well-known thought leader on Embedded Insurance, Simon shares with Nick his career path as a long-term advisor on Business Model Innovation and how he founded his startup AI Risk right at the beginning of the global hype around new AI technologies.
The discussion centres on four categories of AI risk, viewed from the perspective of insurance companies:
1) Strategic Risk (= how to avoid being outcompeted by other insurance companies, or by non-insurance players, as they use the technology more effectively)
2) Financial Risk (= how to avoid wasting investment on AI pilots and proofs of concept that never generate a return)
3) Operational Risk (= how to protect against and cover cyber threats, reputational risks, talent risks and other risks associated with daily operations)
4) Compliance Risk (= how to make sure you adhere to standards, regulations and ethical norms)
In the podcast with Nick, Simon also shares insights, use cases, and future perspectives on AI for the insurance industry.
Nick: Hi everyone, welcome back to another episode of Reinventing Finance. I have a lovely guest, not for the first time. It's a repeat guest and someone that I personally know and appreciate very well. Simon, very happy to have you back on the show. How are you today?
Simon: Very well, thanks, Nick. And a great pleasure to be back here with you.
Nick: So today, most of you might know Simon from Embedded Insurance, which was, I don't even know when it was, but it was a few years back. You could probably call it a seminal LinkedIn post, I believe, really putting that topic on the map for very many people. Today we're not talking about that. Today, we're talking about AI risk. So maybe it would be great if you could briefly introduce yourself for those who haven't dealt with Embedded Insurance, because everyone knows your name, but who are you and what do you do around AI risk these days?
Simon: Yeah, thanks, Nick. So I entered the insurance industry about five years ago. I was asked by a big financial services group that owned a big insurance company to help them think about their business model, the fundamental way that they create and capture value. My background has been as an advisor on business model innovation for many, many years now. And what struck me about the insurance industry was this big gulf between what people need in terms of protection and what the industry is capable of delivering in a way that is profitable. And that gulf has been going on for a long time now. You see it in the penetration-of-GDP figures: insurance just tracks GDP. It's stayed flat for decades now and has never really been able to break out of a problem that is basically about closing the gap between customers and the innovators within insurance companies. And so a few years ago, as you said, I was looking at new types of business models, and embedded insurance was one that was potentially addressing that problem. It was about collaborating with other organisations that have closer, deeper, more frequent interactions with end users and taking advantage of those relationships, and the data that they have on their customers, to create new types of insurance that are more relevant, accessible and affordable. So that's the background to embedded insurance. Now, last year, I was asked by a group of insurers to look at some new emerging risks. And one of those was AI, because obviously it's a very big topic for everybody now. And they asked me, well, what does it mean for the insurance industry? And what's the risk to us of either using AI for our business, or the risk for our customers in using AI? And these two topics, embedded insurance and AI, actually combine, because if you've got third-party organisations that have got a lot of data on customers, well, insurance relies on data. And now we've got more real-time data than ever before. And of course, AI takes advantage of real-time data and allows us potentially to do things much more efficiently than we could in the past. So that's a little connection between what I have been doing for the last four or five years and AI as it's become such a big buzzword very recently.
Nick: So from that conversation, fast forward, you've now established a new company called AI Risk. What is it that you do there?
Simon: Yes, so there are four categories of AI risk, and they apply to any type of organisation. But let's focus on insurers, insurance companies. So the first category of AI risk is what I call strategic risk. This is the risk of being outcompeted by other insurance companies, or maybe not insurance companies, because they are using, particularly, generative AI much more effectively than you are. Now, I'll just set the scene on that one and then we'll go through the other three. If you look at all the sectors in the world today and you look at how technology boosts productivity, which is then the stimulant for new growth, insurance and other information-centric sectors in theory, and over time, have got the most productivity boost out of deploying new technology. Now, if you look at it from another axis, the potential impact of AI, particularly generative AI, on the tasks and the jobs that people do within insurance, again it rates about the highest in terms of the potential substitution or augmentation of those jobs, roughly level with commercial banking. So if you think of a sort of two-by-two diagram, with the historical impact of technology on productivity and growth on one axis, and the potential for generative AI to dramatically change the nature of productivity at the workforce level on the other, insurance and commercial banking come top right. What that says to me is that there's enormous potential for insurers to improve their business models and productivity. We'll come back to the detail of that in a second; I'll give you some good examples. But that's the same for everybody. And so we're going to enter an arms race where people, pretty soon, and I think it's going to start next year, are going to be competing on their ability to use generative AI. We'll come back to that topic; there's plenty to unpack there. So number one is strategic AI risk. The second one is what we call financial risk. There's a real danger that we spend a lot of money on investments around AI that don't create a good return. There are so many things we could spend it on, and there's a risk that we waste that investment. So there's a sort of financial risk. The third area is what we might call operational risk. These are the day-to-day technical and operational risks that you hear a lot about in the press: new forms of cyber-attack that AI is creating, or the reputational risk that we use AI in a way that is unethical. And then there is, from an operational point of view, the talent risk. You know, we're trying to compete, but we cannot access the talent that we need to compete effectively; it all gets sucked up by the big tech companies, or our competitors have got there first. And then the fourth and final category of AI risk for corporates, for enterprises, is what we might call compliance risk. That's complying with standards. A lot of insurers are driven by ESG standards today as well, and there's a lot of ethics related to that. And of course regulations, with the EU AI Act and similar acts in other countries as well. So it's the risk that we don't comply with the standards, with the regulations and with other ethical norms. So there's a lot there. And that's why, when I talk about AI risk, it applies to any corporate.
But I'm particularly focused on financial services and insurance at the moment, because those industries have the most to gain, and the companies within them have the most to lose, in my view, if they don't act effectively now. So far, and they tell me this themselves, they've been dabbling in generative AI and AI. But I think next year is going to be the time when they need to take bolder steps, and they need to be really careful about how they do that.
Nick: So let's try to dive into a few areas. On the strategic risk: the axes make sense, though I'd argue it runs counter to my own observation, so let's see whether we can consolidate those views, or, you know, time will tell. My observation for insurance and insurtech is that, on the operational side, there are almost no efficiencies of scale and scope. If you just look at insurance companies as they scale, their operational margins don't fundamentally go down once they've hit a certain critical mass. I'd need to dig out the exact metrics, but the ratio of customer service agents to existing customers stays roughly the same.
Simon: No, no, I disagree, Nick. So let me give you a really good example. And this is a real one I found out about just very recently. It's a small company, a small insurer, and it replaced its whole operational staff with AI bots, all of it: the actuary, the claims manager, the policy admin team, the payments team, the fraud team, customer service, the whole lot. They replaced them all with AI bots in a couple of months. And, by the way, this was done by an AI wizard, you know. Then they created an AI manager that manages those bots, and they communicate with each other on Slack. So a bot will say, I'm thinking of sending this message out, what do you think? Others give input, and the manager says, yes, this is OK, but I'd change it this way and that way. And the humans in the company act as coaches. So essentially, in that small example, they've replaced the entire workforce. Now, I'm not suggesting, of course, that at scale you would do that. But what's interesting, and a lot of scientific studies have shown this, is that you can dramatically reduce the routine tasks of operational staff and increase productivity by a factor of, in some cases, 30 percent. Improving insurance productivity by 10, 20, 30 percent has just not been feasible before, as you say, because it's been difficult. But generative AI has properties that make it able to do this. And people are already achieving some of these really interesting results just using tools that exist today.
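To make the pattern concrete, here is a minimal, hypothetical sketch of the set-up Simon describes: specialist bots draft actions, an AI manager reviews and amends them before anything goes out, and humans coach by maintaining the prompts and policies. The class names and the `ask_llm` stub are illustrative assumptions, not the company's actual system.

```python
# A minimal sketch of the "bot team with an AI manager" pattern described above.
# Specialist bots draft actions, a manager bot reviews them before release, and
# humans only coach by editing the standing prompts/policies.

from dataclasses import dataclass

def ask_llm(role_prompt: str, task: str) -> str:
    """Placeholder for a call to whatever LLM the insurer actually uses."""
    return f"[{role_prompt[:20]}...] draft response to: {task}"

@dataclass
class Bot:
    name: str          # e.g. "claims_handler", "policy_admin"
    role_prompt: str   # standing instructions maintained by a human coach

    def draft(self, task: str) -> str:
        return ask_llm(self.role_prompt, task)

@dataclass
class ManagerBot:
    policy: str  # review guidelines, also maintained by human coaches

    def review(self, draft: str) -> tuple[bool, str]:
        """Approve, or return an amended draft (the 'I'd change it this way' step)."""
        verdict = ask_llm(self.policy, f"Review and amend if needed:\n{draft}")
        return True, verdict

if __name__ == "__main__":
    claims_bot = Bot("claims_handler", "You draft claim acknowledgement messages.")
    manager = ManagerBot("Check tone, accuracy and compliance before release.")

    draft = claims_bot.draft("Customer reports water damage, policy P-123.")
    approved, final_message = manager.review(draft)
    if approved:
        print("Posting to shared channel:", final_message)  # e.g. Slack in Simon's example
```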
Nick: So it's super interesting. And I was going to segue into my hypothesis, which is that the big insurance companies won't really be able to apply this across the board: more because of labour laws, not being able to make staff redundant, change problems, and having too many legacy processes that run the business, where you would have to say no to business. So on a greenfield, smaller scale, I think it's easier. The argument I would have made, and I've heard it discussed before, is that admin costs are comparatively low relative to the entire premium stack for most insurance companies. Mind you, Lloyd's might be something of a different case.
Simon: Let me give you another example, though, on the loss ratios. Most insurers, to be a little bit blunt, are not very creative in how they access data to do their underwriting; it's very backward-looking historical data and so on. But you've got this explosion of real-time data now. As I said, particularly if you're working with embedded partners or partner organisations that have got a lot of data, you could take advantage of that, yet most insurers don't. And you've got other sources of data as well. When you match these together and use AI, you can do extremely powerful underwriting that was just too difficult to do in the past. So again, this is a specific example, but it illustrates the broader theme. There's an insurtech that I know that does product liability insurance for sellers on the Amazon marketplace. That's an enormous market, and all e-commerce businesses are increasingly requiring that sellers on their marketplaces take out their own product liability, for the obvious reason that the e-commerce company doesn't want to retain it itself. So it creates a mandatory new market: you cannot sell if you don't have the product liability. Now, the traditional players say, well, we don't like product liability, it's always been a bad market for us, combined ratios are over 100; but because it's Amazon we'll reluctantly support it, because of the volumes, perhaps. And they do nothing to access any new data. What clever companies are doing, and I think the big companies will start to follow as they see the art of the possible, is asking the big e-commerce players for access to real-time data. But they're also combining that with other data sources; in this case, for product liability, they're looking at the reviews and the ratings that people have given for certain products. That helps them really understand when products break down, and the nature of that. The only way you can make sense of all that data in an effective way is to use AI to scan it. You combine that data source with the real-time data feeds from the sellers' commercial activity on the e-commerce platform, and you've suddenly got this very, very rich amount of information that you can then use to price much more effectively and target more effectively. And what we're seeing already with this insurtech that I'm working with is that the loss ratios are, well, almost unethically low; they're below 10 percent. The market rate is about 80 percent. Their target was 40, which is already half, but they're actually getting way below that. So this example and the one I gave you before are just some early canaries in the coal mine, showing that when you have entrepreneurial people who really understand how to use data and AI, you can do some really interesting things. Now, to your point, and you made a really good point about the ethics of deploying AI: what I'm starting to see, because I'm now speaking to the big insurers about this topic, is that they are seeing other examples like the ones I've mentioned, and they know that there's going to be an arms race pretty soon between them and their competitors.
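As an illustration of the underwriting idea, here is a toy sketch, under invented assumptions, of how a defect signal extracted from marketplace reviews could be blended with real-time sales volumes to adjust a product-liability rate. The weights, rates and field names are placeholders, not the insurtech's actual model.

```python
# Hypothetical sketch: blend a review-derived defect signal with sales volume
# to adjust a product-liability rate. All numbers are illustrative only.

def defect_signal(reviews: list[dict]) -> float:
    """Share of reviews flagged as mentioning a safety/failure issue
    (assumed pre-tagged, e.g. by an LLM classifier run over the review text)."""
    if not reviews:
        return 0.0
    flagged = sum(1 for r in reviews if r["mentions_failure"])
    return flagged / len(reviews)

def liability_rate(base_rate: float, reviews: list[dict], monthly_units_sold: int) -> float:
    """Scale a base rate (premium per unit sold) by the defect signal and exposure."""
    risk_multiplier = 1.0 + 4.0 * defect_signal(reviews)   # illustrative loading
    volume_factor = min(monthly_units_sold / 1000, 3.0)    # cap the exposure loading
    return base_rate * risk_multiplier * volume_factor

if __name__ == "__main__":
    reviews = [{"mentions_failure": False}] * 95 + [{"mentions_failure": True}] * 5
    print(round(liability_rate(base_rate=0.02, reviews=reviews, monthly_units_sold=2500), 4))
```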
They've tried out bits and pieces, but I'm starting to see them wake up and say, right, we need to get serious about this now. And that means, as well as creating an effective strategy, working out where exactly we could get the best returns on our investment in AI (let's come back to that topic in a second), and putting in place the guardrails, the governance mechanisms, such that we, as responsible organisations with a lot of regulatory oversight and often a lot of ESG principles that drive us, manage the exploitation of AI in a way that suits our principles and what the regulators require from us. And we're starting to see now, following on from the banks, which were a bit earlier in this, insurers trying to put in place responsible AI programmes and systems to monitor that. Ultimately, what we should be aiming for is a dramatic reskilling of the workforce, not substituting the workforce like the example I gave you, but a reskilling. And as you implied, that's going to take quite a lot of effort and thinking. But the value is very significant. And so the new contract that we might have with our workforce, and this will be the enlightened companies, is to say: no one wants to do routine and drudgery work; there's an opportunity for us to add much more value to the world with insurance and other types of risk; our mission is to do that, and AI is a way of unlocking it, so we can add more value and give you a better set of roles and tasks. Working that through, as you've talked about, is going to be important to get right in the next year. So I believe that if we have the right guardrails in place, the right governance, plus clarity about where to place our bets, then for me this is a really interesting inflexion point for the insurance industry. And that's why I'm so excited about it.
Nick: I'm equally excited about it. I would have said that there are certain counter-forces to the true potential. On the cost base, it's just that the cost structures are comparatively low relative to the entire premium stack. In terms of pricing, you need markets where you can price dynamically, and not every market lets you price dynamically; I mean, try your tied-agent market, or even your broker market. So you need multipliers, embedded multipliers or price comparison websites, where you can price dynamically, where the connections exist and you can benefit from the real-time connections. Or, when you go more into traditional underwriting, you need situations where the market power sits with the underwriter rather than the broker, because being able to price the risk adequately isn't really helpful if someone else just does cash-flow underwriting: you quoted it correctly, but it's not on your books. So there are some diminishing effects that I think will work themselves out over time. And absolutely, I think there's a great strategic rationale. My bet would actually not be so much on the large insurance companies now magically managing change better than they could before, because there have been modern technologies before, mind you, and they didn't manage it then, and that hasn't changed. I would bet on very agile MGAs and underwriting teams who can go greenfield, who now have yet another asset in the technology stack: where you needed a tech team of 50 people before, you can now just run your e-commerce shop, and now you can even run operations, et cetera. You get these really nimble two-, three-, five-, maybe ten-person underwriting powerhouses who can actually scale. If I had to bet, that would be the strategic risk, or the strategic opportunity: the piranhas coming rather than the whales getting thin.
Simon: Well, yeah, the piranhas coming is something people have talked about as a threat for some time, of course, and we know that insurance is difficult, so the customer-facing insurtechs haven't really managed it.
Nick: Oh, totally. Even if they can show underwriting and pricing, they would still need to win over an Amazon; you still need to unlock distribution. And I think pricing power needs certain situations to really work well, and anything that is strongly and traditionally intermediated puts a damper on pricing power.
Simon: Yeah, exactly. I mean, I've only just started to think about this, but maybe the whole reliance on brokers and agents starts to recede a bit, to be honest. You know why? They are intermediaries between people who don't understand insurance and don't know what they might need, and the supply side. I don't know about insurance, so I call up a broker or an agent and they tell me what to do. Now, those types of agents are becoming digitised. This is happening first in banking and finance, and it's going to come to insurance too. Maybe we could say insurance is more complicated, but I think we sometimes hide behind the complication. There is this notion that one of the big moats, as we might say, that protect the incumbent industry from change and disruption is the complexity of insurance, and the fact that it's really, really boring. No one likes insurance, no one wants to buy insurance, no one wants to think about it. And it's the same with mortgages, really, and credit, and with pensions and savings; it's not something that people really like to think about. Now, if there are AI agents doing that work for you, then what's the role of the old human agents or the human brokers doing it today? I think we're going to see those intermediaries getting eroded by smart AI agents, not immediately, not straight away, but I think there's going to be an inflexion point in about three years' time when the AI gets really capable; that's when we'll have another evolution of the capability, a massive step change. And that's going to affect things quite dramatically. In the banking world they're starting to get very worried about this: that suddenly there are AI agents you just speak to through Siri and say, can you find me a pension that suits my financial profile, and these are the criteria; and it sorts that out because, with open banking data, it has access to all your accounts. That sort of scenario, I think, could come to insurance relatively quickly as well. So the moat I was alluding to is essentially the moat of apathy and disinterest. Oh God, I've got to change my car insurance this year, I'll go on the price comparison site, it's really boring, I can't be bothered, I'll just stay with the same company, or, oh, it looks like it's one pound cheaper, so I'll move. All that sort of hassle could be taken away. So on the distribution side, on the sales side, I think we're going to see a lot of efficiencies, or a lot of change, let's put it that way, as well. And certainly the insurers that I'm speaking to are starting to get a bit worried about that. It's not imminent right now. But my message is: this is classic digital disruption. You'll have the piranhas, as you say, the small companies that nibble away at your toes. But what tends to happen in digitising markets is that the biggest threat is the big giant, the big whale, your direct competitor who makes a bold move before you do. Now, the insurance industry has not been as dynamic as maybe other industries in that respect.
But I think a bold company, or a couple of bold moves by some big companies, is going to shake things up a bit. So that's what I'm expecting at the moment. And certainly the people I'm speaking to in the industry, at very senior levels, are saying: we've either done nothing or we've been dabbling, particularly in generative AI. We've used machine learning, of course, for the basic operations. But generative AI is fundamentally different; it's about intelligence, not just about processing and prediction. So there are going to be, I think, some big changes coming up.
Nick: Listen, I think that, generally speaking, traditional distribution has been called to the grave many times before. So far it has withstood that, because from a customer-interface perspective, the UX of just saying, can you just deal with this, Steve, is simply easier, irrespective of what happens in the background. That is why traditional self-service, where I need to deal with everything myself, hasn't captured that many consumers, and my absolute conviction is that the only reason the UK has such high switching rates is because you have forced renewal. If you weren't forced to look at your policy every year, you would have similar effects to continental Europe, where you have automatic renewals and people just get the invoice and that's fine. Having said that, it's always the same as before, until it isn't. That's the problem with change. With that in mind, you could go into any organisation and hear: we've seen this before, this is just like Windows 95, it's the same thing, it hasn't changed anything. I think that's the danger. And with AI applications, and maybe we can segue into financial risk here, it seems like you can validate the business benefit without all of the upfront investment.
Simon: Sorry, so explain what you mean there.
Nick: So you mean the financial risk is this big investment in AI, the opportunity cost, especially from an investor or LP side? You know, is it the same question as: is now still a good time to buy Nvidia?
Simon: Yeah, sorry. What I meant is internal: within a big insurance company, making investments in AI that could be in the wrong place and just not create a return. That's what I was meaning. Now, the productivity benefits, I mean, this is not like basic IT, this is about intelligence. So it's about taking all...
Nick: It's a little bit about both, right? At the end, the interesting thing about AI is that it is about productivity, because there are manual tasks that can be automated; it's RPA on steroids, right, it's just easier to set these things up. So it has that productivity gain. It also has the insight side: assistance, pricing, getting insights. And that could be by generating genuine insights, or just by allowing you to run way more information and analysis and get the right reports by interacting with the data in a different way, so that you simply get more insight from it. And I've done this today: I had to do a very simple calculation, namely how much revenue I need to generate to cover the net cost of a potential new salesperson. It could have taken me hours, and I probably would have asked my CFO because I don't like doing it. Instead I put it into the spreadsheet, into GPT-4o or whatever it is, ran it a few times, and I had exactly what I wanted at this stage within a minute, because I was able to just ask the question. That's really interesting. Not to say that that information wasn't previously accessible, but it would have taken me days, during which I couldn't have done something else, and so I wouldn't have acted on it. And the third thing, and I think you've mentioned this as well: I believe the biggest threat, or the biggest moat, in insurance is not regulation, it's not all that, it is distribution access, either by having a brand, having existing broker relationships, or having tied agents. If you look at what drives growth in continental Europe, look at markets with tied agents; insurers are doubling down on tied agents because it's the most profitable market for them, from an insurance perspective. Now, if generative AI can replicate some of the, let's say, customisation and empathy towards you as a customer, so that if I present with the same problem but I'm a single mum, or a diehard Chelsea fan, or a very wealthy, independent divorcee, I don't know, it would interact with each of us in a different way even with the same products on file, then I think that can drive conversion. The one thing that I'm not sure AI itself can do is that kind of activation, that interest, that nudging; I think that probably comes from embedded insurance, or anyone else, or maybe Siri, you know, whoever your companion is. But the interesting thing with AI is that it can tap into so many things. And you're right, it's not just this one thing.
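For what it's worth, the quick calculation Nick describes boils down to dividing a fully loaded cost by a contribution margin. Here is a minimal sketch; the salary, on-cost and margin figures are purely illustrative, not Nick's actual numbers.

```python
# A worked version of the quick calculation mentioned above: how much revenue a
# new salesperson needs to generate before they pay for themselves.
# Structure: fully loaded cost / contribution margin. Numbers are made up.

def breakeven_revenue(salary: float, on_costs_pct: float, contribution_margin: float) -> float:
    """Revenue needed so that the margin on that revenue covers the fully loaded cost."""
    fully_loaded_cost = salary * (1 + on_costs_pct)
    return fully_loaded_cost / contribution_margin

if __name__ == "__main__":
    # e.g. 70k salary, 25% employer on-costs, 30% contribution margin on revenue
    print(round(breakeven_revenue(70_000, 0.25, 0.30)))  # ~291,667 of annual revenue
```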
Simon: Yeah. What's really interesting about how it's developing at the moment is that there's this move towards what some people call artificial capable intelligence.
Nick: Okay. What is that?
Simon: Yeah. So what it means is that AIs are able to undertake complex multi-step tasks that also require interaction with other systems. Your example is a very simple one: you ask it a question, it gives you an answer, and you refine the answer one on one. But artificial capable intelligence, and Mustafa Suleyman, who was a co-founder of DeepMind and now runs AI at Microsoft, coined this term, is what he calls a new Turing test. The Turing test was about when computers become as smart as humans. And his example was this: you give 100 grand to a bot and you say, turn it into a million in six months. You have to design a product, find the suppliers, configure the product, sell it on Alibaba or something like that. That's the task. To do that, of course, the bot has to interact with lots of different organisations and players and try to achieve that end-to-end task. And he thinks that capability is about three years away. Now, people can already do it in much simpler scenarios, particularly around software: design me, well, create my website. So we've got basic versions of this already. Some people call it agentic AI: the agent is solving a problem for you. I think that is going to be coming to this market as well. You're going to start asking the bot, can you do this end-to-end task? Not as complex as the one I described before, but other, simpler tasks. And it will be able to do that and interact with other systems that are out there. That's what it's capable of, and that is a massive step change in technology; that's why it's called a general-purpose technology. It's been shown to be able to undertake these, let's say, tasks that require intelligence. And this comes back to another risk. Traditional software is what is known as deterministic: here's the input, this is the output; it just does that, two plus two in my calculator equals four. The next generation of AI is probabilistic, just like we as humans are probabilistic. You give me the task; I may or may not achieve it, and I go off in my own direction to try to do it. It's the same with these systems, for all kinds of reasons. And because the potential and the power and the impact of that is so significant, everybody is going to be trying to use it. So it's going to create these multiple arms races. But in so doing, it creates incredible risk as well, because the nature of insurance is about foreseeability, responsibility, liability. And when a bot interacts with multiple other bots to achieve a task, those notions start to disintegrate, or they're very difficult to apply to this type of world. So at one level, we've talked about some of the productivity benefits, the opportunities for the insurance industry to do its existing business model better, and the competitive forces that might create.
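As a rough illustration of the "agentic" pattern Simon describes, here is a minimal sketch of an agent loop that decomposes a goal into steps and calls external tools to complete them. The planner and the tools are stubs and the names are hypothetical; a real system would plug in an LLM and real integrations.

```python
# Minimal sketch of an agentic loop: plan a multi-step task, then call external
# tools/systems step by step. Planner and tools are stubs for illustration.

from typing import Callable

def plan(goal: str) -> list[tuple[str, str]]:
    """Stub planner: in practice an LLM would produce these (tool, argument) steps."""
    return [("quote_api", goal), ("crm", f"log quote for: {goal}")]

TOOLS: dict[str, Callable[[str], str]] = {
    "quote_api": lambda arg: f"quote generated for '{arg}'",
    "crm": lambda arg: f"CRM updated: {arg}",
}

def run_agent(goal: str) -> list[str]:
    """Execute the end-to-end task, one tool call per planned step.
    In a real system each step is probabilistic, hence the need for guardrails."""
    results = []
    for tool_name, argument in plan(goal):
        results.append(TOOLS[tool_name](argument))
    return results

if __name__ == "__main__":
    for line in run_agent("small-business cyber cover, 10 employees"):
        print(line)
```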
But there's also an opportunity in terms of creating risk products for other people who are using AI. If the whole world is using AI in complex and new and creative ways, they have opened themselves up to all kinds of risks. And just as the insurance industry has been trying to create cyber coverage, not very successfully, and we'll come back to talk about that in the second part, there's going to be a new wave, which is AI risk. This is where the think tank that I ran earlier this year comes in: one of the topics we were looking at is where those risks for the rest of society are, and how the insurance industry could create products to address them. So you've got existing cyber risk amplified: instead of five people trying to hack your system, you've got a million, and in a few years' time you've effectively got a hundred thousand computer-science PhDs trying to hack your system; you can't do that today. So at one level you've got all kinds of new types of cyber-attacks that we're not even ready for. And then you've got all kinds of other risks to do with how companies are using AI, maybe to recruit people, maybe to do their sales, in ways that could be intentionally or unintentionally unethical or illegal; all kinds of unintended risks or consequences that can come from the use of AI. So not only is there the question of how to improve productivity within the insurance industry for its existing business, there's also the opportunity to create new solutions to help the rest of the world manage that risk. And I guess the opportunity here, and the threat, is that we have let an enormous cyber insurance protection gap grow. As you know, the amount of cover that people, particularly businesses, have is tiny, and the losses are orders of magnitude bigger, and that gap is getting greater every year. We've let that happen. And now there's going to be a new wave where it gets even bigger, and another set of protection gaps is emerging. So for me, that suggests there is a golden opportunity as well, if we can be creative enough to come up with new solutions that address both the existing and the new protection gaps which I'm foreseeing at the moment.
Nick: Do you have some concrete examples that you can share from your think tank? Because cyber is already tough to underwrite, and you're now asking the same people, and I'm not saying they didn't find a way to address it profitably; some were able to write it at a profit, but the gap persists because we haven't found profitable transfer products at scale yet. That suggests we're not only talking about transfer products, but really about risk prevention products and services. And 'beyond insurance' is not a new theme; if you've been going to these conferences over the past couple of years, like the two of us have, it's a term that's been thrown around plenty. So it seems to be going in that direction, is that correct?
Simon: Yeah, no, exactly. So again, the notion here is how we can work with, let's just take the small business market, because that's the biggest area of protection gap around cyber at the moment. And by small business I don't mean micro businesses; I mean pretty sizable businesses, typically small and medium-sized. What's happened so far is that we haven't worked with that industry, and the industries that support it, to help them put in place the right sort of monitoring of their IT. So as a result, like in many sectors, we're flying a bit blind: we think, oh well, they're a player in this sector, they've got this number of employees and they turn over this amount, therefore they fit into this category. Now, if you insist, or help, or enable them to put in place monitoring systems on their IT, which can give real-time feeds about the vulnerabilities in their systems, then of course you can price the risk much, much better. And the industry is only just starting to do that, years after the horse bolted from the stable. Because we're worried about it, we're now requiring it, or insisting they do it, or suggesting they get lower premiums if they do put these monitoring systems in place. The monitoring systems are becoming more and more sophisticated, but they themselves are in a sort of arms race with the attackers, by the way. So at one level, the method of doing what I've just described technically exists today, but coordinating it to make it pervasive is a challenge. And as a result of not being proactive, a lot of insurers say, we're not going to underwrite SME cyber risk, because it's too risky for us.
So the industry is backing away from this market in many cases. And what that's doing is leaving these huge losses, which are due to get bigger, and who picks up the pieces? It's the government. So in some respects it's like other catastrophic risks, like floods and so on, where the government has to come in and bail people and companies out. Or we're starting to see, early days, the creation of public-private partnerships around catastrophic risk, like we saw with terrorism risk or flood risk, where multiple insurers come together with the government. So I think part of the strategic approach will be for insurers to work more closely with the public sector, to see how we could collaborate with our combined resources to tackle the root cause of the problem. Also collaborating with, in this case, technology and security companies to help small businesses protect themselves. And again, if you come back to trying to close the gap between the industry and the customers, collaborating with embedded partners who have the day-to-day relationship with small businesses helps to make that much more efficient. So working with accounting software packages, or the vertical software-as-a-service applications on which many SMEs operate, can be helpful here. That's just one very simple example: if you have a security monitoring system that is connected to the underwriting process, and to the manufacturers of the product, then you're putting your finger in the hole in the dam before the whole thing falls apart.
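A hypothetical sketch of what connecting such a monitoring feed to underwriting might look like is below. The signal fields, loadings and thresholds are invented for illustration, not an actual pricing model.

```python
# Hypothetical sketch: real-time security-monitoring signals adjust an SME cyber
# premium (or trigger a referral) instead of pricing blind on sector and headcount.

from dataclasses import dataclass

@dataclass
class SecuritySignal:
    unpatched_critical_vulns: int
    mfa_enabled: bool
    days_since_last_backup: int

def price_sme_cyber(base_premium: float, signal: SecuritySignal) -> tuple[float, str]:
    """Return (premium, decision) based on the live monitoring feed."""
    loading = 1.0
    loading += 0.10 * signal.unpatched_critical_vulns
    loading += 0.0 if signal.mfa_enabled else 0.25
    loading += 0.15 if signal.days_since_last_backup > 30 else 0.0

    if loading > 1.75:
        return base_premium * loading, "refer to underwriter / require remediation"
    return base_premium * loading, "bind with monitoring condition"

if __name__ == "__main__":
    feed = SecuritySignal(unpatched_critical_vulns=2, mfa_enabled=True, days_since_last_backup=10)
    print(price_sme_cyber(base_premium=1_200.0, signal=feed))
```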
Nick: This has been really fascinating, and I hope it's interesting for everyone listening. We obviously prepared lots of questions, but this conversation took such a natural turn, and it was really helpful for me, and I hope for everyone listening as well. I'm just conscious of time. If you had to take a step back: say your counterparty is an insurance CEO who says, listen, I get you, I see these things. I'm not an AI expert myself, I'm not an IT person, I'm an insurance executive, but I see the strategic element here, I see the strategic risk. How do I go about it? What are three to five initiatives, or maybe just one: what's my next move? So that I don't fall into the trap of just creating activity and lots of slides where everyone's entertained and it's cool, but nothing really moves, and it's just a strategic exercise for its own sake. How do I actually put some rubber on the road here? What would be your next move, your recommendation?
Simon: Yes, it's a great question, Nick. I actually had this conversation last week with the CEO of a very large insurance group. And there are three things that I was saying. One is that you need to create a proper, holistic strategy for how you're going to exploit AI. At the moment, lots of you are doing lots of experiments and pilots and things like that, but it's uncoordinated; there's no systematic approach. So number one is to have a proper strategy for AI. That's the first step.
Nick: But what does that mean? That's a bit of a blank canvas, because obviously no one is going to say they're doing non-holistic AI tactics. No one's going to sign up for that, right?
Simon: Well, they all do at the moment. It's all tactics. Someone comes up with an idea.
Nick: But they don't call it that. They don't say, we're just stabbing in the dark. So what would that look like? What's the sign that you're getting closer to it being holistic, so it's not just a big word that everyone agrees upon, but something you can actually take decisions from?
Simon: Yeah. So what I recommend is doing a holistic economic analysis of where the biggest potential improvement to your business model could come from the use of, particularly, generative AI. At the moment, nobody does that sort of work; you often have to hire consultants, and it's very inaccurate, costs a fortune and takes a long time. Interestingly enough, there are some AI tools that can do that for you in a matter of weeks. I'm not going to promote what I do, but I do have a collaboration with an organisation that does that. I think that's the first thing: get really clear about where AI could improve the productivity of your organisation across every job function and every task within all roles. And you can do that pretty quickly now; you don't have to wait a long time. Once you've done that, you can also look at it this way: my nearest competitor is very much like me, and they will have the same opportunities as I do. So at one level you're creating a baseline that you can then use to align your leadership team and the board on where, theoretically, the economic impact on our company is. Just doing that analysis would be a massive tick, rather than, as you say, living in the fog of today.
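To illustrate the shape of such an analysis, here is a toy sketch that ranks roles and tasks by the estimated annual value of automating or augmenting them. All figures and task names are invented placeholders; a real exercise would draw on actual workforce and cost data.

```python
# Toy sketch of a "holistic economic analysis": estimate, per role and task, the
# hours spent and the share realistically automatable or augmentable by
# generative AI, then rank where the potential value sits. Inputs are placeholders.

tasks = [
    # (role, task, annual_hours, automatable_share, fully_loaded_hourly_cost)
    ("claims handler",   "first notice of loss intake", 40_000, 0.6, 45.0),
    ("underwriter",      "data gathering & triage",     25_000, 0.5, 70.0),
    ("customer service", "policy admin queries",        60_000, 0.7, 35.0),
]

def annual_value(annual_hours: float, automatable_share: float, hourly_cost: float) -> float:
    return annual_hours * automatable_share * hourly_cost

ranked = sorted(
    ((annual_value(h, s, c), role, task) for role, task, h, s, c in tasks),
    reverse=True,
)
for value, role, task in ranked:
    print(f"{role:16s} {task:30s} potential ≈ {value:,.0f} per year")
```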
Now, the second thing I would say, as a really important foundation, is to put in place a responsible AI governance framework. That means defining how we're going to use AI responsibly and ethically, against principles that fit with our values as a company. The most advanced AI companies have done that themselves; people like Microsoft publicise their responsible AI programmes, and they're really important. I would put that in place as the second thing, in parallel really, because it's a critical foundation, and it protects you against regulatory challenges, which are obviously very risky at C-level, and against ethical issues as well. And I would also put in place methods to automate it, so it's not just PowerPoint that no one reads, but becomes a system of record for the company: you cannot start an AI project unless you've shown how you comply with these standards, guidelines and regulatory requirements, and you cannot take it to the next stage unless you've proved this. That is a really important risk management capability. The best approaches are where you engage all the functions of the organisation, particularly the risk function, and that creates a dashboard that reports to the board, because the board is concerned about two things: your competitive position in the market and the likelihood of creating returns for shareholders, and the risks to the company. So if there are just two things, and there's plenty in between, by the way, I would say these are the two key things to get right: quantify the economic benefits and have a deep discussion about how we then prioritise our investments, because without that quantification you're just flying in the dark; and put in place the responsible AI governance framework, with a system that makes it a system of record rather than general principles. And of course, in the middle you can then define pilots, experiments and proofs of concept as you go along, but you've got these two key elements in place: you've got the context, and you've got the risk management element, the governance element. And in between, you can start to innovate and try things.
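One way to read "a system of record rather than general principles" is as a simple gate: a project cannot advance a stage until its required checks are evidenced. Here is a minimal sketch under that assumption; the check names and stages are illustrative, not tied to any specific framework or regulation.

```python
# Minimal sketch of a governance gate: an AI project cannot advance a stage
# unless its mandatory checks are evidenced. Check names are illustrative.

REQUIRED_CHECKS = {
    "pilot":      {"use_case_registered", "data_privacy_assessment"},
    "production": {"use_case_registered", "data_privacy_assessment",
                   "bias_testing", "human_oversight_defined", "regulatory_mapping"},
}

def can_advance(project: dict, target_stage: str) -> tuple[bool, set[str]]:
    """Return whether the project may move to target_stage and which checks are missing."""
    missing = REQUIRED_CHECKS[target_stage] - set(project["completed_checks"])
    return not missing, missing

if __name__ == "__main__":
    project = {"name": "claims triage copilot",
               "completed_checks": {"use_case_registered", "data_privacy_assessment"}}
    ok, missing = can_advance(project, "production")
    # The outcome of gates like this is what would feed the board dashboard.
    print("approved" if ok else f"blocked, missing: {sorted(missing)}")
```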
Nick: Awesome, makes sense. There are lots of things we didn't touch upon, but is there anything you would like to leave our audience with before we wrap up that we just weren't able to cover?
Simon: Well, I guess, you know, I think we are entering the age of AI. It's the defining technology of our generation. Some people see it as being as important as the steam engine: the steam engine dramatically amplified our muscle power, and AI is going to do the same for intelligence. And that's at the core of human existence and capability. Without being too overblown, I think that's why people see it as so powerful; it's a general-purpose technology. So my final thought would be that it's incumbent on all leaders to really understand the nature of this technology, get a sense of how it's going to evolve and really get under the skin of it. And if we come back to the insurance industry that we work in, given that it has, in theory, that unique potential for productivity and augmentation, it's critical that we spend time to understand what it could really do for us, and then put in place the guardrails such that we can move forward in a way that is safe and trusted.
Nick: Awesome. Simon, thank you so much.
Simon: Pleasure. Nice to speak to you.
Nick: Have a lovely day.