Beyond the dashboard with Martijn Verbree, CBA
Bank-grade cyber security is considered the gold standard of cyber best practice, but what does it look like in reality, and what can other organisations learn from it?
Valeska and co-host Isabelle Guyot are joined by Martijn Verbree, Chief Information Risk Officer at The Commonwealth Bank. Martijn shares his approach to balancing innovation with security, staying ahead of the curve in a constantly evolving threat environment, and managing cyber risk at Australia's largest bank. He also shares practical insights on working with boards, the importance of multi-layered security testing, and the risks and opportunities posed by AI. If you've ever wondered what's keeping our protectors up at night, this conversation is for you.
| The Cyber Brief is a podcast for decision-makers in cyber. Through candid conversations with the industry's best, The Cyber Brief delivers executive-level insights on cyber risk, best-practice governance and emerging threats. Leaders in the field share practical insights, real-world stories and actionable advice for boards, executives and cyber professionals. |
Episode five: inside cyber defence at Australia's largest bank with Martijn Verbree, CBA
Martijn's podcast recommendation: Risky Business
Valeska: Welcome to The Cyber Brief, the podcast for decision-makers in cyber. Through candid conversations with the industry's best, we bring you executive-level insights on cyber risk, best-practice governance and emerging threats. We've advised on some of the world's most complex cyber incidents, and we know what it's like in the trenches. We're asking the experts for their unfiltered truths and best advice on what executives, boards and cyber professionals should be doing now to stay ahead. Hi, I'm Valeska Bloch, head of cyber at Allens, and today we're sitting down with Martijn Verbree, Chief Information Risk Officer—effectively, the CISO—at CBA. Martijn's day job is helping to anticipate and defend Australia's largest bank from an army of threat actors operating 24/7. From AI-powered bots that waste scammers' time, to the need for multi-layered security testing programs, Martijn pulls back the curtain on what it really takes to protect a financial institution at scale. I'm joined today by my co-host, Isabelle Guyot, a partner in our cyber tech and data team, and together with Martijn, we discuss how to balance innovation with security, the limitations of compliance scores, the questions boards should really be asking their CISOs, and whether the role of the CISO will involve orchestrating AI agents or become even more technical than before. If you've ever wondered what's keeping our protectors up at night, or how to convince a board that's asking, 'Can we go faster?' when you're already at full throttle, this conversation is for you.
Martijn, a common refrain that we often hear from clients is 'We don't need bank-grade security.' And, unhappily for you, that's not something that you can say. And so really, really looking forward to the discussion today about how you manage cyber risk and incident response in a very high-risk environment at scale. So, thank you so much for joining us today.
Martijn: My pleasure. My pleasure.
Isabelle: So, I'll kick off. The role of the CISO has evolved significantly over the last couple of years. It was once a technical position reporting to the chief technology officer; it's now a strategic executive role, and increasingly independent from the technology function. So, can you give us a bit of an overview of your role and how it fits within CBA's structure?
Martijn: Yeah. And I think, generally, in the profession, the pendulum has gone back and forth a few times now. Twenty years ago, we started off thinking it needs to be independent, because we need to keep the technology folks honest. Then that moved to, actually, if you're in technology, you can influence the agenda much better, so it needs to be part of tech. And I think slowly we're seeing the discussion go, 'Hey, how do we then split this up?' Looking at CBA, similar to most other CISO roles, what happened was the CISO inherited a lot of security tools. Think firewalls, think identity access management solutions and so on. And rather than spending time on the cyber risks that we have and going after those, we spent time dealing with the technologies themselves: Jira tickets, roadmaps, products, product owners and so on. So at CBA, what we decided was that we had to look at a different operating model. I'm responsible for cyber risk end to end. So my job is really to go left to right: 'What should we be doing to go after our highest risks in relation to cyber for the bank?' But then I work with a lot of my colleagues in technology to get the job done that needs to happen to do that. So, I still own a number of cyber technologies, but not the entire stack. Fortunately, there are other people who, thankfully, do that.
Valeska: Do you mind just talking us through because there are a number of different components of security. There's physical security, cyber security, fraud, scams, etc. How is that structured within CBA?
Martijn: Yeah. So we, as many banks have done as well, we've consolidated a lot of our security functions. So, our function looks after cyber security, but also operational resilience. We look after what we call protective security, which is really the physical security, but also fraud and scams. So, we've consolidated into one function because we do believe there's a lot of efficiency gains that we can get from that, and we're all dealing, at the end of the day, with threats to either ourselves or our customers that we need to deal with. So, hence we've consolidated into one security function, and I'm on point for the cyber risk that we run in there.
Isabelle: What is expected of CISOs in 2026?
Martijn: The focus is much more about, actually, what are the risks and how do we deal with them? A lot of organisations still think there's only one cyber risk, and there isn't. It's not like we've dealt with cyber, we're done. So it's really about how do we make sure we understand what's happening in the environment, and what are the new threats that are coming in? How do they act? How do they operate? And what are the best defences that we have, then, in depth, that can deal with those threats? So, rather than 'I'm a CISO, and I need to put an identity management system in', we're thinking about, okay, what risk is it going to address, and how is it going to work with all the other things that we have? And when something changes in our environment, what more do we need to do to stay within our appetite? So, the discussion has been much more about how we stay within our overall cyber risk appetite, rather than what tools and techniques we now need to adopt ourselves.
Isabelle: I mean, that must be constantly evolving. How do you stay on the pulse for that?
Martijn: Yeah. I mean, there's never a dull day, and it's why I love this profession of ours. But, yeah, things change quite rapidly. Especially over the last six months, we've seen a huge change in the risk landscape. We take an approach where we focus on: what are we doing inside? So, as you know, we do a lot of work in AI, for example, and that was probably less of a thing over a year ago, so that is now something new that we need to deal with. So, what signals are we getting from the inside? What are our business functions experimenting with? What direction are they going? So, in a way, my role is almost to be across a lot of those activities, and to be a real champion for cyber security, to make sure that we bake it in by design from day one. But at the same time, it's also an external view: what's happening in external environments? And obviously we're talking about what threat intelligence we're getting. But also, increasingly, what's everyone else doing? What is everyone else worried about? We get a lot of information from our key suppliers because they're dealing with similar risks to ours, and so we're taking very much an ecosystem approach now as well. What more can we do based on what we're seeing, not just to protect our brand and our side of the story, but also our customers and the broader ecosystem here in Australia.
Valeska: So, when you talk about that intelligence sharing, and I know there's been a much greater focus on it over the past few years, do you have formal structures set up to do that, whether that's with government, suppliers, customers?
Martijn: Yeah, it's a bit of both, right? So, there's a formal element to it, obviously, where we share important signals with government and also with our regulators. But I think in cyber security, there's a lot of informal knowledge sharing going on as well, which is still what I really enjoy about our profession. We all go up against the same threat actors, as we call them. So, I get a lot out of my own little black book of contacts, basically—
Valeska: WhatsApp groups?
Martijn: Whereby you get to tick off—well, no, we use Signal, not WhatsApp—
Valeska: Sorry—
Martijn: We use Signal. And often you get the question there from my colleagues across the globe, really, like, 'Hey, have you dealt with X or Y? And what's your take on this one?' That's happening an awful lot. So, yeah, at the end of the day, we're, I don't think anyone's competing on cyber security, we're all, it's a shared problem that we all have to lean into and be good at.
Valeska: So, as Australia's largest bank, CBA has an enormous potential attack surface. You've got a large number of staff, you've got hundreds of critical systems, you've got a very complex supply chain, and you're under intense regulatory scrutiny constantly for a range of reasons. How do you approach the challenge of ensuring that the controls you have in place, and the broader processes, are going to be adequate to meet the threat, and also the pace of change, even of the regulatory environment? That's an enormous task.
Martijn: Yeah, it's an enormous task. And I think historically, most banks have been quite compliance focused. We adopted the Essential Eight, for example, quite early on; we taught our board how to use it and what it's for. Then we moved on to NIST, so they understand NIST. But I think, increasingly, we're starting to talk to them, and to ourselves, about cyber risk scenarios, because the compliance side only gets you so far. And I always make the joke that once you reach a 3.6 or something like that on a NIST score, the question is, like, is that okay? And the McDonald's around the corner has a 3.6 rating on Google Maps as a good restaurant. My kid thinks it is, but I've got my doubts. The compliance scores become a bit meaningless after you reach a certain threshold. So, cyber scenarios are what we're doing increasingly and what we're talking to our board about, and by cyber scenarios, I mean a double click on that cyber risk. What is it that we're worried about: common ransomware attacks, DDoS attacks, insider attacks, supply chain attacks? Those are the angles that we're increasingly taking. And then what we do is take a defence-in-depth approach whereby, for each of those scenarios, we determine: what's causing friction, what's stopping it, what's detecting an attack, and how can we respond to it? That shines a light on lots of really interesting elements that go beyond a control framework that you would normally adopt, because you place it much more in the context of the attacks that we're seeing and that we're worried about. And then we're combining it with increased testing, so no longer paper-based assessments, but actual hard testing. So, we do a lot of red teaming, as we call it, penetration testing, blue teaming, purple teaming.
Valeska: Can you explain your approach to cyber security testing?
Martijn: We actually do three different things here. We still do what we call traditional penetration testing. That is a craft that's been around for decades in cyber security, and it's basically where you use smart people with an attacker mindset to try to break the system. These are typically short engagements, where you go quite wide but shallow and find obvious vulnerabilities in the setup. So, we pretty much do that for every new system that we introduce into the bank, or any major change that we make. So, very localised, system-specific pen tests. We also do a lot of red teaming, as we call it, and purple teaming, and red teaming has been getting a lot more interest recently. It's also increasingly required by regulators. And in red teaming, what we do is take a much more threat-led approach, so, in essence, either we ourselves or a third party attack us in the same way that a real attacker would. There are different scenarios. So, we tie the red teaming exercise to the scenario work that we've talked about as well. Very often we simulate things like, okay, can you be an insider? We give you access to our network and see what you can get; that can be quite specific, with a specific flag that we give them: this is what you need to go after. And we also do it from the outside, of course. How far can you get, from the external environment, into the bank? When we do these red teams, typically, our defenders don't know about it. We call them the blue team, and it's their job, obviously, to spot them, but they're not in the tent on this. So, it's giving us a lot of information about where we are vulnerable, how long it takes to do an attack, and, also, how long it takes before they get detected and we kick them out. So, that's why we really like those red teams.
Purple teaming is a variation of that, and this is, as you can imagine, where the defenders and the attackers work much more closely together to compare notes through the testing. That's basically where the blue team, the defenders, and the red team come together, and that helps sometimes, because you then also have the defenders in the tent, and you can use it almost as a real-time training exercise, rather than, 'Hey, we did this red team a month ago, and this is what you missed.'
Valeska: Can you talk a bit about how you think about using your security vendors, both on a day-to-day basis, but also for some of this testing? Like, are you using the same ones year on year? Are you trying to mix it up? How are you approaching that?
Martijn: We're mixing it up a little bit. So, number one, we're lucky. We probably have some of the best testers ourselves, so they work for us. I've seen a lot, and I think they're in a class of their own.
Valeska: And do they sit within your team—
Martijn: In Group Security, under a different GM, but we work very closely together, and we agree, given the risk profile and the cyber scenarios that we're worried about, how we then tailor the red teams. So, we talk pretty much every day about what we want to do next and where we want to go. In addition, we use different vendors for the red teams, the purple teams, but also for the pen tests. So, we still do a lot of penetration tests, but that's much more baked into the development pipeline. When we go live, you almost certainly have to do a penetration test when it's internet facing or when it has sensitive data in it. So, we do dozens of system-specific tests as well that we use external vendors for, and we do some ourselves. We've got a bug bounty program as well.
Valeska: So how does that work?
Martijn: It works, it actually works. We get notified of, yeah, vulnerabilities, zero days, in our environment. And then when that comes through, we obviously go after it really, really quickly. And that is something I still believe in: every company should have one.
Valeska: How long has that been running for?
Martijn: A few years, I think; it was already in place before I came, and it's a really good source of information to have. So, yeah, that's our approach to that angle. But at the end of the day, you can have the best controls and compliance frameworks in the world, you can do all these beautiful paper assessments, but that lower technical level is where the mistakes are being made. That's where the vulnerabilities sit and where they hide, and you're never going to find them unless you actually scan and test at that level. So, hence the focus that we have on that.
Isabelle: You mentioned earlier the impact of AI over the last year, and I know CBA has been investing heavily in AI; I think you've also got an AI bot to disrupt the disruptors. Can you explain a little bit more to us about what you're using AI for?
Martijn: We want to be an AI-first bank, and that's what we've been preaching and what we're doing. So, we think about AI in three ways. Number one is that the attackers are increasingly using AI to conduct attacks. So, what do we need to do in response to that? The second bit is that, when we release AI models, how do we make sure we protect them? How do we make sure they don't hallucinate and can't be jailbroken easily? And then the third bit is, how do we use AI ourselves to defend the bank a bit better? So, those are the three layers that we consider. Particularly around the first question, how do we see the attackers use AI? It's very much an efficiency play right now. There are some indicators that new vulnerabilities may be found using AI, but we've probably not seen that as much yet. It's much more that the time it takes an attacker to set up, say, a phishing or scam site is much shorter than it used to be, and they can monetise it much quicker than before. So, it's very much an efficiency play right now in the attack landscape, and that means we need to adjust our ways of working; we need to spot them quicker and take them down quicker, and that is what we're doing. Then, around protecting our own AI, there's a lot of theory about AI models, but we realise that we're doing an awful lot with AI. So, the more we security test this, the more we learn about what the attack vectors are, how you can get a model to talk about stuff it shouldn't be talking about, how you can extract data from it. So, we do an awful lot of what we call red teams on our AI models. We've done well over 100, and that allows us to learn an awful lot about how these things behave in the real world and what to look for. That has been one of our approaches, and it then leads us to putting new controls or capabilities in where we feel we were a bit light. By the way, 80 to 90% of your normal controls apply to AI.
You still have to deal with identity access management, security configuration management, patching, vulnerability management and so on, but there are a few new things, particularly around AI, what we call AI guardrails, that we then make the business adopt around their models. And then the third one is, how do we use AI to better defend ourselves? That is very much a bit of an efficiency play. But we are also realising that with AI, you can actually do a lot. You can cover more ground better. So, we are starting to experiment with our own first threat-hunting agents that supplement our blue team, and on the fraud side, that is a really cool one. We're working with a party whereby we have a whole number of bots in our environment that keep scammers on phone calls. So, when a scammer calls, we try to redirect them to one of the bots. They sound Australian. We've got different personas, and the whole aim is to make sure that they can't scam a real person. And we occupy them.
Valeska: And presumably you learn a lot from that process as well, how they respond.
Martijn: Yeah, we record those conversations, so we learn about the techniques that they use, which we can then adjust to in the real world as well.
Valeska: Let's talk a little bit about the relationship between the CISO and the board, which you mentioned before in the discussion around which frameworks you use, and how you actually report on that. Because I think, you know, boards want concise, business-focused updates that go to the financial impact, the business and operational impact, and legal and regulatory exposure. How are you reporting to boards these days, and also, how do you think about the metrics for success? Because ROI doesn't necessarily make sense; it's not necessarily applied to finance or legal functions either, for example. How are you thinking about all that?
Martijn: We are fortunate. We have a very well-educated board that deeply cares about cyber security. Every two months, basically, we do an update to the board, alternating between a longer paper, which is three pages, and a short paragraph. The trick is how you condense your message around cyber security into three pages and maybe a small dashboard that they can see; we're continuously finding where the sweet spot is. I empathise a lot with boards myself, because the packs they get are enormous. Cyber is one of many topics they need to cover. So, it's actually our job to make sure that they really understand what the cyber risk is and what we're doing, and actually what they can be doing. You want to get the board to a position where they almost continuously ask, what more can we do? Can we go faster? And that's certainly something that we get a lot. It's sometimes frustrating, because you go, like, I can't go faster. But it really forces you to think differently. We're trying to strike that balance between giving them a lot of detail and, actually, what's the headline? And that's a tough ask. That's a tough ask. We settled on a simplified NIST dashboard. We've picked 12 indicators aligned to NIST that we feel are the most important ones for us, particularly because those are the ones that we want to make progress on. It's quite easy to pick 12 indicators that are flashing green, but we deliberately pick the ones where we feel we've got work to do, so that we can hold ourselves to account, but also so the board can hold us to account.
And then every year, we refresh those and consider, like, is there anything else that we need to add on top? Alongside the indicators, we've shared the cyber scenarios that we have, where we sit in terms of our ideal risk, what that means, what we need to do, and what our acceleration plans are. So that is very much the dynamic. But, yeah, for us, it's a topic that comes back every two months. The joke I have is this: I tell my team, every time we need to do a paper, the good news is the board really cares about cyber. The bad news is the board really cares about cyber, and therefore we need to really demonstrate what we've done and what we're going to do next.
Valeska: For boards that really care about cyber, what are the questions they should be asking the CISO?
Martijn: Well, there's an obligation for them to understand what cyber means for our business, and so that's probably the first question. If we have the worst cyber day, what does it mean for the business? Can the business survive? That's probably the number one question that you want to ask first, and then CISOs need to be able to talk to that: what if all systems go dark? And we've seen that. When Maersk happened in 2017, I forget what the exact date was, that was a real pivotal moment in cyber security. I still remember it. I was sitting at my desk in London, and I saw the news come in and thought, oh my gosh, this is the stuff that we've been warning about for years. Up to that point, every cyber professional would have done similar packs: a cyber incident (we didn't call it cyber back then, by the way; a security incident) could have massive repercussions. And then the answer from leadership or the board was very often, 'Yeah, but it hasn't happened yet.' And you know what, they were right. It hadn't happened yet at that point. So, Maersk was one of those moments where, okay, this is now properly on: a global company is completely out, and that's probably your worst day. So, from a board's point of view, that is probably the worst-case scenario you need to think about. Could it happen? What's the likelihood? And if it happens, how do we recover from that? That's probably the number one question. The number two question is very much about, okay, what is our appetite around cyber risk? How do you think about cyber risk? Very often CISOs, and I noticed this from my KPMG days, have a tendency to report board metrics, sorry, cyber metrics, that are quite technical: number of vulnerabilities found, time to detect, time to respond. And that's quite meaningless for a board. They're just a bunch of statistics. And if you're a board member, what do I do with this information? Are you telling me it's bad?
Are you telling me everything is okay? If it's bad, I probably want to know, but I also want to understand what you're doing about this and what you need. At the end of the day, the board is there to strike that balance and hold, I guess, the executives accountable for what they're doing around cyber security. So, asking those questions is, yeah, just super important. What does our worst day in cyber look like? Would we be able to recover? What are the cyber risks that we're worried about? Are we within appetite? What are we doing about it? And then, last but not least, how can we go faster?
Valeska: And it feels as though the risk landscape is changing all the time. So, how often are you thinking about whether the way that you're reporting or what you're reporting on, or the key risks that you've decided, the 12 key risks you've decided to focus on, are still the ones that you should be reporting on?
Martijn: Yeah, all the time is the honest answer, all the time. But I think once you have the framework and the top-level scenarios right, you can fit the subordinate scenarios quite easily into the top-level ones. So, supplier risk is a worry, right? For everyone. This is Metcalfe's Law on steroids at the moment; we're connecting so much. We have been connecting a lot to the internet, and we're now connecting to our suppliers, who use their suppliers, and their suppliers. So, it really snowballs out. The way we think about supplier risk is, okay, what are the sub-scenarios there? So, SaaS, that's probably the number one. Where we use SaaS services, what keeps me up at night is that a lot of SaaS solutions allow you to connect other platforms to them really easily. Connectors, plugins, you name it, and suddenly what you thought was a few dozen SaaS providers has become SaaS providers each with 50 or 60 connections to other parties. So, they become your fourth parties that you know nothing about. How do you keep control of that? So, that's one scenario as part of the third-party scenarios that we have. Another one is basically where we use software, and that is a bit more F5, SolarWinds territory, where we've seen breaches, and suddenly you need to make sure you close the holes in that software in your own environment. And then we have suppliers we give a lot of data to, so they're not operating in our environment; we give a lot of data to them. How do we make sure that they look after us in the right way? Then you learn a lot from other incidents. So, continuously, when a breach happens, when there's a zero day somewhere, we're asking ourselves, what's the impact on us? Is this something new that we need to deal with? What else can we do to move the needle?
Valeska: Presumably, there are a lot of other teams within the bank, within the group, that you're having to engage with. Can you talk a little bit about how you interact?
Martijn: We do have, again, an evolution here; like, 10 years ago, cyber security was one department's role, and that's just unrealistic. So, we have federated cyber security responsibilities out to the business, and the business needs to understand that they're on point. It's everyone's job, really. We have two mechanisms for keeping track of cyber risks. One is we follow a crown-jewels approach, like everyone else does. We look at the availability and sensitivity of systems, and we stack-rank them, so we know what's high risk. When we adopt new applications or IT services, we go through that process and do the assessments. It gets a label, if you like, and we discover these systems ourselves if they don't tell us. So, we have mechanisms in place to find out when someone's experimenting with something and they didn't update the CMDB or the asset register, and based on that, different security requirements apply, as you can imagine. So, that's one mechanism, but what we've also introduced now is a concept of non-negotiable security for everything. We have drawn a baseline of about 10 things you have to have in place before you can go live with any new IT service in the bank, and there's nothing special about them. It's things like multi-factor authentication, making sure you manage the identities on it, security configuration management, patching. There's been a huge campaign whereby pretty much every business team now knows about them, and knows that you can't go live, because we stop you from going live, if you can't tick those boxes. And what that gives us is a minimum level of security that addresses a whole lot of the risk that we were worried about.
Valeska: And presumably a minimum level of awareness.
Martijn: Yeah, yeah, yeah. Very often I get a phone call from someone in the business who wants to go live with something, asking for an exception on one of the non-negotiables. It's kind of in the word, non-negotiable, so we don't do that, and then you get into interesting discussions where the owner goes, 'Oh, but we committed that we're really going to go live with this, and it's really hard to do MFA.' And my response is always, 'Well, what have you tried so far?', and then you work with them to get it in place before go-live. So, yeah, that's sort of how we work at the moment. We've gone beyond 'security is the responsibility of the security team' to a point where everyone has a stake in this, and everyone's responsible.
Isabelle: That presumably also means that you're not hindering innovation. You're balancing the security and cost—
Martijn: And the pace of releasing new products, yeah, absolutely. In an AI world, development goes really quickly. We want to make sure that our engineers experiment with this, but we want them to do it in a safe place, not with real customer data and not in a live environment. So, that is the natural tension that we have. I call it freedom within a framework. It's our job to make sure that our engineers can experiment, but it's also our job to make sure they do it in an environment that's inherently secure. So, yeah, building security in by design, making sure that they almost don't have to think about security, is where we want to go with this. It's a bit like your training and education. If you do phishing simulations with staff, you can craft any simulation so well that people will click on it. But then, you know, links were meant to be clicked on. It's actually our job to make sure that when people do click on them, nothing bad happens. So, we need to move the needle a little bit from 'That's bad, you can't do that', to 'No, we want to enable you to do that. Go ahead. We've got you'.
Valeska: So, speaking about training and education, and particularly if you're adopting this federated model—again, large number of staff, large number of business functions. How do you approach training programs across the group, both on a group-wide basis, but also ensuring that it's role specific?
Martijn: Yeah, yeah. We do both those things. So, we obviously do our standard security training, like everybody else does. Every six months, you have to go through something and answer a set of questions, and then hooray.
Valeska: Sounds like we've got the same one that you do.
Martijn: I still feel proud when I pass it, by the way, or when I spot a phishing email and I get the 'well done'. We've adjusted our phishing simulation campaign a bit. We now work much more on a three-strikes model, because we acknowledge that people will get phished, and we send different levels of difficulty to people. But we know that when we make it really difficult, the click rate goes through the roof. So, we want to mix it up. We really focused on: okay, if you miss one, you get another one, and then you get another one, and if you miss that one, we're probably going to take your mouse pointer away for a while, just to make sure that the behaviour stays sharp. When we don't do these simulations for six months, the results on the next one go through the floor. So, we've learned that you actually need to do it continuously, and we try to mix it up: easy ones, harder ones, and then the three-strikes model.
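The escalating three-strikes approach Martijn describes can be sketched as simple policy logic. This is an illustrative sketch only; the difficulty tiers, actions and thresholds here are hypothetical, not CBA's actual configuration:

```python
# Illustrative sketch of a "three-strikes" phishing-simulation policy.
# Tier names and the escalation action are hypothetical examples.
def next_simulation(clicks_in_a_row: int) -> dict:
    """Pick the next step based on how many simulated phishes in a row were clicked."""
    if clicks_in_a_row == 0:
        # No recent misses: keep mixing easy and hard lures continuously.
        return {"difficulty": "mixed", "action": "routine campaign"}
    if clicks_in_a_row in (1, 2):
        # First and second strikes: send a follow-up simulation soon after.
        return {"difficulty": "easier", "action": "follow-up simulation"}
    # Third strike: escalate beyond simulations to targeted intervention.
    return {"difficulty": "n/a", "action": "mandatory refresher training"}
```

The point of the model is that the response escalates per person, rather than reporting a single organisation-wide click rate.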
Valeska: Well, one of the metrics I've been hearing about much more recently is an increased focus on people actually reporting phishing links. Because you can have 99% of people who don't click, and one who does, and that's enough, whereas if one of the 99% had actually reported it already—
Martijn: Exactly, exactly, rather than the click rate itself, because that becomes meaningless. So, we've adjusted our indicator around that this year. And then, yeah, we do the role-based training as well. So, people in certain roles get much more tailored security training, whether it's fraud and scams or fincrime; we make it much more bespoke. I still think there's some way to go, particularly for the technical engineering teams, to really make them aware of the particular attack vectors and the things you need to do if you work on that technology stack over there. I think that's where we can still do quite a bit better.
Valeska: So, you came from KPMG into the bank. What are some of the key lessons you've learned since making that jump?
Martijn: Probably drinking my own champagne is the big one, right? Back in the consulting world, you know it's a client relationship, and it's going to be temporary. Sometimes engagements last for years, but they're temporary, and you hardly ever have to drink your own champagne. For decades I've been making recommendations about how to solve some of the hard cyber problems we face. Now, on the other side, any decision I make, I own.
So, I think about it a bit more these days. It was quite easy, especially in my early days as a KPMG consultant, where a manager and a senior manager and a director and a partner were all reviewing it, to write a line item—
Valeska: Do this—
Martijn: Yeah, 'why don't you do identity management', not knowing that's probably a five-year program at a large organisation to get right, and then you still struggle after that. So, that's probably the biggest one. I'm a bit more considered now about, okay, if we actually go and do it, what does it mean? Because then we own it. The other bit is a bit of a paradox, right? The more you know, the more you realise you have to do. We're on a trajectory where we want much more real-time information, whether it's about vulnerabilities, threat actors, whatever it is, but that means you get a lot more information. And the more tools you implement, they all come with their own dashboards.
Valeska: So, how are you synthesising all of that? Because you've got all the informal signals—
Martijn: Yeah, all the formal ones, and then the traffic lights, yeah. And that's what you need to really think about. It's not just about slapping a tool in and then having a dashboard that no one looks at, because at that point you know about the issue, and that means you need to do something about it. So, how do you filter the signal out of all the noise? Because with these security baselines, if you like, that all these tools report against, everybody reports high-risk issues quite easily, and before you know it, you've got hundreds, if not thousands or more. And what do I do with this? So, making sure we can tune this and place it in a bigger context is really important.
Isabelle: Earlier we were talking about the role changing a lot over the last couple of decades, and what's happening this year. Where do you think it's going to go in the next couple of years?
Martijn: Honestly, I don't know. Cyber security is not going to go away. The more we do with technology, the more we connect to it, the more risks we have. I've been hoping it becomes much more built in, like how you get electricity from your power sockets: you don't need to worry about how it's generated and how it's all done safely. But we're absolutely not there yet. So, it's a really tough one. Before the Christmas break, one of the chief engineers at Anthropic showed the world how he works and how he develops some of the biggest AI models. You see his setup, and he's got seven or eight agents working on their core platform, and he sort of orchestrates that. That means a lot of the low-level technical coding is no longer a human job, and he sits above it. And I thought, oh gosh, what would that look like in cyber security? And then you get to the whole point: okay, will my role become much more business focused, whereby I know everything about how the bank works and, if something breaks down, what would be bad? Or does it go the other way, whereby I need to be much more technical, so I understand all these agents, what they're doing, how they're doing it, and whether they're doing it safely? And I'm absolutely flip-flopping between those two. Twenty years ago, we were saying that to be a CISO, you need to understand the business; you can't just be a techie, but you still need to understand the technology. I think that holds more true now than ever. You need to be much better at understanding the business risk when things go bad, but at the same time, you need to know the technology that sits underneath and how it impacts you. I hope a lot of that will be solved by better products, better security and better vendors, but we're not there yet.
Valeska: What are the top three challenges that you think are going to be keeping CISOs up at night in 2026?
Martijn: The supplier landscape; that's what we talked about. It's not just your own third parties; you need to understand what third parties you're dealing with, right? Are they SaaS? Are you using their software? Are they using your data? Or are they in your network because you just hired them to help you out with something? So, yeah, supply risk. Then I think there's a people-risk piece. It comes back every few years, but insider risk should be a topic that CISOs think about, because anyone can be bribed or blackmailed into doing something bad at the end of the day. And last, AI. It's not going to go away. It's changing how we work, and it's changing how systems interact with each other quite rapidly. And AI is different from the old days, because in the old days we were talking about what we call deterministic systems: if you ask one to add two and two, you always get four. AI is probabilistic, so you don't always get the same answer to the same question. So, what do you then need to do differently to make sure you can trust those systems and how you work with them? So, yeah, those are my three.
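The deterministic-versus-probabilistic distinction Martijn draws can be made concrete with a toy sketch. The functions below are illustrative stand-ins (the 5% error rate is an arbitrary assumption, simulating sampled model output rather than calling a real model):

```python
import random

def deterministic_add(a: int, b: int) -> int:
    # Deterministic: same inputs always produce the same output,
    # so one passing test gives lasting assurance.
    return a + b

def probabilistic_answer(a: int, b: int) -> int:
    # Stand-in for a probabilistic system: usually right, occasionally not.
    # The 5% error rate is purely illustrative.
    return a + b + (1 if random.random() < 0.05 else 0)
```

`deterministic_add(2, 2)` is always 4; `probabilistic_answer(2, 2)` usually is, which is why assurance over probabilistic systems has to be statistical and continuous rather than a one-off test.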
Valeska: How are you thinking about the quantum risk?
Martijn: It's on the radar. It's going to happen, and we're thinking about it from a Q-Day perspective: when will quantum computers be able to break the encryption we use, particularly our asymmetric encryption? The estimates are between eight and 15 years, right? So, it still feels far away, and that makes it hard to address now. There also aren't many products that are quantum proof yet. So, even if I decide, hey, we need to replace everything, I can't necessarily replace it, because you can't buy it or our partners don't have it. So, this is going to be quite a journey. What we're doing, and what I recommend everybody else does as well, is make your inventory. Work out: where are we using asymmetric encryption that could be susceptible to this Q-Day, whether it's eight years or ten years from now? Get ready to replace it, and when you engage with vendors, ask them about it. But it's one of those slower risks that creeps up until it's there, and then it will be a big problem. I think about it very much like the millennium problem. I'm old enough to have been around when we crossed over from 1999 to 2000. I was an intern at KPMG, and my job was to make an inventory of all the equipment that had a date in it, because it could magically fail when the turn of the century happened, and a lot of organisations put a lot of work into it. I still remember being at home, waiting for that midnight and waiting for the—
Valeska: We thought it was going to be much more exciting than it ended up being.
Martijn: We were all waiting for the lights to go off, and then they didn't. But that was probably partly because of all the work everybody had put in. So, when I think about quantum, it's very much like the millennium problem. We just don't know the exact date we need to be ready for, but it's absolutely going to come.
Isabelle: And, sorry, just to clarify, so when you say Q-Day, what do you mean?
Martijn: We call it Quantum Day. Q-Day is basically the moment when traditional asymmetric encryption, which underpins a lot of our communication security, is broken. That means we can't rely on it anymore, so we need to have something else in place that is quantum proof.
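Martijn's 'make your inventory' advice can be sketched as a simple filter over an asset register that flags systems relying on quantum-vulnerable asymmetric algorithms. The asset names and register format below are hypothetical examples, not CBA's:

```python
# Hypothetical asset register: which systems use which cryptographic algorithms.
ASSETS = [
    {"system": "vpn-gateway",  "algorithms": ["RSA-2048", "AES-256"]},
    {"system": "payments-api", "algorithms": ["ECDSA-P256", "AES-256"]},
    {"system": "backup-store", "algorithms": ["AES-256"]},
]

# Asymmetric schemes breakable by Shor's algorithm on a large quantum computer.
# Symmetric ciphers like AES-256 are comparatively resilient (Grover's algorithm
# only halves the effective key strength), so they aren't flagged here.
QUANTUM_VULNERABLE = {"RSA-2048", "ECDSA-P256", "ECDH-P256"}

def q_day_exposure(assets):
    """Return the systems that still depend on quantum-vulnerable cryptography."""
    return [a["system"] for a in assets
            if QUANTUM_VULNERABLE & set(a["algorithms"])]
```

Running this over the sample register would flag `vpn-gateway` and `payments-api` as migration candidates, which is the prioritised list the inventory exercise is meant to produce.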
Isabelle: So, before we wrap up, we always ask our guests to recommend their favourite cyber security film, podcast or book; what's yours at the moment?
Martijn: Well, apart from this podcast, obviously, I listen a lot to Risky Business by Patrick Gray; an Australian podcast, but pretty much world famous. I like listening to them because they balance a good sense of humour with very pragmatic advice. So, often, when I'm making up my mind about a certain topic, I take some inspiration from what those folks have been discussing.
Valeska: Well, thank you so much for joining us. It's been really interesting.
Martijn: Yep, likewise, thanks for having me.
Valeska: I think it was really great to get insight into how much Martijn is dealing with on a daily basis at an organisation of the scale and complexity of CBA. One of the points he made that I thought was really interesting is the move from just focusing on particular cyber security frameworks, like the Essential Eight and NIST, to really thinking about the risks that are most relevant to the business, and then reporting in a way that's meaningful given those specific risks.
Isabelle: Yeah. And I think one of the practical takeaways, when you're engaging with your business, whether it's small or big, is that point about having those key non-negotiables. You can innovate, and you should be innovating, rolling out new products and services, but these are the minimum things they have to meet. If they're not going to meet those, then, you know, you can't go live.
Valeska: Yeah. And a clear, shared expectation of what those are needs to be embedded into the development life cycle.
Isabelle: Yeah.
Valeska: One of the other things I found really interesting was the frequency and nature of engagement with the board: the questions they should be asking, but also the regular updates to keep them educated on the nature of those risks and what organisations are actually doing to address them.
Isabelle: Yeah. And I think the final one for me was about the testing: red, blue and purple teaming, pen testing, the frequency with which you need to test, the different kinds of testing you do on particular products and on your systems as a whole, and how that's integrated into their operations.
Valeska: Well, it sounds like it's happening on an ongoing basis. And also who you're getting in to do it: that combination of it being undertaken internally, where there's the capability, but also ensuring there's a variety of external vendors you can use to really try to poke holes and see where the vulnerabilities sit.
Isabelle: Yeah, and the bug bounty.
Valeska: Yes, yep.
Thanks for listening to this episode of The Cyber Brief. Check the show notes for resources from this episode, or visit allens.com.au/cyber for our latest thinking; don't forget to follow to keep up to date on what's ahead for cyber risk, governance and emerging threats as we interview some of the most respected voices in the industry.


