
4 Topics
A practical deep dive into defending organizations in the AI era, exploring where automation helps or hurts, how privacy and zero trust coexist, and how cybersecurity becomes a driver of trust and growth.
The Blind Spots in Modern Cyber Defense
A look at how unseen attack surfaces, exposed data, and misplaced confidence in tools continue to undermine modern security programs.
AI vs. Encryption: Who Has the Upper Hand?
As AI capabilities accelerate, this topic examines whether cryptography is being weakened by AI-driven attacks or strengthened through AI-enabled defense.
Incident Response in the Age of AI-Speed Attacks
An assessment of whether today’s incident response frameworks are fit for AI-driven attacks, or still built for a slower, manual era.
Cybersecurity as a Strategic Advantage, Not Just Cost
A look at how trust, privacy-first design, and risk avoidance translate into measurable business and brand outcomes.
Full transcript
*Disclaimer: The accuracy of this transcript is not guaranteed. This is not investment advice, and any opinions expressed here are the sole opinions of the individuals, not of the institutions they represent.
Chapters
- Welcome & panel setup — 00:00:03
- Blind Spots in Modern Cyber Defense — 00:03:07
- AI vs Encryption: Who Has the Upper Hand? — 00:14:54
- Incident Response in the Age of AI-Speed Attacks — 00:28:46
- Cybersecurity as a Strategic Advantage, Not Just a Cost — 00:44:20
Ben: [00:00:03]
Welcome everybody to this episode of the 4x4 Virtual Salon. Today we're going to be discussing staying ahead of the threat actors of cybersecurity in the AI age, and we have four fantastic speakers with us here to discuss this topic. But before we introduce you to our four speakers, I just want to quickly say thank you to our sponsor, Stableton. Stableton works with wealth managers and asset managers, helping them to provide semi-liquid, systematic, index-based products to their clients, and allowing those clients to invest in some of the largest and most liquid private companies like SpaceX and Anthropic. For information about Stableton, please visit www.Stableton.com.
Today is a 4x4. For anybody who's new to the 4x4: it's called a four by four because we cover four topics, we have four speakers, and we run four polls. We'll launch those in a second, so please take part in those polls. Then last of all, we'll take at least four questions if they come in. Don't be shy; if you have a question for any of our speakers, please submit it and I shall put it to the speakers as it comes in.
Over to our four speakers. I'm going to introduce them left to right on my screen; I don't know if that's the same for the people watching this. But I'm going to start with Filip Stojkovski, who is the director of SecOps and AI strategy at BlinkOps. BlinkOps is an agentic platform purposely built to accelerate security operations at scale. Next, we have Eamonn Maguire. Eamonn is the director of engineering, machine learning and AI at Proton. Proton, also based here in Geneva, is a massive security company which works with enterprises and individuals, providing them with a suite of security-enhancing solutions. Next, we have Robin Bratel. Robin is the CEO of Lab 1. Lab 1 works with financial institutions, helping them to understand their exposed data. Then last but not least, we have Sonal Rattan, who is the CTO of eXate. eXate does a lot of things; it's quite difficult to define succinctly, but I think you could describe it as operating at the intersection of data classification, data privacy and data sovereignty, also working with very large enterprises.
Over to the first of our four topics today, we're going to be discussing the blind spots in modern cyber defense. Filip, I'm going to come to you first with a question. What are the most dangerous cyber blind spots that you still see in large organizations, even though they might be throwing money at their cybersecurity defenses?
Filip Stojkovski: [00:03:07]
That's a pretty good one, and I would say one of my favorite ones lately to discuss. When it comes to blind spots, I usually like to compare it to the era of cloud adoption. I think AI tool adoption nowadays is one of the main blind spots we are creating as well. If you go 10 years back, everyone rushed to implement everything in the cloud; they wanted to use SaaS software and so on. But security came a bit later; it became the byproduct. Later we started to catch up and implement all the guardrails that we needed. I think the same thing is happening now with AI. Organizations want their employees to use AI because they think it'll enable them, it'll let them do more stuff, faster and better. But the issue with enabling all the employees to use all these AI tools is that we are creating what we call shadow AI, which has been known for many years as shadow IT. I wouldn't say it's a separate category; it's more just a subcategory of shadow IT.
Because with AI tools it's not just someone creating an account. You have data loss and leakages happening; you don't know what type of data employees are uploading there. The second part is when it comes to connecting all those tools and connecting them to agents: you need to take care of what they can connect to and what specific rights they have into your infrastructure. Yes, I would say AI adoption is probably the biggest blind spot that we are creating nowadays.
Ben: [00:04:53]
Robin, would you agree with that? With your exposed-data hat on, what do you think companies are consistently missing or underestimating when it comes to the attack surface?
Robin Bratel: [00:05:14]
Well, obviously I look at this from an exposed-data angle, because we analyze the breach and ransomware data sets that get published by threat actors in the public domain. I do think it's a huge blind spot for most organizations. There are a lot of services that look at credential dumps or credit card data, but I don't believe there's anyone other than Lab 1 that looks at the actual contents of files that appear in the public domain. Just to give you some metrics to help frame what I'm about to say, we've processed about 290 million files at this point and about 220 terabytes of data over the last couple of years that's been dumped by criminals in the public domain. This year alone, we've seen Crunchbase, Iron Mountain, KPMG, and a number of others have issues.
[00:06:16]
Sometimes these incidents get reported as, say, a big company like KPMG or Iron Mountain. But in actual fact, when you dig into the incident, you'll find that it's come from one of their close third-party suppliers, but they're the most exposed in the dataset. I'll give you a couple of examples because I think it's helpful. This year alone, a global financial services customer of ours felt that they weren't exposed by an incident impacting a financial management platform. It was quite a small incident, only a thousand files. That's really small compared to what we see. Recently we did an incident that was 2.7 million files and 2 terabytes. This is small: 8 gig, a thousand files. But we extracted 5 million key terms from it, and that impacts 37,500 companies in the blast radius. When the financial services customer of ours first saw this incident, they didn't believe they would have any exposure. When we said that they did have exposure, they didn't think they were a third party; they assumed they were a fourth party. Then they started to think, well, maybe our staff have signed up for something here, or there's some link between our staff and this organization. It turned out that it was systems within the organization that had system access to this financial management platform, and that actually created very serious exposure for the financial services customer. It was a complete blind spot.
Ben: [00:07:50]
Sorry, what do you mean when you say system access?
Robin Bratel: [00:07:53]
Those systems were accessing the financial management platform. That's system access: credentials accessing systems completely in the blind. The financial services customer didn't realize this link existed; in the threat team and the vendor team, they just didn't think they had a relationship with this third party. That's the blind spot. There are other blind spots. Another global tier-one financial services institution in one week found four critical incidents as a result of Lab 1's analysis of incidents in the public domain. They said they were all critical, and again, they wouldn't have known about them without this analysis. In terms of the blind spots, we see impact in fraud, physical security, actual security, commercial risk, brand risk, VVIP risk, regulatory risk and IP risk. Then you get all the different attack vectors as a result of the data sets that are out there, from the things we're all familiar with: phishing, vishing, smishing, business email compromise, deepfakes. I'll stop there.
Ben: [00:09:13]
Yeah, I want to ask Eamonn. This is going to get worse, isn't it? How is this not going to get exponentially worse? Robin was talking there about system access getting through. This is going to get exponentially worse, isn't it?
Eamonn Maguire: [00:09:29]
Sure. One part of this: you've got, say, pipelines, copilots, agents, for example, running on people's networks. Basically an adversary doesn't need to compromise your model anymore, or even your system. If the model has to do a web search, it does a web search, and it can hit any particular webpage which contains instructions. Those instructions can be executed by the model, and from that you end up exposing a pile of your systems. Emails, documents, or any internal information can be exfiltrated without the attacker ever having to actually gain access to your systems directly, which is quite incredible.
Ben: [00:10:18]
You're saying we're almost inviting the attackers in, in a way?
Eamonn Maguire: [00:10:21]
Yeah. It's the classic hype-cycle thing. People are scared of missing out, so they're willing to plug whatever they want into the system to make it work, for fear that someone else is going to overtake them. The reality is that these models are going to give them probably small increases in productivity, but at the same time they will increase their risk of data loss exponentially, at least. It's not only that; model supply chain attacks are another thing, which is a big issue. Organizations are adopting open-weight models, but they don't really have a proper security review process, which is what they would've had, for example, for third-party software.
As Filip was saying before, the shadow IT or shadow AI, is something which is happening, which you don't know is happening necessarily within your company. People can use cloud or they can use whatever to send out data or to connect to their internal systems to act on their behalf, to call browsers, to call the terminal. But you also have these models that are being deployed internally by companies that can act on triggers the same way as you had, for example, command and control type infrastructure for malware that would trigger a particular action on the ransomware or malware running on a person's infrastructure. Now this can be triggered by a particular prompt, and then all of a sudden, you've got models injecting malicious code or performing malicious actions directly on their systems. People are totally blind to this risk because they're scared of missing out.
Ben: [00:12:10]
One of the things that you would say, Sonal, is be very careful what data you give access to when you open up to AI agents and LLMs and so on. Do you think data governance is part of this question?
Sonal Rattan: [00:12:32]
It's a part of it, yeah. It's definitely a part of it, but we are talking about AI threats at the moment. We've not even managed to deal with manual threats; we are still struggling with some of that. This is just more stuff that we're going to have to deal with. Having policies for internal actors, that's never worked. We know that. If we look at what happened with Tesla, there were two disgruntled employees; next thing you know, Elon Musk's social security number is on the web, that sort of stuff. Breaches happen because people still do bad stuff.
But data governance, it seems to just be putting policies in, or thinking you're doing the right thing. Realistically, we should be a lot more proactive and actually doing proper data protection, making sure that at the source we've got good protection in place. Right now everything is done on the perimeter and you think you're safe. The perimeter is so easy to break now, and I think policies are just not going to cut it anymore. Governance, yes. But when we are talking to organizations, they'll be saying things like, “Well, we just don't know where half of our data is. How do we do all of this?” We get those questions. They don't know what their threat points even are. But that's why organizations like us exist. We're trying to help them get to the point where they can find where their information is, then start putting the governance in place, the access control in place, and then data protection in place. But right now, it is still quite difficult to get organizations to want to do the right thing when this threat is only increasing.
Ben: [00:14:18]
I know you guys can't see it, but the poll data is in for this question. Robin, this will please you, right? People perceive the biggest blind spot to be exposed data, yay! But the second biggest blind spot is considered to be that we don't know what our biggest blind spot is, which is what you were saying there, Sonal, which is people don't even necessarily know.
Sonal Rattan: [00:14:43]
They don't. My view on this is that no one should be comfortable. We should be doing as much as we can to deal with these things right now.
Ben: [00:14:54]
We're going to move on to the second topic. We have no questions so far, so again I'll just invite people: don't be shy. I'll take the questions, and I'll always give priority to the audience questions over my own. But we're going to move into section two here, which is AI versus encryption: who has the upper hand? Encryption is a big part of what many of you do, particularly Proton, so I'm going to come to you first. Can AI, and I guess further down the line quantum computing, significantly accelerate the breaking of existing encryption methods?
Eamonn Maguire: [00:15:36]
The short answer is no, not directly, for well-implemented symmetric and asymmetric cryptography today, but the threat model is shifting of course. Classic AI and ML don't threaten AES-256 or RSA in any fundamental sense; brute-force optimization doesn't change the underlying computational hardness of the system. However, AI is used in, say, side-channel attacks, which are more interesting in fact. For problems where you cannot derive the key directly, the option is to go to side channels. You can use ML, for example, to extract key material from power consumption traces or electromagnetic emissions on the GPUs or CPUs, to understand how the decryption process is working and extract what the actual keys are, or from timing variations as well. You can probably do this with far less data than you needed with classical methods. This was something you could always do, but now you can accelerate it somewhat with AI.
[00:16:43]
For post-quantum cryptography, there's already quite a lot in place: the NSA's migration to CNSA 2.0, for example, or NIST's post-quantum cryptography standards, which are FIPS 203, 204 and 205; those were finalized in 2024. They've got very good safeguards in place for what happens after the fact. The problem is that most organizations aren't really implementing those yet, and they need to. The longer it takes you to do it, the bigger the exposure, because in the meantime a lot of state actors or other actors are just harvesting data. They're harvesting data so that they'll be able to decrypt it later on. It doesn't necessarily mean that whatever's exposed now is going to be decrypted, but in the future it could be, and that could cause quite a lot of problems.
The other part of it, I think, which is maybe even more interesting, is that there are people that don't really implement cryptography well. Many companies don't. For example, there are many companies that don't fully understand what end-to-end encryption is. They'll sell end-to-end encrypted software, but it's not end-to-end encrypted at all. I think AI also accelerates the discovery of implementation vulnerabilities: fuzzing, code analysis and protocol analysis are much faster with modern AI and ML tooling, meaning that weaknesses in how your encryption is deployed will be found faster than before. But fundamentally, the mathematics of encryption will not be broken by AI itself.
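The point that tooling finds implementation flaws faster than anyone breaks the math can be illustrated with even a naive fuzzer. The toy parser and invariant check below are hypothetical, a minimal sketch of the fuzzing idea rather than anything a panelist described:

```python
import secrets

def parse_length_prefixed(buf: bytes) -> bytes:
    """Toy parser with a deliberate flaw: it trusts the sender's length byte."""
    n = buf[0]
    return buf[1:1 + n]  # silently truncates when n exceeds the actual payload

def fuzz(rounds: int = 10_000) -> list[bytes]:
    """Throw random inputs at the parser and record any that break the
    invariant 'output length equals the declared length'."""
    findings = []
    for _ in range(rounds):
        # Random length byte plus 0-7 random payload bytes.
        buf = bytes([secrets.randbelow(256)]) + secrets.token_bytes(secrets.randbelow(8))
        if len(parse_length_prefixed(buf)) != buf[0]:
            findings.append(buf)
    return findings

# The flaw surfaces almost immediately under random input.
assert len(fuzz()) > 0
```

Modern fuzzers (coverage-guided, and increasingly ML-assisted) apply the same loop to real protocol and crypto code, which is why deployment flaws tend to fall long before the underlying mathematics does.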
Ben: [00:18:28]
Filip, I could see you nodding quite a lot when Eamonn was talking there. I thought encryption was under threat from AI and quantum computing, so Eamonn had almost a contrarian standpoint. What do you think, Filip, is encryption under threat? Are cryptographic standards going to be easier to overcome than in the past?
Filip Stojkovski: [00:19:00]
I think I'll agree here with Eamonn. When it comes to encryption, it's more of a math problem and it's not something that AI is great at doing.
Ben: [00:19:09]
You say maths problem?
Filip Stojkovski: [00:19:10]
Yes. I don't think it's so good that it'll break encryption, or that the threat is necessarily higher there. But I would go to the other side: how AI can help us here, as Eamonn was saying, is in identifying how we can better implement encryption, or identifying if we have a bad implementation of it. AI is really good at doing summaries, in other words. If you feed it the right data, it goes through our environment and sees what our configuration looks like. It can point out all those issues and tell us, this is how you can fix it, this is what you can implement here to have stronger encryption. So it's more of an assistant that can point out the issue and help us implement it better, or point out where it's not implemented well and we might have some additional risks. In other words, it'll generate some alerts that we'll need to handle later. But I think there's a lot of buzz around this because it sounds like science fiction. A lot of people associate AI with completely autonomous robotic actions, as if it had quantum computing powers, which is a bit different from what AI does nowadays and what is available to the public.
Ben: [00:20:35]
We've had our first question here, which actually requires some encryption itself. I'm not totally sure, see if you can understand what this question says. It says: isn't it undue if the government asks or the court requires to decrypt? Is it voluminous and undue? I think what they're getting at is: are we going to get to a stage where governments may ask, or there may be court orders, to decrypt information? How hard would that be? In a way you would answer in a similar fashion, which is, it's not going to get significantly easier in the short term.
Filip Stojkovski: [00:21:16]
No, I don't think so. I think we are not there and AI is not the solution to that.
Eamonn Maguire: [00:21:26]
If you do cryptography correctly, you don't need to do this. You won't be able to break it because you don't have access to the keys. Whereas many companies, when they've encrypted data, for example, they keep the keys in the same infrastructure. It doesn't make any sense at all. When you implement the key storage correctly and you secure the keys with the user's passwords and so on, which is what we do, we have no way of accessing the key. We can't decrypt the data. If a company comes along and says, we've encrypted all of our data on our servers, but then they store the key on some volume somewhere, this is totally useless. It comes down again to how you implement it.
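The point about securing keys with the user's password can be sketched in a few lines. This is a generic illustration of password-based key derivation, not Proton's actual implementation; the passwords and parameters are illustrative:

```python
import hashlib
import secrets

def derive_key(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 256-bit key from the user's password and a random salt."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

# The server stores only the salt and iteration count -- never the password
# or the derived key -- so it has no way to decrypt the user's data on demand.
salt = secrets.token_bytes(16)
key = derive_key("correct horse battery staple", salt)

assert derive_key("correct horse battery staple", salt) == key  # the user can re-derive it
assert derive_key("wrong password", salt) != key                # nobody else can
```

In practice the derived key would typically wrap a randomly generated data key (so a password change doesn't force re-encrypting everything), and a memory-hard KDF such as Argon2 would be preferred over PBKDF2; the property that matters here is that the provider never holds material sufficient to decrypt.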
Ben: [00:22:09]
Sonal, I'd like you to comment too, because encryption is a key part of what you do: encryption at rest, encryption in use and so on. This would be good news for you then, that encryption is not immediately threatened by these breakthroughs in AI.
Sonal Rattan: [00:22:27]
But it depends on what encryption you're using. For us, we are big fans of tokenization and using other techniques at the field level as well, because we are seeing an increase in harvesting of data that's traveling through the network right now, ahead of all this post-quantum threat. I think they're looking at the next five years, in which we're definitely going to need to at least address credit card information and how that's being transferred. But it's everything that's traveling through the network. Where you're using asymmetric keys, you've got a public-private key pair; that side is a problem today because people are still harvesting that data. For us, we look at additional things. I was quite interested in one of the fireside chats at the recent PKI and quantum conference in Kuala Lumpur. Certain things that we were hearing were, well, we might have to start using symmetric encryption before we start transferring data across, because it may become an insecure channel; actually using double encryption on it.
But we know from an AI standpoint it's not going to be a main issue at the moment. There will be different ways to go around it, though, to try and find that information and find the keys, exactly as mentioned before. There are going to be ways around it; it's just, how secure are you going to be with that? For us, we offer this as a service, as I said, to make sure that we are following best practices. We make sure it's on a different network. But we actually do a fragmentation part as well, which is something that we've been hearing from NIST too: potentially fragmenting ciphertext so you're not having everything in one place. We're seeing different new things in place. But organizations are waiting for the industry to solve this as a problem; they're not doing it themselves. Eamonn was mentioning that people are not implementing the techniques that are being prescribed by NIST, or whatever. But at the conference everyone was saying the same thing: “We're waiting for the industry to fix the mTLS problem,” which is the encryption when you're sending data over the network. They're waiting for that to be fixed at an industry level. Right now, what measures should we be taking? What additional steps could we be doing to protect that data? Because the threat's there now.
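The tokenization point above can be made concrete with a minimal vault sketch. This is a generic illustration of the technique, not eXate's product; the class name and token format are hypothetical:

```python
import secrets

class TokenVault:
    """Map sensitive values to opaque tokens; real values live only in the vault."""

    def __init__(self) -> None:
        self._forward: dict[str, str] = {}  # value -> token
        self._reverse: dict[str, str] = {}  # token -> value

    def tokenize(self, value: str) -> str:
        if value not in self._forward:
            # The token is random, so it carries no information about the value.
            token = "tok_" + secrets.token_hex(8)
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def detokenize(self, token: str) -> str:
        # Only callers with access to the vault can reverse a token.
        return self._reverse[token]

vault = TokenVault()
t = vault.tokenize("4111 1111 1111 1111")
assert vault.detokenize(t) == "4111 1111 1111 1111"
assert vault.tokenize("4111 1111 1111 1111") == t  # stable within this vault
```

Unlike encrypted data, a harvested token can't be attacked offline, now or with a future quantum computer, because there is no mathematical relationship to recover. The trade-off is that the vault itself becomes critical infrastructure, which connects to the fragmentation and separate-network points above.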
Ben: [00:24:58]
Robin, I haven't asked you a question yet. My question to you is, earlier on you talked about all the different ways in which companies, in your case via their supply chain, are leaking information. Do you have any statistics on the biggest source of those leaks? Is encryption a big factor?
Robin Bratel: [00:25:24]
Well, I don't want to repeat what's already been said three times, but I can. Eamonn said almost exactly what I was going to say on this subject, which is, mathematically encryption is great. It's very, very hard to crack. On the trade mission that I think I mentioned before, there's a new PQC, or post-quantum cryptography, startup. It spun out of Leeds University with even better math for quantum-safe encryption. I don't think the encryption technology is the problem. I think the problem is the deployment, the implementations of it, the use of AI to find flaws in the implementation and ways in around the sides.
[00:26:24]
I try and think of good analogies for this, but I haven't come up with one yet. Take letters in the postal system: let's assume a letter is secure while it's in the postal system. The way to try and read the letter is not in the postal system; it's when someone's writing it or when it's received. How it's stored at the beginning or the end is probably where people try to crack the encryption model, not in transit. I don't think people encrypt data at rest very well. I think that's more complicated, especially if you're a large enterprise. Therefore, if I can get into those servers, then I can extract the data. We are seeing just huge data sets unencrypted in the public domain. I think that's probably the main risk with encryption technology. Quantum's still not very stable; qubits just change state randomly at the moment, and they haven't controlled that yet. I think there's a little while before quantum's really at a place where it can break current encryption technology.
But that's not to say I don't agree with what's been said about data being harvested and stored for a time when they can decrypt it. Going to the question, my view is that if governments ask organizations that build encryption technology to create a backdoor, or some way of decrypting for them, then that backdoor becomes the way that criminals will try and target the decryption. It's a terrible thing to do, but I understand why governments want to do it. Because if you can't see what's flowing through messaging channels, how can you run successful intelligence units? It's intelligence units that keep us all safe from criminal activity. It's a real problem; I understand that. But the fact that governments are asking for a key to decrypt the messages should give the public some assurance. It just shows you how encrypted they actually are, that the government can't decrypt them currently.
Ben: [00:28:46]
Good. We're going to move on to topic three, which is incident response in the age of AI-speed attacks. Robin, you're on a roll, so I'm going to put the next question to you as well. When attacks unfold at machine speed, what parts of the incident response can be automated, and what parts still need to be done by humans or need to be human-led?
Robin Bratel: [00:29:08]
This is a really deep technical question, and I'm not deep enough technically. I'm going to caveat my answer by saying I'm not a forensic incident response specialist. I've met them, and they're serious individuals. I've got investors in our company who are building automated breach response technology, and I've met startups, well, scale-ups really, that are doing this extremely well. I think you have to try and stop incidents at machine speed. There's no way human beings can do it. Although, and I don't know about the other panelists, I had an assumption that every log in a log file was being checked. I recently found out that there are so many logs in big companies that only 1 in 10,000 logs is being checked. This is to do with data volume. I learned this from a very credible source when I was in, I'll just say I was in Washington DC, how about that? Only 1 in 10,000 logs being checked means that state actors and threat actors can exist in the system for a long time without anyone knowing they're there. Because of the calls in the PCAP data that goes into the log files: maybe once every year they might just send a little quick ping out, and you can easily miss it. This was another mathematically founded startup that has a compression algorithm that can compress the PCAP data down to, I think it was something unbelievable, like 70,000 to 1, which means that they can actually now spot these calls out in the PCAP.
That's one problem. If you're dealing with such a huge volume of data flowing through the pipes that you can't see the troubling threat-actor or state-actor traffic, then it's even hard to operate at machine speed to detect that and close it down. So yes, I think you've got to operate at machine speed on that side. From our side, once that incident's happened and maybe the data has been exfiltrated, then you definitely can't use human beings to try and analyze a terabyte of data and work out what your risk is. You definitely need AI to do that job for you. That's where, again, we feel we can step in. But in terms of human beings, there are a lot of important decisions that need to be made when the incident is happening. You want to try and get the machines to close the thing down. You've probably got to get all of your infrastructure closed down; as we saw with, say, the Co-op and Marks and Spencer, they unplugged themselves from the internet. It's huge damage to the organization for a short period of time, but that was a human decision and it probably saved them. Then they've got to find their way to get back up and running again. That's the complicated thing. If the criminals are in the backups and have been for some time, then it can be very hard to shake them out and find out what's gone on. That's when you need to analyze huge volumes of log information. I'll hand the floor over; that's about as far as I can go on this.
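The 1-in-10,000 figure above is easy to turn into a back-of-the-envelope miss rate. Assuming inspection is an independent 1-in-10,000 sample per log line (a simplification, since real sampling is rarely independent), the chance a low-and-slow beacon is never inspected is:

```python
# Probability that a beacon emitting k log lines is never inspected,
# if each line has an independent 1-in-10,000 chance of being checked.
P_INSPECT = 1 / 10_000

def p_missed(k: int) -> float:
    return (1 - P_INSPECT) ** k

for k in (1, 100, 10_000):
    print(f"{k:>6} beacon lines -> missed with probability {p_missed(k):.4f}")
```

Under this model, even a beacon noisy enough to leave 10,000 log lines is missed more than a third of the time, and one that pings once a year is essentially invisible. That is why making full inspection feasible (for example via the extreme compression described above) matters more than smarter sampling.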
Ben: [00:32:52]
No, that was great. I don’t know who to come to next. I think maybe Filip, we'll come to you next. Maybe ask, I don’t know how much you're dealing with remediation and incident response and so on, but from what you are seeing in your business how much earlier do you think incidents could be detected if organizations were more aware of their exposed data footprint?
Filip Stojkovski: [00:33:28]
That's a good one. I've been working in incident response for 15 years.
Ben: [00:33:34]
Okay. It is your field.
Filip Stojkovski: [00:33:36]
Yes, definitely. As for how early it can be, I think there are two moving pieces here. First is how early you can detect it; it depends, as Robin was mentioning, on what you're monitoring. I think we've arrived at the point now where we have a lot of telemetry, we have a lot of data. We can monitor a lot of it, but now we don't have the human power to process all of it. That's where AI comes into play. All that data that we were gathering all these years, saying we don't have enough analysts to process it, now we can process at machine speed. But that generates, I would say, the second bottleneck, which is that we need to respond to it. When it comes to incident response and you need to go and remediate, you don't want AI blocking all the laptops in your company because it thinks there's a threat and that's the safest action to take. You need a human to approve the action and to guide the AI to the right course of remediation and response.
Going back to the question: yes, because we can process more data, we can definitely monitor more and detect incidents earlier. As I was saying earlier, AI is really good at making all those summaries. It can alert us in a couple of minutes and tell us where the problem is. That's something we do a lot of here at Blink as well, and try to improve: how fast you can detect these specific threats. Yes, we can detect them way earlier, reducing what was previously hours down to minutes. But now the question is how fast we can remediate.
[00:35:28]
I think that's the biggest challenge that we're having today: where to put the human in the loop, and when it makes sense for the AI to go and remediate things on its own. Then the next thing is how we reduce the noise, because there's another thing that we usually create when it comes to detection. We have all this data and we need to create rules that will point out that something is malicious. That means we need to fine-tune them well enough, because otherwise it generates too much cost. You cannot just throw unlimited AI at all your logs and skyrocket your costs at the end of the day.
Ben: [00:36:12]
Correct. Thank you so much. I didn't realize this was your absolute field of excellence. That was great. Over to you, Sonal: how much does eXate deal with incident response? Is there a danger in trying to respond too quickly? Can that lead to mistakes?
Sonal Rattan: [00:36:40]
Especially now, looking at how quickly your organization can get infiltrated with AI, there are a lot of new techniques out there and a lot of code can be generated very quickly. I don't think you really have a choice to slow down. The bullet has been fired and you've got to try to outrun it; that's the best way I can describe it. Every organization is going to be there, and mistakes are going to be made. That's what state actors, and anyone else trying to access your information, are hoping for: that you make a mistake. And it's very cheap for them to mount large-scale attacks on you. The last one I saw was $18. Some generated code, I think it was MIT, and in under an hour they were already into the system. They had a seven-hour window; in less than an hour they were in and had access to all of the underlying systems. It's very, very scary how quickly this is becoming a problem.
For us, in terms of incident response, we're almost a proactive mechanism; other tools deal with things reactively. The only part we help with from an incident response perspective is, if there is a bleed, going into crisis mode: we can stop data going out and allow only very specific operations to access that data. They call it a crisis manifesto. It lets organizations say, we've got this bleed, but I don't want to, as Robin was mentioning, come off the internet completely. It becomes very controlled. A lot of organizations don't have that in place, though. The ones we're working with are starting to think about it in these terms: if we're under attack and don't know where it's coming from, do we take ourselves completely offline, or do we let certain operations continue, monitor those to start with, and then progressively close down the other places data is leaking from, so we can see where the leak is coming from? But the attack surface has become so large, so quickly, that I just don't see how you can slow down in this space right now.
Ben: [00:39:06]
Thank you so much. Eamonn, there's a lot of consensus on this question too: the surface area is getting larger, the speed of attacks is increasing, and a lot of the response is quite crude, like coming off the internet. Are organizations structurally prepared for the world we're heading into?
Eamonn Maguire: [00:39:32]
I don't think many of them are. Traditional incident response playbooks are built around mental models of discrete, more human-paced attacks, where you've got very identifiable stages like the kill chain, and you know what to do. But AI-driven attacks can compress or collapse those stages. An autonomous agent can move laterally, exfiltrate, and cover its tracks much faster than a SOC analyst can triage the queue. The playbook assumes you have time to deliberate and look at things properly, but AI attacks don't necessarily grant that. More fundamentally, a lot of IR playbooks assume a human adversary with motivation you can model, whether insider threat or external threat. Autonomous AI agents don't necessarily have intent in the traditional sense. They have an objective function they're trying to optimize, which makes behavioral analysis much harder, because it's not as predictable as what you would have gotten from a human.
[00:40:37]
You're seeing rules that were written for human attack patterns, not machine patterns. The machine patterns are typically also learned from human attack patterns, but they can generalize beyond them.
Then, as I think Robin also alluded to, there are organizational design problems. Your CISO, legal, and communications teams still operate largely in siloed escalation chains, which are optimized for "we have a few days before this is public." But with AI-accelerated breaches, it can become public within minutes or hours; an attacker can simply have a pipeline sending the data out. The organizations that tend to be ahead are those that have moved towards continuous validation: they run breach and attack simulation rather than just using an annual pen test to tick a box, they've really invested in proper detection engineering, and they have multi-layered defenses.
I also think what we define as an incident is going to change. Certain things that have historically been performed by humans are now being performed by agents. Really bad code quality, for example, would have been an HR issue before, as would botching a legal contract that ends up costing the company millions, or leaking data. Now it's not necessarily; it's an incident response issue for the security team, because people are having agents perform these tasks more autonomously than ever before. We've never had this at such scale.
Ben: [00:42:25]
I know you can't see this poll. We asked people: how confident do you feel that your organization could respond effectively to an incident? Most people responded that they're somewhat confident but rely heavily on humans. Based on what you just said, they shouldn't feel somewhat confident, because an overreliance on humans probably means they can't respond effectively. Is that right?
Eamonn Maguire: [00:42:55]
Can I continue on that?
Ben: [00:42:56]
Sure, go ahead. You've got the mic.
Eamonn Maguire: [00:43:00]
I wasn't sure, because earlier I jumped in by accident. Yeah, that's true. People are generally overconfident about their ability to do anything, so it's no surprise this happens. But people should be less confident, I think, and more paranoid about what's coming. That's typically a good way to behave as a security person, security engineer, or detection engineer. The governance aspect, which came up before, is also super important, because people don't know their exposure. It's easy if you've got a closed room and you can control every exit of your data and the people coming in and out. But when you've opened up your system to third parties, say I open up my assistant to ChatGPT, and ChatGPT does a web search: how does it do a web search? With some third-party API. How does it call some other system to do another check? There are all these connected third parties, and companies have no idea where their data is going. They may say you can strip PII and such from these queries, but there is no guarantee whatsoever that your identifiable information is not inside the queries going to those third parties. You have no idea.
Ben: [00:44:20]
We're going to move on to the fourth and final topic, which is cybersecurity as a strategic advantage, not just a cost. The first question is for you, Sonal. What does it actually take to move cybersecurity from a cost center, or what is perceived to be a cost center, to a board-level strategic priority? And to follow up on what Eamonn said, are people paranoid enough?
Sonal Rattan: [00:44:48]
No, they're not. Again, it's that overconfidence. It's shocking that cybersecurity is not already a board matter. We're in a completely different age now, and the question is how quickly organizations understand that and start reacting to what we're all watching and seeing. Beyond the board level, even internally within organizations, it's so fragmented. Especially in the large organizations we're speaking to, it's very fragmented who owns what. Teams have the autonomy to do different things, and the autonomy to make their own mistakes. You're giving all of this to your teams and saying: go faster, do more, use AI, potentially causing even more issues. If no one is managing and governing that at the board level, I'd be a little worried about the decisions being made. It could be brilliant, and it could cause additional pain as well. It should already be managed and governed. It should be a strategic imperative. It's a shame it's not, because this is probably one of the largest threats. It's not just coming from lone actors; it's coming from governments. You're seeing so many different attacks from so many places.
It's so cheap to attack, on top of that. How can this not be something that gets funded? For example, we spoke to one of the biggest banks in the world, and they would not give budget for their API security program, because, they said, we've got other priorities to think about. But if it were at the board level, you'd at least expect budgets to be assigned properly, because that overconfidence is there: "No, no, it's fine. We're already doing these things." They had a breach. We're seeing it already.
Ben: [00:46:57]
I'm just asking this quickly because, when we were in the green room, we were talking about how AI seems to be hoovering up a lot of entry-level jobs. I agree that we should be more paranoid and that the threats are growing all the time. But is one positive upside that it will create more employment for cybersecurity professionals?
Robin Bratel: [00:47:20]
There aren't enough people. There just aren't enough cybersecurity professionals. Sorry to cut in.
Ben: [00:47:28]
If any parent is watching this worried about their child graduating from university or whatever, not having a job, maybe they should study some sort of cybersecurity degree. Okay, that's good.
Sonal Rattan: [00:47:40]
I have one point to add there, which I found interesting. It's slightly tangential, but Imperial College, the university, has now consolidated the computer science and maths divisions into one faculty, which suggests they're expecting fewer jobs in the computer science market. Where there were two, they're combining them, again because of that overconfidence in what AI is going to do, which means you're going to get fewer and fewer people going into cybersecurity. We've got the opposite problem right now. The signals being sent out academically say the jobs are going to be automated by AI anyway, so why would you want that as a career? That shouldn't be the attitude towards it.
Ben: [00:48:26]
No, it's a bit more nuanced. You wrote that people should study cybersecurity or farming; we'll let you elaborate on that in a second. But to talk a bit more about Proton: you've defined yourselves as a privacy-first security company. How does that differentiate you in this world of AI? What is Proton doing to help organizations move safely into this world of using LLMs and so on? Maybe you could elaborate on how privacy-first meets this desire to use OpenAI, LLMs, and these things.
Eamonn Maguire: [00:49:18]
Our approach is more about data minimization. The principle is that if you go for local processing, minimal data retention, federated approaches, and end-to-end encryption, you shrink the breach surface. You can't leak what you don't hold. If an attacker got into Proton, they wouldn't get anything useful. Everything is encrypted, which is the way it should be. It wouldn't be great for Proton, because it would look bad if someone got into our systems, but they would not be able to get anything out of it. Most companies are under pressure to utilize the data they get from people. They're looking to monetize it, use it for advertising, or run analytics to understand customer behavior, which they either sell to a third party or keep to optimize their own processes. We don't have any of that. For us, privacy-first isn't just a value statement we put on the marketing pages of the website. It's a security architecture choice that makes your risk profile genuinely better, and one you can credibly communicate to sophisticated enterprise buyers and regulators: we don't hold that information, because this is how our entire security architecture is designed. I think the trust dynamic is shifting a bit, because users are becoming more sophisticated after years of high-profile breaches and the emergence of AI products that are visibly consuming their data. You also have people like Sam Altman, for example, telling people not to put sensitive information into ChatGPT. But people do it anyway, which is the foundational reason we created Lumo, which is what I work on.
Ben: [00:51:09]
What is Lumo? Sorry, I don't want to let you give too much of a plug, but what is Lumo? People may not know.
Eamonn Maguire: [00:51:15]
It's Proton's answer to an AI assistant like ChatGPT or Claude, but your data is your data. We have no idea what you're doing on the platform. You can use it for sensitive queries without worrying about your data turning up somewhere, or being used against you in the future if you run for political office or something like that. It's the same rationale as why we built Proton Mail. AI is coming to the forefront, and AI tools are the new attack surface for everyone. Instead of putting all their data into Google searches as before, people are starting to converse with AI about which products to buy or what's wrong with their bios, for example. All these things are now going into a different system than they used to, which was largely Google search. As that threat surface has emerged, in terms of privacy in particular, not just security, it created the need to build something like Lumo to put the data back in the user's control.
Ben: [00:52:22]
Fantastic. Filip, a question for you. You work with a lot of CISOs. How do they go about elevating the strategic importance of what they do and getting a bigger budget? To what extent is it possible to build bigger budgets purely on a sense of unknown unknowns? A lot of what they do is trying to prevent things that may never happen. What are the secrets to winning more resources and raising the profile of the cybersecurity team?
Filip Stojkovski: [00:53:00]
I think, at least with the CISOs I've worked with, the key is to transform the risk into a dollar amount. That means showing the risks you have and how much they would cost the company if they're not remediated or handled. I think there are two types of risk CISOs handle. First, the reduced risks, where you have some controls around them: monitoring, detections, people watching the infrastructure, and so on. Then, on the other side, there's the assumed risk: the things you cannot address and simply accept as they are. If the worst happens, the executives need to be fine with the fines or the cost that comes with that specific breach.
[00:53:57]
But I'm seeing more and more, especially from CISOs coming in and looking at our products, that one of the key things they ask is how to turn all these incident response metrics into a dollar amount. "I want to show my executives that, yes, we're handling threats faster, but what does that look like in dollars, and how much risk are we reducing now?" The more risk they can show they're reducing, the more budget they can ask for. On the other side, the more accurate their assumed-risk metrics, which in many cases are assumptions, say, "10% of my environment is unmonitored, and that could cost 10 million in fines if something happens there," the more likely they are to get, say, a 1 million investment just to cover that part. I think that's the best route to some extra budget.
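Filip's risk-to-dollar framing boils down to a simple expected-loss calculation. The following is an editorial sketch, not any panelist's tool; the probabilities and the `annualized_loss_expectancy` helper are hypothetical numbers chosen to match his 10-million-fine example.

```python
# Hypothetical sketch of the risk-to-dollar framing Filip describes:
# expected loss per year = probability of the event x impact if it occurs.
def annualized_loss_expectancy(probability_per_year: float, impact_usd: float) -> float:
    """Classic ALE: what a risk 'costs' per year on average."""
    return probability_per_year * impact_usd

# Assumed risk: unmonitored slice of the environment with a $10M potential
# fine, and an (assumed) 20% chance per year that an incident lands there.
unmonitored_ale = annualized_loss_expectancy(0.20, 10_000_000)

# A $1M monitoring investment that cuts the probability to 5% pays for
# itself if the drop in expected loss exceeds its cost.
residual_ale = annualized_loss_expectancy(0.05, 10_000_000)
risk_reduction = unmonitored_ale - residual_ale

print(f"ALE before: ${unmonitored_ale:,.0f}")   # $2,000,000
print(f"ALE after:  ${residual_ale:,.0f}")      # $500,000
print(f"Reduction:  ${risk_reduction:,.0f} vs a $1,000,000 investment")
```

The point of the exercise is exactly Filip's: a $1.5M reduction in expected loss is a board-legible argument for a $1M budget line in a way that raw detection metrics are not.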
Ben: [00:54:52]
Fantastic. Robin, final thoughts from you on this topic, because in your case it's fair to say you're creating a new category within cybersecurity. How do you get the attention of CISOs? What's your biggest argument? Is it about trust, and that once trust is lost, it's lost forever? What do you find works in terms of raising the profile of what you do on organizations' agendas?
Robin Bratel: [00:55:25]
Well, for us, because we've analyzed 288 million or 290 million files, we can turn up and show them how they're exposed. Often they don't know, which tends to grab attention fairly quickly. All the CISOs I've met have my full support and sympathy. All organizations are under siege; that's the way I think boards should look at this. If burglars sat outside your house constantly, waiting for you to go out or to leave a window open, you would take a different approach to security. You'd either not leave anything valuable inside the house, or be absolutely sure you'd locked everything tight and put bars up. I believe that's pretty much the reality for all CISOs, security teams, and organizations today, especially the large ones: they're permanently under attack. I also hold the view that unless you're in farming, Eamonn, you're pretty much running a digital business these days, which runs on the flow of data. And even farming is being taken over by OT and IoT; they've got drones and automation. Imagine in 20 years, if your farm runs automatically and someone hacks it, they could destroy all the crops.
When we talk about this security problem, a lot of people naturally think about people interfacing within the company: computers, laptops, and so on. They're not thinking about operational technology, SCADA systems, and IoT devices at the perimeter. A managing director recently spoke at an event about the fact that a large proportion of incidents these days are caused by IoT or devices at the edge. Consider the house metaphor again, with your house permanently under attack. If you keep building sheds and structures in the garden that are loosely connected to the main property but poorly secured, if you put in a new camera system, connect it to the internet, and its login is admin/password, because IoT devices ship without sophisticated encryption, you're just creating a window into the company, and that's how they'll get in.
[00:58:06]
I can't speak on behalf of boards; I think a lot of boards are taking it seriously. What I can do is plug something. I'll plug this book. I didn't write it, but one of my brilliant advisors, Andy Brown, did. He's on the board of Zscaler, a massive security company, and he was the former global CTO at UBS. You can buy it; it says 25 bucks on the back. It's seven steps for board directors, a step-by-step guide on how they should take security and cybersecurity much more seriously. I do think they need to understand how intimidating it is: everyone's out there, attacking all the time.
My final thought: if you do suffer an incident, I've seen multiple times now that there's a direct relationship between an incident and the valuation of a company; you can see it on the stock market, and there's loads of research on this. If your stock price goes down and your trustworthiness goes down, you'll probably have customer attrition. If you have customer attrition, then customer acquisition and retention suffer, and that hits the bottom line. If it hits the bottom line, you'll see valuation depreciation, and maybe trust in the brand declines. If you've suffered sustained issues, like M&S, Co-op, or Land Rover, what has that done to your brand value globally? To your trust globally? To the valuation of the business globally? The impact is probably far greater than the recovery costs, which for JLR were half a billion and for M&S 300 million. What's the actual long-term impact on the bottom line of those things? I just think you've got to take it more seriously. There has to be much more investment, and security teams need a lot of support, because they're dealing with an extremely complex problem. Going right back to something Sonal said at the beginning: you've got inside actors working against you, which is the hardest thing of all. It's either a mistake or intentional; they could be out to harm you deliberately. A very challenging topic, for sure.
Ben: [01:00:16]
Thank you very much. We need to wrap there because, sadly, we're out of time. We didn't have many questions, which I think reflects the fact that this is a complex subject. We're very grateful to have had four real experts with us today. I've certainly learned a lot, and some of my misconceptions have been corrected. We've clearly identified that this is a growing threat and that people should be more paranoid, but maybe we can finish on at least one optimistic note: we'll need more cybersecurity professionals.
Thank you everybody for joining. Thank you for plugging Andy Brown's book; maybe Andy Brown will come on one of our 4x4s in the future. Thank you to our speakers for giving up your time and sharing your expertise with our listeners and viewers. Thank you very much.