Securonix SIEMple Talks

Testing Deepfakes: The Reality of AI Impersonation Attacks

Securonix

What happens when your company's cyber defenses face a deepfake attack impersonating leadership? Bill Shearstone, Director of Information Security in the energy sector, shares the eye-opening results from a penetration test where his team used AI-generated deepfake technology to trick an employee into resetting credentials. Despite technical limitations and the employee's "gut feeling that something wasn't right," the attack succeeded - revealing crucial lessons about human psychology in security.

Drawing from both his extensive commercial experience and previous work at the NSA during the global war on terrorism, Shearstone offers practical insights on how organizations should approach penetration testing. Rather than repeatedly testing external defenses, he advocates starting tests with internal access to thoroughly evaluate detection capabilities, incident response procedures, and lateral movement controls. This approach uncovered a critical finding: security tools detected suspicious activity but failed to provide the complete picture of what was happening.

Shearstone emphasizes why cybersecurity's strength lies in continuous improvement and incident response preparation: "If I look at an attack coming in and I'm able to contain it without impacting business operations tremendously, to me that's just as good as preventing an attack." His pragmatic approach acknowledges that perfect prevention is impossible, making effective detection and response capabilities equally crucial for organizational resilience.

Speaker 2:

Welcome to another episode of SIEMple Talks. Today I have as my guest Mr. Bill Shearstone, Director of Information Security in the energy sector. Bill has over 15 years of experience focused on IT security with the Department of Defense and in the commercial sector, where he is today. So let's welcome Bill. Hello, can you tell us a bit about yourself?

Speaker 1:

Hi, how are you? Again, my name is Bill Shearstone. I've been in the information security business for quite a few years. I had an interesting stint with the Air Force and the Department of Defense, and once I finished up there, I decided to go into the civilian sector, where I've worked for companies in the insurance industry. And here I am now, finally working in the energy sector.

Speaker 2:

And today, Bill, you are in a leadership position. So what do you think is the biggest challenge for a security leader today?

Speaker 1:

You know what? It's almost cliche, but it's AI, and we're looking at it in different ways. We're a regulated industry, and we also have some intellectual property concerns, and then you have to balance that with the benefits that come with these new AI tools and technologies. So one of the things right now is we realize we have to embrace it and bring it in. We're just trying to figure out the best way to bring it in in a controlled manner, so that we reduce our risk and still capitalize on the benefits AI is showing us. We've been focusing on that. Right now we're working on a couple of pilots with different technologies, and before we proceed with these pilots, we've got to give our executive team confidence that the risks are controlled and the risks are known.

Speaker 2:

Perfect, and I'll tell you, you now hold the record for bringing up AI within one minute of the start of the podcast. I think it would be very hard to break that record, so you're going to hold on to that one for a very long time, right? But yeah, you mentioned having some initial work and experience with AI on the defense side, but I know from previous conversations that you've also experimented with it as part of pen testing activities, so why don't you tell us a little more about that?

Speaker 1:

Sure, let me frame this a little bit. When we do our pen tests, we have a good relationship with the vendor that we use. We do our traditional ones, where in a particular pen test we just ask what's vulnerable on the outside and don't tell them much. But then right away we get into more collaborative work with our pen testers. We give them a lot of access to our internal systems so they quickly get a good view of what the internal vulnerabilities are, aside from trying to break in through a phishing attack or social engineering and what have you. So while we were having this dialogue, and this was almost a year or so ago now, the testers said, we've done some social engineering with you before, phishing. Why don't we try a deepfake? And I thought, well, that's interesting. So we thought about it a little, came up with a scenario, and I thought this would be a great training opportunity, because if we got some good information out of it, I could use it for security awareness. So we came up with a scenario where we would try to trick our help desk technician into resetting a person's password or MFA. As we talked it through, I said, you know what, I don't really want to target somebody in my organization with that. So what we ended up doing is we brought our pen testers in as contractors. Basically, we gave them a contractor account on our system, and then we'd run through the scenario. When we were doing the initial planning for this, we had to do a couple of dry runs.

Speaker 1:

Believe it or not, the technology at the time we did this wasn't as mature as it is now. At the time, you'd read the article about the deepfake that happened with the Hong Kong company and how they lost millions of dollars. Well, the tools our testers were using weren't as good. For example, the audio wasn't there and the video was a little bit choppy. So we went through that, and it was like, okay, I'm not so sure this is going to work. They came back a couple of weeks later with an upgrade to the tool. Let me back up a little bit: this tool is open source. It's free. Anybody can get it.

Speaker 2:

Right, it's not like they're spending insane amounts of money to do a deepfake. They're using open source software.

Speaker 1:

It's open source. The only challenge was having enough compute power to actually use it. That was one of the reasons it didn't work well at first. So they did a dry run with me, and I was the person they were going to impersonate. Now, I have hair and I'm clean-shaven. What they did is they threw up a video of the pen tester using my face. Think about a typical pen tester: he had the long goatee beard, and this person happened to be bald. So when I first saw it, I saw my face with a goatee on a bald head. Oh, that was hilarious. They were laughing. So I was like, all right, you know what? I think it's good enough to get working, but you'll have to tweak it a little bit. So the tester decided he was going to shave and wear a hat. Okay, that might work.

Speaker 1:

So then we decided to go through this exercise. We had four people involved in this. We'll call the first one the target: it's actually the pen tester whose MFA we wanted to get reset. Then we have the impersonator, the person who impersonates me and actually runs the deepfake. And then we have our victim, which is a help desk technician within our company. Now, the way we have it set up, it's hard for somebody on the external side to initiate a Teams call, and we wanted to do this via Teams, so we actually needed somebody to facilitate the conversation. So we ended up using another person on my security team as a facilitator, and basically we set it up with our help desk. Again, this would have been a lot harder to do if we didn't have some inside help. But the purpose was not to embarrass or trick the target; the purpose was to come up with a training scenario, a feasibility scenario, that we could use for training. So we went along with it.

Speaker 1:

So the person on my security team set up the call, reached out to the help desk guy and said, hey, we're having a hard time getting this contractor to reset his MFA, can you reset his MFA for him? And when we went through the scenario, the person who was impersonating me didn't have audio, because the tool actually couldn't handle the audio. So imagine him on video waving his hands, no audio, and typing: hey, can you reset this contractor's MFA? Because the call was initiated from someone on the inside, it carried that trust, even though the facilitator didn't say a word. The technician saw my face and thought, okay, yep, this is legit, I'll go ahead and do it. So it was good enough to convince the help desk technician to reset the contractor's MFA. That happened, we closed it all out, and then we had a feedback session afterwards.

Speaker 1:

I was actually out of the office on vacation at the time, and when I came back, they had recorded the video. So I looked at the video, and oh my god, it was uneasy seeing my face interacting when that wasn't me. What the tester had done is he was clean-shaven, and because he was bald, he wore a hat. I never wear a hat, but still, just seeing my face on that was amazing. So then I decided to break the news to the help desk technician, like, hey, I'm sorry, we set you up.

Speaker 1:

The reason we did this is for training.

Speaker 1:

He was embarrassed, and he was like, you know what, I'm sorry, things didn't seem right to me.

Speaker 1:

It's like what do you mean?

Speaker 1:

It's like, you know, I just had this gut feeling that things weren't right.

Speaker 1:

But because I saw your face, and my partner was on the call, even though he didn't say a word, he just saw his Teams window, it brought legitimacy to it.

Speaker 1:

And then we walked through the video and I showed him: listen, did you notice? Because it was an inside call, the contractor who had my face even had the contractor's name at the bottom of his Teams tile. But because he saw my face, he didn't even pick up that it was a different name on Teams. So again, he was embarrassed, and I said, you know what, I apologize, but we wanted to do this for training. The outcome is that we actually proved that even at a company like mine, a deepfake is feasible and can be done. The good news is I took pictures of this. Again, we didn't sell the guy out; we didn't want to embarrass him any more. Every year we do in-person security training, and I put that up to show everybody that yes, it's feasible. And it's neat to see the reactions, because people read about it, but if you see that it can actually happen, it really sets in.

Speaker 2:

It makes it closer to reality, right? One thing is reading the news, and the other is seeing it in training.

Speaker 1:

It happened right here, in a test at the company I work for, with people I see here live. That brings it far closer than just reading about it in the news. And when I went through the training scenario, everyone was like, well, who fell for it? I said, no, I'm not letting his name out. He ended up telling people himself: yep, it was me, it was me. I didn't want him to, but he felt the need to get it off his chest that it was him, so that's good. Because of that, I think it was an important tool to give some realism that things like this can happen.

Speaker 2:

Yeah, that was very good. An interesting thing about listening to this exercise is that there are two points that caught my attention, and they are not necessarily related to the technology or the innovation around deepfakes. The first one, and I want to bring it back later, is that you ran these exercises having already given some level of internal access to the pen tester. I really like that approach and I want to talk more about it, because I think a lot of people do the traditional black-box pen test from the outside and waste a lot of time and resources doing it over and over again. But what I want to discuss now is the training outcome of this exercise. When you're showing this to people, what we want them to learn is that they may be tricked by a technology like this, and that they should be more suspicious about non-standard requests or things that don't follow the required process. And one of the outcomes of that is that we are asking them to challenge authority more frequently.

Speaker 2:

That's hard, right? We know how much people normally fear going into a conversation with someone who looks like the CEO and sounds like the CEO, and having to challenge the requests they're making: no, I won't do what you're asking because it doesn't follow the established process. And this is already hard enough for, let's say, the regular employee. Now, one piece that cannot be neglected for this to work is training for people in authority positions, the leadership, so that they get used to accepting being challenged in situations like that. Because it doesn't help at all if you train your entire team to challenge directors, vice presidents, and C-level people in the organization, and then when they do, they are berated for getting in the way of the business, not letting them do what they need, et cetera. So have you experienced that type of conversation as part of the training efforts you've gone through after this exercise?

Speaker 1:

You know what? That's very interesting. I never looked at it from that perspective, and I'm glad you brought it up. When we did our in-house training, our executive team was part of it, but I never addressed it to them: hey, if we want to encourage people to challenge you when something doesn't seem right, are you willing to accept that, agree with it, and actually commend it? I did not address that. So thank you, that's something I will definitely look into. I meet with my executive team a couple of times a year to give them a status, and that's something I will definitely bring up. Thank you.

Speaker 2:

We are asking people to challenge you if you ask for non-standard things, so please don't come down on them when they do it, because that's what they're being trained for. That's something we really need to think about when going through this type of effort. Now let me go back to that point about pen tests and running those scenarios internally, or even with some help from internal resources. When I see pen tests, it's very often, as I said a few minutes ago, the black-box scenario coming from outside, with people saying, try to get in. I believe that's a very big waste of resources, because you may be breached quickly by doing that in the beginning, but if you have any kind of continuous improvement capability, you will start closing those initial access doors quite fast. So the things that may be interesting from an external vulnerability point of view start to run out.

Speaker 2:

It may become hard for the pen testers to find them in the following tests, and then they start resorting to things like phishing, for example, or dropping USB drives around the building, or physical pen testing, et cetera.

Speaker 2:

But the fact is the initial entry point is almost always guaranteed to succeed if they try hard enough.

Speaker 2:

Right? We know that it's pretty hard to completely eliminate the initial access step of attacks. So why do we keep testing that scenario over and over again? Of course we can lower the probability of it happening, but we know that eventually it will, and if we set the objective and the scope of the pen test around that, the pen testers will often waste a lot of resources on it and won't spend much time on the following steps, where you test your ability to resist situations where initial access has already been obtained. So I really liked that in your pen test you provided the initial access for them in a scenario, because it shows you are testing layers of security that are very often neglected in these pen test exercises. I wanted you to tell me a little more about how you see that, and what kind of variation you put into these pen test scenarios so you're able to test the different security controls you have in place.

Speaker 1:

Yeah, I totally agree with you, and that's why I take this stance: it's just a matter of time before someone falls for a phishing link, or a vulnerability that we're not aware of today comes up tomorrow and gives access. That's 100% why we look at it that way. I do have to say, though, it's always good to have at least a cursory check on our perimeter, so we make sure we definitely do that. And there are a couple of reasons why I do the more collaborative, internal-access testing. We have our internal vulnerability scanner. Absolutely, we see the highest vulnerabilities and we mitigate them according to the priority of the asset. But even though our tool tells us they're high risk, does that mean they're exploitable? You don't know until you actually have someone try to exploit them. Secondly, another thing I like about this internal aspect is that we test our response. One of the things I look at is: are these guys going to trip my EDR? Are they going to trip my SIEM? That's another big area where I want to make sure that, yes, if we give them some access in there, they can start lighting up some of these alerts, which lets us exercise our incident response plan.
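Shearstone's point that a scanner's "high risk" rating doesn't equal exploitability can be sketched in a few lines. This is a hypothetical illustration, not his team's actual process: the findings, CVE IDs, and the known-exploited set are all invented, standing in for something like a feed of vulnerabilities observed being exploited in the wild.

```python
# Made-up scanner findings; in practice these would come from the
# internal vulnerability scanner Shearstone mentions.
findings = [
    {"cve": "CVE-2023-0001", "cvss": 9.8, "asset": "hr-portal"},
    {"cve": "CVE-2023-0002", "cvss": 7.5, "asset": "file-srv"},
    {"cve": "CVE-2023-0003", "cvss": 6.1, "asset": "vdi-07"},
]

# Hypothetical set of CVEs with exploitation seen in the wild.
known_exploited = {"CVE-2023-0002", "CVE-2023-0003"}

def prioritize(findings, exploited):
    """Order findings with known-exploited first, then by raw CVSS.

    A 9.8 with no known exploit path can rank below a 7.5 that
    attackers are actually using -- the tester's-eye view of risk.
    """
    return sorted(
        findings,
        key=lambda f: (f["cve"] not in exploited, -f["cvss"]),
    )

for f in prioritize(findings, known_exploited):
    flag = "EXPLOITED" if f["cve"] in known_exploited else "theoretical"
    print(f["asset"], f["cve"], f["cvss"], flag)
```

The sort key is a tuple: membership in the exploited set first, then descending score, which is why the 7.5 outranks the 9.8 here.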

Speaker 1:

So, with that in mind, there are two things that we do. First, we let them put in what they call a drone: we connect it to our network and it's basically unfettered. We do have some physical controls on that. We plug it in, make sure we get alerts on it, and then we'll go ahead and release some of those controls, basically our NAC. The second piece is we give them a user account that's not a privileged user, so they have an entrance into our environment, and to see what they can do, we give that user account one of our virtual desktops.

Speaker 1:

I have found that that's a challenge, because a lot of the stuff they try to do gets tripped up by our EDR. So they're actually battling our EDR while trying to run their attacks. It's good to have them take some looks at that, again to exercise our EDR tripping, but then they're spending time trying to beat a commercial EDR, and that's almost as challenging as trying to break in from the perimeter: you're wasting resources. The nice thing about having that unfettered access from the physical endpoint is that they're able to scan through things and, using their vulnerability tools, pick out vulnerabilities that they think they can exploit, not just ones that are deemed high risk by our tool. And that's where you see the bang for the buck, because our pen testers have been doing this for a while; they're good at it. The testers we use are actually married up with our incident response retainer vendor, so there's some dialogue there about what they're seeing out in the wild, what's being executed, what's being successful, and they're able to apply those things.

Speaker 1:

So when we have them inside and they're trying their lateral movement, that's where we're testing our SIEM capabilities. The neat thing that happened on this last test is that they lit up our SIEM and some of the other tools that we have. So when we met up the next morning, I said, hey, these are the alerts we got, and here's what I think you did. And I thought I had it, like, wow, this is great: these are the alerts I mapped out, and these are the services and servers I think you were on. Then they told me what they actually did. There was some overlap, but it wasn't exact.

Speaker 1:

So to me that was extremely eye-opening, because what that tells me is that my SIEM and my tools may tell me something's going on, but they may not tell me exactly what's going on. If we see something like that and it's out of my league, right away I'll execute my incident response plan and get the IR team that I have on retainer coming in. But it's neat to see that the information you're getting from your tools may not be exactly what's happening. Now, if we had gone through our plan and contained some of those hosts and accounts, it probably would have slowed them down, absolutely. But that wouldn't be the end-all be-all for seeing what was compromised. So even if I see something like this again, I've got to bring in the experts to make sure that, even if I contained it, I really did contain it and really did eradicate it.
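The gap Shearstone describes, where the tools confirm something is happening without showing the whole picture, is at heart a timeline-merging problem for the investigator. Here is a minimal sketch of that idea; the alert records and field names are invented for illustration, not any vendor's schema.

```python
from datetime import datetime

# Hypothetical alerts as they might arrive from two different tools.
edr_alerts = [
    {"time": "2024-05-01T02:14:09", "host": "vdi-07",
     "detail": "credential dumping blocked"},
    {"time": "2024-05-01T02:21:44", "host": "app-srv-3",
     "detail": "suspicious service install"},
]
siem_alerts = [
    {"time": "2024-05-01T02:15:30", "host": "vdi-07",
     "detail": "anomalous SMB fan-out"},
]

def build_timeline(*sources):
    """Merge alerts from several tools into one time-ordered view."""
    merged = [a for src in sources for a in src]
    merged.sort(key=lambda a: datetime.fromisoformat(a["time"]))
    return merged

timeline = build_timeline(edr_alerts, siem_alerts)
for alert in timeline:
    print(alert["time"], alert["host"], "-", alert["detail"])

# Hosts that produced at least one alert. Anything the testers later
# report touching that is NOT in this set is the "incomplete picture"
# the investigation has to fill in.
alerted_hosts = {a["host"] for a in timeline}
```

Seeing the alerts side by side makes the gaps visible: the timeline shows what the tools saw, and the debrief with the testers shows what they actually did, so the difference between the two is the blind spot.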

Speaker 2:

Perfect, and there are some very valuable lessons in what you're describing. I think the first is the value of investigation. Normally we look at the flow like, okay, there's detection and then you respond; you take the containment actions you mentioned. But, as you described, there were actions they took as part of their attack that you didn't see. You saw some of them, and there was enough for you to get suspicious and eventually trigger a response, a risk mitigation action. But there were still things that happened that you didn't see.

Speaker 2:

So we see how important it is to have an investigation step after that initial detection, so you can really understand the full characteristics, the full reach, of the attack that is going on.

Speaker 2:

And I believe that when you look at how technology vendors present how their tools work, many times they paint a scenario where, from the point of detection, you have the full picture of what happened.

Speaker 2:

But this investigation step is very important, because there will be pieces that touched blind spots in your environment, or things you haven't covered from a threat detection point of view, or that were just not suspicious enough. Even if you're using something like anomaly-based detection, it may not hit a threshold that raises enough suspicion to be looked at. So I think that really shows the importance of investigation, and many times that's a type of investigation that still requires humans. We talk a lot about bringing AI to the defense side, and I think it can really help. Sometimes, and I think it's feasible, we may talk about eliminating the level-one type of analyst with AI capabilities. But when you hit this point of investigation, when you're looking for things you weren't able to detect initially, that's really where humans shine, and I think that's where we're going to keep seeing humans being very different from AI for some time.

Speaker 1:

It's neat; the person I work with is all for looking at how AI will take care of that level one. But if you look at it, AI is just a tool, at least today. You can't totally rely on it. It helps you, 100%. But you're absolutely right, I think you need that human validation, that human inquisitive piece, that human gut feel, as I mentioned with the victim of our deepfake. Something didn't feel right. Sometimes when you look at things and piece them together, you may not have it quantified in front of you, but you have the gut feel that this isn't right, we need to do some more digging, and I need to bring in some help. So I think the intuitive nature of the human is important, at least from my perspective, when I look at these things, and that's something that can't be replaced by any type of technology.

Speaker 2:

Perfect. Another thing you mentioned about these pen test exercises coming from the inside was that battle between the pen tester and the EDR technologies. I think that happens a lot when they are very focused on replicating the behavior of typical malware, because these EDR technologies are very good at detecting malicious software. But I wonder what happens if the pen tester goes more toward living-off-the-land techniques, or looks more at the application layer when trying to move laterally, obtain additional permissions or privileges, or get access to information. Instead of trying to run software on the endpoint, or move laterally using scanners on the network side, which we know happens and which we have a lot of instrumentation to detect, what would happen if the pen tester just started opening the business applications they can see, either on the desktop or on the intranet, and started trying to get into those systems?

Speaker 2:

What I used to do when I was a pen tester in these internal scenarios is start doing SQL injection on internal apps. You see all these organizations taking a lot of time to secure their external-facing applications, but their internal applications are really easy to break into. You just use typical SQL injection and get full access to a database full of proprietary, sensitive information, without running a single piece of malware or a malicious tool on the endpoint. So what do you think about this difference in the behavior of the pen tester: instead of trying to replicate malware, going more toward an attacker profile that works at the business level, the application level, to accomplish their objectives?
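The internal SQL injection the host describes can be shown end to end with a toy database. This is a minimal sketch using Python's built-in sqlite3; the table, accounts, and payload are all illustrative, and the same tautology trick applies to any query built by string concatenation.

```python
import sqlite3

# In-memory database standing in for an internal business app's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (user TEXT, secret TEXT)")
conn.execute(
    "INSERT INTO accounts VALUES ('alice', 's3cret'), ('bob', 'hunter2')"
)

def lookup_vulnerable(user):
    # Classic mistake: concatenating user input into the SQL string.
    query = f"SELECT user, secret FROM accounts WHERE user = '{user}'"
    return conn.execute(query).fetchall()

def lookup_safe(user):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT user, secret FROM accounts WHERE user = ?", (user,)
    ).fetchall()

# A legitimate lookup returns one row either way.
normal = lookup_vulnerable("alice")

# The classic payload turns the WHERE clause into a tautology and dumps
# every row -- no malware on the endpoint, nothing for an EDR to flag.
dumped = lookup_vulnerable("x' OR '1'='1")

# The same payload against the parameterized version matches nothing.
safe = lookup_safe("x' OR '1'='1")
```

This is exactly why application-layer attacks slip past endpoint tooling: from the EDR's perspective, the attacker is just a user running an approved application against an approved database.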

Speaker 1:

You know, you're absolutely right. If you just have a compromised user and they're just using their own hands on the keyboard, it's a lot harder to detect. What our testers tried to do, though, is use their tools to make that job easier, and the tools they were using were getting flagged by the EDR. I think the challenge there is time: if I was able to give a pen tester a month, sure, he could do that hands-on-keyboard testing, but since they don't have that much time, they try to use their tools. So when we do give them the account.

Speaker 1:

There are some apps that we point them to look at, and they do try to do those cursory looks on those apps. So you're right, it's important to actually test some of those. The only problem, the challenge, is the time and the resources that you have, because we only have them for a limited time. In a perfect world, 100%, it's almost like another level of application testing, another level of testing the roles in our database, for example. If you have the time to do that, 100%, that's definitely the way to go.

Speaker 2:

Yeah, and to be fair, we also want to replicate the threats that are more likely to be present in our environment, and we know we are most likely to face the typical malware-based attack rather than someone spending time behind the keyboard trying to break into internal applications. Of course, that may be a more extreme scenario, but sometimes I just wonder why pen testers keep using the standard toolbox of malware-like tools that will be so easily detected by a well-deployed EDR, so that we end up just making sure we detect things we are already prepared to detect and respond to. I also think that enables the adoption of solutions like breach and attack simulation, because many of those solutions end up replicating this type of pen tester behavior, and then you have something that, if you instrument your environment with it, you can run more frequently and in a very consistent manner. So it gives you an additional level of assurance that your environment is working properly.

Speaker 2:

Now let me change gears a bit. As I told you when we were chatting initially, I was going through your profile and I noticed a three-letter agency among your previous jobs, and that always brings a lot of curiosity. We look and say, oh, NSA, that's cool. So tell me a bit about working for a three-letter agency, especially the always-suspicious NSA. What does it look like? How does the job feel? What are the typical challenges in an environment like that?

Speaker 1:

I'm retired Air Force, and I've got to say my career in the Air Force was absolutely rewarding. The people I worked with were top notch, and having that mission-oriented culture was great. I'm just so thankful I was able to culminate it by working for the NSA. It is by far the most challenging and rewarding job I've had in my career. The NSA has two pieces: you have your offensive side and your defensive side.

Speaker 1:

Well, in my first assignment there, I actually worked on the defensive side. At the time, they were looking at deploying potential satellite technologies that basically used the same networking protocols we have here on Earth. So they were looking at how to secure these routers in the sky, basically, and I got a chance to work on that. You work on some of the requirements and then work with the contractors, identifying some of the attacks that happen on a terrestrial network, because those same types of attacks can happen with a satellite system, so you need the same types of controls. And, dating myself here, these were basically firewalls and what have you.

Speaker 1:

And then I had the opportunity to work on the offensive side, and this was during the global war on terrorism. Just like any spy agency, you have these assets, these accesses, that you glean intelligence and information from. My job at the time was this: during the global war on terrorism, sometimes we needed to give that information to the boots on the ground to take action. So I had to do what was called an intelligence gain-loss assessment. Whenever you're gaining intelligence from some sort of access and you go ahead and run an operation, a lot of times you're going to give up that intelligence asset. It's basically out in the open now, so you can't use it anymore.

Speaker 1:

So it was my job to quantify how important the intelligence coming from that access was, and then the commanders, you know, the forces on the ground, would decide: does this action warrant losing that intelligence? That's basically called an intel gain-loss assessment. Sometimes they decided, you know what, that information is too important for us, we're going to hold off on this. Other times it was nope, this mission is too important, we're going to go ahead and press.

Speaker 1:

You know, sometimes lives were on the line with this information, so that was a no-brainer. No matter how valuable the intel was, we were going to go ahead and use it when people's lives were in jeopardy. Unfortunately, I can't go into any more detail on that, but I do have to say it's interesting, because it's like an attacker, if you relate it to what I'm doing now. If somebody has access to a network, are they going to sit tight and wait, or are they going to take a little more risk, see what more they can do, and run the risk of getting caught? So it's similar to that aspect, and it still applies today.

Speaker 2:

Yes, it is an interesting trade-off assessment, right? I remember reading memoirs related to the Bletchley Park operation with Alan Turing and company, and they had a similar situation. When they started breaking the Enigma codes, they could see the positions of the German U-boats moving to go after the convoys, the ships in the North Atlantic, and they had to think: OK, are we going to save these ships and make it clear that we are listening to their communications, or should we let them take those losses so we can use this informational advantage in a more strategic situation? So it is a very hard trade-off to assess, especially because, as you said, many times it involves lives. I can't imagine how hard it is to go through that type of assessment.

Speaker 1:

Yep, and you're exactly right. The same things we were doing back in the 1940s are the same things we're doing today. Exactly.

Speaker 2:

And Bill, let me ask you it's probably my favorite question for the podcast that I'm asking everyone since we began what do you think that we, as in the cybersecurity community industry, what are we doing right?

Speaker 1:

We're not resting on our laurels. We realize the threat is constantly improving, it's dynamic, they're finding different vectors to get in, and we recognize that. So we always have to keep working to get better. And we do that by realizing that our technologies can't keep up with the bad guys, so we have to make sure that our incident response plans are thought through, actionable, and practiced, so that when it does happen, we're actually able to contain the threat.

Speaker 1:

If I look at an attack coming in and I'm able to contain it without impacting my business operations tremendously, to me that's just as good as preventing an attack, because my business worked fine, it did not cost my company any money, and I did not have any data exposure. So that's a win to me. That's not an exposure, that's not a problem; it's just something you did to combat the threat. Granted, it'd be nice if you didn't have to do it in the first place, sure, but again, to my mind it's still a success if you're able to contain it, able to recover, and able to move on without really impacting the business.

Speaker 2:

Great, yeah, that's true, right. I think we wouldn't be able to rest on our laurels anyway, because in a couple of years there wouldn't be any laurels left. Everything would be breached very fast. We are forced to evolve, and I think it is indeed a good thing, in terms of what we do, that we keep evolving and keep up with the threats. I also remember a quote from Marcus Ranum about how unsafe the internet will be. He said it will be as insecure as we can afford it to be. We do our work in a way that lets us live with the risk that is out there. Of course, we won't do much more than that, because it's quite expensive, and as much as we try to avoid it, we get in the way of people doing business, doing their thing. So we try to avoid disruption from our side as well. But I like that point he makes: we're going to keep being as insecure as we can afford to be.

Speaker 2:

And before we close here, I want to bring some technology into the conversation. First because, for the record, my employer is a SIEM provider. You mentioned seeing some of the activity from the pen tests in the SIEM, and all the work disrupting the pen test activity done by the EDR, and many times I end up seeing questions about why to keep a SIEM, or why to have a SIEM, if the EDR is doing such a great job. So let me ask you that: why do you have a SIEM instead of just relying on the EDR that is doing a good job?

Speaker 1:

The EDR is on that endpoint, and granted, it does a great job on that endpoint. Well, I use a lot of cloud services, so then I have my identity, and my identity isn't really covered by that EDR, so I need those things coming in also. We do have network traffic, basically firewall logs; there's some stuff there, I need that as well. And then I have my other systems that feed into that.

Speaker 1:

You know, my VPN, other things. And the thing about the SIEM is, when I'm trying to track those pen testers, it'd be hard for me to go to my EDR, then go to my VPN, then check my firewall logs, then go check my identity provider logs. That's a lot, whereas I can rely on my one pane of glass to see all my logs that come in there. Secondly, since you have these disparate sources, you need to be able to put them all together and pick up those anomalies. And that's where, yeah, you need to correlate these things from my identity side, from my network side, from my EDR side, and be able to put those together, and then marry in things like email logs, for example. My email is a good input into that. And I think it's because of that correlation, because of those behavior analytics, that you see, hey, this normally doesn't happen, and it picks it up.

Speaker 1:

That's where you have the value of the SIEM, and honestly, I would be very hesitant to work for any organization that does not have that type of technology, something that brings in these disparate sources to give me my single pane of glass, something that I look to not only for my threat hunting but also for my response. And granted, I might get triggered from the SIEM and might have to go to my EDR and look at some more specifics, and that's okay. But still, having the correlation of all those disparate sources, I think, is invaluable.
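To make the cross-source correlation Bill describes a bit more concrete, here is a minimal, hypothetical sketch of the idea: events from EDR, VPN, firewall, and identity sources are normalized to one shape, grouped per user, and a user is flagged when activity spans several distinct sources inside a short time window. All event data, source names, and thresholds below are invented for illustration; a real SIEM's correlation rules are far richer.

```python
from datetime import datetime, timedelta

# Hypothetical events from disparate sources (VPN, identity provider, EDR,
# firewall), normalized to a common shape: (source, user, timestamp, detail).
events = [
    ("vpn",      "alice", datetime(2024, 5, 1, 2, 14), "login from new country"),
    ("identity", "alice", datetime(2024, 5, 1, 2, 16), "password reset"),
    ("edr",      "alice", datetime(2024, 5, 1, 2, 20), "new admin tool executed"),
    ("firewall", "bob",   datetime(2024, 5, 1, 9, 5),  "allowed outbound 443"),
]

def correlate(events, window=timedelta(minutes=30), min_sources=3):
    """Flag users whose activity spans at least `min_sources` distinct
    sources within one `window` - a crude stand-in for the cross-source
    correlation a SIEM performs."""
    by_user = {}
    for src, user, ts, detail in sorted(events, key=lambda e: e[2]):
        by_user.setdefault(user, []).append((src, ts, detail))
    flagged = {}
    for user, evts in by_user.items():
        for src, ts, detail in evts:
            # Distinct sources seen within `window` of this event.
            close = {s for s, t, _ in evts if abs(t - ts) <= window}
            if len(close) >= min_sources:
                flagged[user] = sorted(close)
                break
    return flagged

# alice trips the rule (3 sources in 6 minutes); bob's lone firewall event does not.
print(correlate(events))  # → {'alice': ['edr', 'identity', 'vpn']}
```

The point of the sketch is the normalization step: no single tool's console sees alice's VPN login, password reset, and EDR alert together, but once the events share one shape, a simple windowed group-by surfaces the pattern.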

Speaker 2:

Yeah, that's right. Having a point where you can get this unified view is really crucial. We know that because of how fast some of the other detection and response capabilities grow, we're always playing catch-up in trying to unify everything in the same place.

Speaker 2:

You get to that point where you're about to have everything there, and then you acquire or buy something that's not fully integrated into the SIEM. Right, oh, I'm buying a new cloud threat detection technology, and it's not fully integrated into the SIEM, so now you have this side console and you need to do a few things. But normally, I think these days it's very hard to find a technology that can't at least send alerts to the SIEM, so you can start the decision-making process, should I go and look further, should I do something about it, from that central point. And I think that function is really very important.

Speaker 2:

I think some people come and say, oh, it's the most important piece of the architecture. I like to say it's a foundational piece: a base for everything else that you have. And of course I may be a little biased because of who I work for, but I believe it's going to be around and still going to have a very strong role in a security architecture for a long time. I agree. Okay, we are about at that time limit here, so I want to thank you for coming to the podcast. It was a great conversation. I really liked going in depth into those pen test scenarios: how we define which scenarios to run, where the pen test will happen from, all the points about using the deepfakes, the outcomes, and the follow-up actions related to training the employees and training the leaders. I think we ended up having a very interesting conversation, so I'd like to thank you for coming and going through that.

Speaker 1:

Well, thank you, this was very enjoyable. I really enjoyed this dialogue, and you know, it's nice when you're able to interact in something like this and take a piece back. From this conversation I'm going to take a piece back to make sure my executive team is aware that, hey, it's okay to get challenged, to make sure it is actually you who's directing that action. So, thank you.

Speaker 2:

All right, great. So thanks everyone for listening and see you in the next episode. Bye.
