Breach Ready Radio
Breach Ready Radio is a series of candid conversations with the practitioners, researchers, and security leaders who are changing how defense actually happens. These are the people building new approaches, experimenting with new ideas, and pushing security operations forward in real environments.
Each episode explores what they are working on, what they are seeing in the wild, and how security is evolving across the SOC, threat intelligence, AI, and incident response.
The best insights usually come from the stories. The investigation that took an unexpected turn. The tool that changed how a team works. The moment someone realized the industry needed to rethink an old assumption.
We talk to the people behind modern defense. What they are building. What they are learning. And how security operations is changing in real time.
Hosted by Sean Ferguson, Securonix.
Fast Answers, New Problems with AI in the SOC
AI is moving into security operations fast, but the gap between a strong demo and something you can trust in production is still bigger than most teams want to admit. That gap is where risk starts. Eddie frames that early by pushing back on the idea that AI is about reducing headcount and arguing that the teams getting the most value are using it to amplify their best people instead.
In this episode of Breach Ready Radio, I sit down with Eddie Kim, Principal Advisor in AI and Modern Data Strategy at AWS, for a practical conversation about what it really takes to make AI useful inside security teams. We get into the difference between an assistant and an agent, why trust changes the moment a system can take action, and why clear boundaries, logging, limits, and auditability are the real bar for live environments.
We also dig into what breaks as organizations move from one agent to many. Specialization is powerful, but coordination, explainability, governance, and failure handling all get harder in a mesh environment. Eddie walks through why production readiness is not just about model quality. It is about infrastructure, observability, session handling, tool connectivity, and knowing how the system behaves over time at scale.
The conversation gets especially practical when we talk about what leaders should actually measure. Not agent counts. Not token spend. Outcomes. Faster response times. Fewer false positives. More incidents closed with the same team. Less burnout. Better work. That is the difference between real value and an expensive demo.
We close on the leadership challenge. Security teams cannot afford to show up late. Eddie makes the case for partnering early with the business, reading past the marketing speak, and asking harder questions before trusting any vendor claim. If you are sorting through AI promises in the SOC right now, this episode will give you a better lens on what matters and what to push on.
Welcome To Breach Ready Radio
SPEAKER_00: Welcome to Breach Ready Radio. This is where we sit down with the people shaping cybersecurity to talk about what they're seeing, what they've learned the hard way, and what's really happening behind the headlines. From real-world breach stories to sharp perspectives on where the industry is heading, we keep it practical, honest, and useful. I'm your host, Sean Ferguson with Securonix. Let's get into it.
From Business Dev To AI Risk
SPEAKER_02: Just by way of introduction, I get the pleasure of working with industry leaders like Databricks, Snowflake, and Anthropic in the AI and data space, and also some names you might know from every day: DoorDash, Box, Zoom, and overall maybe a couple hundred technology companies who are building out their AI strategy. So yeah, fun fact with that background, I'm in the unique position of not being a security expert. I actually come from the side of business development and product development, and from that lens, needing to understand legal and compliance with respect to applied AI use cases in a way where we're balancing out that innovation and security. So really excited to be here.
SPEAKER_01: Well, thanks for joining us, Eddie. Actually, I'm gonna bounce off of that a little bit too. It's kind of neat to see that there is some kind of translation and transition from what you were doing into this field, similar to mine, coming in with a background in brand and communications. You already got a taste of some of the legalities of certain things, and you're able to bring that into this, lean on that expertise in your position, and obviously succeed at it.
SPEAKER_02: Yeah, just by way of career development, these are things that you don't plan, right? I've been with AWS for maybe five years, before that Oracle for four years, building out the cloud infrastructure, and Microsoft for 10 before that in a corporate finance role. All these little experiences accrue to where you have a well-rounded picture of what a business is going through, what change management looks like. I mean, at Microsoft, do you remember having a CD with Office on it, buying that CD from Best Buy, putting it into your computer, and installing Office manually? Now, who would ever think of that? It's crazy. But back in the day, 10 to 15 years ago, moving away from that was seen by Wall Street as career suicide, just shooting down that cash cow, and they were deathly afraid of it. But yeah, now we're in this movement to AI and agentic AI, and it's almost like what's old is new again, right?
The Biggest Agentic AI Myth
SPEAKER_01: Yeah, it's funny about that with the CDs. Now you have a lot of people lamenting the Adobe subscription model and the Office subscription model, and they want to go back to the CDs, back to just, hey, I want to pay once and put the CD in there. I was actually thinking a couple days ago that I do miss walking into a Staples or a Best Buy or even a GameStop and having the computer section be this back alley, behind-the-red-curtain, no-one-ever-goes-there, off-limits kind of thing. The nostalgia of the rack of CDs back in the day. Love it. You swore growing up you'd never be that person who says, oh yeah, back in the day, and yeah, here we are. AWS has identified agentic AI as the next major multi-billion-dollar frontier in cloud computing. Obviously, you've seen that. Anthropic, Microsoft, AWS, Snowflake, and so on are all in play, expanding and scaling. When you hear agentic AI, what's the first thing you think people get wrong?
SPEAKER_02: Yeah, honestly, I think the first thing I hear, and I hear it quite a bit, it's almost a pattern of, you know, hey, agentic AI is going to replace everyone. Part of that's a media narrative, part of it's just a very human reaction, right? And so I get it. When you see an agent autonomously triaging alerts or writing reports or closing tickets without a human touching it, the instinct is, oh my God, what do the analysts do now? But when you actually peel back the curtain, here's what I actually see in the room with customers: the organizations getting the most value from agentic AI are not the ones trying to reduce headcount. They're the ones trying to amplify what their best people can do. In fact, I've seen AI save careers, either because it created a new function in AI development or AI operations, or because it helped someone get past a mental block like a reading or writing impediment so they could actually have an impact. With that as background, for us in corporate, it really is a leadership moment, right? You've got to lead with empathy, understanding that some people in your organization, and it might be the majority depending on where you are, might be afraid of this technology, and rightfully so. There is a need to reframe the discussion and meet people where they are so they can benefit from the AI tools. I don't think the question is, will agentic AI replace us? I think it really becomes, what's the true potential of my team if we're fully leveraging agentic technology? And I think that's where the real conversation starts.
SPEAKER_01: I'm seeing that too. And we're a big proponent at Securonix of human in the loop. There has to be governance, there has to be oversight there. What I'm seeing with new security analysts coming out, and even in other related fields in tech, is that they're embracing it in the sense of, oh, I didn't know how I could do this, or I didn't know there was a gap in my skill set, and I'm utilizing this. I know this is a little outside the agentic AI and security space, but with things like ChatGPT, all this information was at my fingertips, and it allowed me to expand my knowledge, find gaps, and augment my work in the security space at level one, level two, so I can focus on the actually important things. I have security analysts coming out of their certifications saying, they never taught me this, but now I can see how this works and where I can expand; I never thought of it that way. So yes, there is always going to be some concern. Nothing is ever going to be perfect, so there's always going to have to be some oversight there. I agree with you 100%. The shift goes a little bit now from what you can do to what it can do at scale. And you have companies focusing more on their brand reputation, with brilliant new minds coming out and choosing the companies that say, no, we're augmenting you. We're not going to nuke the level one tier and just keep a bunch of level twos in the field. We want our level ones to become level twos.
SPEAKER_02: Yeah, on what you're touching on with early-in-career hires right out of college wanting to embrace these tools and being quick on the uptake: totally true. And I think in two years, minimum, the kids coming out of college are gonna be AI native. It flips the whole perspective, a total 180, because they'll be coming into the workforce going, why don't you have AI tools? Why does nothing connect? Why do I have to click all these buttons? That shift, I think, is coming quite soon.
Assistant Versus Agent In Practice
SPEAKER_01: All right. The key question: what's the difference between an AI assistant and an agent that you trust in a live environment?
SPEAKER_02: Yeah, Sean, important question, because the line between an AI assistant and an agent gets blurred all the time. The way to think about it simply, at least today: an AI assistant answers questions. An agent takes that next step and takes action. And that distinction matters enormously. When an agent can actually execute an API call, write to a database, trigger a workflow, or send a communication, the stakes go up in a way that's totally different. It's not just, is the answer accurate? It becomes, what happens if it's wrong? In the context of using an agent, trust is super important. It's not just about model performance, it's about observability. What did it do, and can I audit it? Can I stop it? Does it know when to hand off to a human, or does it just go off the rails? Going back to the early-in-career discussion: if you have a brand new employee, you don't hand them the keys to the truck with no directions. You have to build trust over time, through a track record and demonstrated judgment. I don't think agents are very different. The agents I'd trust in a live environment are the ones that have clear boundaries. They log everything, they know their limits. And honestly, you have to make that the bar, because if you don't and you're too aspirational without the right protections, things do get broken and things do go sideways. At AWS, we try to think through what it takes to make enterprise-grade AI, and we've got things like Amazon Bedrock and AgentCore, which we'll talk through a little bit. Those are designed to make achieving that bar possible for most companies, not just aspirational.
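Eddie's bar for a trustworthy agent, clear boundaries, logging everything, knowing its limits, handing off to a human, can be sketched in a few lines. Everything below is an illustrative assumption for this episode, not an AWS, Bedrock, or Strands API; the tool names and the `Agent` class are hypothetical.

```python
# Minimal sketch of the assistant-vs-agent distinction: an assistant only
# answers, while an agent takes action, so the agent gets an allow-list of
# tools (clear boundaries), an audit log (log everything), and a
# human-handoff path (knows its limits). All names here are hypothetical.

def lookup_alert(alert_id: str) -> str:
    """Read-only, assistant-style capability: answering a question."""
    return f"summary for {alert_id}"

def close_ticket(ticket_id: str) -> str:
    """Action-taking, agent-style capability: changes system state."""
    return f"closed {ticket_id}"

class Agent:
    def __init__(self, allowed_tools, audit_log):
        self.allowed_tools = allowed_tools  # clear boundaries
        self.audit_log = audit_log          # everything gets logged

    def act(self, tool, arg):
        name = tool.__name__
        if name not in self.allowed_tools:
            # Knows its limits: escalate instead of acting out of bounds.
            self.audit_log.append((name, arg, "ESCALATED_TO_HUMAN"))
            return None
        result = tool(arg)
        self.audit_log.append((name, arg, result))  # auditable trail
        return result

log = []
agent = Agent(allowed_tools={"lookup_alert"}, audit_log=log)
agent.act(lookup_alert, "ALERT-42")  # permitted and logged
agent.act(close_ticket, "TKT-7")     # not on the allow-list: escalates
```

The point of the sketch is that "can I audit it, can I stop it" becomes answerable only because every action, including the refused one, lands in the log.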
Guardrails Monitoring And Data Boundaries
SPEAKER_01: So that drives to the next question I had. What has to be true for you to say, yeah, this agent is safe to run?
SPEAKER_02: Yeah, I'd think about this in a couple of different ways: one, from the security angle, and two, from the data angle. The way companies approach it now can be, great, we've got the best model, and given that, we can give it some prompt engineering and some context engineering. It may work sometimes. But when you put it into a production setting where it's being tested thousands of times an hour, 24 hours a day, if you don't have the observability and the monitoring of the responses to detect, one, sentiment, but two, model drift, whether it's changing its answers over time, then you run into issues where you've got something in the wild giving answers that may or may not be correct and, at the end of the day, aren't going to be safe. So where you have things like agentic AI talking to your data, there has to be a contract between the two, giving the agent the lens for: here's what we have access to, here are the limits of what we can do, and here is the range of responsible decisions you can make. Without that, I wouldn't consider an agentic application to be safe at all.
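That "contract" between an agent and its data, access, limits, and a range of responsible decisions, could look something like the sketch below. The contract fields and the table names are made-up assumptions for illustration, not any real product's schema.

```python
# Hypothetical agent-to-data contract, following Eddie's three clauses:
# what we have access to, the limits of what we can do, and the range of
# responsible decisions the agent may make. All names are illustrative.

CONTRACT = {
    "readable_tables": {"alerts", "assets"},      # what we have access to
    "writable_tables": set(),                     # read-only in production
    "max_rows_per_query": 1000,                   # limits of what we can do
    "allowed_decisions": {"enrich", "escalate"},  # responsible decision range
}

def check_request(table: str, action: str, rows: int, decision: str) -> bool:
    """Allow a request only if every clause of the contract is satisfied."""
    if action == "read" and table not in CONTRACT["readable_tables"]:
        return False
    if action == "write" and table not in CONTRACT["writable_tables"]:
        return False
    if rows > CONTRACT["max_rows_per_query"]:
        return False
    return decision in CONTRACT["allowed_decisions"]

ok = check_request("alerts", "read", 200, "enrich")      # within contract
blocked = check_request("alerts", "write", 1, "enrich")  # write not granted
```

Checking every request against an explicit contract, rather than trusting prompt instructions alone, is what makes the "is it safe to run" question testable.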
Strands And Agent Core Explained
SPEAKER_01: Yeah, it reminds me of that story last year where an agent was in live production with no guardrails, no governance, nothing, and it pretty much went rogue and deleted the entire company's database. I think that article is the perfect example of, hey, you need oversight, you need explicit rules, you need these guardrails, you need governance when you're putting these agents in there. So, with that, why did AWS put energy into Strands, and what does it make easier for builders?
SPEAKER_02: Yeah, Sean, I love this question, because the relationship between these two comes up quite a bit when we talk to a technical audience. The analogy that works best is: Strands is how you build the agent, and AgentCore is how you run it over time. Strands is a framework, working with standards like MCP, that helps you define how your agent thinks and acts. It's the logic, the decision making, the tool use. It gives you the flexibility to build something that actually fits your use case with some element of standardization. Then you have AgentCore, and that becomes the operational layer. It handles the things that are genuinely hard to build yourself from scratch: memory management, session handling, tool connectivity, the security controls. When you're in experimentation mode, you can get away without it. But the moment you're thinking through production, with multiple agents, multiple users, real data, real stakes, 24 hours a day, you need that infrastructure. You need to know the agent is behaving the way you intended, not just in testing, but at scale, over time, across different inputs. Together, that helps make the jump from cool prototype to something we actually trust. And that jump is the whole game right now.
SPEAKER_01: When a company wants to expand to a mesh and move from one agent to many, what changes for them? They're scaling their agents, but what do they have to scale in their thinking and in what they're trying to accomplish?
SPEAKER_02: Yeah. When you move from one agent to many and you put them into a mesh where they talk together, everything gets more complex. And with that complexity, failure modes multiply in ways that aren't obvious until you're in it. With one agent, you're managing one set of behaviors, one set of permissions, one set of outputs, and you can reason about that. With a mesh of agents, now you have to think about how they communicate, how they hand off tasks, how you prevent one agent's mistake from cascading through the system, where you get perpetuation of hallucination. You just can't have that. So coordination becomes a first-class problem. And the question of who's responsible if something goes wrong gets a lot harder to answer when you have a number of agents working with each other. I think the teams who navigate this well are the ones who treat the mesh like a system. You have, again, the tracking and observability of what decisions are made by whom, and when I say whom, I mean which individual agent. You have to think about the whole as well as the individuals, and here an individual agent may be a non-human identity. You need to think through the interfaces, the failure modes, and the governance model, not just each individual part. You have to think about how they all mesh together, right?
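Treating the mesh like a system, per-agent attribution plus a circuit breaker that stops a shaky output from cascading, can be sketched as below. The agent names, the confidence floor, and the trace structure are all assumptions made up for this example, not a real orchestration framework.

```python
# Sketch of mesh observability: every hand-off records which individual
# agent (a non-human identity) produced which decision, and a
# low-confidence step halts the chain for human review instead of
# cascading a possible hallucination downstream. All names illustrative.

from dataclasses import dataclass, field

@dataclass
class Step:
    agent_id: str      # which individual agent decided
    decision: str      # what it decided
    confidence: float  # how sure it claimed to be

@dataclass
class MeshTrace:
    steps: list = field(default_factory=list)

    def record(self, agent_id, decision, confidence):
        self.steps.append(Step(agent_id, decision, confidence))

CONFIDENCE_FLOOR = 0.8  # below this, stop and hand off to a human

def run_mesh(agents, trace):
    """agents: ordered (agent_id, fn) pairs; fn returns (decision, conf)."""
    for agent_id, fn in agents:
        decision, confidence = fn()
        trace.record(agent_id, decision, confidence)
        if confidence < CONFIDENCE_FLOOR:  # prevent cascading mistakes
            return "halted_for_human_review"
    return "completed"

trace = MeshTrace()
status = run_mesh(
    [("triage-agent",   lambda: ("benign", 0.95)),
     ("intel-agent",    lambda: ("known-campaign", 0.55)),  # shaky output
     ("response-agent", lambda: ("close-ticket", 0.99))],
    trace,
)
```

Here the response agent never runs, and the trace answers "who's responsible" by showing exactly which agent's output halted the chain.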
SPEAKER_01: It's interesting when I see people expand from one to many, and then from many to many. The results are there, the ROI is being seen, the value is being seen. But there's always some kind of blind spot: your oversight has to increase, your human in the loop has to increase, you have to have more hands watching these. It can't just be one person watching 40 agents and pinging each one.
SPEAKER_02: Sean, let me ask you, actually. With mesh, with agents, with security, and with the products that you have, I'd love to hear from your side over maybe the past six months, where you've seen so much innovation.
SPEAKER_01: I'm not gonna say their name, but the really crazy stat we had when we were doing the Forrester ROI report and starting our Breach Ready, Board Ready campaign: it was a healthcare company we have as a customer, and they had seen competitors getting hit with $600,000 compliance issues. Something wasn't covered, somebody got breached, there was a gap. And when they started launching some of their agents, they found some of those gaps, and they also saw that their security analysts were operating at a higher level. That's what sold me, because I was one of the skeptics coming in here. I come from a creative background, a communications background, and some of the things they're doing now were unfathomable back when I was with Alert Logic, before they got acquired by Fortra. It's exciting. I don't like being on a brand team where our products don't deliver what we say, because I can't wake up and get excited about that. I can wake up and get excited about, hey, this customer says this thing, and I want to go put it on a billboard right in front of our competitor's office for 30 days. So the mesh approach, too. I know you touched on this a little bit, but what's exciting about it, and what's risky about it?
SPEAKER_02: What's exciting to me is the specialization, right? You can have agents that are really good at one thing: threat detection, alert triage, incident response, reporting. Having specialists that are really good at that, and then orchestrating them together to handle complex workflows, becomes super interesting, because now you do something that no single agent could do alone. Super powerful. What becomes risky comes back to transparency. The more agents you have working together, the harder it is to understand why a particular outcome happened, especially if you have multiple probabilistic models trying to talk to one another. That explainability in a mesh environment is an open problem right now, and it's something we have to get through to make sure decision tracking is something we can fully solve. For security use cases, that matters. For audits, that matters. Because when something goes wrong, you need to be able to explain it, whether it's to your team, your shareholders, leadership, or sometimes even regulators.
SPEAKER_01: Traffic management as well. What's feeding the data into these agents, and then, as you said, what oversight is watching the data coming out? As we scale, there are a lot of companies that think it's business as usual. In reality, yes, these agents can operate, communicate, and do in seconds what would take us hours or days in the past. But no one's thinking beyond that: threat intelligence feeds, the information getting put in there, the database it's pulling from. Is there some kind of oversight around that as well? UEBA, too. Do we start seeing a lot more insider threats than we did in the past, now that you have a lot of this autonomy? I think that's going to be the risk in the next couple of years, and you'll probably start seeing a lot more UEBA offerings and a lot more UEBA pitching happening as well.
SPEAKER_02: Yeah, yeah.
SPEAKER_01: What should leaders measure to know they're getting real value and not just cool tech?
SPEAKER_02: Yeah, great question. I always push leaders, executives, whoever, to start with a business outcome. You have to talk about what they actually care about and then work backwards from there. In terms of metrics, it's not how many agents did we deploy, or how many tokens we spent, or how many hours the AI ran. Those are all vanity metrics, because what you want to know are the outcomes at the end of the day. Did analysts' response time go down? Did we reduce false positives? Did we close more incidents with the same headcount? Did we free up people to do the work they actually need to do? Those are the numbers that matter. Are we making people more productive at the end of the day? Those are the numbers that tell you who's actually getting real value versus who might just be running an expensive demo. Also, on the human element: if you measure the human experience and not just the operational metrics, you'll get a sense for your analysts. Are they less burned out? Do they feel like they're doing more interesting work? Are they learning more career-building skills? Are they staying? Because if you can get those metrics to trend in the right direction, you won't have your best people leave because they feel like they're being replaced. In fact, the opposite: their work, a lot of the time, will become more meaningful. Those are the kinds of things I'd look at to make sure you're getting true value for your company and your organization.
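The outcome metrics Eddie names, response time and false-positive rate rather than agent counts or token spend, reduce to simple before-and-after arithmetic. The incident records and numbers below are invented purely to show the shape of the measurement.

```python
# Sketch of measuring outcomes instead of vanity metrics: mean response
# time and false-positive rate per period, compared before and after an
# AI rollout. The sample incidents are fabricated for illustration.

def outcome_metrics(incidents):
    """incidents: dicts with minutes_to_respond and was_false_positive."""
    n = len(incidents)
    mean_response = sum(i["minutes_to_respond"] for i in incidents) / n
    fp_rate = sum(i["was_false_positive"] for i in incidents) / n
    return {"mean_response_min": mean_response, "false_positive_rate": fp_rate}

before = outcome_metrics([
    {"minutes_to_respond": 90, "was_false_positive": True},
    {"minutes_to_respond": 60, "was_false_positive": False},
])
after = outcome_metrics([
    {"minutes_to_respond": 30, "was_false_positive": False},
    {"minutes_to_respond": 20, "was_false_positive": False},
])

# Real value shows up as both outcome numbers moving the right way
# with the same headcount, not as a count of agents deployed.
improved = (after["mean_response_min"] < before["mean_response_min"]
            and after["false_positive_rate"] <= before["false_positive_rate"])
```

Tracking the same two or three outcome numbers per quarter, with headcount held constant, is what separates real value from an expensive demo.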
Advice For Skeptical Security Leaders
SPEAKER_01: Awesome. Yeah, I'm gonna throw out the marketing word too: productivity-based ROI. But I do like that you touched on the cultural aspect. There is a cultural risk when running this, and I feel like you get the most value and ROI, as you said, when you can measure the productivity and also measure the employee sentiment, when you can measure that there is a cohesive process happening between the two. Last but not least, what advice would you give a security leader who's curious but skeptical right now?
SPEAKER_02: I would say, I'm gonna put my product development hat on. From that lens, innovation is happening so quickly that if I'm the one embracing AI, I actually don't know where all the risks are. So for a security leader who is aware of all the things that could happen, I would encourage you to get ahead of the curve and partner early with your business partners to understand where they want to go, to the degree that you can help them see around corners and give them a roadmap. That'll have a ton of value, just being proactive, helping develop the roadmap as they get there instead of being the roadblock at the end of the day. Once you do that, instead of being the blocker, you become the hero, because you're enabling them to innovate in a way that's safe. So yeah, get ahead of the curve, partner early. Your business partners are gonna be looking to you for proactive leadership in this area.
SPEAKER_01: And I would add my own little bit there too: read through the marketing speak. You see agentic AI, you see a lot of people saying what they can do. Get a demo, ask them these questions. Ask them what the value is, what ROI you're gonna see. Let them prove what they say, because we're moving past the buzzword soup we're seeing right now, and now we're actually getting into real use cases.
SPEAKER_02: Yeah, yeah, absolutely. This has been an awesome conversation. Thank you.
SPEAKER_01: Eddie, this has been a great conversation. Thank you so much for coming on Breach Ready Radio, and I really appreciate your time. Thank you.
SPEAKER_02: Yeah, thanks, Sean. For the folks who are building in this space, both AWS and Securonix have a ton of resources to help. From our side, it's Amazon Bedrock and AgentCore, and we are genuinely invested in helping teams not just get this shipped, but get it right over time. And with Securonix, what I love about the partnership is that we're doing such important work in making AI-driven security operations real for enterprise customers. I'm super excited about what we can build together. Thank you.
SPEAKER_00: Securonix helps security teams detect real threats faster, cut through noise, and respond with confidence. With unified analytics, intelligent investigation, and AI built to support human decision making, teams can move from reacting to attacks to staying ahead of them. Learn more at Securonix.com. Interested in being on the podcast? Have a wild story to tell? Reach out to us at podcast@securonix.com.