Closing Security Gaps With the Zero Trust Model
Dell Technologies’ Global Security Investigations Lead, Bo Bohanan, says most breaches start the same way: someone convinces you they belong.
In this episode, Bo shares how his team approaches fraud prevention and investigations, and why Zero Trust is as much about culture and process as it is about tools and technology. From role-based access to physical tailgating, he offers practical ways to tighten security without crushing productivity.
Key takeaways:
- What modern impersonation fraud looks like in the age of AI
- How to reduce risk with smarter access rules and ongoing checks for unusual activity
- Why fusion centers reduce silos by routing threat intelligence to the right teams
Transcript
(Automatically transcribed)

Peter Steinfeld: Hello and welcome to The Employee Safety Podcast from AlertMedia, where you’ll hear advice from experienced industry leaders on how to protect your people and business. I’m Peter Steinfeld. Today’s guest is Bo Bohanan, Global Security Investigations Lead at Dell Technologies.
Bo leads Dell’s efforts to prevent and investigate fraud and advances the use of technology to strengthen the company’s overall security posture.
With experience across both cyber and physical security, Bo shares how organizations can align people, processes and technology under a zero trust approach to build more resilient security programs. Here’s our conversation. Hey, Bo, thanks so much for being here. Thanks for joining me in the studio.
Bo Bohanan: It is great to be here. Thanks for having me.
Peter Steinfeld: Excellent. Well, let’s get into it. What does your role at Dell look like and what kinds of challenges do you and your team focus on?
Bo Bohanan: So I probably have one of the best jobs at Dell. I work with a great team of security investigators that basically stop people from trying to steal stuff. So that’s always goodness.
Peter Steinfeld: Yes.
Bo Bohanan: But my day to day includes working with our investigators, who deal mostly with cyber fraud investigations. However, my secondary — and some days primary — role is the development and implementation of our fraud detection and prevention strategy as far as our technology tools. So that involves dealing with our data analysts, our data scientists, and our investigators, figuring out the best ways to implement tools and practices that we can use to detect and prevent — and then, if we can’t, to reduce the investigatory cycle time so that we can keep churning and keep all of our customers safe.
Peter Steinfeld: It sounds like this is something that’s evolved a lot just over the last few years. Is that a true statement?
Bo Bohanan: Absolutely. Absolutely. I will say perhaps I should be thankful for our adversaries because they ensure job security, right?
Peter Steinfeld: Exactly.
Bo Bohanan: They come up with some of the most innovative ways of doing things, but you look at their creativity, it forces us to be creative. And as you might expect with artificial intelligence, that’s made life very, very interesting.
Peter Steinfeld: From your perspective, what kinds of risks or challenges are most top of mind for security leaders today?
Bo Bohanan: Wow. So I really don’t like throwing out the whole buzzword with AI. So I’ll break it down to a more core element for us. When you think about, let’s say, fraud very specifically, it is theft by deception. You have bad actors that have to pretend to be something that they’re not. So they will leverage technology, but also just standard social engineering. In each of these, though, there’s the identity piece. I am not who I say I am, and I am attempting to access something that I should not have access to in order to enrich myself in some way, shape, or form. So identity and access management is probably one of the bigger top-of-mind things, I would imagine. And it also crosses pretty much all levels of security. Whether you’re talking cyber or physical, all of these things deal with identity and access management. Role-based access is another pretty big thing. In other words, it’s all well and good that I am who I say I am — or I’m pretending to be who I say I am — but why are you here? Do you absolutely need to have access to this or that thing? So those are some of the things that I know are definitely top of mind. You look at deepfake identities, you look at the ability to craft better phishing emails. So now you’re getting into social engineering, trying to make someone believe you are in a state or a position that you’re not necessarily in.
Peter Steinfeld: Do you find that the bad actors come up with these movie-like schemes where they think of everything and it’s this amazing thing that they do effectuate all at once or do they just try a bunch of bad stuff and then they just get better and better over time? So you see the pattern start to emerge.
Bo Bohanan: You see everything from the more ham-handed approaches — call them script kiddies, someone who got something they downloaded from the Internet or from a YouTube video and tries to come in and affect something — but you do see some folks who absolutely have thought of several things. There are several stages to whatever this scheme is. They’ll develop an innocuous relationship, maybe with someone on the inside. So now you’ve got kind of an insider risk sort of thing happening, and then they’ll effect something that triggers the action or the activity that brings about whatever it is they’re hoping to achieve. The TTPs — the tactics, techniques, and procedures that some of these folks use — indicate a level of sophistication for some groups. But I think if there’s one thing our adversaries have over us, it’s the adaptability. Something happens, we have to go into the after-action review, and then we have to actually implement whatever this new control is. In many cases, depending upon the size of your enterprise, that could be fairly extensive. We have silos. So that adaptability is probably one of their biggest strengths and one of the areas that we probably need to focus on a little bit more.
Peter Steinfeld: Yeah. And it sounds like AI is making it easier for them to adapt and iterate.
Bo Bohanan: Significantly easier. It allows that script kiddie that I just mentioned a moment ago to get online and see and do things that up to that point in time they would have had to have gone to a Telegram or even worse, the dark web where even they probably didn’t necessarily belong.
Peter Steinfeld: Right.
Bo Bohanan: They don’t have these challenges anymore. I’ll get on one of the various nefarious GPTs and we’ll ask it a question. It’s going to produce what I need and I’m going to execute.
Peter Steinfeld: You have more people coming after you.
Bo Bohanan: And they’re all more advanced, all more advanced, significantly easier. The other thing is they’re able to cross so many different mediums so much faster than they used to. The attacks are significantly more persistent than they were or had been.
Peter Steinfeld: So something you said before that caught my attention was this concept of insider threats. And I’ve experienced that before. I worked at a place where we had someone who was doing nefarious stuff and working with outside forces. I’ve also almost been the victim of an insider threat attack when I was dealing with a vendor and someone on the inside was sharing information with a bad actor. And it’s actually happened twice. So how big a deal is that? What do you need to do to look after that insider threat?
Bo Bohanan: So that is huge. The fact that an otherwise good person can somehow be manipulated in one way or another to give that information away is a significant deal. As you might imagine, you’ve got your true insider. Okay. Then you’ve got all your various and sundry vendors, then you’ve got your customers. So you’re operating across these multiple platforms and you’re having to deal with individuals who have access to data that maybe they should or should not have access to. So that’s where that role-based access control that I mentioned before comes into play. So if you have a salesperson and they should only have access to these tools, then now what I need to do is monitor those tools and then you have your data loss prevention tools and processes. So when you start seeing an exfiltration, you can kind of maybe take some action. Is this innocuous? Is this me merely emailing a file to myself or is it me sending off company secrets? Right. And then the same thing with the vendor. I’m having a, what I believe to be an NDA-covered discussion with a vendor, but this vendor somehow being potentially maliciously or voluntarily manipulated by someone — they’re gonna take the information that I give them and use that. And so now they’re getting my secrets without necessarily having direct access to my secrets. We’re having a conversation like we are now. We’re talking over coffee or some such thing. And, you know, hey, so I’ve got this really tough project or some such thing, and then you feel obliged to maybe share something that you’re working on.
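The role-based access control Bo describes — a salesperson should only reach sales tools, and monitoring keys off that — can be sketched in a few lines. This is a minimal, hypothetical illustration; the role names and tool lists are invented for the example, not Dell’s actual access model:

```python
# Minimal role-based access control (RBAC) sketch.
# Roles and resources are hypothetical, for illustration only.
ROLE_PERMISSIONS = {
    "sales": {"crm", "quote_tool"},
    "engineer": {"source_repo", "build_system"},
    "investigator": {"case_mgmt", "audit_logs"},
}

def is_allowed(role: str, resource: str) -> bool:
    """Allow access only if the role explicitly includes the resource."""
    return resource in ROLE_PERMISSIONS.get(role, set())

# A salesperson can reach the CRM but not the source repo.
print(is_allowed("sales", "crm"))          # True
print(is_allowed("sales", "source_repo"))  # False
```

The deny-by-default lookup is the point: an unknown role or unlisted resource gets nothing, which is what makes out-of-role access attempts stand out in monitoring.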
Peter Steinfeld: Right.
Bo Bohanan: And then they take that. And if they’ve got two or three other people like you that they’re just having random conversations with — I would say paranoia is not always a bad thing. If you have a security culture that ensures that everyone is aware that the things that they say and do, even within the organization, can be potentially leveraged in some nefarious way. That’s not to say that you shouldn’t have those coffee conversations. You should just be a little bit more aware of what it is that you’re sharing, who you’re sharing it with.
Peter Steinfeld: Well, let’s talk about zero trust. A lot of organizations are adopting those principles to strengthen their overall programs. How do you actually define zero trust, and how does it fit into the convergence model between physical and digital?
Bo Bohanan: So zero trust, for me, is almost as it states: you need to prove who you are at every stage of the authentication process. And then there’s a constant process of revalidation. So I had access yesterday to this area. When I log in, it needs to check and see — are my credentials still valid for this area? The other question that adds on to that for me is, why are you here? Why do you need access to this particular area? Or why do you need access at this time, or in this state, or whatever the case is? So it’s a constant checking and rechecking of not just access. My badge works today. It gets me in the front door. Does my badge need to get me into this particular room? And if so, why do I need to be in this room today at this time? So all of these things are going to be very, very important for a security organization to consider. The physical aspects, to me, are almost more apparent. The system is set up in such a way that I need to have access and permissions. There needs to be awareness among the individuals whose job it is to protect the physical environment. They need to know that I’m there. They need to know who I am. They need to know why I’m here. And zero trust, in a physical space, would include: okay, I’ve got a visitor badge, but I am potentially in an area where I don’t belong — say, the cube area where other employees are. What am I doing there? So it’s the access control piece. It is ensuring that I belong where I am at that moment. And then here’s the other piece that kind of throws people off a bit. What happens if I’m not supposed to be there? What are the procedures? It’s all well and good that I’m in that area, but is there a culture of security where someone feels empowered to ask me, what are you doing here? Is there a culture of security to say, hey, you didn’t badge in — to somebody who’s tailgating? All of these things matter.
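Bo’s three questions — are my credentials still valid right now, does my badge open this particular room, and why am I here — can be sketched as a per-request check. A hypothetical illustration only; the field names and purpose policy are invented, not any real system’s schema:

```python
from datetime import datetime, timedelta

# Zero trust sketch: every request is re-checked at the moment it is
# made, even if yesterday's badge worked. Field names and the accepted
# purposes are hypothetical, for illustration only.
def authorize(request: dict, now: datetime) -> bool:
    cred = request["credential"]
    # 1. Are my credentials still valid right now, not just at issuance?
    if cred["revoked"] or cred["expires_at"] <= now:
        return False
    # 2. Does my badge need to get me into this particular room?
    if request["area"] not in cred["allowed_areas"]:
        return False
    # 3. "Why are you here?" — the request must state an accepted purpose.
    if request["purpose"] not in {"scheduled_work", "approved_visit"}:
        return False
    return True

now = datetime(2025, 1, 1, 9, 0)
cred = {"revoked": False, "expires_at": now + timedelta(hours=8),
        "allowed_areas": {"lobby", "lab"}}
print(authorize({"credential": cred, "area": "lab",
                 "purpose": "scheduled_work"}, now))  # True
print(authorize({"credential": cred, "area": "server_room",
                 "purpose": "scheduled_work"}, now))  # False
```

Note that nothing is cached from yesterday: every call re-evaluates all three conditions, which is the “constant checking and rechecking” Bo describes.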
Peter Steinfeld: I would argue no. Most people would just get embarrassed and they want to be polite, at least in the culture in America, generally speaking. And they won’t say anything. They’ll just kind of put their head down at their desk.
Bo Bohanan: And you know, there is a cultural piece to that, quite honestly. Just two guys talking here — I wonder whether anyone asks, “Hey, what are you doing here, exactly?” in downtown New York versus a place in Florida. Are you in a place that’s generally available to the public?
Peter Steinfeld: Think healthcare — supposed to be wide open in a lot of places.
Bo Bohanan: So yeah, so now I get to — I somehow managed to get to the back area in a hospital or a clinic or whatever. You know, there’s maybe a presumption. I’ve done this quite a bit. You walk around like you belong there and people will just presume that you do.
Peter Steinfeld: Yeah. So zero trust is important, but it can impede productivity. So how do you balance that?
Bo Bohanan: That is the million-dollar question. The balance, I think, goes back to culture, but it is also one of those things where there’s an upfront application of effort to ensure how smooth it is later. And that upfront application of effort includes — during the implementation of whatever security controls you have — taking a step back. Single sign-on, passkeys: things that are designed to allow a person to prove who they are as quickly and as securely as possible. Implementing single sign-on — I’ve had to do that, and I’d put it up there with going to the dentist. However, it is probably the point of least friction when I go to sign in. Now I can sign into a single place, I’m validated, and a lot of the activities happen in the background — but it proves that I am who I say I am using a whole bunch of different controls behind the scenes. So that’s an example. Same thing with passkeys. For those of us just using stuff on our devices, signing into Google or other applications, it’s the ability to say I am who I say I am without having to go through — okay, now I’ve got to get this code on my phone, I’ve got to get this on my tablet, or check my email. Because I’m signing in in the exact same manner, with the exact same behaviors as I did before, I’m going to be validated again. So that creates a lot less friction. But the implementation and all of the systems that are running in the background — it’s like the duck gliding across the surface of the water while its feet are furiously paddling underneath.
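The “same manner, same behaviors” idea can be sketched as a simple risk check: familiar sign-ins pass silently, while deviations trigger a step-up challenge. Every signal name and threshold here is a hypothetical illustration, not how any particular identity provider scores risk:

```python
# Hypothetical behavioral revalidation sketch: compare the current
# sign-in's signals against the user's established profile and only
# add friction (step-up auth) when a signal deviates.
KNOWN_PROFILE = {"device": "laptop-123", "location": "Austin",
                 "hours": range(7, 19)}

def sign_in_decision(attempt: dict) -> str:
    mismatches = 0
    if attempt["device"] != KNOWN_PROFILE["device"]:
        mismatches += 1
    if attempt["location"] != KNOWN_PROFILE["location"]:
        mismatches += 1
    if attempt["hour"] not in KNOWN_PROFILE["hours"]:
        mismatches += 1
    # Familiar behavior passes without friction; anything unusual
    # requires the user to prove their identity again.
    return "allow" if mismatches == 0 else "step_up_auth"

print(sign_in_decision({"device": "laptop-123", "location": "Austin",
                        "hour": 9}))   # allow
print(sign_in_decision({"device": "unknown", "location": "Austin",
                        "hour": 9}))   # step_up_auth
```

This is the friction trade-off in miniature: the checks run on every sign-in, but the user only notices them when their behavior changes.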
Peter Steinfeld: Well, can you share some more real-world examples of how the zero trust approach might show up in investigations or incidents?
Bo Bohanan: So when I think about zero trust, depending on how it was implemented or if it was implemented, it provides the digital evidence that we need to work backwards. So now I’ve got the logs, I’ve got the data to kind of say — okay, on or around this time this activity happened, that activity happened — and if it was a legitimate transaction or if it was a legitimate activity, life’s good. However, if we suspect some issue or some negative activity, generally speaking, what’s not there is almost as telling as what is there. And at a minimum it helps with remediation. So now I’m able to kind of — somebody did something and then went from left to right and it should have just gone left to left. I know that something is not right. I can at least stop whatever is going on as quickly as I can while I investigate further.
Peter Steinfeld: So essentially what you’re doing is putting up a lot of barriers or spider webs and then you’re doing pattern recognition as things hit those and you’re seeing what’s out of the ordinary and then you can quickly investigate that. And I assume that AI and technology are helping you do that faster and you can do more of it because you’re getting more hits. Yeah, at least that’s the idea.
Bo Bohanan: I would say we’re working in that direction. Yes, absolutely. We are not there yet. One thing I will say — while it is not the most fun thing to manage through — is that I like the level of control our organization has over the implementation of AI, to make sure it’s done properly and securely with the appropriate controls in place. We leverage machine learning, we leverage things that could be considered AI. We’re huge on behavioral analytics, and we’ve actually been able to leverage those across the board. Maybe not specifically in a physical security kind of way, but definitely with our supply chain, where we’ve captured information from other incidents and investigations and fed that over. And they were able to leverage that for their own investigations and to stop things. We’ve implemented it in such a way that it’s actually changed how things are managed from a process perspective. Looping back around to the zero trust examples — one of the most telling for me, and such a case study in how zero trust appropriately applied could have prevented a fairly significant breach, was a casino several years ago. Very, very large casino organization and property. And it started with something that would not necessarily have been detectable digitally: social engineering. An individual contacted the help desk, which — oh, by the way — is absolutely your front line of defense. Not just your technical staff but also your customer care agents — these are the folks that are going to get hit first. In this particular case, the attacker pretended to be an employee. They had just enough information, gained through intelligence — just going online, figuring out who’s who in the zoo — that they were able to manipulate the individual into believing they were in fact this employee and needed their password reset.
Password was reset. And now they were fully into this person’s account — and it was a high-level person — and they were able to, because they had these credentials, make changes and updates and implement other things that they had kind of pre-positioned. So we start with social engineering, we start with someone who had access potentially to areas that maybe they shouldn’t necessarily have had access to in that way. A bad actor was given access. And the areas that they were able to access after that basically elevated their privileges.
Peter Steinfeld: Oh yeah, right.
Bo Bohanan: Once they’re in and then all of these areas — so all of your cyber and your information technology, all of these things were all commingled with physical systems. So you had your access control systems that are touching your climate control systems that are touching your gaming systems that are touching your other security systems. And once they got in, they were able to impact all of that. So now not only do I have access to your data that I can now steal and leverage however I choose to — the doors, your access controls, your physical access controls, just completely gone. They were able to negatively impact the access of people who should have had access to certain things. And so things were very chaotic, as you might imagine. But had they segmented off who has access to what — role-based access control — had they asked a simple question with the initial access, but then as the elevation of privileges kept going up, had they continued to ask the person in the hallway, what are you doing here? Yes, I have to have access, but why are you here? None of these questions were asked. And subsequently that organization suffered — I want to say about a hundred million dollars or something like that.
Peter Steinfeld: Wow, that’s incredible.
Bo Bohanan: Yeah. All for want of some somewhat straightforward controls.
Peter Steinfeld: A lot of lessons learned. I mean, it’s important for people to investigate that and say, what are we doing that might be similar to this?
Bo Bohanan: Exactly. You’re exactly right. You know, if we’re being realistic, social engineering is so challenging to work against. But you can limit the damage that is done by implementing some fairly straightforward controls, I think.
Peter Steinfeld: Yeah, a classic accounting rule — like the guy who balances the checkbook can’t write checks. Little things like that.
Bo Bohanan: Exactly.
Peter Steinfeld: It could just stop it in its tracks so it doesn’t spread to a $100 million loss in 24 hours.
Bo Bohanan: Exactly.
Peter Steinfeld: Well, it’s just going to get more and more difficult. All these threats become more sophisticated. So how will technology continue to evolve to support all this convergence that you’re talking about?
Bo Bohanan: I feel like there’s technology, but I also feel there are some of the less-considered things in the technology space. So it’s the leadership piece, it’s the adaptability piece. We have all of these different great and wonderful tools at our discretion. These tools cost money.
Peter Steinfeld: Yes.
Bo Bohanan: So now we’ve got the ROI discussion, but I want to make sure that we discuss the interoperability of tools, people, and processes as we talk about the convergence. So the tools are a part of it. You can have your detection systems in place, you can have your zero trust infrastructure, you can have your zero trust policy — implement all of these things, and it’s great that they’re in place. For me, it’s also about the processes that you put in place and the empowerment of your people to implement them. I would say this — if we look at technology as an arc — I know some people may have seen AI coming, but how they viewed it, especially from a security perspective, is another thing. Even as I’m viewing it, it’s almost like I see the car. It’s coming down the street. It’s a little bit difficult to tell the speed, especially if you’re straight on — if the car’s coming at you, it’s coming at you. There’s a little “AI” sign on the windshield of this car, and you’re thinking to yourself, okay, this is coming. This could be a good thing. I could take this car and do some great things with it. But then as it gets closer, you’re like, this is moving pretty fast. And then as it gets a little bit closer, you see the flames coming from the back of it, and now you’re getting a much greater sense of the speed with which it’s coming. And now you’ve got a decision. Do I get out of the way? Do I just let it run me over? Do I change my perspective — move to the side, maybe even start moving toward it so that I can get a better sense of how I should react? So now I can get slightly ahead of it and say, okay, now I see this, I see that, and I want to be back here. So I think that as we get this arc, you have to put your futurist hat on.
There are some things that are going to be exactly the same. Fraud has been the same since the dawn of time — people pretending to be something that they’re not. Some behaviors are going to remain the same. You know that you’ve got your controls in place. If I build my tools in such a way that they can pivot left and pivot right — this is something that I’ve done — maybe I leverage AI to help me to put some audit capabilities within my tools so that they’re constantly checking themselves. Maybe I use agents to help constantly check my tools. And then when the behaviors change of my adversaries, the TTPs that they’re using to try to do their various incendiary exploits, now I can pivot just a little bit faster. I will probably still be slightly behind, or maybe I’ll be right at speed. For right now, the optimist in me says, perhaps at some point, I’ll have enough time to kind of game out what they might do next. What’s the next greatest fraud scheme or the next greatest data heist or something along those lines? What might they use? But it’s the tools, but it’s also the processes, and then the people — top of mind. It has to be a culture. We’ve got all these various phishing kind of things. I like gamification. If you start having — I don’t know — contests to see which group detects more of these things, which people report the most tailgating or whatever it is, and you provide some level of reward or recognition for having done this, you kind of get that little bit more of a cultural shift. And I want to say it’s kind of happening. We get a lot of cases of people that send stuff in and say, this doesn’t look right. And we love it.
Peter Steinfeld: That’s awesome. I love that way of looking at it. You think about a general — the ones that fail are the ones that fight and prepare for the last war, not the next one. But the next one’s probably going to be pretty similar, but it’s a little bit different. So you just have to adapt. Looking ahead, how can organizations continue advancing convergence, innovation and measurable results in their security programs?
Bo Bohanan: I’m going to say cooperation first and foremost. So there’s something — a concept — we’ll call it the fusion center, where you have a lot of data and information. Your threat intelligence is going to come in there. And it’s like I alluded to before — data comes in, it’s worked on by one group. Maybe your security response center, maybe your investigations team, maybe some other team. But it’s shared. As long as you’re sharing the data across the enterprise, then that’s going to allow other individuals to react to that information. You can allow them access to whatever tools you use to do those sorts of things. But it’s going to require cooperation, first and foremost. Each group has to talk to each other. To the extent that you can establish a central repository where the relevant data is parsed off to the group that it should go to, that’s even better. Threat intelligence is great when it’s shared — not quite so much if you’re kind of keeping it in a box.
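The fusion-center idea — a central intake that parses incoming threat intelligence and routes it to the group that should act on it — can be sketched simply. The categories and team names below are hypothetical, invented for the illustration:

```python
# Fusion-center routing sketch: incoming threat intelligence lands in
# one central place and is handed to the team that should act on it.
# Categories and team names are hypothetical, for illustration only.
ROUTES = {
    "phishing": "security_response_center",
    "fraud": "investigations",
    "physical": "corporate_security",
}

def route(intel: dict) -> str:
    # Unrecognized categories go to a triage queue instead of a silo,
    # so nothing gets "kept in a box."
    return ROUTES.get(intel["category"], "triage")

print(route({"category": "fraud", "detail": "fake invoice scheme"}))
# investigations
```

The value is the single intake point: every team draws from the same stream, so intelligence captured by one group is visible to the others rather than trapped in a silo.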
Peter Steinfeld: No more silos.
Bo Bohanan: I would say yes, we need to totally start at least blending the silos. Or if you can imagine your little PowerPoint charts — I’ve seen this quite a bit — you’ve got three columns and you’ve got the one block across the top.
Peter Steinfeld: Right.
Bo Bohanan: Just kind of visualize that. And that block across the top would kind of be the area — the central repository where the information kind of comes into and flows through.
Peter Steinfeld: Fantastic. Well, Bo, this has been great. I feel like we could have gone on for another couple of hours. So thank you for coming in today. I really appreciate it.
Bo Bohanan: Thank you for having me again. I really appreciate it.
Peter Steinfeld: Absolutely. To learn more about Bo and his work with Dell, click the links in the episode description. You can also watch video highlights from this episode on AlertMedia’s YouTube. Don’t forget to subscribe, rate and review the show wherever you get your podcasts. Stay safe out there.
Outro: Thank you for listening to The Employee Safety Podcast from AlertMedia, the industry’s most intuitive emergency communication and threat intelligence solution. To learn more about how to protect your people and business during critical events, visit alertmedia.com. Until next time.