The Gap Between AI Adoption and Real-World Application
AI adoption is accelerating, but many organizations still lack the governance and domain expertise needed to use it effectively. Jason Hoss, a risk and resilience practitioner, shares what he’s seeing across risk and continuity teams as they work to operationalize AI.
This conversation goes deeper into how companies are applying AI in real workflows. Jason outlines where it adds value, where it creates risk, and why human judgment remains essential when decisions carry real-world consequences.
Questions to ask as you evaluate operational AI usage in your risk and resilience strategies:
- Are we using AI to support decisions or replace them?
- Do we have clear rules for how our teams can use AI?
- Who is responsible for validating AI outputs?
- Where could AI introduce risk into our workflows?
Transcript
(Automatically transcribed)

Peter Steinfeld: Hello and welcome to The Employee Safety Podcast from AlertMedia, where you’ll hear advice from industry leaders on how to protect your people and business. I’m Peter Steinfeld.
Today’s guest is Jason Hoss, a resilience practitioner and consultant who helps organizations navigate disruption and make better decisions under pressure. Through his work with risk, security, and business continuity professionals, Jason has a front-row seat to how companies are using AI today, where it’s adding value, and where it’s creating new challenges.
Here’s our conversation. Hey Jason, thanks for being here.
Jason Hoss: Thanks Peter. Thank you so much for having me today. Looking forward to a great conversation.
Peter Steinfeld: Can you give us an overview of your background and maybe a little bit about the services that you provide to organizations?
Jason Hoss: So like most resilience practitioners, this wasn’t what I had in mind when I entered the workforce. I’m a recovering professional photographer and that’s what I did before this, that’s what my degree’s in.
But you know, life, trauma, things like that happen, and I fell into this thing of business continuity, IT disaster recovery (ITDR), and crisis management.
So that has now led me to Whirlybird Labs, which focuses on AI governance, education, and training in the business resilience, continuity, ITDR, and crisis management space, helping practitioners and organizations understand what this big, nefarious thing called AI is and how to leverage it as enablement for your workforce.
Peter Steinfeld: Well, what are you seeing out there right now from organizations that are using AI to support their efforts around risk management?
Jason Hoss: What I am seeing and what I am hearing is it’s still this cool new whiz bang tool that’s bolted onto everything. All you have to do is look at the news and look at the next product that you might be considering.
AI is bolted onto a lot of products. What I am seeing is organizations are hopeful and rapidly adopting it, but they don’t necessarily understand how to get full value out of it. A lot of organizations don’t even have governance around AI. And just think about this.
If your business continuity program had no governance around it, that sounds kind of chaotic.
So there are tremendous opportunities for organizations to understand the guardrails, the cans and can’t-dos with AI, and how it can help their organization. I’m seeing a lot of organizations needing help and guidance in how to apply AI in the workforce.
Peter Steinfeld: So how important are those guardrails and all that governance that should be put in place?
Jason Hoss: So governance in our organizations is absolutely critical.
Whether you’re in a regulated or unregulated environment, that governance provides the guardrails and an understanding of what we can and can’t do with AI in our organization, with our customer data, and how that may interact with a regulated environment, whether that’s financial services, healthcare, energy, public sector, whatever it might be.
So having that understanding is key, especially if you’re going with the European Union AI Act, or NIST, which has its own Artificial Intelligence Risk Management Framework, or ISO, the International Organization for Standardization, which has 42001. And most of us as resilience and continuity practitioners probably have a good understanding of ISO 22301, which is one of those guiding principles and foundations that we use to conduct our work.
So when you’re thinking about how to apply AI governance in your organization, look to those ISO standards or, if you’re US-based, those NIST-type guidelines. And if you’re doing any business in the European Union, the EU AI Act comes into big consideration, because where AI touches operational technology with life-safety impacts, there’s a lot of regulation and governance around how it can be applied and supported in your organization. Needing governance, understanding those guardrails, what you can and can’t do with AI, the data, and the decision-making around it: it’s absolutely critical.
Peter Steinfeld: Yeah. And it’s a little different from other technologies that have come up in the past.
It seems like with past technologies you could just charge in headfirst and start using them. But with AI, the technology and the governance really go hand in hand, and you have to think about them together, in tandem.
Jason Hoss: That’s never stopped an enterprise, or people, from diving headfirst into the shallow end and taking those lumps and hard lessons learned. And AI is a continuously moving target. What was new yesterday is old today, and it will change again tomorrow.
Things are moving faster than regulation and governance can keep up with, which then puts a lot of onus and importance to internal governance and organizations to try to keep up with this moving target.
Peter Steinfeld: That’s a really good point. It is evolving way faster than anything else in the past. So that’s great to know.
Well, as you think about disaster recovery and resiliency, when disruptions happen, where does decision making tend to break down?
Jason Hoss: It really depends on the maturity of the organization.
Organizations that are mature have their crisis communications plan, they practice everything, they understand the orchestration of chaos, so they can usually meet the demands and implications of a crisis well.
But those are kind of unicorn organizations. For the ones that I have partnered with, worked with, or have lived experience with, adversity presents itself in the most unique ways.
And as much planning and documentation as we do in this discipline, business continuity and business resilience also can’t account for all hazards. The way I look at it is: plans won’t save you, people will. You will. So planning is important.
Practicing, training, exercising, education, going through not only the motions, but having those conversations under blue sky conditions is absolutely critical for preparing any organization for their worst day ever. You’re not going to simply pick up a business continuity plan or a crisis communications plan and all of a sudden find a way to exit adversity.
Chances are the adversity you’re experiencing isn’t part of the planning. Think of it: there are so many things that can go wrong right now. Just look at the news. Is that in your plan? Chances are, no.
So it’s about being nimble, it’s about practicing, and it’s about being open enough to that all-hazards approach that you can pivot to accommodate the adversity, realizing it’s not going to go away.
So it’s best if you lean into it using emotional intelligence, being smart about things, knowing your audience, reading the room, and just doing the right thing by being human.
Peter Steinfeld: So these situations can blow up out of nowhere, and they can be quite overwhelming. It sounds like people are important, but they can also be overwhelmed.
So where and when are organizations leveraging AI for risk and response workflows? Where do you end up seeing real impact and changed outcomes?
Jason Hoss: So with AI, hopefully in your organization, however you use it, you’re not using it to make your decisions.
Hopefully you’re leveraging AI along with your humanness: your intuition, lived experience, war stories, what I call the domain translator, that subject matter expertise that we all bring to the table as the professionals we are. Leverage AI to help find the signal in the noise, the needle in the haystack.
AI can analyze immense amounts of data and bubble up, hopefully, an accurate summary of what we’re experiencing so we can make decisions a little bit quicker. At the end of the day, though, humans own the outcome.
If you’re using AI for black-box decision making, which is pretty much the way the tool works (it’s a bit of a Wizard of Oz behind the curtain), we don’t necessarily understand how it came to the results it gave us. And that is risk. So when it analyzes data, again using that domain translator, that subject matter expertise, you know what is true and what isn’t.
What is fact versus fiction? Does that align with the ethos of your organization? Is it the best way for you to get out of the crisis that you’re in?
So AI is a great tool for finding the signal in the noise, but it is not going to solve the crisis for you. If anything, if you rely too heavily on it, it can make the crisis worse.
Peter Steinfeld: That’s a really good point. I think about AI as something that can help tease out the answer I already have in my brain.
But it’s like that water-cooler buddy that you go to and just start chatting with, and they ask you a really interesting, thought-provoking question or give you one little piece of information, and you’re like, aha, now I know what I was looking for. So it’s a great thing to bounce ideas back and forth with to get to that final resolution you’re looking for.
Jason Hoss: Right.
And that’s really the whole purpose of the tool: efficiency, that augmentation of our efforts to find the signal in the noise so we can do what we do best as people, using our own emotional intelligence to figure out what’s right versus wrong.
AI, even when it’s lying, is very convincing that it’s giving you the best solution out there and that you’re doing great. You could be on fire, and AI is saying everything’s going to be okay. And that is not necessarily true.
So keep in mind when you interact with AI systems that they will tell you truthful lies with a lot of confidence. And what I mean by truthful lies is it says, with such conviction, that everything’s great, there’s no problem, and we’ve got this.
And that isn’t always true.
Peter Steinfeld: Yeah, I’d like to explore that a little further. The other day I asked where a certain zip code was, and the AI was convinced that it was in a particular city where it was not.
It was hundreds of miles away. And when I challenged it, it said, oh yeah, you’re right, actually.
So where is AI not living up to expectations in those response workflows?
Jason Hoss: In thinking of my own lived experience leveraging AI outside my domain expertise: I guest lecture at MIT, and I was really proud of this tabletop exercise I had designed using AI.
I create these intro videos using a service called Pictory AI, and the group had selected a nuclear power plant explosion in the Denver area.
Now, we don’t necessarily have nuclear power plants really close to us, though there are some in the state. So I played along, using AI to craft a scenario of what that would look like. Again, lacking domain expertise, I created this really cool video.
And it was a massive explosion in downtown Denver, mushroom cloud and all. And I’m presenting that at the MIT Crisis Management Business Resilience course run by a former nuclear engineer. He is the domain expert in that.
So I’m showing that video in class, you know, proud Jason moment. Look at this cool thing I did with AI. And he pauses me. I’m in front of this class, 60, 70 of my peers around the world, and he says, that’s wrong.
And I’m so glad he did that. It was mortifying in the moment, and figuring out how to keep calm during a crisis? That was my own personal crisis at the moment.
But it reminded me, and it was a really important lesson: AI will take you down rabbit holes you’re not prepared to be accountable for if you let it. I did something I shouldn’t have. I trusted AI too much.
And it created a nuclear bomb explosion in downtown Denver. By the way, there are no nuclear reactors in Denver, so it shouldn’t have been in downtown Denver at all. And when power plants have an event like an explosion, it doesn’t look like a nuclear bomb. Really cool visually, very impactful for my audience.
But that was one of those lessons learned of AI will take you down rabbit holes confidently and set you up for embarrassing failures if you let it. And at that moment, I had let it.
Peter Steinfeld: Yeah. And I’ve had that same experience. Maybe not exactly like yours.
It was more of just a personal thing, trying to chase something down I didn’t know anything about. And after literally an hour of going back and forth, the AI had me convinced I knew exactly what I was talking about.
And then I asked it one more challenging question, and then it completely flipped the conversation. And then I started asking, why did you lead me down this other path? And it said, well, you weren’t specific enough.
Jason Hoss: Right.
Peter Steinfeld: If you don’t have a really good sense of what you’re talking about already, it can absolutely lead you off course very quickly.
Jason Hoss: Agreed.
Peter Steinfeld: As I think about that, I know that you are a big proponent of this concept of, you know, the future of AI is human, and technology should empower people, not replace them. So how are roles changing for risk, security and intelligence professionals as they start to incorporate AI in their jobs?
Jason Hoss: Part of the function of the domain translator in their enterprise or organization is to challenge leadership when they say, we’re having AI replace your job. We see it in the news here and there, and there are challenges with that.
If AI is coming for our jobs and replacing us, we’re going to lose human agency. How are we going to support the economy? We’re going to have economic displacement.
Last time I checked, everything costs money, and we use work to pay for everything that we do, at least in the U.S. So with that said, when leadership approaches with those types of concepts, step one is to challenge it in the best professional way.
The way I view AI in the enterprise is as a tool of augmentation. Think of the role of a business continuity practitioner.
A lot of times we’re single-threaded, a team of one, yet we’re supporting large, complex enterprises with thousands or tens of thousands of people and maybe a diverse geographic footprint. That’s really hard for one person to do, especially if you’re conducting BIAs, business impact analyses, in a traditional manual sense.
And if you’re doing traditional business continuity planning, risk management, exercise design, training, governance, all these duties that usually fall onto us as a single contributor, we have to be able to offload some of that. They’re all important. What AI does is help me offload some of those things that are time sucks, maybe the parts of the job that aren’t desirable.
Like meetings: breaking down an hour-long, two-hour-long, three-hour-long transcript. Ugh, that is brutal, and it takes a lot of time.
It’s still an important skill that you should hone and keep practicing, because AI is technology, and technology fails.
However, when it is operational and things are working well, you can use it to break down transcripts pretty quickly and find that signal in the noise.
You can use it to augment yourself in tabletop exercise design, to create reasonable scenarios around the data you have within your organization. You should have governance around it and understand what you can and can’t do with internal AI models as well as external AI models.
Don’t just take your organizational data and throw it into any AI without understanding whether you can. That is what I call a résumé-generating event, and it does not work well for anyone. So let’s assume you have good governance.
You understand what you can and can’t do with AI.
All of a sudden you can start breaking down a lot of historical data within your organization: look at past incidents, look at your current planning, how you’re training, how you’re exercising, and enhance that based on lived experiences in the organization. You can take news items and impacts in the world, and there are plenty of them out there: how do they impact your business, your supply chain, your data centers?
AI can break down that signal in the noise, analyzing incident data and crisis data. It can help you develop better communication plans, going from wordy to more concise. It can break down the mapping of internal controls to the different standards out there.
Whether you’re following the ISO standards or other regulated-type governance or internal governance and controls, you can crosswalk those standards and controls against one another, understand what works well, what is similar, what is different, and focus on those differences. You can also leverage it for DR orchestration.
There are plenty of AI-based tools out there that can help you automate and orchestrate some of your DR workflow.
Or maybe it’s a business continuity activation, or using a crisis communications platform like AlertMedia to automatically trigger certain types of communications based on the event experienced, or delivering documents just in time to support different crises. So there’s tons of opportunity for AI, but you also need to be realistic: plan for when you don’t have it. Over-reliance on any technology is a definite plan for failure.
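As an aside, the control crosswalk Jason describes can be sketched as a simple mapping exercise. The control IDs and clause labels below are illustrative placeholders, not real clause numbers from ISO 22301 or the NIST frameworks; the point is just surfacing where mappings are missing so a practitioner can focus on the differences.

```python
# Hedged sketch of a control "crosswalk": mapping an internal control set
# against two external frameworks and surfacing the gaps. The IDs and
# labels are illustrative placeholders, not real clause numbers.

internal_controls = {
    "BC-01": {"ISO": "clause-A", "NIST": "func-1"},
    "BC-02": {"ISO": "clause-B", "NIST": None},      # no NIST mapping yet
    "BC-03": {"ISO": None,       "NIST": "func-2"},  # no ISO mapping yet
}

def gaps(controls: dict, framework: str) -> list[str]:
    """Return internal control IDs with no mapping into the given framework."""
    return sorted(cid for cid, mapping in controls.items()
                  if mapping.get(framework) is None)

print(gaps(internal_controls, "ISO"))   # → ['BC-03']
print(gaps(internal_controls, "NIST"))  # → ['BC-02']
```

In practice an AI model would propose the mappings from the control text; a table like this is what a human reviewer would validate before trusting it.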
Peter Steinfeld: And I guess what I’m hearing is that AI is essentially elevating you, in the role you have today, into someone who provides more value to an organization. Instead of focusing on drudgery, which a business sees as, okay, we have to pay someone to do this, you can spend your time actually creating additional value that the business will recognize and say, wow, that’s what we want to pay for, not the drudgery work. And we’re glad AI is empowering you not to have to do that anymore.
Jason Hoss: And that’s a great callout. The way I look at it, ignoring AI will never benefit you.
How much you choose to embrace it will determine whether your career goes a little bit further, a little bit faster. AI is here; it’s not going away.
So those who choose to embrace AI as a capability and capacity multiplier, I believe, will exceed those who choose to only see it as a threat. So think about that, think about your role, think about the changing technology landscape.
You know, if you were back in the day of three-ring binders for continuity plans, and now we have these digital BCMSs, what if you had hung onto your three-ring binder? How would that help you in that evolution of the technology? And we’re at another one, just like we went from landlines to cell phones: another point-in-time evolution.
And now we’re going into this thing of artificial intelligence. I prefer to call it augmented intelligence, to really underscore the importance of the human in the loop.
Peter Steinfeld: Well, organizations are increasingly using AI to inform their decisions, but where does human judgment become most critical? We’ve touched on it a little, but I’d like to dig a bit deeper there.
Jason Hoss: And I’ve thrown out the term a few times, the domain translator, which is based on what I call three pillars. So, one, you have your subject matter expertise. It’s your job, it’s your role. It’s that discipline.
It could be your certifications, it could be that intimate knowledge of your enterprise. Then you have AI fluency on top of that. And AI fluency doesn’t mean you need to code, doesn’t mean you need to be a technologist.
It means you need to understand the technology, the different models out there.
And then the next one is problem identification, which ties together your subject matter expertise and that AI fluency so you realize which problems are worth solving with the tools available to you. Sometimes problem solving shouldn’t involve AI. Maybe it’s a more human event. Are we dealing with human resources?
Are we dealing with capacity and potential reduction in force? AI might not be appropriate for that. However, your domain expertise, whatever that may be, will help guide you in that.
But in thinking about the problems that we solve as practitioners, especially in the business continuity, ITDR, and crisis space, we know best which problems to solve using AI and which not to, based on our lived experiences, our war stories, and that tribal knowledge we’ve gained over time in our organization. That brings clarity to solving problems, and sometimes the answer is AI and sometimes it’s not.
Peter Steinfeld: Maybe a good way to look at AI is almost like you’ve hired a consultant, like you’ve hired a lawyer or an accountant, or fill in the blank, whatever it is, to just give you advice. They’re not making the decision for you, but it’s someone that can just give you input into the situation that you know way more about than they do.
And then that can influence your decision you end up making.
Jason Hoss: Right. And how you choose to engage with AI.
Let’s assume you’re going in through something like ChatGPT or Claude, whatever the model of choice is for you, and you’re working through it in a conversational way. Then you need to understand how to form a prompt. What is the intent you’re trying to get out of that prompt? What is the context of this ask and this intent?
Are you giving it organizational data? Are you relying on the AI’s training data? And then, define what success looks like.
So if we can craft a good prompt, give it intent and context, define what success looks like, and then engage with AI, we have a better chance of it being a partner with us, instead of a teenager who’s crashing out, isn’t vibing with you at the moment, and is just going to placate you with what you want to hear, encouraging you to do the worst thing ever while telling you you’re doing a great job.
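As a rough sketch, the intent/context/success-criteria structure described here can be written down as a small template. The field names and wording are illustrative assumptions, not a standard; the habit of stating all three parts before engaging a model is the point.

```python
# Hedged sketch of the intent / context / success-criteria prompt structure.
# Field names and template wording are illustrative, not a standard.

def build_prompt(intent: str, context: str, success: str) -> str:
    """Assemble a prompt that states the ask, its context, and what success looks like."""
    return (
        f"Intent: {intent}\n"
        f"Context: {context}\n"
        f"Success looks like: {success}"
    )

prompt = build_prompt(
    intent="Summarize this incident timeline for executives.",
    context="Regional data-center outage; customer impact is ongoing.",
    success="Five bullet points, plain language, no speculation.",
)
print(prompt)
```

The same three fields work whether the prompt goes into a chat window or an API call; what matters is that intent, context, and success criteria are explicit rather than implied.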
Peter Steinfeld: You know, as you were talking there, I was thinking that humans tend to anthropomorphize things. And I think when we interact with AI, we think there’s like a little homunculus, a human in the background that’s answering and interacting with us.
Jason Hoss: It’s not that at all.
Peter Steinfeld: So what should people be visualizing in their head as they’re presenting information and questions and soliciting help from an AI agent or bot?
Jason Hoss: Right. And in thinking of that anthropomorphism, I look at it like mirror, mirror on the wall, thinking of that fairy tale.
And I’m polite to AI. I’m originally from the Midwest, so it’s please, thank you, all that. And guess what?
AI doesn’t care if you’re nice, concise, or rude. However, what I recommend is to engage with AI in a way that makes sense for you.
I’m going to talk to AI in a way that I was raised, and I was raised to be respectful.
And I also think someday, if you’ve ever seen the Stephen King movie Maximum Overdrive, the machines may rule the world. Or if you look at the Terminator movies, maybe they’ll remember I said please and thank you maybe 99% of the time, and they’ll show me a little bit of mercy or grace. I don’t know, we’ll find out eventually. But AI doesn’t need niceties.
It needs prompting, intent, context, and an understanding of what success looks like. It needs instructions, not a bedtime story.
Peter Steinfeld: Well, what are organizations doing well when it comes to governing AI in risk and response use cases?
Jason Hoss: I think organizations are doing well when they’re considering how AI can benefit their organization, its people, their customers, their products, to see how it can benefit the enterprise, or maybe society, as a whole. What I am seeing, though, is that the organizations operationalizing before governing are struggling.
And in a regulated environment, that’s going to be a lot of risk, especially when the regulators come in and try to understand how the AI arrived at the outputs it did. If you don’t have a process in place, that’s going to be a challenge.
If you have to align with the EU AI Act, well, you might be prevented from doing business in the EU, among other things. So businesses are doing great at seeing the opportunity.
Unfortunately, not all executives are looking at it holistically, at how it can help their people. Some organizations are looking at how it can replace their people, and there are challenges with that.
To me, it’s more of that morality type of thing. I support people first and use technology as a tool.
Just like my cell phone is a tool, my laptop is a tool, my printer, I guess it’s a tool when it works. But AI again, is that tool. I’m not looking for it to replace my experience. It doesn’t know my experience. It doesn’t know my lived stories.
It doesn’t know my tribal knowledge that I’ve gained through adversity. AI can’t read the room yet. It doesn’t know that maybe the decision maker is overwhelmed.
And presenting an overwhelmed decision maker with a really detailed report might not be the way to address adversity. So I see organizations trying to operationalize AI without understanding what success looks like in their institution.
So a lot of those implementations end up failing. Think of the last time big technology was maybe adopted in your organization.
If there wasn’t a plan around it, what success looks like, how it can benefit the company, and what the return on investment is, well, chances are you were just throwing money into a hole.
And that’s what I’m seeing right now with organizations. They hear it’s the hot new thing, but they don’t know how to apply it within their organization.
Peter Steinfeld: And you mentioned all those things that AI doesn’t do or doesn’t understand. One big one I would say is AI doesn’t care.
Jason Hoss: Right. You’re spot on. AI doesn’t know emotions, it doesn’t know people.
It’s trained on stuff on the Internet, people’s writings, people’s intellectual property. Which means that also it’s trained on bias too.
It’s trained on our imperfections, which it will just amplify even more. So AI isn’t a replacement for us in these types of conversations; AI probably won’t, in our lifetime, get to the point of understanding emotional intelligence.
And that’s what kind of separates us from the computers.
Peter Steinfeld: Well, where have you seen AI meaningfully improve security functions?
Jason Hoss: To be honest, I haven’t. There was a large AI-first governance organization that was accused on Reddit of producing inaccurate governance artifacts, like SOC 2 reports, maybe HIPAA attestations.
This large AI-first technology company had, in essence, automated and orchestrated governance as a service.
But the accusations made in that public forum said the company was doing storytelling, not true, actual governance, for things like SOC 2 reports, compliance, or alignment with external governance and rules; it was fabricating information. Now think of your organization.
If you relied on SOC 2 reports and other types of critical third-party governance to conduct business in certain spaces, whether with governments or other organizations, some of those are absolutely critical. And all of a sudden, what you thought was real isn’t. That’s risk, that’s litigation, that’s loss of narrative, that’s loss of reputation.
So we have to be careful when we embrace AI for automation and orchestration: when things like this happen, the downside is relatively massive, and the implications could put you out of business.
Peter Steinfeld: So basically we still need editors. Let the AI generate a lot of work, but you got to have that really smart editor that comes in and says, uh huh, that’s right. Nope, that’s wrong.
Jason Hoss: Right. But even that’s a challenge too.
So, the idea of the human in the loop validating all AI outputs: AI can generate an immense amount of data at machine speed, which is a lot more than we humans can read, validate, question, and lean into.
So all of a sudden you’re going to get hit with a tsunami of data that you need to qualify.
The business demands will tell you, maybe you have an SLA, that you need to get that out in X amount of time, which is going to cause burnout, stress, and all these other things for that human in the loop. So the idea is great, but it doesn’t necessarily scale for a business that needs to move fast, make decisions, and keep moving forward.
Peter Steinfeld: Yeah. And there’s that temptation to just say, you know what, I’ll just let that information flow through. I can’t keep up with it.
And you just kind of start with one or two, and then all of a sudden you’re letting three or four things through without double-checking. I see how it can get out of control fast.
Are there any other challenges out there that organizations are running into as they scale AI across their workflows that you think are of note?
Jason Hoss: Definitely. AI isn’t free.
It’s extremely expensive, which creates a disadvantage for organizations or people who can’t prioritize or afford it. Enterprise AI isn’t $20 a month like your ChatGPT, Claude, or Gemini subscription might be. It’s considerably more.
And you need that expertise internally to understand how you can control that data. In an organization that I work for, we can do PHI in AI. PHI, by the way, is protected health information, and PII is personally identifiable information.
To put those types of things into a public AI model could be really challenging.
And it’s a lot of risk to have that in there, because it could become part of the model. In enterprises, the idea is that you have the governance and the technology in place to scrub that data so it doesn’t go back into those models, or so it stays on your network. But you need expertise for that. It’s not a checkbox; it’s not going into preferences in ChatGPT and doing these types of things.
You need the staff who knows how to configure this technology. You need the staff who understands what governance looks like.
You need staff who understand how to apply this in your organization across their different domains. Should you use AI in HR? Should you use AI in IT? Should you use AI in development? How do you control that?
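The kind of pre-model “scrubbing” mentioned above can be illustrated with a toy redactor. Real enterprise data-loss-prevention tooling is far more sophisticated; the patterns below are illustrative assumptions and will miss many cases, which is exactly why this takes dedicated expertise rather than a checkbox.

```python
# Hedged sketch of redacting obvious PII patterns before text leaves your
# network. The regexes are illustrative and intentionally simple; they are
# not a substitute for real DLP tooling.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace matched PII patterns with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Contact jane.doe@example.com, SSN 123-45-6789."))
# → Contact [EMAIL], SSN [SSN].
```

A production pipeline would sit between users and the model, log what was redacted, and be maintained by staff who understand both the data and the governance requirements.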
And even the AI companies struggle with this. Anthropic recently disclosed its code base for Claude Code, which is one of the primary tools that enterprises use for code augmentation.
Peter Steinfeld: Automation.
Jason Hoss: It’s Claude Code and ChatGPT. Oh, Codex. That’s what it is. Those are your two primary tools out there for coding. And ChatGPT disclosed its code base. Oops.
What happens when AI discloses your sensitive IP? What if it talks about mergers and acquisitions? What happens if you use it to identify who you’re going to RIF?
What happens when you use it in supply chain management? Maybe because a critical supplier sits in a certain country, they could be flagged as risky.
What if you use it in your hiring process? What if that implicit bias excludes well-qualified candidates?
Peter Steinfeld: Yeah, there’s a lot to think about there. What are the top two or three things organizations can do to get more out of their AI tools?
Jason Hoss: Educate your workforce. I mean, that should just be a given: always upskill your workforce. Those people are your competitive advantage. The people who show up day to day, remotely or in person, are your competitive advantage.
They’re the ones who allow your organization to grow or constrict.
And you should think about how AI can help your key people do more, amplifying the competitive advantage those people created and nurtured within your organization. So that’s number one: upskill your workforce. Second, governance is the foundation of AI in your organization.
I would almost demand that you have governance in place before you think about running AI, even before a pilot project.
You need to understand the do’s and don’ts of AI in your organization, how it ties into your regulations or, if you’re not regulated, how it impacts third parties. How does it enable you to do more, maybe with less? Beyond that, get the tools in.
So if you can upskill your workforce, and if you can govern it, you can then run pilot projects to see how AI can benefit your organization. The idea is that you need to understand how AI can benefit your organization. It’s a tool.
It doesn’t know the destination, and it doesn’t understand the context or the success criteria for your organization. You need to define that and then determine whether AI is appropriate in supporting that effort.
Peter Steinfeld: And that governance aspect, I now see, is so critical. It may be counterintuitive because governance requires you to slow down at first, but by slowing down you can often speed up once you really understand the lanes you’re operating within.
Jason Hoss: Right. And governance doesn’t have to be the roadblock to getting things done in your organization. It’s a partnership.
And if we work together and understand why we have governance in place and why it’s necessary, well, the business can then set proper expectations and timeframes for success. Governance doesn’t need to slow things down.
But if you don’t have it, you’re assuming a whole lot of risk that might exceed the risk appetite of your leadership team.
Peter Steinfeld: Well, as you look ahead, what gives you optimism about the future of AI and organizational resilience?
Jason Hoss: Thinking ahead, what gives me joy and confidence is people. People give me joy and confidence.
Seeing people look at this maybe scary technology and embracing it, at least trying to understand it, so they can see how it can amplify them in their careers and their daily work. I’m also hoping that as we see more AI out there, it will force the human connection.
Because to me, that’s organizational resilience. Organizational resilience is about the people and at the end of the day, that’s the most important thing.
And we use tools, a lot of them technology tools, to get us to our objectives. So that is what gives me hope: that we continue with community, we continue with conversations, we focus on what makes us human. We embrace that.
Our imperfections, our failures, our opportunities, our war stories, our lived experiences. Because that is what’s going to differentiate us in the workplace. That’s what separates us from AI. And that, to me, is exciting.
Peter Steinfeld: Well, Jason, thank you so much for being on the show. A lot of great insight and advice. We really appreciate it.
Jason Hoss: Well, Peter and AlertMedia, thank you so much for this opportunity to share a passion of mine. And thank you so much everyone. If you wish to reach out to me, you can find me on LinkedIn. I’m the only Jason Hoss there.
You can also find me at whirlybirdlabs.com. Thank you again for this opportunity.
Peter Steinfeld: To learn more about Jason, click the links in the episode description. You can also watch the video highlights on AlertMedia’s YouTube channel. Don’t forget to subscribe, rate, and review the show wherever you get your podcasts. Stay safe out there.
Jason Hoss: Thank you for listening to The Employee Safety Podcast from AlertMedia, the world’s leading provider of risk intelligence and response solutions. To learn more about how to protect your people and business during critical incidents, visit alertmedia.com.
Founder of Whirlybird Labs
Risk & Resilience Practitioner
