TL;DR—

  • The defining tension at RSA 2026: generative AI adoption is outpacing every organization’s ability to manage the risk, and the industry’s primary answer is, ironically, more AI rather than better preparation for the people actually using it.
  • The sessions were full of practitioners asking how to get employees to use AI without burning the house down.
    • Sandra Joyce: “Activate Industry!: Moving Beyond Defense to Disruption and Active Defense”
    • Randy Rose: “Mental Malware: Why the Human OS Keeps Getting Hacked”
    • TJ Patterson: “Cybersecurity Risk Assessments Can Drive Security Culture”
    • Vasu Jakkal & Mohamed Al Kuwaiti: “Ambient and Autonomous Security: Building Trust in the Agentic AI Era”
  • At our booth this week, one in three security professionals said their current security awareness training was “SO GENERIC, it gives me a rash.”
    • That’s a people problem that long predates the AI hype cycle—and it’s getting worse as we add more technology on top of it.
  • A former White House economist at our CISO lunch said it clearly: people don’t do the right thing because you tell them to. They do it when the incentives align. 
    • Security teams who make the safe behavior the easy behavior stop being blockers; they become enablers.
  • One more thing: human intelligence tipped me off that Handala was trying to attack RSAC itself.
    • “Adopt first, secure second” isn’t just an AI problem. It’s also what happens when you send a sales team to the world’s biggest security conference… and they join the Wi-Fi to send unencrypted emails.

Everyone got the memo to adopt generative AI. Now they’re trying to write one to secure it.

The macro mood at RSA 2026 was hard to miss: organizations everywhere are operating under board and CEO mandates to adopt generative AI, and their security teams are in a full sprint trying to catch up. 

What used to be a CISO concern has become a CIO anxiety: shadow AI, ungoverned tool adoption, data quietly leaving the building through productivity apps nobody approved. 

The question isn’t whether your employees are already using AI. They are. 

The question is how you enable employees to use AI without turning the organization into a supply-chain liability.

The sessions named this tension directly:

  • A keynote by Sandra Joyce from Google Threat Intelligence framed the industry’s direction as a shift from passive detection toward active prevention.
    • While her keynote was specifically about disruption at the macro level, the same logic runs straight down to the employee. 
    • We’re not just trying to detect threats faster. We’re trying to stop people from making the decisions that let threats in.
  • Randy Rose from the Center for Internet Security sharpened that with “Mental Malware: Why the Human OS Keeps Getting Hacked.” The premise: the human brain is a target with its own exploit categories, its own unpatched vulnerabilities, and its own predictable response patterns that attackers have mapped in detail.
    • In a week full of AI pitches, I needed that grounding reminder.
  • Pair Rose’s talk with TJ Patterson’s session from Star Financial Bank on how cybersecurity risk assessments can actually drive security culture — not just measure it — and a throughline starts forming.
    • The thing we’re trying to secure isn’t just the network. It’s the decision-making of the people inside of it.
  • “Ambient and Autonomous Security: Building Trust in the Agentic AI Era,” presented by Microsoft Security CVP Vasu Jakkal alongside UAE Cyber Security Council head Dr. Mohamed Al Kuwaiti, asked the central question out loud: how do you enable your organization to use AI productively without creating a new attack surface in the process?

While these sessions sparked genuinely interesting and needed conversations, none of them fully answered the question (if you set aside the silver-bullet vendor pitches).

Honestly, nobody at RSA 2026 fully answered it.

But the fact that these questions were being asked on a main stage, by serious practitioners with resources and talent and time, is a solid start—and something to build towards for the next RSAC.

Security pros told our vending machine what they actually think about the people they’re protecting.

We brought a vending machine to the show floor. (Yes, a literal vending machine. Yes, it dispensed things. The video is above.)

The premise: pick your single biggest gripe with legacy security awareness training, and we’ll give you something you’ll actually want. In the end, the top answers included:

  1. “SO GENERIC, it gives me a rash” (the top answer by a significant margin, at roughly one in three responses)
  2. “Only checks a box… doesn’t change behavior”
  3. “A long waste of time — I’d rather get a root canal”
  4. “Doesn’t match our policy, so no one can trust it”
  5. Ad-hoc write-ins included: 
    1. “Stupid people.” 
    2. “Focus on tech only.” 
    3. “Presented in a way that we forget it in 30 seconds.”

These are security professionals at RSA. People who run awareness programs, attend sessions on human risk, and make decisions about training curricula. 

They describe legacy security awareness the way you’d describe that dentist visit you keep postponing.

“Stupid people” in security awareness programs

The write-in that said “stupid people” is worth sitting with for a moment—because some version of that sentiment came up all week, in sessions and at after-events, and I want to name it clearly and remind everyone:

Security exists to protect people. 

Not in spite of them—for them. 

I know a lot of infosec professionals think of the CompTIA data triad as core to their responsibilities: that cybersecurity exists to preserve the availability, integrity, and confidentiality of organizational data.

But those data points? 

Those represent people: people’s work, people’s quirks, people’s responsibilities and goals and future success.

Employees aren’t hired because they’re good at spotting phishing lures. They’re hired because they’re good at the thing that makes the organization function. 

If we start treating the people we exist to protect as the obstacle, we end up designing programs for compliance rather than behavior—and we get exactly the kind of generic checkbox training that one in three RSA security practitioners just told a vending machine gives them a rash.

The vending machine data and the AI conversation are the same conversation. 

The industry is adopting AI fast, layering technical controls around it, and skipping the people part of the work: the same pattern that produced a decade of awareness programs that, by the field’s own admission, mostly don’t change anything.

The technology advances. The people “problem” stays where it was.

An economist made the case for a different approach to the people problem.

On Tuesday, Fable hosted a “steak and human security” lunch with a group of security leaders. The room filled up faster than expected, and the topic that took over — and didn’t let go — was AI adoption.

The energy wasn’t fear. It was the energy of people who’ve actually used AI in their own workflows and know what it feels like when entire categories of work just disappear. 

But there was a tension underneath it: the sequence happening inside almost every organization right now is adopt first, understand the risk second. Because the speed of adoption isn’t waiting for anyone.

Victor Bennett—a former White House economist whose research focuses on incentive structures and risk behavior—was at the table. 

The insight that stuck: people don’t do the right thing because you tell them to. They do it when the incentives are aligned.

Security teams who make the safe behavior the path of least resistance stop being the people who slow things down. They become the enablers the business actually wants. 

The best programs right now aren’t blocking AI adoption—they’re teaching people how to drive it safely. That’s a fundamentally different orientation than “how do we stop employees from doing the risky thing?” 

It’s “how do we make the secure thing the easier thing?”

That framing, behavior change through incentive alignment rather than mandate enforcement, resonated in meeting after meeting during the week. Two capabilities came up again and again:

  • Reaching people based on what they actually do rather than their assumed job title.
  • Responding to an emerging threat in real time, with training built for the specific behavior being exploited, rather than waiting for a content cycle to catch up. 

That’s what gets security leaders’ attention right now, and the practitioners in the room confirmed it. 

The compliance-checkbox model is genuinely losing the room—and thank goodness it is.

HUMINT: Handala tried to breach RSAC, whose Wi-Fi was full of unencrypted financial quote emails.

Before I sign off… 

I won’t reveal my sources, but I heard from multiple individuals in the hallways and events at RSAC that Handala, the Iran-linked hacktivist group, was active around RSA week: domain typosquatting, possible watering-hole attacks, network-level probing.

The hacktivist group wants to make a splash, so what better target than a venue full of security professionals, with their very eager sales and marketing teams?

They could have grabbed a headline by taking down the network. They also could have quietly compromised plenty of “celebrity” security targets, at both the individual and organizational levels.

This attempted attack is, if you think about it, the most literal possible illustration of the week’s theme.

Adopt first. Secure second. 

We sent frontline employees hired for non-security functions to the world’s biggest security conference. Hundreds of them were on the conference Wi-Fi sending unencrypted emails with customer data and sales quotes

…because the event moves fast and the VPN is inconvenient. 

Attackers know this. 

They show up because we show up. You become a potential supply chain vector the moment you badge in.

For RSA 2027: use a personal hotspot, or connect to the Wi-Fi only through a trusted VPN. At the very least, avoid any website without a valid certificate (the “s” in “https”) and do NOT turn off your email encryption.
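If your mail tooling gives you the option, make that last rule fail closed. Here’s a minimal Python sketch of the idea; the relay host, addresses, and credentials are placeholders, not anything from the conference network:

```python
# Minimal sketch: refuse to send mail unless the connection is upgraded
# to TLS first. Host, addresses, and credentials are placeholders.
import smtplib
import ssl
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "rep@example.com"
msg["To"] = "customer@example.com"
msg["Subject"] = "Quote follow-up"
msg.set_content("Quote attached.")

context = ssl.create_default_context()  # validates the server's certificate

with smtplib.SMTP("smtp.example.com", 587) as server:  # hypothetical relay
    # starttls() raises an SMTPException if the server can't encrypt;
    # at that point, nothing sensitive has left the laptop.
    server.starttls(context=context)
    server.login("rep@example.com", "app-specific-password")
    server.send_message(msg)
```

The ordering is the whole point: encrypt first, authenticate and send second, and bail out entirely if step one fails.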

The people monitoring conference network traffic notice what’s moving in the clear, and the embarrassment of getting called out is the least bad thing that can happen to it.