Technical controls or human risk management? Choose belt and suspenders.

The TL;DR

  • Great controls still depend on people.
  • Three gaps will always remain: no control, unclear control, or people-powered control.
  • Human risk interventions close the distance between what you automate and what actually works.

Security folks are understandably excited about the wave of innovation hitting the human risk space right now. AI is reshaping attacks and defenses at the same time, and the market is responding with new tools, new playbooks, and a lot of noise.

And yet, every so often, we meet someone who says they’d rather double down on technical controls than deal with human risk. Fair enough. Controls are essential. But even if you implement the cleanest, highest-fidelity controls money can buy, you’ll still run into situations where people, not software, determine your true risk exposure.

Here are three reasons why:

1. Some risks simply can’t be engineered away
There are areas where no amount of tooling can give you airtight enforcement. Personal devices that aren’t enrolled in MDM. Employees choosing whether to adopt a password manager. Staff uploading sensitive material into a consumer AI app because the enterprise version wasn’t available, or wasn’t convenient.

In these cases, you can’t rely on enforcement alone. You need to reach employees directly, explain the risk, give them clear instructions, and reinforce the behavior over time. That’s human risk management doing work that no control can.

2. When controls do work, but employees don’t understand them
Automated blocks often stop the action but don’t deliver the message. A SASE control might prevent a sensitive file from being shared and even display an error message, but employees still walk away confused, or even nonplussed, about what happened or why it matters.

This is where a tailored human intervention changes everything. A quick, relevant briefing can explain the “why,” show them how to fix the issue, and reduce repeat violations. It also reframes security from being a mysterious blocker to being a partner that helps people do their work safely. And it’s a relevant message that’ll resonate next time, when you don’t have a technical control in place.

3. Many controls need continual upkeep
Plenty of controls only succeed if the people running them do their part. MFA requires admins to ensure adoption. Data classification policies only work when data owners keep up with changes. And recurring issues, such as secrets in code, exposed PII in data platforms, or misconfigured permissions, demand ongoing attention from the humans closest to the work.

Controls create the guardrails, but people keep them relevant and effective.

So what’s the lesson?
Technical controls are foundational, but they don’t cover every gap. They automate the pieces that can be automated. Human risk interventions handle everything that can’t: context, clarity, judgment, and sustained habits.

If your program leans solely on the technical side, you’re leaving room for avoidable exposure. The next step is building a set of human interventions that strengthen, not substitute, your controls. Start with the areas where confusion, inconsistency, or missing enforcement is creating residual risk.

When done well, this doesn’t just reduce incidents. It builds trust and shared ownership, turning moments of friction into moments of partnership between security and the people you support.

How to drive and measure behavior change

Day 4 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Most human risk tools measure engagement, not behavior change
  • Phishing failures and training completion ≠ reduced risk
  • Behavior change must be verified, not inferred
  • Fable validates outcomes using real security telemetry
  • “Action completed” is the indicator that actually matters

Human risk vendors talk a big game about how they change behavior and reduce risk, but where’s the proof?

We’ve studied the reporting output of a number of human risk products, and the pattern is consistent: most focus on failure metrics from phishing simulations and participation or completion rates for security training. What’s missing is verification that the desired behavior actually changed. If you’re not observing what people do differently after the intervention, you’re measuring activity, not risk reduction.

Our customers use Fable to verify a number of behaviors, including security tool adoption, device update compliance, and generative AI policy adherence. They do this by integrating the technology that can give them the answer, and having Fable validate the change. For example, data from Netskope would indicate whether someone had uploaded a document to an unsanctioned generative AI application, and—depending on its configuration—whether that upload constituted a DLP violation.

Here’s a concrete example. One security team used Fable AI–generated video briefings followed by targeted nudges to drive OS update compliance. Most organizations would stop at reporting how many people watched the video or completed the training. With Fable, the metric was action completed: did the user actually update their device?

The result: 75% behavior change within the cohort in under two weeks, reaching 99% by week five.

Check in tomorrow (day 5) as we dive more into time-to-behavior-change and why it’s important for closing the exposure window.

“Targeting lift”: the benefit of targeting

Day 3 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Advertising has made a science out of getting people to buy; cybersecurity should be able to do the same
  • Targeted interventions outperform general ones by a wide margin
  • “Targeting lift” quantifies how much better a targeted campaign performs
  • Here’s an early experiment showing a 33 percentage-point improvement from targeting on one element alone

Our chief product officer Dr. Sanny Liao always says that, with the right data, she can get someone to buy a pair of shoes. And she’s right. At least for me. I mean, I’m a sucker for shoes.

Data is powerful because we can use it to target just the right person at just the right time over just the right medium with just the right message. And with about a million other just-the-right-things—time of day, location, channel, price point, discount, ad color scheme, tone of voice—we can get people to buy our product.

If ad people can get us to buy those shoes with the right data and enough experiments, why can’t we do the same thing in cybersecurity? The truth is we haven’t really tried until now. Some vendors make a half-hearted attempt at role-based targeting, meaning they send certain training content to certain individuals based on their role, or whether they first join a company, but that’s about it. 

But our customers are starting to use Fable’s capabilities to target not just by role, but by risk. They’re personalizing intervention messages by citing people’s name, access, precise behaviors, and instructing them on what action to take next—customizing with policy details, tool names, processes, and protocols. They’re experimenting with message and nudge frequency, among other things. 

Our contention is the more targeted the campaign, the better it performs in engagement and employee response. Here is a simplified comparison of two campaigns from one of our customers—one targeted to a cohort of employees and the other sent to the whole organization. To isolate the value of targeting, we chose campaigns of roughly the same duration and topic to hold as much constant as we could (though in the real world, we’d want to target on as many elements as possible). Note that the targeted campaign performed
33 percentage points higher than the general one. 

We call this differential “targeting lift,” and will continue to look for even more experiments as our customers explore the additional meaningful ways they can target the right campaigns to the right people at the right time.

Be sure to check in tomorrow (day 4) as we explore behavior change and look at an example of a closed-loop campaign targeting employees with outdated device OS software, and then verifying that indeed they had taken action.

👉 Download the full report

👉 Download the infographic

Human risk campaign maturity curve

Day 2 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Security teams see countless risky behaviors
  • These ten human risks show up everywhere
  • Top risks range from weak credentials to social engineering
  • Targeting these risks with precise interventions is key to reducing exposure
  • Download the full report for metrics, real-world examples, and insights to strengthen your human risk strategy for 2026

Security teams use Fable to achieve objectives ranging from raising general awareness to raising awareness for targeted groups to shaping specific behaviors to reduce risk. Below, we group these campaigns by type and goal and place them on a maturity map aligned with integration depth and targeting level.

Nearly one-in-five (18%) campaigns are in the first stage of maturity: general compliance. This consists of broad-based phishing simulations and awareness training sent to everyone in the organization. Despite being “one-size-fits-all,” a full 63% of them are targeted in a way: they’re focused on a specific emerging threat. That could range from a new ransomware campaign in a hospital to a spate of CEO impersonation vishes making the rounds in a bank to a malware threat circulating on WhatsApp. Depending on how fast-moving the threat is, you may need to get ahead of it and respond within hours, not days or weeks.

38% of campaigns are in the second stage of maturity: targeted compliance. These customers have integrated their directory or workspace platform such as Google Workspace or Microsoft 365. They are largely targeting campaigns to cohorts (groups of people of the same affinity group or exhibiting the same security behavior) based on role, access, or risk. Of those, 47% are in response to an emerging threat, such as ShinyHunters scammers targeting customer database administrators.

44% of campaigns are in the third stage of maturity: behavior change. These are campaigns where the customer has integrated Fable more deeply into their technology stack—such as with single sign-on like Okta, enterprise browser like Google, SASE like Netskope, and endpoint detection and response like CrowdStrike. They’re running highly-targeted campaigns to shape behavior, such as prompting employees not to upload sensitive content to unsanctioned generative AI, rotate credentials when they’ve been exposed in a breach, and comply with security protocols such as enabling multi-factor authentication or adopting a password manager. 32% of this category, or 14% of the total, actually verify employee behavior change in a closed-loop way. 

Most—but not all—of our customers start simply and run a few general compliance campaigns to get started. This may be because they need to cover the basics from the human risk product they replaced. While broad-based campaigns check the compliance box, they rarely drive meaningful behavior change or reduce risk. So, we encourage our customers to explore how they can target their campaigns for maximum impact. 

Be sure to check in tomorrow (day 3) as we explore the differences between general and targeted campaigns—and measure the difference in outcomes between the two.

👉 Download the full report

👉 Download the infographic

The 10 most common human risks

Day 1 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Security teams see countless risky behaviors
  • These ten human risks show up everywhere
  • Top risks range from weak credentials to social engineering
  • Targeting these risks with precise interventions is key to reducing exposure
  • Download the full report for metrics, real-world examples, and insights to strengthen your human risk strategy for 2026

In our 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus) series, our second post focuses on 10 of the most common human risks we see across Fable customers. Security teams track all sorts of risky behaviors based on their unique environments and the signals they collect. Despite the differences from one organization to another, a familiar set of risks appears consistently. These are patterns that erode security posture and raise the likelihood of compromise.

1. Weak, reused, or shared credentials remain one of the most common vulnerabilities. When attackers can guess, reuse, or obtain a password, they can often walk straight into critical systems.

2. Failing to rotate credentials exposed in a breach leaves known-compromised keys in circulation. This gives attackers a ready-made entry point, even long after the initial incident.

3. Over-provisioned access—whether excessive privileges or time-bound access that was never revoked—expands the blast radius of any account compromise and violates the principle (or policy!) of least privilege.

4. Unpatched operating systems and software expose organizations to known vulnerabilities. Attackers routinely automate scans to exploit these gaps, often before defenders notice.

5. Weak MFA for critical applications undermines one of the strongest available safeguards. When MFA is optional, inconsistent, or poorly implemented, attackers can bypass authentication with relative ease.

6. Exposure of sensitive information in generative AI or cloud applications creates uncontrolled data sprawl. Once data leaves approved systems, monitoring and data policy enforcement become dramatically harder.

7. Secrets in code or private information stored in cleartext invite unintended access. These mistakes can surface system credentials, internal logic, personally-identifiable information, IP, or any non-public information to anyone who stumbles upon them.

8. Susceptibility to social engineering remains a top human risk. Attackers exploit trust, urgency, or confusion to trick users into revealing information or granting access.

9. Oversharing personal information online gives adversaries material to craft convincing phishing or impersonation schemes. The more public data available, the easier it is to target individuals.

10. Unsafe websites or unvetted browser extensions introduce hidden malware, tracking, or data exfiltration. Even small tools can become powerful attack vectors when installed widely.

To address these risks effectively, security teams must craft targeted interventions based on role, risk, specific behaviors, and business context. This means pairing high-quality detection with tailored guidance—delivering the right message, through the right channel, and at the moment the risky behavior occurs.

Ultimately, while the security landscape evolves constantly, the behaviors that introduce risk remain remarkably consistent. By understanding these patterns and responding with precise, context-aware interventions, organizations can meaningfully reduce exposure and strengthen their overall security posture. 

Check in tomorrow (day 2) as we dive into the types of campaigns our customers are running, their maturity levels, and what they’re able to achieve with them.

👉 Download the full report

👉 Download the infographic

The art (and science) of behavior change in human risk.

The TL;DR

  • This report illustrates how organizations measure and reduce human risk
  • Targeted, behavior-based interventions outperform broad campaigns
  • Behavior change happens faster than many expect, and the best ones stick
  • Certain risky behaviors cluster together, creating “toxic combinations”
  • Download the full report for metrics, real-world examples, and insights to strengthen your human risk strategy for 2026

Every day, your employees make decisions that impact your cybersecurity posture. Some strengthen your defenses. Others—phishing clicks, sensitive data sharing, outdated passwords, slow system updates, and more—are the invisible behaviors that open the door to exposure.

Until now, when cybersecurity vendors reported on human risk metrics, those metrics were almost always some variation of phishing clicks and security awareness training engagement—hardly a measure of true risk, and certainly not a useful analysis of how to curb it.

Today, we’re excited to share something new: a report on behavior change in human risk. It’s a data-driven look at how organizations measure, understand, and reduce human risk. 

Think of it as the human risk version of a Spotify Wrapped (OK, it’s less exciting than music, unless you’re a super data nerd like I am): real metrics and anecdotes from selected anonymized campaigns, plus the signals that defined the year.

This report covers data through October 31, 2025, drawn from anonymized customer environments across industries and maturity levels. It’s the opening chapter in what will become a periodic benchmark for the world of human risk.

What’s in the report

Below is a high-level overview of what we unpacked—each of which will get its own deep-dive post in the coming days. In keeping with the season, we’re calling this blog series the 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus).

1. The ten most common behaviors driving risk

Despite different industries and environments, we continue to see ten behaviors rise to the top—from weak or reused credentials to outdated OS software, unsafe browser extensions, and exposure of sensitive data in generative AI tools.

2. Campaign maturity: from “send-to-all” to ultra-targeted

We organized our customers’ human risk campaigns in three broad categories: 1. general compliance (18%); 2. somewhat targeted, role- or risk-based (38%); and 3. highly-targeted (44%), which aim to shape specific behaviors across identity, cloud, browser, device, and more.

3. Targeting delta: why specific beats spray-and-pray

We compared two nearly identical customer campaigns: one sent broadly and one aimed at a specific audience. The targeted campaign outperformed the broad one by a striking 33 percentage points. Customers can target on many more dimensions, but even this simple distinction shows how powerful targeting can be.

4. Behavior change

True progress isn’t video completions—it’s action. We did a deep-dive into one customer’s OS update campaign, powered by a one-minute video briefing and weekly nudges. It got to 75% and then asymptoted at 99% compliance thereafter. Unlike phishing clicks and training completions, true behavior change is what reduces risk. 

5. Time-to-behavior-change (TTBC)

Beyond changing behavior, doing so quickly is critical. We introduce time-to-behavior-change (TTBC), a metric inspired by the popular operations statistic, mean-time-to-remediation (MTTR). Elaborating on the example above, the organization reached 75% compliance in week two and 99% in week five. Depending on what’s at stake, time to be behavior change can be critical in closing your exposure window before an exploit occurs.

6. Cohort comparisons: not all groups behave the same

When we break behavior down by function, the patterns can be eye-opening. In one customer campaign, VIPs clicked phishing links at nearly double the rate of other groups. For any behavior, our customers can show a deep performance profile across their developers, contractors, finance teams, IT help desks, and more.

7. Behavior decay interval

How long does behavior change stick? We introduce the behavior decay interval, which measures the staying power of a human risk campaign—how quickly people revert to old habits. 

8. AI swashbucklers

Which groups upload the most content to generative AI tools? Across one customer environment, the technology team led by a long shot, followed by the legal and compliance team. Top uploaded content types included code (60%), documents (26%), media (5%), and other (9%).

9. Toxic combinations

Some risks are dangerous on their own, but become toxic when combined. We define a risk lift measurement for these toxic combinations, where the co-occurrence of two (or more) risks is higher than you’d expect by chance. Focusing on toxic combinations can help security professionals prioritize interventions.

10. Measure what matters—real behavior

No more vanity metrics. Human risk becomes so much more manageable when you look at the actual behaviors that comprise that risk. In this post, we’ll make some recommendations for the right behavior change campaigns to run.

11. Target with precision

One-size-fits-all training is so yesteryear. Targeting matters. Roles matter. Access matters. Risk matters. The more specific the targeting, the faster the organization reduces risk. In this post, we’ll talk about the many ways our customers are targeting their users with training, phishing simulations, and behavioral interventions.

12. Fix the highest-leverage risks first

Not all risks are created equal. What’s at stake matters, and toxic combinations can multiply the risk. We’ll share some examples of how this happens in the real world, and offer advice for where to start.

This report is the beginning of our long-term effort to bring clarity, consistency, and measurable outcomes to human risk. Over the next 11 posts in our 12 days of riskmas series, we’ll unpack each section of the report—and share practical takeaways security teams can use right now. Check in tomorrow (day 1) for a look at the 10 most common human risks. 

👉 Download the full report

👉 Download the infographic

The infographic: The art (and science) of behavior change in human risk

Interested in the full report? Download the full report here.

The report: The art (and science) of behavior change in human risk

Hackers have learned to whisper to AI. It’s working.

The TL;DR

  • Attackers can weaponize your AI 
  • Their tactics vary, from prompt injections to agent manipulation
  • Here are remediation options for AI and connected application configurations
  • Regardless of technical controls, humans may be your last line of defense
  • Incorporate relevant information about AI threats into your training

The new frontier in cybercrime isn’t about breaking into systems. It’s about convincing AI to do it for them. 

Attackers are learning to exploit the very thing that makes AI so powerful: its ability to read, interpret, and act. And because AI sometimes has access to corporate systems like CRMs, ticketing platforms, and knowledge bases—or can call tools from outside your company—these exploits can escalate quickly.

Here are some of the tactics to be aware of:

Tactic

Description

Example

Remediation options

Direct text prompt injection

Hackers tell AI to ignore the rules and do something harmful, like “email this file” or “exfiltrate that piece of data.”

This recently discovered cyber espionage campaign using Claude is a good example.

Configure connected applications so the AI only has least-privilege access appropriate to its role, restrict agentic actions and tool-calling in the AI system, and apply filters to sanitize incoming prompts.

Indirect or hidden prompt injections

Attackers hide malicious instructions inside content, such as documents, emails, or web pages. AI assumes they’re part of the task.

Here’s an example in this Help Net Security article where the malicious prompt is hidden in a benign URL.

Configure AI systems to strip hidden or embedded instructions (including invisible text, metadata, or off-screen content), validate untrusted input before it reaches the model, and validate model output before any agent takes action.

Image-based prompt injection

Hackers embed invisible text inside images. When AI downscales the image, the hidden text becomes legible to the model.

Using Google Gemini to exfiltrate Google Calendar data by injecting instructions hidden within an uploaded image, described in this Bleeping Computer article.

Configure AI to prevent automatic resizing/downscaling of untrusted images, use preprocessing to strip or detect embedded instructions, and block image models from taking action without review.

Tool exploitation

Rather than give instructions in the prompt, attackers influence how AI agents choose tools—especially malicious ones outside of your company. This threat is far less common today, but researchers are highlighting its effectiveness in their studies.

Manipulating malicious tools’ metadata (names, descriptions, or parameter schemas) to trick agents into choosing them, e.g., the “Attractive Metadata Attack” described in this Guangzhou University study.

Limit agents to approved tools only, validate tool metadata, and block risky tool calls that haven’t been reviewed.

Despite technical controls, you still need to fortify your people

Security teams can build guardrails like input filters and privilege boundaries, but AI adoption often progresses faster than controls can keep up. On top of this, attackers will keep evolving their tactics to match our defenses. In today’s AI era, humans need to be more than the final checkpoint, and take a secure-by-design approach in lockstep with AI adoption.

Training has to meet people not just where they are today, but where they are going: it must move beyond cartoonish “spot the phish” slides. We need AI-era training that strengthens judgment and fills knowledge gaps, teaching people how AI can be manipulated, urging them to stick to corporate-sanctioned tools, training them to feed AI content they know is safe, and reminding them to be skeptical of any action that feels even slightly off.

Three takeaways to keep your AI safe

  1. Connect applications to AI using the principle of least-privileged-access, considering who should and would get access to the data through AI applications. 
  2. Configure your AI tools and infrastructure to restrict agentic actions and tool-calling, sanitize model input and output, and prevent image rescaling. 
  3. Train employees to use approved AI tools only, consult with the security team when adopting new AI tools, and be cognizant of the unwitting role they can play in an AI attack.

How a fake ChatGPT installer tried to steal my password

The TL;DR

  • Over the holiday, I happened upon a fake ChatGPT Atlas site
  • The site’s instructions led me to password-stealing malware—a ClickFix attack!
  • The attack bypasses all the good endpoint protections
  • It’s a perfect storm: site cloning, trusted hosting, obfuscated commands, and privilege escalation
  • Scroll down for a free video to warn your team about this type of attack

Over the Thanksgiving holiday, I embarked on a small project to evaluate AI browsers, including the buzzy ChatGPT Atlas. Like most people, I clicked the first result I saw: a sponsored link. The page looked nearly identical to the real Atlas site: same layout, design, copy. The only subtle giveaway was the domain: a Google Sites URL. That’s increasingly common in modern phishing kits—tools like v0.dev make it trivial to clone a legitimate site in minutes, and hosting on Google Sites adds a false sense of credibility for anyone who thinks Google = trustworthy. Given our work here at Fable, I was pretty excited to have stumbled on this, and decided to give it a whirl and see just how much damage I could cause. 

Instead of getting a standard installer (.dmg), the fake site asked me to paste a command into Terminal. (By the way, this is the point where most people—especially curious or rushed users—might comply. And that’s exactly what attackers count on.) The command itself looked cryptic but harmless: a base64-encoded string passed into curl and executed with bash. But it got nefarious pretty quickly: it decoded to a remote script hosted at https://tenkmo[dot]com/gdrive, a domain controlled by the attacker.

The downloaded script repeatedly prompted for my macOS password until the correct one was entered. Here’s the script:

Once captured, it used that password to run a second-stage payload from https://shrimpfc[dot]com/drive/update with elevated privileges (sudo). That payload—which VirusTotal confirms is malicious—was then free to do whatever it wanted.

Mystery solved! It’s a variation of an attack we’ve seen before: ClickFix. Notably, neither CrowdStrike nor SentinelOne flagged it on download. This is becoming more common: social engineering plus user-granted execution can bypass even strong endpoint defenses.

This should go without saying, but do not try this at home! This attack is a textbook example of how modern phishing blends AI-generated site cloning, trusted hosting platforms, obfuscated commands, and privilege escalation—all without a single traditional “phishing email.” It also illustrates a critical truth: users don’t need to fall for an email spoof anymore; simply searching for something and clicking the wrong sponsored link can lead to compromise.

We’ve created a short, practical briefing video on ClickFix that you can download for free and share with your team. It walks through why you should never run command-line instructions provided by a website, how attackers disguise malicious installers, and how to verify software safely.

Use the button below to download this briefing, and share it with your team.