Toxic combinations: where human risk multiplies

Day 9 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Risks that travel together are toxic combinations
  • Risk lift measures how often paired risks co-occur versus random chance
  • In one case, money handlers who fail phishing tests have a risk lift of nearly 2x
  • Targeting overlapping risks can deliver outsized security gains

There will be math. You’ve been warned.

Some risks are dangerous on their own. Others become even more hazardous when they collide. In our human risk report, we focus on toxic combinations: pairs of risky behaviors or exposures that occur together far more often than chance would predict. These overlaps are where security programs tend to lose control quietly, and where attackers find their easiest paths in.

To measure this effect, we look at what we’re calling “risk lift of toxic combinations” (if you think of a more clever name, we’re all ears). In simple terms, it compares how often two risks actually co-occur versus how often they would if they were unrelated. The math is straightforward: lift = P(A∩B) / (P(A) × P(B)). Anything higher than 1.0 means the co-occurrence is higher than expected, and the pair is therefore toxic, meaning the risks amplify overall exposure.
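For the programmatically inclined, the formula fits in a few lines of Python. The counts below are invented for illustration (they roughly reproduce the money-handler example that follows); only the lift formula itself comes from the text above.

```python
def risk_lift(n_both: int, n_a: int, n_b: int, n_total: int) -> float:
    """Lift = P(A and B) / (P(A) * P(B)). Values above 1.0 mean the two
    risks co-occur more often than chance alone would predict."""
    p_a = n_a / n_total
    p_b = n_b / n_total
    p_both = n_both / n_total
    return p_both / (p_a * p_b)

# Hypothetical example: 1,000 employees; 200 handle money, 150 failed a
# phishing simulation, and 59 fall into both groups.
lift = risk_lift(n_both=59, n_a=200, n_b=150, n_total=1000)
print(round(lift, 2))  # 1.97: nearly double the overlap expected by chance
```

A lift near 1.0 would mean the two risks are effectively independent; anything well above it flags a toxic combination worth targeting.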

We took several real-world examples from our anonymized data set, finding in one case that money handlers who failed phishing simulations show a lift of 1.98—nearly double the overlap you’d expect by chance. That pairing alone signals a dangerous mix of access and susceptibility. In another case, employees with sensitive data access and no multi-factor authentication register a lift of 1.17. And a third example shows IT administrators who reuse passwords coming in at 1.13. Each number may look modest on its own, but together they reveal how weaknesses that travel together stack risk.

This is the hidden cost of treating risks as independent checkboxes. A phishing failure here, weak authentication there. On paper, each might seem manageable. In reality, the overlap is what matters. That’s where exposure accelerates and where breaches are most likely to begin.

The upside is clarity. Toxic combinations tell security teams exactly where to act. Instead of broad, blunt controls, leaders can target the people and behaviors that deliver the biggest risk reduction for the least effort. Fix the overlaps—not just the outliers—and the payoff compounds fast.

See, that math wasn’t so bad, was it?

Tune in tomorrow for a fun little review about measuring risk.

Who are your AI swashbucklers?

Day 8 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • People are uploading real work into generative AI apps
  • In one company, the tech team leads, with legal/compliance a distant second
  • Code dominates uploaded content (60%), followed by documents (26%)

Who’s sailing closest to the edge with generative AI? That’s the question security leaders are asking as AI tools slip into everyday workflows. In this slice of our human risk report, we examine which cohorts inside a single organization uploaded the most content to generative AI tools over a six-month period, and what kind of data they shared. The goal wasn’t to point fingers, but to understand behavior at scale—because that’s where risk lives.

This early picture is revealing. When teams use generative AI, they don’t just experiment with prompts or harmless examples. They upload real work. Real artifacts. Things like board decks (ai ai ai), financial models, and code, code, and more code. 

As our customers ingest richer telemetry from security tools, this view will sharpen. With tools like SASE, leaders can distinguish between uploads to sanctioned versus unsanctioned AI applications, and Fable will be able to target cohorts of employees who only upload content to unsanctioned applications, where the risk is significant. Or they’ll be able to refine even further and only target those who upload content to an unsanctioned application when it triggers a DLP violation. So stay tuned on this topic.

So who’s loading the most content today? In one customer environment, the technology team led every other group by a wide margin, with an average of 129 uploads per person over a six-month period. That may not be surprising—engineers are often early adopters—but the second-place finisher raises eyebrows. Legal and compliance teams ranked next (with an average of 22 uploads), underscoring how quickly AI has permeated even the most risk-aware functions.

The content itself tells an equally important story. Code accounted for 60% of uploads, followed by documents at 26%. Media made up 5%, with the remaining 9% falling into a mixed “other” category. Each file type carries its own exposure, from intellectual property leakage to regulatory risk. Together, they paint a clear picture: generative AI is already embedded in critical workflows.

This is where security programs must evolve. The question is no longer whether employees are using generative AI, but rather how, where, and with what data. Organizations that can map human behavior to AI usage in real time won’t just reduce risk; they’ll gain the clarity needed to let people move fast while helping them stay out of dangerous waters.

Check us out tomorrow for a look at toxic combinations.

What’s the half-life of a security campaign?

Day 7 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Behavior change can fade after your security campaign
  • The behavior decay interval measures how long improvements actually last
  • Point-in-time metrics hide slow drift back to risky habits
  • We think relevant, clear guidance drove lasting change in one example campaign
  • Ongoing monitoring enables timely intervention before risk returns

Does security behavior change actually stick? That’s the question the behavior decay interval is designed to answer. 

The behavior decay interval measures the staying power of a security campaign—how quickly people revert to old habits once the initial attention fades. Without this lens, you may mistake short-term improvement for lasting progress.

In some programs, behavior improves briefly and then tapers off as people get busy and distracted. This kind of decay is easy to miss if you only look at point-in-time metrics. What matters more is the slope: are behaviors holding steady, or slowly drifting back toward a risky threshold? 
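One way to watch the slope rather than a point-in-time number is to fit a line to weekly compliance rates and flag downward drift. A minimal sketch; the weekly rates and the drift threshold below are invented for illustration.

```python
def compliance_slope(rates):
    """Least-squares slope of compliance over equally spaced weeks.
    Negative values mean behavior is drifting back toward risk."""
    n = len(rates)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(rates) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, rates))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

steady = [0.90, 0.91, 0.90, 0.92, 0.91]    # change is holding
decaying = [0.90, 0.86, 0.81, 0.77, 0.72]  # change is fading

# Flag any cohort whose weekly slope drops below a chosen threshold.
print(compliance_slope(steady) >= -0.005)   # True
print(compliance_slope(decaying) < -0.005)  # True
```

Both campaigns might show respectable compliance in any single week; only the slope separates lasting change from decay.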

At Fable, we care intensely about all of the factors that go into how you change behavior, how quickly you can change behavior, and how long that change lasts.

The visual below shows two campaigns—one where behavior change began to decay, and another where the change held steady (for at least the last 11 or so months, fingers crossed!). In the right-hand example, developers were inadvertently logging PII to an observability tool. Clear, timely guidance explained what went wrong and exactly how to fix it. The result was 60% compliance in the first two months—and full compliance thereafter. Our strong hunch is that the difference wasn’t volume or repetition, but relevance and clarity. That said, we’re looking across all campaigns and will no doubt have more to say on this topic.

It’s not always obvious why one campaign sticks and another fades. That’s why monitoring behavior over time matters. By tracking decay intervals, teams can spot when performance drops below an acceptable threshold and intervene before risky habits fully resurface.

Behavior change isn’t a one-time event—it’s a system. Measure how long improvements last, watch for decay, and be ready to step in when needed. That’s how security programs move from short-term wins to durable, lasting impact.

Check us out tomorrow for a look at AI swashbucklers (it’s really just about uploading content to generative AI, but we wanted to use the word “swashbucklers”).

Here’s our down payment toward delivering phishing on autopilot

The TL;DR

  • Recurring campaigns are a down payment on our vision: phishing on autopilot
  • You can now automate recurring campaigns for up to a year
  • Choose your duration, cadence, and delivery style
  • Target employees by role or risk with relevant templates
  • Set it and forget it, so you can focus on the strategic stuff

Our goal is to make Fable delightful to use, and one way to do that is by making our phishing simulation campaigns drop-dead simple to create and run. Internally, we’re calling it “phishing autopilot,” and the goal is to make phishing campaigns as set-it-and-forget-it as possible, so you can focus on the results, learnings, and more strategic stuff. Here’s a down payment on that promise: recurring campaign creation.

With Fable’s recurring campaigns, you can plan an entire year of relevant, role- and risk-targeted simulations in minutes. Instead of sending one-offs, you set the duration, frequency, and delivery style, and Fable will schedule dozens of unique simulations per employee automatically. You can set them to run monthly, biweekly, or weekly – dripped out or delivered all at once – and the program will run continuously without manual work.

Recurring campaigns let you target employees based on real risk. You can create multiple cohorts in a single campaign, ranked by priority, so each employee only receives the most relevant simulation. Those cohorts can be powered by Fable’s human risk data – like missing MFA or risky AI usage – or by custom criteria you define. Each group gets phishing scenarios that actually match how they work, the tools they use, and where they may be exposed.

Every simulation stays fresh by design. Fable makes sure each employee receives a new template every time, so there’s no repetition and no training fatigue. Set it once, trust it to run, and focus on reducing human risk instead of managing campaigns. This is just the start, but we hope it’s a meaningful step in the right direction toward phishing nirvana.

Here’s our awkward breach playbook

The TL;DR

  • OK, deep breath
  • The Mixpanel breach included some Pornhub user data
  • Cybercriminal group Scattered Lapsus$ Hunters attempted to extort Pornhub
  • This is a reminder that companies don’t need to be breached directly to be at risk
  • We included some advice for employees in this free, downloadable, 2-min. video
  • Security teams should also stay alert for signs that internal users may be targets of extortion

Ok. Deep breath as we head into the holidays.

There’s no great way to say this, but there was a recent extortion attempt of Mixpanel involving some Pornhub data attackers got in a breach. Beyond all the embarrassing fallout, it’s a reminder that companies don’t need to be directly hacked for sensitive data to end up in the wrong hands.

Here’s what went down: A hacking group known as Scattered Lapsus$ Hunters exploited a breach at Mixpanel, a widely used analytics provider, to access user data from multiple organizations, including Pornhub, OpenAI, and others. It’s a growing pattern in modern attacks: adversaries go after shared tools and services, then use the data they find to pressure or impersonate downstream targets.

For people, the risk isn’t abstract. Exposed information can include email addresses, locations, and detailed activity logs (in this case, the particular video someone watched—ai ai ai!). That data can be used to craft convincing phishing messages, impersonate trusted services, or attempt extortion by referencing private behavior. For organizations, the impact extends beyond the initial breach. Once attackers have this context, they often target employees directly, hoping urgency or embarrassment will prompt a quick response.

What should security teams do? First off, small actions make a big difference. Through your human risk program, warn people to keep an eye out for unexpected emails, especially those that reference account activity, subscriptions, or alleged security issues. They should pause before responding to unusual requests and verify them through official channels. They should never reuse passwords across sites, should update any impacted passwords, and should enable strong multi-factor authentication wherever possible, at work and at home. They should also try to decouple private behavior from identifying information as much as possible. And if they become the victim of an extortion attempt by a bad actor claiming to have access to sensitive data, they should escalate the issue to the security team.

Beyond messaging to employees, security teams should stay alert for signs that internal users may be under pressure. Extortion attempts don’t always show up as external attacks—they can manifest as unusual behavior from otherwise trusted employees. Sudden requests for access, attempts to bypass controls, rushed approvals, or deviations from normal workflows can all be indicators that someone is being coerced. This isn’t about suspicion or blame, but recognizing that attackers increasingly target people directly, and ensuring there are clear, safe paths for employees to ask for help before a bad situation escalates.

To make this easier, Fable has created a short, 2-minute video that security teams can share internally to raise awareness about this type of third-party breach and the follow-on risks that come with it. It’s designed to be clear, practical, and non-alarmist, helping employees recognize what’s happening and what to do next. You can download and share the video for free below—and take a simple step toward reducing human risk before attackers have a chance to exploit it.

Human risk isn’t evenly distributed. The secret’s in cohort analysis.

Day 6 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Human risk isn’t uniform across your organization
  • Aggregate metrics hide where risk is concentrated
  • Cohort-based analysis unveils risk in groups, like role, access, or behavior
  • This analysis helps you target and reduce risk more efficiently

Human risk isn’t evenly distributed across your organization. Roles, access levels, and behavior patterns all shape how people interact with security controls and threats. Treating employees as a monolith can mask meaningful differences in exposure and behavior, making it harder to target interventions where they are most needed.

Cohort-based analysis addresses this gap by grouping employees based on shared characteristics such as function, department, geography, tenure, system access or privileges, or observed behaviors. These cohorts provide a clearer lens for evaluating campaign performance and understanding where interventions are working, where they are stalling, and where you have concentrated risk.

By slicing performance data by cohort, security teams can move beyond aggregate metrics and identify patterns that would otherwise be hidden. For example, a phishing campaign may appear effective at the organizational level while performing poorly within a specific group. In this simple sample analysis, a VIP cohort clicked on phishing messages at more than twice the rate of other functional groups, highlighting a risk concentration that would have been easy to miss in overall results.
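A toy sketch of that kind of cohort slicing, with invented cohorts and click outcomes chosen so the VIP group clicks at more than twice the rate of the others:

```python
from collections import defaultdict

# (cohort, clicked-on-phish?) pairs — all data here is made up.
events = [
    ("vip", True), ("vip", True), ("vip", False), ("vip", True),
    ("engineering", False), ("engineering", True), ("engineering", False),
    ("sales", False), ("sales", False), ("sales", True),
]

clicks = defaultdict(lambda: [0, 0])  # cohort -> [clicked, total]
for cohort, clicked in events:
    clicks[cohort][0] += int(clicked)
    clicks[cohort][1] += 1

# The aggregate looks middling; the per-cohort view reveals the concentration.
overall = sum(c for c, _ in clicks.values()) / sum(t for _, t in clicks.values())
print(f"overall: {overall:.0%}")
for cohort, (c, t) in clicks.items():
    print(f"{cohort}: {c / t:.0%}")
```

Here the aggregate reads 50%, while the VIP cohort comes in at 75% against 33% elsewhere, exactly the kind of hidden concentration an organization-level number would smooth over.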

This level of insight lets you take more precise action. Instead of broad, one-size-fits-all follow-ups, security teams can tailor training, reinforcement, and controls to the cohorts that need them most. As human risk programs mature, cohort-based analysis becomes essential for prioritization, precision, and meaningful risk reduction.

Check us out tomorrow for a look at behavior decay (it’s not as gruesome as it sounds!).

Like MTTR, TTBC is everything

Day 5 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Behavior change alone doesn’t fully measure human risk reduction
  • Time-to-behavior change (TTBC) captures how long exposure lasts
  • TTBC mirrors MTTR by focusing on exposure windows, not just outcomes
  • Faster action matters more than eventual participation
  • Measuring speed shifts security programs from awareness to action

Behavior change is a critical indicator of human risk reduction, but it is incomplete without a companion metric: time-to-behavior change (TTBC). Modeled after mean time to remediation (MTTR), TTBC measures how long it takes for behavior to shift from intent to action. Because behavior change is typically assessed across a population, TTBC should be anchored to a meaningful threshold, such as the time required for 75 percent of a cohort to complete a desired action. Without a time dimension, behavior change becomes a static outcome, obscuring how long systems or data remain exposed to risk.

Conceptually, TTBC mirrors MTTR in security operations. Both metrics focus on reducing exposure windows rather than simply documenting outcomes. TTBC can be expressed as an absolute duration (e.g., 8 days) or as a relative measure when benchmarked against a control group (e.g., 20 percent of the control group’s duration). In either form, the metric provides a clearer signal of how quickly an organization can move from intent to action.
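Under the 75-percent-threshold definition above, TTBC can be computed directly from per-employee completion dates. A minimal sketch, with an invented ten-person cohort (days measured from campaign start; `None` means the person never acted):

```python
def ttbc(completion_days, threshold=0.75):
    """First day by which `threshold` of the cohort has completed the
    desired action; None if the threshold was never reached."""
    acted = sorted(d for d in completion_days if d is not None)
    needed = threshold * len(completion_days)
    for count, day in enumerate(acted, start=1):
        if count >= needed:
            return day
    return None

cohort = [1, 2, 2, 3, 5, 6, 8, 9, 30, None]
print(ttbc(cohort))  # 9: the 8th completion crosses the 75% mark on day 9
```

Note that the late stragglers (day 30 and the person who never acted) barely move the metric; TTBC rewards how quickly the bulk of the cohort moves, which is exactly the exposure window the section describes.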

The importance of TTBC lies in its direct connection to real-world risk. Human risk is not defined by whether people eventually do the right thing, but by how long systems, data, and workflows remain vulnerable in the meantime. Each additional day between awareness and action extends the exposure window. Measuring TTBC shifts the focus from engagement metrics to the speed and effectiveness of action.

Consider the scenario from yesterday: a security team asks employees to update their device OS software to reduce exposure to known vulnerabilities—a relevant scenario in a BYOD or mixed environment. The campaign begins with a short briefing video, followed by targeted Slack nudges to those who have not yet acted. By the end of week two, 75 percent of the cohort updated their devices. By week five, participation leveled off at 99 percent. The final outcome matters, but the speed at which the majority acts is what meaningfully reduces risk.

For security leaders, TTBC offers a more operational lens on human risk management. It connects behavioral programs directly to exposure reduction and provides a way to compare interventions based on how quickly they drive action. As organizations mature beyond awareness metrics, time-to-behavior change should become a standard measure of whether human risk programs are actually working.

Up tomorrow: why we segment employee populations into cohorts for more precise targeting and analysis.

Technical controls or human risk management? Choose belt and suspenders.

The TL;DR

  • Great controls still depend on people.
  • Three gaps will always remain: no control, unclear control, or people-powered control.
  • Human risk interventions close the distance between what you automate and what actually works.

Security folks are understandably excited about the wave of innovation hitting the human risk space right now. AI is reshaping attacks and defenses at the same time, and the market is responding with new tools, new playbooks, and a lot of noise.

And yet, every so often, we meet someone who says they’d rather double down on technical controls than deal with human risk. Fair enough. Controls are essential. But even if you implement the cleanest, highest-fidelity controls money can buy, you’ll still run into situations where people, not software, determine your true risk exposure.

Here are three reasons why:

1. Some risks simply can’t be engineered away
There are areas where no amount of tooling can give you airtight enforcement. Personal devices that aren’t enrolled in MDM. Employees choosing whether to adopt a password manager. Staff uploading sensitive material into a consumer AI app because the enterprise version wasn’t available, or wasn’t convenient.

In these cases, you can’t rely on enforcement alone. You need to reach employees directly, explain the risk, give them clear instructions, and reinforce the behavior over time. That’s human risk management doing work that no control can.

2. When controls do work, but employees don’t understand them
Automated blocks often stop the action but don’t deliver the message. A SASE control might prevent a sensitive file from being shared and even display an error message, but employees still walk away confused about what happened or why it matters.

This is where a tailored human intervention changes everything. A quick, relevant briefing can explain the “why,” show them how to fix the issue, and reduce repeat violations. It also reframes security from being a mysterious blocker to being a partner that helps people do their work safely. And it’s a relevant message that’ll resonate next time, when you don’t have a technical control in place.

3. Many controls need continual upkeep
Plenty of controls only succeed if the people running them do their part. MFA requires admins to ensure adoption. Data classification policies only work when data owners keep up with changes. And recurring issues, such as secrets in code, exposed PII in data platforms, or misconfigured permissions, demand ongoing attention from the humans closest to the work.

Controls create the guardrails, but people keep them relevant and effective.

So what’s the lesson?
Technical controls are foundational, but they don’t cover every gap. They automate the pieces that can be automated. Human risk interventions handle everything that can’t: context, clarity, judgment, and sustained habits.

If your program leans solely on the technical side, you’re leaving room for avoidable exposure. The next step is building a set of human interventions that strengthen, not substitute, your controls. Start with the areas where confusion, inconsistency, or missing enforcement is creating residual risk.

When done well, this doesn’t just reduce incidents. It builds trust and shared ownership, turning moments of friction into moments of partnership between security and the people you support.

How to drive and measure behavior change

Day 4 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Most human risk tools measure engagement, not behavior change
  • Phishing failures and training completion ≠ reduced risk
  • Behavior change must be verified, not inferred
  • Fable validates outcomes using real security telemetry
  • “Action completed” is the indicator that actually matters

Human risk vendors talk a big game about how they change behavior and reduce risk, but where’s the proof?

We’ve studied the reporting output of a number of human risk products, and the pattern is consistent: most focus on failure metrics from phishing simulations and participation or completion rates for security training. What’s missing is verification that the desired behavior actually changed. If you’re not observing what people do differently after the intervention, you’re measuring activity, not risk reduction.

Our customers use Fable to verify a number of behaviors, including security tool adoption, device update compliance, and generative AI policy adherence. They do this by integrating the technology that can give them the answer, and having Fable validate the change. For example, data from Netskope would indicate whether someone had uploaded a document to an unsanctioned generative AI application, and—depending on its configuration—whether that upload constituted a DLP violation.

Here’s a concrete example. One security team used Fable AI–generated video briefings followed by targeted nudges to drive OS update compliance. Most organizations would stop at reporting how many people watched the video or completed the training. With Fable, the metric was action completed: did the user actually update their device?

The result: 75% behavior change within the cohort in under two weeks, reaching 99% by week five.

Check in tomorrow (day 5) as we dive more into time-to-behavior-change and why it’s important for closing the exposure window.

“Targeting lift”: the benefit of targeting

Day 3 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Advertising has made a science out of getting people to buy; cybersecurity should be able to do the same
  • Targeted interventions outperform general ones by a wide margin
  • “Targeting lift” quantifies how much better a targeted campaign performs
  • Here’s an early experiment showing a 33 percentage-point improvement from targeting on one element alone

Our chief product officer Dr. Sanny Liao always says that, with the right data, she can get someone to buy a pair of shoes. And she’s right. At least for me. I mean, I’m a sucker for shoes.

Data is powerful because we can use it to target just the right person at just the right time over just the right medium with just the right message. And with about a million other just-the-right-things—time of day, location, channel, price point, discount, ad color scheme, tone of voice—we can get people to buy our product.

If ad people can get us to buy those shoes with the right data and enough experiments, why can’t we do the same thing in cybersecurity? The truth is we haven’t really tried until now. Some vendors make a half-hearted attempt at role-based targeting, meaning they send certain training content to certain individuals based on their role, or when they first join a company, but that’s about it. 

But our customers are starting to use Fable’s capabilities to target not just by role, but by risk. They’re personalizing intervention messages with people’s names, access, and precise behaviors, and instructing them on what action to take next, customized with policy details, tool names, processes, and protocols. They’re experimenting with message and nudge frequency, among other things. 

Our contention is that the more targeted the campaign, the better it performs in engagement and employee response. Here is a simplified comparison of two campaigns from one of our customers—one targeted to a cohort of employees and the other sent to the whole organization. To isolate the value of targeting, we chose campaigns of roughly the same duration and topic to hold as much constant as we could (though in the real world, we’d want to target on as many elements as possible). Note that the targeted campaign performed 33 percentage points higher than the general one. 
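As a sanity check on the arithmetic, targeting lift here is a simple percentage-point difference in performance, not a relative ratio. The engagement rates below are hypothetical, not the customer’s actual numbers:

```python
def targeting_lift_pp(targeted_rate: float, general_rate: float) -> float:
    """Targeting lift expressed in percentage points (not a ratio)."""
    return round((targeted_rate - general_rate) * 100, 1)

# Hypothetical: 58% engagement for the targeted cohort vs 25% org-wide.
print(targeting_lift_pp(0.58, 0.25))  # 33.0 percentage points
```

Keeping the measure in percentage points rather than a relative ratio avoids overstating gains when the baseline rate is very small.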

We call this differential “targeting lift,” and will continue to look for even more experiments as our customers explore the additional meaningful ways they can target the right campaigns to the right people at the right time.

Be sure to check in tomorrow (day 4) as we explore behavior change and look at an example of a closed-loop campaign targeting employees with outdated device OS software, and then verifying that indeed they had taken action.

👉 Download the full report

👉 Download the infographic