The hidden multiplier in human risk

Day 12 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Some risks travel together
  • Measuring the overlap—toxic combinations—lets you see heightened risk
  • Finding and fixing the toxic combinations helps you zap risk efficiently

Not all risk shows up in neat, individual packages. Sometimes two or more risks travel together, and when they do, they can create toxic combinations.

We surface this effect in our latest human risk report, where we look at several risk combinations whose co-occurrence is higher than what you’d expect by chance. When the actual overlap divided by the overlap you’d expect if the risks were independent exceeds 1.0, that’s a toxic combination.

Finding these patterns helps you suss out what risks to tackle first (and how). Money handlers who fall for phishing. Employees with no MFA and sensitive data access. IT admins who reuse passwords. None of these behaviors is rare. What matters is where they cluster.

Traditional security programs miss this because they treat each issue as a separate control gap. One fix here, another there. But eliminating a single weakness doesn’t help much if the surrounding conditions stay the same.

Real progress comes from prioritizing the combinations that multiply exposure. When teams address those first, they reduce risk faster, with less effort.

This concludes our 12 days of riskmas series.

Why one size doesn’t fit all

Day 11 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Generic security campaigns raise awareness but rarely change behavior
  • Campaigns tailored to role, access, and behavior perform dramatically better
  • Relevance and message precision drive action…and stickiness
  • Precision targeting also shortens time to risk reduction

Most security campaigns are built for everyone…and resonate with no one! 

Generic messaging might raise awareness, but it rarely changes behavior. Our human risk report makes the case for a sharper approach: precision targeting.

When you tailor your security campaigns to cohorts based on role, access level, or risky behavior, you get results. Targeted campaigns dramatically outperform general ones because they feel relevant. For example, developers who received a campaign highlighting an issue with PII in observability tools paid close attention. The intervention message used their name, mentioned the app, described the specific problem, and told them how to avoid it in the future—all in less than two minutes. We believe that kind of relevance is what led to a 60% reduction in month one and 100% compliance thereafter.

Beyond getting people to take action, precision targeting also gets them to move fast, shortening the path to risk reduction. Instead of blanketing the entire organization with generic guidance, security teams can focus on the small set of people whose actions actually move the needle at a given moment—people with elevated access, repeated risky behavior, or direct exposure to critical systems.

Cohort insights show exactly who is struggling with which behaviors, allowing teams to intervene with specific, relevant guidance when it’s most likely to stick. No more guesswork.

In human risk management, precision targeting isn’t a nice-to-have. It’s the difference between activity and outcomes.

Check us out tomorrow as we deep-dive into fixing the highest-leverage risks first.

Vanity metrics are lying to you

Day 10 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Popular security metrics are easy to track but largely meaningless
  • Real risk is about people’s behavior—auth posture, data handling, etc.
  • Context matters—a phishing click isn’t equally risky for every employee
  • It’s not just about behavior change but also speed and durability

Phishing click rates? Training completions? Snooze-fest! 

These metrics are easy to collect and report on, but also a little embarrassing for any slightly self-aware security executive. That’s because they’re pretty much all noise. In our human risk report, the clearest signal is simple: what matters is risk—real behaviors that increase or reduce exposure.

Measuring human risk means tracking what people actually do. Do they reuse passwords? Do they upload sensitive data to unsanctioned tools? Do they report phishing attempts? And yes, do they click. But whether a click is terrible, simply bad, or meh has a lot to do with a person’s security posture. These measures—not annual training scores—tell you whether your organization has mitigated risk and is getting safer…or is just getting better at compliance theater.

Just as important is speed. How quickly do risky behaviors improve after an intervention? And do those improvements last? The report shows that behavior change isn’t binary. It happens over time, and it can decay just as easily as it improves if teams stop paying attention.

When organizations move beyond vanity metrics, priorities shift. Instead of chasing engagement, they focus on outcomes. Instead of asking “Did they finish the training?” they ask “Did the risk actually go down?” That’s the difference between measuring effort and measuring impact.

If you want durable security improvement, measure what matters: risk.

Come back in a few days for a look at targeting with precision.

Toxic combinations: where human risk multiplies

Day 9 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Risks that travel together are toxic combinations
  • Risk lift measures how often paired risks co-occur versus random chance
  • In one case, money handlers who fail phishing tests have a risk lift of nearly 2x
  • Targeting overlapping risks can deliver outsized security gains

There will be math. You’ve been warned.

Some risks are dangerous on their own. Others become even more hazardous when they collide. In our human risk report, we focus on toxic combinations: pairs of risky behaviors or exposures that occur together far more often than chance would predict. These overlaps are where security programs tend to lose control quietly, and where attackers find their easiest paths in.

To measure this effect, we look at what we’re calling “risk lift of toxic combinations” (if you think of a more clever name, we’re all ears). In simple terms, it compares how often two risks actually co-occur versus how often they would if they were unrelated. The math is straightforward: P(A∩B) / (P(A)×P(B)). Anything higher than 1.0 means the co-occurrence is higher than chance predicts, and therefore toxic: the paired risks amplify overall exposure.
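To make the arithmetic concrete, here’s a minimal Python sketch of the lift calculation. The population and the `handles_money` / `failed_phish` flags below are invented for illustration; they aren’t data from the report.

```python
def risk_lift(has_a, has_b):
    """Lift of co-occurrence: P(A and B) / (P(A) * P(B)).

    has_a, has_b: parallel lists of 0/1 flags, one entry per employee.
    Values above 1.0 mean the two risks co-occur more often than
    independence would predict -- a toxic combination.
    """
    n = len(has_a)
    p_a = sum(has_a) / n
    p_b = sum(has_b) / n
    p_both = sum(a and b for a, b in zip(has_a, has_b)) / n
    return p_both / (p_a * p_b)

# Hypothetical population of 10 employees:
# risk A = handles money, risk B = failed a phishing simulation.
handles_money = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
failed_phish  = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]

print(round(risk_lift(handles_money, failed_phish), 2))  # prints 2.22
```

Here each risk alone touches 30% of people, so independence predicts a 9% overlap; the actual overlap is 20%, hence a lift of about 2.2.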

We took several real-world examples from our anonymized data set, finding in one case that money-handlers who failed phishing simulations show a lift of 1.98—nearly double the overlap you’d expect by chance. That pairing alone signals a dangerous mix of access and susceptibility. In another case, employees with sensitive data access and no multi-factor authentication register a lift of 1.17. And a third example shows IT administrators who reuse passwords coming in at 1.13. Each number may look modest on its own, but together they show how weaknesses that travel together stack the risk.

This is the hidden cost of treating risks as independent checkboxes. A phishing failure here, weak authentication there. On paper, each might seem manageable. In reality, the overlap is what matters. That’s where exposure accelerates and where breaches are most likely to begin.

The upside is clarity. Toxic combinations tell security teams exactly where to act. Instead of broad, blunt controls, leaders can target the people and behaviors that deliver the biggest risk reduction for the least effort. Fix the overlaps—not just the outliers—and the payoff compounds fast.

See, that math wasn’t so bad, was it?

Tune in tomorrow for a fun little review about measuring risk.

Who are your AI swashbucklers?

Day 8 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • People are uploading real work into generative AI apps
  • In one company, the tech team leads, with legal/compliance a distant second
  • Code dominates uploaded content (60%), followed by documents (26%)

Who’s sailing closest to the edge with generative AI? That’s the question security leaders are asking as AI tools slip into everyday workflows. In this slice of our human risk report, we examine which cohorts inside a single organization uploaded the most content to generative AI tools over a six-month period, and what kind of data they shared. The goal wasn’t to point fingers, but to understand behavior at scale—because that’s where risk lives.

This early picture is revealing. When teams use generative AI, they don’t just experiment with prompts or harmless examples. They upload real work. Real artifacts. Things like board decks (ai ai ai), financial models, and code, code, and more code. 

As our customers ingest richer telemetry from security tools, this view will sharpen. With tools like SASE, leaders can distinguish between uploads to sanctioned versus unsanctioned AI applications, and Fable will be able to target cohorts of employees who only upload content to unsanctioned applications, where the risk is significant. Or they’ll be able to refine even further and only target those who upload content to an unsanctioned application when it triggers a DLP violation. So stay tuned on this topic.

So who’s loading the most content today? In one customer environment, the technology team outpaced every other group by a wide margin with an average of 129 uploads per person over a six-month period. That may not be surprising—engineers are often early adopters—but the second-place finisher raises eyebrows. Legal and compliance teams ranked next (with an average of 22 uploads), underscoring how quickly AI has permeated even the most risk-aware functions.

The content itself tells an equally important story. Code accounted for 60% of uploads, followed by documents at 26%. Media made up 5%, with the remaining 9% falling into a mixed “other” category. Each file type carries its own exposure, from intellectual property leakage to regulatory risk. Together, they paint a clear picture: generative AI is already embedded in critical workflows.

This is where security programs must evolve. The question is no longer whether employees are using generative AI, but how, where, and with what data. Organizations that can map human behavior to AI usage in real time won’t just reduce risk; they’ll gain the clarity to let people move fast while staying out of dangerous waters.

Check us out tomorrow for a look at toxic combinations.

What’s the half-life of a security campaign?

Day 7 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Behavior change can fade after your security campaign
  • The behavior decay interval measures how long improvements actually last
  • Point-in-time metrics hide slow drift back to risky habits
  • We think relevant, clear guidance drove lasting change in one example campaign
  • Ongoing monitoring enables timely intervention before risk returns

Does security behavior change actually stick? That’s the question the behavior decay interval is designed to answer. 

The behavior decay interval measures the staying power of a security campaign—how quickly people revert to old habits once the initial attention fades. Without this lens, you may mistake short-term improvement for lasting progress.

In some programs, behavior improves briefly and then tapers off as people get busy and distracted. This kind of decay is easy to miss if you only look at point-in-time metrics. What matters more is the slope: are behaviors holding steady, or slowly drifting back toward a risky threshold? 
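One simple way to quantify that slope is to fit a least-squares line to a post-campaign compliance time series and watch its sign. A sketch, with hypothetical weekly compliance numbers (the two series below are ours, invented to illustrate the contrast, not data from the report):

```python
def decay_slope(weekly_compliance):
    """Least-squares slope of a compliance time series (fraction per week).

    A near-zero slope means the change is holding; a clearly negative
    slope means behavior is decaying back toward old habits.
    """
    n = len(weekly_compliance)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(weekly_compliance) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, weekly_compliance))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical weekly compliance after two campaigns:
holding  = [0.95, 0.96, 0.97, 0.96, 0.97, 0.96]  # change sticks
decaying = [0.95, 0.88, 0.80, 0.72, 0.65, 0.58]  # drifts back down, ~-0.07/week
```

The point-in-time snapshot at week one looks identical for both campaigns; only the slope reveals which one is quietly decaying toward a risky threshold.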

At Fable, we care intensely about all of the factors that go into how you change behavior, how quickly you can change behavior, and how long that change lasts.

The visual below shows two campaigns—one where behavior change began to decay, and another where the change held steady (for at least the last 11 or so months, fingers crossed!). In the right-hand example, developers were inadvertently logging PII to an observability tool. Clear, timely guidance explained what went wrong and exactly how to fix it. The result was 60% compliance in the first two months—and full compliance thereafter. Our strong hunch is that the difference wasn’t volume or repetition, but relevance and clarity. That said, we’re looking across all campaigns and will no doubt have more to say on this topic.

It’s not always obvious why one campaign sticks and another fades. That’s why monitoring behavior over time matters. By tracking decay intervals, teams can spot when performance drops below an acceptable threshold and intervene before risky habits fully resurface.

Behavior change isn’t a one-time event—it’s a system. Measure how long improvements last, watch for decay, and be ready to step in when needed. That’s how security programs move from short-term wins to durable, lasting impact.

Check us out tomorrow for a look at AI swashbucklers (it’s really just about uploading content to generative AI, but we wanted to use the word “swashbucklers”).

Here’s our down payment toward delivering phishing on autopilot

The TL;DR

  • Recurring campaigns are a down payment on our vision: phishing on autopilot
  • You can now automate recurring campaigns for up to a year
  • Choose your duration, cadence, and delivery style
  • Target employees by role or risk with relevant templates
  • Set it and forget it, so you can focus on the strategic stuff

Our goal is to make Fable delightful to use, and one way to do that is by making our phishing simulation campaigns drop-dead simple to create and run. Internally, we’re calling it “phishing autopilot,” and the goal is to make phishing campaigns as set-it-and-forget-it as possible, so you can focus on the results, learnings, and more strategic stuff. Here’s a down payment on that promise: recurring campaign creation.

With Fable’s recurring campaigns, you can plan an entire year of relevant, role- and risk-targeted simulations in minutes. Instead of sending one-offs, you set the duration, frequency, and delivery style, and Fable will schedule dozens of unique simulations per employee automatically. You can set them to run monthly, biweekly, or weekly – dripped out or delivered all at once – and the program will run continuously without manual work.

Recurring campaigns let you target employees based on real risk. You can create multiple cohorts in a single campaign, ranked by priority, so each employee only receives the most relevant simulation. Those cohorts can be powered by Fable’s human risk data – like missing MFA or risky AI usage – or by custom criteria you define. Each group gets phishing scenarios that actually match how they work, the tools they use, and where they may be exposed.

Every simulation stays fresh by design. Fable makes sure each employee receives a new template every time, so there’s no repetition and no training fatigue. Set it once, trust it to run, and focus on reducing human risk instead of managing campaigns. This is just the start, but we hope it’s a meaningful step in the right direction toward phishing nirvana.

Here’s our awkward breach playbook

The TL;DR

  • OK, deep breath
  • The Mixpanel breach included some Pornhub user data
  • Cybercriminal group ShinyHunters attempted to extort Pornhub
  • This is a reminder that companies don’t need to be breached directly to be at risk
  • We included some advice for employees in this free, downloadable, 2-min. video
  • Security teams should also stay alert for signs that internal users may be targets of extortion

Ok. Deep breath as we head into the holidays.

There’s no great way to say this, but there was a recent extortion attempt against Pornhub involving data attackers obtained through a breach at Mixpanel. Beyond all the embarrassing fallout, it’s a reminder that companies don’t need to be directly hacked for sensitive data to end up in the wrong hands.

Here’s what went down: A hacking group known as Scattered Lapsus$ Hunters exploited a breach at Mixpanel, a widely used analytics provider, to access user data from multiple organizations, including Pornhub, OpenAI, and others. It’s a growing pattern in modern attacks: adversaries go after shared tools and services, then use the data they find to pressure or impersonate downstream targets.

For people, the risk isn’t abstract. Exposed information can include email addresses, locations, and detailed activity logs (in this case, the particular video someone watched—ai ai ai!). That data can be used to craft convincing phishing messages, impersonate trusted services, or attempt extortion by referencing private behavior. For organizations, the impact extends beyond the initial breach. Once attackers have this context, they often target employees directly, hoping urgency or embarrassment will prompt a quick response.

What should security teams do? First off, small actions make a big difference. Through your human risk program, warn people to keep an eye out for unexpected emails, especially those that reference account activity, subscriptions, or alleged security issues. They should pause before responding to unusual requests and verify them through official channels. They should never reuse passwords across sites, should update impacted passwords, and should enable strong multi-factor authentication wherever possible, at work and at home. Also, they should try to decouple private behavior from identifying information as much as possible. And if they become the victim of an extortion attempt by a bad actor claiming to have access to sensitive data, they should escalate the issue to the security team.

Beyond messaging to employees, security teams should stay alert for signs that internal users may be under pressure. Extortion attempts don’t always show up as external attacks—they can manifest as unusual behavior from otherwise trusted employees. Sudden requests for access, attempts to bypass controls, rushed approvals, or deviations from normal workflows can all be indicators that someone is being coerced. This isn’t about suspicion or blame, but recognizing that attackers increasingly target people directly, and ensuring there are clear, safe paths for employees to ask for help before a bad situation escalates.

To make this easier, Fable has created a short, 2-minute video that security teams can share internally to raise awareness about this type of third-party breach and the follow-on risks that come with it. It’s designed to be clear, practical, and non-alarmist, helping employees recognize what’s happening and what to do next. You can download and share the video for free below—and take a simple step toward reducing human risk before attackers have a chance to exploit it.

Human risk isn’t evenly distributed. The secret’s in cohort analysis.

Day 6 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Human risk isn’t uniform across your organization
  • Aggregate metrics hide where risk is concentrated
  • Cohort-based analysis unveils risk in groups, like role, access, or behavior
  • This analysis helps you target and reduce risk more efficiently

Human risk isn’t evenly distributed across your organization. Different roles, access levels, and behavior patterns shape how people interact with security controls and threats. Treating employees as a monolith can mask meaningful differences in exposure and behavior, making it harder to target interventions where they are most needed.

Cohort-based analysis addresses this gap by grouping employees based on shared characteristics such as function, department, geography, tenure, system access or privileges, or observed behaviors. These cohorts provide a clearer lens for evaluating campaign performance and understanding where interventions are working, where they are stalling, and where you have concentrated risk.

By slicing performance data by cohort, security teams can move beyond aggregate metrics and identify patterns that would otherwise be hidden. For example, a phishing campaign may appear effective at the organizational level while performing poorly within a specific group. In this simple sample analysis, a VIP cohort clicked on phishing messages at more than twice the rate of other functional groups, highlighting a risk concentration that would have been easy to miss in overall results.
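The mechanics of this kind of breakdown are simple; here’s a small Python sketch using invented cohort names and click data (illustrative only, not the sample analysis from the report):

```python
from collections import defaultdict

def click_rate_by_cohort(events):
    """events: list of (cohort, clicked) tuples from a phishing simulation.
    Returns {cohort: click rate} so risk concentrations stand out."""
    sent = defaultdict(int)
    clicked = defaultdict(int)
    for cohort, did_click in events:
        sent[cohort] += 1
        clicked[cohort] += did_click
    return {c: clicked[c] / sent[c] for c in sent}

# Hypothetical results: the aggregate looks fine, but one cohort runs hot.
events = (
    [("vip", True)] * 4 + [("vip", False)] * 6                   # 40% click rate
    + [("engineering", True)] * 1 + [("engineering", False)] * 9  # 10% click rate
)
rates = click_rate_by_cohort(events)
overall = sum(clicked for _, clicked in events) / len(events)     # 25% overall
```

The aggregate 25% masks a cohort clicking at four times the rate of another; that concentration is exactly what the slice exposes.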

This level of insight lets you take more precise action. Instead of broad, one-size-fits-all follow-ups, security teams can tailor training, reinforcement, and controls to the cohorts that need them most. As human risk programs mature, cohort-based analysis becomes essential for prioritization, precision, and meaningful risk reduction.

Check us out tomorrow for a look at behavior decay (it’s not as gruesome as it sounds!).

Like MTTR, TTBC is everything

Day 5 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Behavior change alone doesn’t fully measure human risk reduction
  • Time-to-behavior change (TTBC) captures how long exposure lasts
  • TTBC mirrors MTTR by focusing on exposure windows, not just outcomes
  • Faster action matters more than eventual participation
  • Measuring speed shifts security programs from awareness to action

Behavior change is a critical indicator of human risk reduction, but it is incomplete without a companion metric: time-to-behavior change (TTBC). Modeled after mean time to remediation (MTTR), TTBC measures how long it takes for behavior to shift from intent to action. Because behavior change is typically assessed across a population, TTBC should be anchored to a meaningful threshold, such as the time required for 75 percent of a cohort to complete a desired action. Without a time dimension, behavior change becomes a static outcome, obscuring how long systems or data remain exposed to risk.

Conceptually, TTBC is similar to MTTR in security operations. Both metrics focus on reducing exposure windows rather than simply documenting outcomes. TTBC can be expressed as an absolute duration (e.g., 8 days) or as a relative measure when benchmarked against a control group (e.g., 20 percent of the control group’s duration). In either form, the metric provides a clearer signal of how quickly an organization can move from intent to action.
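The absolute form is easy to compute if you log the day each person completed the action. A minimal sketch, using a made-up cohort (the data below is hypothetical, not from the report):

```python
def ttbc(action_days, threshold=0.75):
    """Time-to-behavior change: day by which `threshold` of the cohort acted.

    action_days: day each person completed the action, or None if they
    never did. Returns the day the cohort crossed the threshold, or
    None if it never crossed it.
    """
    n = len(action_days)
    done = sorted(d for d in action_days if d is not None)
    needed = threshold * n
    count = 0
    for day in done:
        count += 1
        if count >= needed:
            return day
    return None

# Hypothetical cohort of 8: six update quickly, one is slow, one never acts.
days = [1, 2, 3, 5, 7, 8, 30, None]
print(ttbc(days))  # prints 8: the day 75% (6 of 8) had completed the update
```

Note that the laggard at day 30 and the non-responder don’t move the metric; TTBC deliberately measures when the bulk of the exposure window closed, not when the last holdout acted.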

The importance of TTBC lies in its direct connection to real-world risk. Human risk is not defined by whether people eventually do the right thing, but by how long systems, data, and workflows remain vulnerable in the meantime. Each additional day between awareness and action extends the exposure window. Measuring TTBC shifts the focus from engagement metrics to the speed and effectiveness of action.

Consider the scenario from yesterday: a security team asks employees to update their device OS software to reduce exposure to known vulnerabilities—a relevant scenario in a BYOD or mixed environment. The campaign begins with a short briefing video, followed by targeted Slack nudges to those who have not yet acted. By the end of week two, 75 percent of the cohort updated their devices. By week five, participation leveled off at 99 percent. The final outcome matters, but the speed at which the majority acts is what meaningfully reduces risk.

For security leaders, TTBC offers a more operational lens on human risk management. It connects behavioral programs directly to exposure reduction and provides a way to compare interventions based on how quickly they drive action. As organizations mature beyond awareness metrics, time-to-behavior change should become a standard measure of whether human risk programs are actually working.

Up tomorrow: why we segment employee populations into cohorts for more precise targeting and analysis.