Day 4 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)
The TL;DR
- Most human risk tools measure engagement, not behavior change
- Phishing failures and training completion ≠ reduced risk
- Behavior change must be verified, not inferred
- Fable validates outcomes using real security telemetry
- “Action completed” is the indicator that actually matters
Human risk vendors talk a big game about how they change behavior and reduce risk, but where’s the proof?
We’ve studied the reporting output of a number of human risk products, and the pattern is consistent: most focus on failure metrics from phishing simulations and participation or completion rates for security training. What’s missing is verification that the desired behavior actually changed. If you’re not observing what people do differently after the intervention, you’re measuring activity, not risk reduction.
Our customers use Fable to verify a number of behaviors, including security tool adoption, device update compliance, and generative AI policy adherence. They do this by integrating the systems that already hold the answer and having Fable validate the change against that telemetry. For example, data from Netskope would indicate whether someone had uploaded a document to an unsanctioned generative AI application and, depending on its configuration, whether that upload constituted a DLP violation.
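To make the verification step concrete, here is a minimal sketch of what that kind of check could look like. It assumes a generic export of cloud-app activity events rather than Netskope's actual API or schema; the field names, file name, and cutoff date are purely illustrative.

```python
# A minimal, hypothetical sketch: checking generative AI policy adherence from a
# generic export of cloud-app activity events. The field names (user, activity,
# app_category, app_sanctioned, timestamp) and the export file are illustrative,
# not Netskope's actual schema or API.
import json
from datetime import datetime

INTERVENTION_DATE = datetime(2024, 12, 1)  # assumed date the nudge went out

def violated_policy(events: list[dict], user: str) -> bool:
    """True if the user uploaded to an unsanctioned gen-AI app after the intervention."""
    for e in events:
        if (
            e["user"] == user
            and e["activity"] == "upload"
            and e["app_category"] == "generative_ai"
            and not e["app_sanctioned"]
            and datetime.fromisoformat(e["timestamp"]) >= INTERVENTION_DATE
        ):
            return True
    return False

with open("cloud_activity_export.json") as f:  # hypothetical export file
    events = json.load(f)

user = "alice@example.com"
status = "behavior verified" if not violated_policy(events, user) else "policy violation"
print(f"{user}: {status}")
```

The shape is what matters: the answer comes from what the telemetry says the person did after the intervention, not from whether they completed a module.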
Here’s a concrete example. One security team used Fable AI–generated video briefings followed by targeted nudges to drive OS update compliance. Most organizations would stop at reporting how many people watched the video or completed the training. With Fable, the metric was action completed: did the user actually update their device?
The result: 75% behavior change within the cohort in under two weeks, reaching 99% by week five.
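For the curious, here is a rough sketch of how an "action completed" rate like this could be computed from a device inventory export. The CSV columns, target version, and campaign dates are assumptions for illustration, not Fable's actual pipeline.

```python
# A rough sketch: computing an "action completed" rate for an OS-update campaign
# from a device inventory export. The CSV columns (user, os_version, updated_at),
# the target version, and the campaign start date are assumptions for illustration.
import csv
from datetime import date

TARGET_VERSION = (14, 2)            # assumed minimum compliant OS version
CAMPAIGN_START = date(2024, 11, 4)  # assumed date the video briefing went out

def parse_version(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

def weekly_completion(rows: list[dict], cohort: set[str], weeks: int = 5) -> dict[int, float]:
    """Cumulative % of the cohort whose devices reached the target version, by week."""
    first_compliant_week: dict[str, int] = {}
    for r in rows:
        if r["user"] in cohort and parse_version(r["os_version"]) >= TARGET_VERSION:
            week = max(1, (date.fromisoformat(r["updated_at"]) - CAMPAIGN_START).days // 7 + 1)
            prev = first_compliant_week.get(r["user"], week)
            first_compliant_week[r["user"]] = min(week, prev)
    return {
        w: round(100 * sum(1 for wk in first_compliant_week.values() if wk <= w) / len(cohort), 1)
        for w in range(1, weeks + 1)
    }

with open("device_inventory.csv") as f:  # hypothetical MDM export
    rows = list(csv.DictReader(f))

cohort = {r["user"] for r in rows}
print(weekly_completion(rows, cohort))   # e.g. {1: 40.0, 2: 75.0, ..., 5: 99.0}
```

Note the denominator: it is everyone in the cohort, and a user only counts once their device actually reports the target version. Watch rates and completion rates never enter the calculation.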
Check in tomorrow (day 5) as we dig deeper into time-to-behavior-change and why it matters for closing the exposure window.