Stop sending security training to everyone!

TL;DR—

Part two of a six-part blog series on the must-ask questions to answer when creating net-new awareness training.

  • Always start with “Why this training, now?”
  • Possible answers include:
    • Trending headline response
    • New threat intelligence or security research
    • Risky behavior patterns
    • Recent security incidents or near-misses
  • See “Five must-ask questions for security training that changes employee behavior” for more questions Fable Security asks our clients before creating short-yet-impactful briefings!

Stopping to ask “why?” can actually shorten your time to effective security training.

While this question may sound obvious, “Why this, now?” is the question we most often ask our clients after receiving a custom security training request.

Depending on how clients answer, they sometimes don’t actually need a training course. 

Instead, a reassuring “we’re covered!” nudge to employees panicking over the latest trending threat might work just as well as a custom briefing.

Other times, there’s no action that a specific employee cohort could take to mitigate the relevant threat. 

(For example, how often are your customer service staff patching NetScaler gateways, despite the breathless headlines about CitrixBleed 2 exploitations?)

And, we can sometimes adapt pre-existing security awareness briefings to discuss specific attacks that reuse common tricks with a new coat of paint. 

The “EvilAI” campaign, for instance, reuses the same infrastructure and general attack pattern as other malvertising and SEO poisoning campaigns; its lures are just reskinned for anyone looking for a Gen AI-related productivity app.

How your “why” changes your security training approach.

So, what’s your reason to want a new security awareness briefing or training module? Did you:

  • See a headline somewhere?
    • If so, did you want to create a briefing to proactively warn your users before attackers try the technique on your employees?
    • Or, did you want to send out a notice to reassure your executives and employees that your organization is currently protected against the threat?
  • Read a threat intel report that made your stomach drop?
    • If so, consider what specifically your general employees could do to spot a new phishing lure or threat, versus what only your IT administrators or security personnel need to know.
  • Noticed a risky behavior pattern?
    • If so, is that pattern currently trending upwards?
    • Or, are you being proactive to keep the behavior from getting worse?
  • Recently dealt with an internal security incident or near-miss?
    • If so, how many details could you include to reassure the recipients that it’s handled, while keeping the briefing realistic to avoid future incidents?
    • Extra credit if you can include screenshots of the phishing lure, malicious pop-up, or any other artifact someone might see on the front end of an attack!

If this training request was prompted by an external blog or report, we’ll usually ask to see it. Not because we want to copy it, of course, but because it helps anchor the training in reality.

After all, people are much more likely to change their behavior when faced with concrete evidence of actual impacts, rather than hypothetical “this could happen” bad vibes.

We’re diving deep into all five questions we ask our clients and why, so sign up to get each blog as it comes out.

(And, if you want more ideas on fostering a positive, employee-friendly security culture, check out page 17 of “Modern Human Risk Management for Dummies”.)


Five must-ask questions for security training that changes employee behavior

TL;DR—

  • Spinning up security awareness training ideas is easy; packaging them to change behavior—not check boxes—is hard.
  • To create impactful micro-trainings that change user behavior, you must answer these five simple questions:
    • Why this training, now?
    • Who are you trying to reach? (Hint: not everyone!)
    • What can people do?
    • Are you worried about an attack or a behavior?
    • What will people see?

Build security answers—not more questions or fear.

Most security teams don’t struggle with finding awareness training topics. 

After all, there’s no shortage of scary headlines, threat intel write-ups, or “everyone should know this” moments in our daily news feeds, to say nothing of what you’re seeing on the backend during incidents or what you need to cover for compliance.

The harder part is turning all that noise into a single briefing or communication that this unique group of people understands, instead of dismissing or panicking over.

Impactful security training relies on these 5 questions.

Here at Fable Security, when our clients request custom briefings, we slow things down and ask these five questions. Not to be difficult—but because these answers shape everything from context and tone, to examples and screenshots. 

  1. Why do you actually need to produce this training or send out this notice right now?
  2. Who specifically needs to see it?
  3. What do you want people to do differently after receiving your training?
  4. Are you worried about this specific attack, or this type of attack?
  5. What would someone see or experience on the front end of this attack?

Miss answering even one of these questions, and your well-intentioned user awareness training will just turn into background radiation instead of changing employee behavior.

Over the next few weeks, we’ll be releasing deep dives into each of these questions—so sign up to get each as they’re released.

For now, though, take a deep breath and ask yourself: Why this training, now, to these people?

(And, for ten more questions to ask yourself when determining the value of your human risk management program, turn to page 25 of “Modern Human Risk Management for Dummies”.)

Emerging threat: Facebook ads push fake Windows 11 update to steal passwords, crypto

TL;DR—

  • Attackers are buying Facebook ads to promote a “0$” fake Windows 11 Pro license download that—if run—steals browser-saved passwords, session tokens, and cryptowallet data.
  • These ads and the fake Microsoft landing page are especially well made, leveraging:
    • Trust-building security language
    • Increased urgency, and 
    • Realistic domains that mimic Microsoft’s naming conventions.
  • To avoid falling for this and similar phishing attacks, always download updates from official sources and use an adblocker.
  • Check out “One ish, two ish: How to prevent modern phishing” for more about malvertising lures and other social engineering attacks, and scroll to the bottom for a short video briefing you can download and share with your employees!

Real Facebook ads to fake Windows page to malicious install

Security researchers at Malwarebytes Labs uncovered a new “malvertising” attack—that is, online paid ads that spread malware instead of Etsy shop links—that uses real Facebook ads to promote a “0$” Windows 11 Pro update.

When victims click the link from a personal or work device, they’ll reach an extremely realistic (but fake!) Microsoft page.

(courtesy of Malwarebytes Labs)

Victims’ only two clues that the page isn’t correct are:

  1. A domain that follows Microsoft convention (“25h2” for the second half of 2025, for example), but isn’t actually the Microsoft downloader page, including:
    1. ms-25h2-download[.]pro
    2. ms-25h2-update[.]pro
    3. ms25h2-download[.]pro
    4. ms25h2-update[.]pro

  2. If they do download the package linked on the “Download Now” button, it’s actually coming from GitHub—not Microsoft!

The package’s installer checks for security research tools—immediately stopping if it detects any—then deploys information-stealing malware to take the victim’s:

  • Logins saved in the browser; 

  • Cryptocurrency wallet files; and 

  • Session cookies, which can be used to enter a victim’s personal or corporate cloud accounts later.

Why these Facebook ad lures work

(courtesy of Malwarebytes Labs)

At first glance, there are no real red flags… until you look a little closer.

  • Compromised accounts: Notice that these are real Facebook accounts promoting the Windows 11 Pro license upgrade. On the surface, this increases the legitimacy of the lure… however, neither a university nor a saloon would typically promote technology upgrades.

  • Security-based packaging: We’re seeing more and more attackers mixing security-based language into their lures to encourage victims to trust the lure. For example, the university-based ad has the phrases:
    • “Protect and Secure your PC” 
    • “No Data Loss”

  • Urgency: A very common advertising—and social engineering!—tactic is urgency: the more someone can make you think you have to act now, the less likely you’ll evaluate whether you should take an action. For example:
    • The university-based lure has phrases like “Don’t lose your files” (to scare you into downloading right away) and “No Cost Today Only” (so you don’t wait).
    • The saloon-based lure says the offer is “For Presidents’ Day” (putting a natural timer on the alleged free upgrade).
    • Both lures promise a “quick” and “fast” upgrade.

Employees most at risk of falling for this (and other) malvertising campaigns

This specific campaign’s domains and file hashes are relatively simple to block and set detections for, all things considered.
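As an illustration of how simple that blocking can be, here’s a hedged sketch of a domain check. The regex is our assumption generalizing the four observed domains, not an official indicator list; tune it before using it in a real blocklist.

```python
import re

# Matches the observed lookalike pattern: "ms", optional hyphen, a Windows
# release tag like "25h2", then "-download" or "-update", on a .pro TLD.
LOOKALIKE = re.compile(r"^ms-?\d{2}h[12]-(download|update)\.pro$")

def is_suspicious(domain: str) -> bool:
    """Flag domains that mimic Microsoft's release-naming convention."""
    return bool(LOOKALIKE.match(domain.lower()))

for d in ["ms-25h2-download.pro", "ms25h2-update.pro", "microsoft.com"]:
    print(d, is_suspicious(d))
```

A real detection would also pull in the published file hashes and newly registered lookalike domains as they appear, since the campaign rotates its infrastructure.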

However, this attack exemplifies an increase in malvertising campaign lure experimentation across multiple platforms.

Its technical sophistication—from evading research tools, to leveraging trusted distribution applications, to Microsoft-influenced domain masquerade attempts—means that criminals have invested in this campaign’s toolbox, and will very likely reuse this strategy with different lures, formats, and malware configurations.

With that in mind, the types of employees most at risk of falling for this specific attack (and ones like it) include:

  • Users on Windows OS endpoints, specifically for this attack; 

  • Employees who don’t use password managers and / or store credentials in browsers or online password keepers; 

  • Individuals who do not have ad blockers installed and have visited Facebook; and

  • People who have cryptocurrency wallets (and have visited crypto-related websites during work hours)—again, for this campaign, though the format can be applied for more corporate-related secret harvesting.

How to avoid buying into the fake Windows 11 update and similar malvertising messages

  • Only download updates from official sources! As good as the downloader page looked, it’s not real.

  • Use an adblocker. Again, criminals like to use paid advertisements online so their malware reaches those who are most likely to click it. If you don’t see any online ads, then you won’t see their malicious lures, either.

  • Don’t save logins in your browser, and use a password manager instead wherever possible.

  • Double-check the promoting profile. Criminals love to steal real companies’ profiles and advertising budgets to spread their malware. If it wouldn’t make sense for that sort of organization to promote the alleged product or service, then it’s probably bad! 

To learn more about malvertising attacks, check out Fable Security’s free ebook, “One ish, two ish: How to prevent modern phishing”—no email required!

We wrote the book on modern human risk management

The TL;DR

  • Cybersecurity has modernized nearly everywhere, except in human risk.
  • We wrote this book to set the bar for modern human risk management.
  • Programs must be data-driven, targeted, timely, outcomes-focused, and enterprise-grade.
  • Modern human risk management delivers: employee engagement, fewer incidents, fast threat response, and metrics that tie behavior change directly to business impact.

Over the past decade, cybersecurity has grown up. We’ve taken advantage of AI and automation to make enormous strides in malware detection, vulnerability management, secure software development, and more. Engineers now score risk continuously, automate remediation, and harden systems at scale. But there is one attack surface that largely remains untouched: people. While organizations fortify software and infrastructure, they continue to manage human risk with static training and phishing simulations that feel like they’re from the 1990s.

We wrote Modern Human Risk Management for Dummies to close that gap. The book treats human risk as a first-class security discipline, not a side program. It explains how AI-driven threats have reshaped the human attack surface, why traditional awareness efforts fail to change behavior, and what security teams must do differently if they want to reduce risk rather than merely count phishing clicks and training completions.

The book centers on five non-negotiables in modern human risk management: data-driven decision-making, highly targeted interventions, timely delivery, outcomes-focused measurement, and enterprise-grade execution. Instead of broadcasting generic content, security teams need to respond to real behavioral signals and intervene with precision as soon as they detect risk, meeting people in the tools they already use. Teams that follow these principles see the difference quickly: employee engagement, fewer incidents, timely threat response, and metrics that tie behavior change directly to business impact.

We wrote this book for practitioners—CISOs, GRC leaders, and security awareness teams—who understand the threat landscape and want something better than checkbox programs. If you’re ready to bring the human layer into the modern security stack and turn behavior from a chronic liability into a measurable control, this book is a great place to start.

Download the ebook.

Get your copy.

If you’d like risk-based briefings and nudges that are hyper-targeted and customized to your organization, try the Fable platform.

Risk-based targeting isn’t role-based targeting (and the difference matters)

The TL;DR

  • Most “risk-based targeting” is really just role-based targeting with assumed risk.
  • True risk-based targeting responds to observed behavior.
  • Security teams have a finite attention budget, and wasted training erodes impact.
  • Targeting the few who actually cause risk drives better outcomes and trust.

A hot topic in human risk management is risk-based targeting. Everyone knows one-size-fits-all security training is yesteryear, and there’s even a fair body of evidence that it may have the opposite of its intended effect. Lots of vendors claim to target risk, but few actually do it. What they really mean is role-based targeting.

To be clear, role-based training is a good thing. It shows employees what “good” looks like at their company, and—delivered in a relevant and specific way—serves as an excellent training starting point. Lots of our customers brief, say, the finance team on an emerging social engineering threat targeting them. Or deploy a particular type of phishing simulation to just developers, based on the tools they already know. But if your human risk management vendor tells you this is “risk-based targeting,” I’d say what they’re really talking about is just role-based targeting with assumed risk layered on top—not based on actual observed risk. The distinction may sound academic, but in practice it has real consequences for effectiveness, trust, and attention.

Here’s an example of assumed risk: engineers receive training on securing API keys or following cloud storage best practices. These are reasonable guesses, and they’re not wrong. Any solid security content library should absolutely include this material. The problem is that the targeting itself is static. It’s driven by who someone is in the org chart, not by what they’ve actually done. Risk is inferred, not observed. And the bigger question: how many developers do you know who can tolerate training on every potential topic they might encounter before training fatigue sets in?

Security awareness leaders know the truth: they have a finite amount of attention to make use of. Every unnecessary alert, notification, or, yes—training module—spends just a little bit of that budget. So, anything they send better pack a punch.

At one customer, a financial technology company, the security team detected a specific data-handling behavior: their Splunk instance was showing PII violations. They traced them back to Datadog, and then to a bad parser that about 150 of their nearly 1,000-person engineering team was using. Instead of broadcasting a generic warning to the entire engineering organization, they targeted the 150 with a 90-second Fable video briefing. It was crisp, to the point, and highly specific: it named the tools, named the violation, and gave a precise call-to-action.

The result: they cut those violations by 60% within a month, and then to 0% in the months following, with zero recidivism. The other benefit? All the people who weren’t logging PII didn’t get the briefing. The company interrupted fewer people, preserving their attention for issues that were genuinely relevant to them.
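The detection side of that story can be sketched in a few lines. This is a hedged illustration only; the patterns and log fields below are our assumptions, not the customer’s actual Splunk or Datadog configuration.

```python
import re

# Illustrative PII patterns; a real detection pipeline would use vetted,
# validated patterns rather than bare regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def pii_violations(log_lines):
    """Return (line_number, pii_type) pairs for lines containing cleartext PII."""
    hits = []
    for i, line in enumerate(log_lines, start=1):
        for kind, pattern in PII_PATTERNS.items():
            if pattern.search(line):
                hits.append((i, kind))
    return hits

logs = [
    "request ok user_id=1842",
    "parser output: contact=jane.doe@example.com",  # the "bad parser" case
    "batch done in 1.2s",
]
print(pii_violations(logs))  # [(2, 'email')]
```

Once a check like this fires, the interesting question isn’t the regex; it’s who owns the offending code path, which is exactly the cohort that got the briefing.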

Also note the qualitative difference in how these messages land. Sending content about a risk someone might encounter (“Make sure you protect personal data”) feels generic and easy to tune out, especially if it doesn’t map to anything concrete in the recipient’s day-to-day work. By contrast, content that reflects an observed behavior (“You inadvertently logged sensitive data in cleartext to Datadog”) is specific, credible, and hard to ignore. It moves security guidance from the abstract into the real world, where learning actually sticks.

Role-based training is valuable, and there’s a place for it in your human risk management content line-up. It gives employees that “wanted poster,” so they’re reminded what behaviors to steer clear of, but true risk-based targeting starts with behavior, not assumptions. When we anchor targeting in what’s actually happening in the environment, we respect people’s time, deliver highly-specific guidance, increase the impact of our interventions, and build trust that security messages are sent for a reason. In a world where attention is scarce, that makes all the difference.

Beware Microsoft 365 secure authentication requests

The TL;DR

  • Attackers use OAuth “device code” phishing to trick victims into approving unauthorized access to their real Microsoft accounts.
  • The attack uses a real Microsoft login page as victims “re-authorize” their session… but approve the attacker’s session, instead.
  • Attackers can then do and see everything the victim is allowed to do and see—leading to sensitive mailbox access, proprietary data theft, and business email compromise.
  • Urge your people to never approve logins they didn’t personally ask for!
  • Scroll down for a free, 2-minute Fable video briefing you can use.

Threat actors can bypass passwords and multi-factor authentication (MFA) controls to access Microsoft 365 accounts for future attacks through the popular OAuth “device code” phishing technique.

Instead of stealing credentials, OAuth device code phishing lures trick their victims into approving attackers’ access using legitimate Microsoft login pages.

For this lure, there’s no bad grammar or strange URLs your employees can spot: just an urgent and unexpected “reauthorization” request that innocently displays the real login page… 

… granting an unseen threat actor access to the victim’s Microsoft 365 account for as long as the victim doesn’t need to log in again.

Here’s how OAuth device code phishing lures generally work 

  1. Attackers send a phishing message asking someone to enter a short code – a one-time password (OTP) – at a real Microsoft-based URL, because they supposedly need to “reauthenticate” their current session.
    1. Some attacks, for example, used the legitimate login page of microsoft.com/devicelogin
  2. Instead of the OTP being used for their personal session, victims are actually authorizing the attacker’s access.
    1. The system is only supposed to grant these tokens after an employee puts in their username, password, or other credentials to guarantee the user’s identity and authorization. 
    2. However, attackers manipulate the system so the victim re-approves the attacker’s session instead of their own. The system then assumes this second session also used the victim’s credentials.
  3. The attacker can then access the person’s Microsoft account–including email, contacts, and proprietary business information–until the reauthorized session token expires.
    1. Remember: the system thinks that attackers are actually the authorized user, since they have a “real” session token. So, the attacker can look at and do anything the victim is allowed to see or do!
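To make the token mix-up concrete, here’s a toy simulation of the device code grant. It’s a simplified model loosely based on RFC 8628, not Microsoft’s actual implementation: the attacker starts the flow, the victim approves the attacker’s code, and the token, bound to the victim’s identity, is issued to the attacker’s polling session.

```python
import secrets

class AuthServer:
    """Toy model of an OAuth device code grant (simplified, illustrative only)."""
    def __init__(self):
        self.pending = {}    # user_code -> {"approved_as": identity or None}
        self.by_device = {}  # device_code -> user_code

    def start_device_flow(self):
        # Step 1: the *attacker* requests a device code pair.
        user_code = secrets.token_hex(4).upper()
        device_code = secrets.token_hex(16)
        self.pending[user_code] = {"approved_as": None}
        self.by_device[device_code] = user_code
        return user_code, device_code

    def approve(self, user_code, authenticated_user):
        # Step 2: the phished *victim* logs in on the real login page and
        # enters the code, approving a session they never started.
        self.pending[user_code]["approved_as"] = authenticated_user

    def poll_token(self, device_code):
        # Step 3: the attacker polls and receives a token bound to the victim.
        user_code = self.by_device[device_code]
        who = self.pending[user_code]["approved_as"]
        return {"access_token_for": who} if who else None

server = AuthServer()
user_code, device_code = server.start_device_flow()  # attacker initiates
server.approve(user_code, "victim@example.com")      # victim "reauthenticates"
print(server.poll_token(device_code))                # attacker gets the token
```

Nothing in the flow is forged: the login page, the code, and the token are all real, which is exactly why the lure is so hard to spot.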

Security researchers have noted a rise in these campaigns since their initial appearance during the COVID-19 pandemic in 2020-2021, with activity ramping up again in late 2025:

2020-2021: Researchers first see the modern OAuth device code phishing lure used in “high-profile [business email compromise] BEC incidents” in “sophisticated phishing campaigns,” often using COVID-related messages to increase legitimacy and urgency. (Sophos)

Microsoft device login page used in an OAuth phishing attack / Secureworks

February 2025: Microsoft discusses how attackers targeted specific employees with text message lures (“smishing”) over Signal, WhatsApp, and Telegram messaging platforms to encourage victims to authorize the attacker’s session on Microsoft 365 accounts. (Microsoft)

An example of an early “smishing” lure and social engineering attempt used as part of a targeted OAuth attack / Microsoft
An early example of an OTP used as part of an OAuth phishing attack / Microsoft

May 2025: Researchers continue to demonstrate the wide range of OAuth device code phishing attacks available, including setting up proofs of concept (PoCs) of how the attack technically works across lure formats. (Logpoint)

A researcher’s demonstration of how Microsoft redirects to a legitimate-appearing permissions request of an “unverified” application, as part of an OAuth device code phishing lure attack / Logpoint

November 2025: Cloud security researchers see more OAuth device code bypass attempts in their own security product across their customer base, with 98 suspicious successful authentication attempts, six malicious device registrations, and seven Windows device registrations following device code authentications in the last three months. (Wiz)

December 2025: Email security researchers detail rising use of the OAuth device code phishing lure by both nation-state and financially motivated threat actors, now that low-code / no-code versions of the attack are for sale on the dark web. One phishing email used a fake document about bonuses and benefits to encourage victims to click. (Proofpoint)

A phishing email used to trick victims into triggering the OTP for an OAuth session token theft / Proofpoint
A phishing lure landing page, redirecting victims to a legitimate Microsoft authentication page so the victim can use the real OTP to authenticate the attacker’s session / Proofpoint

How to prevent initial access via OAuth device code phishing lures

In an OAuth attack, there’s no fake login page and there are no obvious red flags you can train your teams to watch for: just a convincingly urgent request to “re-authorize” or “secure” their account. 

That’s why awareness and timing matter! Employees should never enter a device code unless they personally tried to log in moments before, and they should treat any unexpected code requests as phishing.

How Fable can help you right now

Here’s a super-short and free downloadable video showing exactly how this attack works, and how employees can watch out for it. We designed this briefing specifically to help anyone recognize this threat before it turns into a real incident. 

Download it, share it, and remind your team: Don’t approve logins you didn’t ask for!

Watch the briefing

And download for your own use below.

If you’d like risk-based briefings and nudges that are hyper-targeted and customized to your organization, try the Fable platform.

The hidden multiplier in human risk

Day 12 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Some risks travel together
  • Measuring the overlap—toxic combinations—lets you see heightened risk
  • Finding and fixing the toxic combinations helps you zap risk efficiently

Not all risk shows up in individualized packages. Sometimes two or more risks travel together, and when they do, they can create toxic combinations. 

We surface this effect in our latest human risk report, where we look at several risk combinations whose co-occurrence is higher than what you’d expect by chance. When the observed overlap divided by the overlap you’d expect if the risks were independent exceeds 1.0, that’s a toxic combination. 

Finding these patterns helps you suss out what risks to tackle first (and how). Money handlers who fall for phishing. Employees with no MFA and sensitive data access. IT admins who reuse passwords. None of these behaviors is rare. What matters is where they cluster.
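That ratio is quick to compute. Here’s a minimal sketch with synthetic numbers (invented for illustration, not figures from the report): under independence, the expected overlap of two risks is the product of their individual rates.

```python
def toxicity(n_total, n_a, n_b, n_both):
    """Observed co-occurrence divided by the overlap expected if the two
    risks were independent. Values above 1.0 suggest a toxic combination."""
    expected = n_a * n_b / n_total  # independence assumption
    return n_both / expected

# Synthetic example: 1,000 employees; 200 handle money, 150 fell for a
# phishing simulation. Independence predicts 200 * 150 / 1000 = 30 people
# in both groups; observing 75 gives a ratio of 2.5.
print(toxicity(1000, 200, 150, 75))  # 2.5
```

In practice you’d also want a significance check before acting, since small cohorts can produce large ratios by chance.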

Traditional security programs miss this because they treat each issue as a separate control gap. One fix here, another there. But eliminating a single weakness doesn’t help much if the surrounding conditions stay the same.

Real progress comes from prioritizing the combinations that multiply exposure. When teams address those first, they reduce risk faster, with less effort.

This concludes our 12 days of riskmas series.

Why one size doesn’t fit all

Day 11 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Generic security campaigns raise awareness but rarely change behavior
  • Campaigns tailored to role, access, and behavior perform dramatically better
  • Relevance and message precision drive action…and stickiness
  • Precision targeting also shortens time to risk reduction

Most security campaigns are built for everyone…and resonate with no one! 

Generic messaging might raise awareness, but it rarely changes behavior. Our human risk report makes the case for a sharper approach: precision targeting.

When you tailor your security campaigns to cohorts based on role, access level, or risky behavior, you get results. Targeted campaigns dramatically outperform general ones because they feel relevant. For example, when developers received a campaign highlighting an issue with PII in observability tools, they paid close attention. The intervention message used their name, mentioned the app, spoke about the specific problem, and told them how to avoid it in the future, all in less than two minutes. We believe that kind of relevance is what led to a 60% reduction in month one and 100% compliance thereafter.

Beyond getting people to take action, precision targeting also gets them to move fast, shortening the path to risk reduction. Instead of blanketing the entire organization with generic guidance, security teams can focus on the small set of people whose actions actually move the needle at a given moment—people with elevated access, repeated risky behavior, or direct exposure to critical systems.

Cohort insights show exactly who is struggling with which behaviors, allowing teams to intervene with specific, relevant guidance when it’s most likely to stick. No more guesswork.

In human risk management, precision targeting isn’t a nice-to-have. It’s the difference between activity and outcomes.

Check us out tomorrow as we deep-dive into fixing the highest-leverage risks first.

Vanity metrics are lying to you

Day 10 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Popular security metrics are easy to track but largely meaningless
  • Real risk is about people’s behavior—auth posture, data handling, etc.
  • Context matters—a phishing click isn’t equally risky for every employee
  • It’s not just about behavior change but also speed and durability

Phishing click rates? Training completions? Snooze-fest! 

These metrics are easy to collect and report on, but also a little embarrassing for any slightly self-aware security executive. That’s because they’re pretty much all noise. In our human risk report, the clearest signal is simple: what matters is risk—real behaviors that increase or reduce exposure.

Measuring human risk means tracking what people actually do. Do they reuse passwords? Do they upload sensitive data to unsanctioned tools? Do they report phishing attempts? And yes, do they click. But whether a click is terrible, simply bad, or meh has a lot to do with a person’s security posture. These measures—not annual training scores—tell you whether your organization has mitigated risk and is getting safer…or is just getting better at compliance theater.
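One way to see the context point is to score the same click differently depending on the clicker’s posture. The weights below are invented for illustration; they are not Fable’s scoring model.

```python
def click_risk(base=1.0, has_mfa=True, uses_password_manager=True,
               has_sensitive_access=False):
    """Weight a phishing click by the clicker's security posture.
    Weights are illustrative only."""
    score = base
    if not has_mfa:
        score *= 3.0   # no second factor: a stolen password is enough
    if not uses_password_manager:
        score *= 1.5   # browser-saved or reused credentials at stake
    if has_sensitive_access:
        score *= 2.0   # bigger blast radius if compromised
    return score

# Same click, very different risk:
print(click_risk())                                          # 1.0
print(click_risk(has_mfa=False, has_sensitive_access=True))  # 6.0
```

A score like this is only as good as the behavioral signals feeding it, which is exactly why the report argues for tracking what people actually do.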

Just as important is speed. How quickly do risky behaviors improve after an intervention? And do those improvements last? The report shows that behavior change isn’t binary. It happens over time, and it can decay just as easily as it improves if teams stop paying attention.

When organizations move beyond vanity metrics, priorities shift. Instead of chasing engagement, they focus on outcomes. Instead of asking “Did they finish the training?” they ask “Did the risk actually go down?” That’s the difference between measuring effort and measuring impact.

If you want durable security improvement, measure what matters: risk.

Come back in a few days for a look at targeting with precision.