Meet the Fable team at FS-ISAC 2026 Americas Spring Summit

From annual training to real impact: Pennymac’s modern approach to security awareness

The TL;DR

  • Pennymac moved beyond annual training to an ongoing security behavior program
  • Fable delivers role-specific messaging and interventions based on real user risk
  • Short-form video dramatically outperforms traditional email training
  • Increased video engagement correlated with faster OS patching and reduced vulnerabilities
  • Pennymac was able to close the loop, measuring whether security behavior changed

As social engineering threats evolve and grow more convincing with AI, traditional security awareness training is no longer enough. In this customer testimonial video, Pennymac CISO Cyrus Tibbs explains why annual refresher courses and generic email training fall short, and how his team uses Fable to deliver timely, role-specific security messaging that keeps pace with a rapidly changing threat landscape.

Cyrus describes a fundamental shift in how attackers operate: instead of breaking systems, they target people. That reality pushed Pennymac to rethink security training as an ongoing, behavioral program that understands individual risk, delivers relevant guidance in the moment, and measures whether behavior actually changes. Rather than relying on one-size-fits-all emails, the team adopted an approach closer to social media marketing: short, direct, actionable messages designed to drive engagement and measurable outcomes.

Using Fable, Pennymac automatically segments employees into cohorts based on role and observed behavior. These include money handlers, privileged infrastructure users, developers, and public-facing roles, each with distinct risk profiles and training needs. By eliminating guesswork around who receives which training, the security team ensures messaging is targeted, timely, and relevant, all without the manual toil.

The impact has been both immediate and measurable. A/B testing revealed dramatic differences in engagement between traditional email instructions and Fable’s AI-generated briefing videos, with employees consistently responding better to video. In one case study focused on OS patching, Pennymac integrated Fable with its vulnerability management system and tracked outcomes from video delivery through patch completion, finding a clear correlation between video engagement and reduced vulnerabilities.

Today, Fable has become Pennymac’s default platform for driving organizational change, not just security training. Cyrus notes that Fable’s automation and targeting capabilities free up significant staff time, while employees consistently respond positively to the short-form video format. The result is a security awareness program that scales with the business, adapts to real risk, and earns employee attention.

Genesys security is preparing employees for attacks of tomorrow

The TL;DR

  • The Genesys security team replaced checkbox training with modern, short-form briefings that more closely resemble social media content
  • Engagement increased immediately, even after failed simulations
  • Employees reporting suspicious activity rose by double digits
  • Upon a request from incident response, the team delivered custom, threat-specific training in about a day using Fable

Traditional security awareness training often feels like a checkbox—something employees rush through and quickly forget. In this customer testimonial video, the Genesys security team shares how they set out to change that dynamic, using Fable to deliver crisp, short-form training that mirrors the content employees already engage with on platforms like TikTok and Instagram Reels.

Featuring insights from Marlene Galvan, Portfolio Coordinator and Security Awareness Lead, and Jonathan Chow, CISO, the video explores what happened after Genesys moved away from traditional, long-form training and adopted a more modern, human approach to security awareness.

The shift in employee response was immediate. Engagement increased, and for the first time, employees began offering positive feedback, even after failing a simulation. Instead of frustration or embarrassment, many appreciated the briefings for being short, relevant, and delivered in a positive, non-punitive tone.

That engagement quickly translated into measurable results. Genesys saw double-digit percentage point increases in employees reporting suspicious activity—one of the clearest indicators of an effective security awareness program. Not long after launch, the incident response team proactively requested a targeted phishing training video for a specific threat. Using Fable, Genesys delivered a custom video in about a day, tailored precisely to the behavior they needed to change.

The video also highlights how Fable’s AI-powered platform enables rapid, flexible content creation, including company-specific details, custom graphics, and topic-focused messaging that feels relevant rather than generic. By prioritizing short, targeted content, Genesis found that security awareness became easier to absorb, and far more likely to stick. As the team puts it, Fable helps make the “medicine go down easier.”

Finally, Marlene and Jonathan emphasize the partnership itself as a key factor in their success. Rather than a traditional vendor relationship, the Fable team operates as an extension of Genesys’s internal crew, collaborating, exchanging ideas, and working toward a shared mission. The result is a security awareness program that doesn’t just reduce risk, but actively strengthens security culture across the organization.

Risk-based targeting isn’t role-based targeting (and the difference matters)

The TL;DR

  • Most “risk-based targeting” is really just role-based targeting with assumed risk.
  • True risk-based targeting responds to observed behavior.
  • Security teams have a finite attention budget, and wasted training erodes impact.
  • Targeting the few who actually cause risk drives better outcomes and trust.

A hot topic in human risk management is risk-based targeting. Everyone knows one-size-fits-all security training is a relic of yesteryear, and there’s even a fair body of evidence that it may have the opposite of its intended effect. Lots of vendors claim to target risk, but few actually do. What they really mean is role-based targeting.

To be clear, role-based training is a good thing. It shows employees what “good” looks like at their company, and—delivered in a relevant and specific way—serves as an excellent starting point for training. Lots of our customers brief, say, the finance team on an emerging social engineering threat targeting them. Or deploy a particular type of phishing simulation just to developers, based on the tools they already use. But if your human risk management vendor tells you this is “risk-based targeting,” I’d say what they’re really talking about is role-based targeting with assumed risk layered on top—not actual observed risk. The distinction may sound academic, but in practice it has real consequences for effectiveness, trust, and attention.

Here’s an example of assumed risk: engineers receive training on securing API keys or following cloud storage best practices. These are reasonable guesses, and they’re not wrong. Any solid security content library should absolutely include this material. The problem is that the targeting itself is static. It’s driven by who someone is in the org chart, not by what they’ve actually done. Risk is inferred, not observed. And here’s the bigger question: how many developers do you know who can tolerate training on every topic they might conceivably encounter before fatigue sets in?

Security awareness leaders know the truth: they have a finite amount of attention to work with. Every unnecessary alert, notification, or, yes—training module—spends a little bit of that budget. So anything they send had better pack a punch. At one customer, a financial technology company, the security team detected a specific data-handling behavior: their Splunk instance was showing PII violations. They traced them back to Datadog, and then to a bad parser, which about 150 of their nearly 1,000-person engineering team were using. Instead of broadcasting a generic warning to the entire engineering organization, they targeted those 150 with a 90-second Fable video briefing. It was crisp, to the point, and highly specific: it named the tools, named the violation, and gave a precise call-to-action. The result: those violations fell by 60% within a month, then to zero in the months that followed, with no recidivism. The other benefit? All the people who weren’t logging PII didn’t get the briefing. The company interrupted fewer people, preserving their attention for issues genuinely relevant to them.

Also note the qualitative difference in how these messages land. Sending content about a risk someone might encounter (“Make sure you protect personal data”) feels generic and easy to tune out, especially if it doesn’t map to anything concrete in the recipient’s day-to-day work. By contrast, content that reflects an observed behavior (“You inadvertently logged sensitive data in cleartext to Datadog”) is specific, credible, and hard to ignore. It moves security guidance from the abstract into the real world, where learning actually sticks.

Role-based training is valuable, and there’s a place for it in your human risk management content line-up. It gives employees that “wanted poster,” reminding them what behaviors to steer clear of. But true risk-based targeting starts with behavior, not assumptions. When we anchor targeting in what’s actually happening in the environment, we respect people’s time, deliver highly specific guidance, increase the impact of our interventions, and build trust that security messages are sent for a reason. In a world where attention is scarce, that makes all the difference.

Beware Microsoft 365 secure authentication requests

The TL;DR

  • Attackers use OAuth “device code” phishing to trick victims into approving unauthorized access to their real Microsoft accounts.
  • The attack uses a real Microsoft login page: victims believe they’re “re-authorizing” their own session… but approve the attacker’s session instead
  • Attackers can then do and see everything the victim is allowed to do and see—leading to sensitive mailbox access, proprietary data theft, and business email compromise.
  • Urge your people to never approve logins they didn’t personally ask for!
  • Scroll down for a free, 2-minute Fable video briefing you can use

Threat actors can bypass passwords and multi-factor authentication (MFA) controls to access Microsoft 365 accounts for future attacks through the popular OAuth “device code” phishing technique.

Instead of stealing credentials, OAuth device code phishing lures trick their victims into approving attackers’ access using legitimate Microsoft login pages.

For this lure, there’s no bad grammar or strange URLs your employees can spot: just an urgent and unexpected “reauthorization” request that innocently displays the real login page… 

… granting an unseen threat actor access to the victim’s Microsoft 365 account for as long as the victim doesn’t need to log in again.

Here’s how OAuth device code phishing lures generally work 

  1. Attackers send a phishing message asking someone to enter a short code – a one-time password (OTP) – at a real Microsoft URL, supposedly because they need to “reauthenticate” their current session.
    1. Some attacks, for example, used the legitimate login page of microsoft.com/devicelogin
  2. Instead of the OTP being used for their personal session, victims are actually authorizing the attacker’s access.
    1. The system is only supposed to grant these tokens after an employee puts in their username, password, or other credentials to guarantee the user’s identity and authorization. 
    2. However, attackers manipulate the system so the victim re-approves the attacker’s session instead of their own. The system then assumes this second session also used the victim’s credentials.
  3. The attacker can then access the person’s Microsoft account–including email, contacts, and proprietary business information–until the reauthorized session token expires.
    1. Remember: the system thinks that attackers are actually the authorized user, since they have a “real” session token. So, the attacker can look at and do anything the victim is allowed to see or do!
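The binding at the heart of the trick can be sketched in a few lines. This is a toy simulation, not Microsoft’s actual API—every class, method, and identifier below is illustrative. What it demonstrates is the key property of the device code flow: the access token is issued to whichever client requested the code, no matter who approves it.

```python
# Toy simulation of an OAuth-style device code flow (inspired by RFC 8628).
# All names here are hypothetical; the point is the binding, not the API.
import secrets

class AuthServer:
    def __init__(self):
        self._pending = {}   # user_code -> client_id that requested it
        self._approved = {}  # user_code -> user who approved it

    def request_device_code(self, client_id):
        """Step 1: a client (here, the attacker) requests a short code."""
        user_code = secrets.token_hex(4).upper()
        self._pending[user_code] = client_id
        return user_code

    def approve(self, user_code, user):
        """Step 2: a logged-in user enters the code on the real login page."""
        if user_code in self._pending:
            self._approved[user_code] = user

    def poll_token(self, client_id, user_code):
        """Step 3: the ORIGINAL requester polls and receives the token,
        which acts on behalf of whoever approved the code."""
        if self._approved.get(user_code) and self._pending.get(user_code) == client_id:
            return {"access_token": secrets.token_hex(16),
                    "acts_as": self._approved[user_code]}
        return None

server = AuthServer()
code = server.request_device_code(client_id="attacker-device")  # attacker starts the flow
server.approve(code, user="victim@example.com")                 # victim "reauthorizes"
token = server.poll_token("attacker-device", code)              # attacker receives the token
print(token["acts_as"])  # victim@example.com
```

The victim never hands over a password, and the login page really is legitimate—yet the token lands with the attacker, scoped to everything the victim can see and do.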

Security researchers have noted a rise in these campaigns since they first appeared during the COVID-19 pandemic in 2020-2021, with activity ramping up in late 2025:

2020-2021: Researchers first see the modern OAuth device code phishing lure used in “high-profile [business email compromise] BEC incidents” in “sophisticated phishing campaigns,” often using COVID-related messages to increase legitimacy and urgency. (Sophos)

Microsoft device login page used in an OAuth phishing attack / Secureworks

February 2025: Microsoft discusses how attackers targeted specific employees with text message lures (“smishing”) over Signal, WhatsApp, and Telegram messaging platforms to encourage victims to authorize the attacker’s session on Microsoft 365 accounts. (Microsoft)

An example of an early “smishing” lure and social engineering attempt used as part of a targeted OAuth attack / Microsoft
An early example of an OTP used as part of an OAuth phishing attack / Microsoft

May 2025: Researchers continue to demonstrate the wide range of OAuth device code phishing attacks available, including setting up proofs of concept (PoCs) of how the attack technically works across lure formats. (Logpoint)

A researcher’s demonstration of how Microsoft redirects to a legitimate-appearing permissions request of an “unverified” application, as part of an OAuth device code phishing lure attack / Logpoint

November 2025: Cloud security researchers see more OAuth device code bypass attempts in their own security product across their customer base–with 98 suspicious successful authentication attempts, six malicious device registrations, and seven Windows registrations after device code authentications in the last three months. (Wiz)

December 2025: Email security researchers detail rising use of the OAuth device code phishing lure by both nation-state and financially motivated threat actors, now that low-code / no-code versions of the attack are for sale on the dark web. One phishing email used a fake document about bonuses and benefits to encourage victims to click. (Proofpoint)

A phishing email used to trick victims into triggering the OTP for an OAuth session token theft / Proofpoint
A phishing lure landing page, redirecting victims to a legitimate Microsoft authentication page so the victim can use the real OTP to authenticate the attacker’s session / Proofpoint

How to prevent initial access via OAuth device code phishing lures

In an OAuth attack, there’s no fake login page and there are no obvious red flags you can train your teams to watch for: just a convincingly urgent request to “re-authorize” or “secure” their account.

That’s why awareness and timing matter! Employees should never enter a device code unless they personally tried to log in moments before, and they should treat any unexpected code request as phishing.

How Fable can help you right now

Here’s a super-short and free downloadable video showing exactly how this attack works, and how employees can watch out for it. We designed this briefing specifically to help anyone recognize this threat before it turns into a real incident. 

Download it, share it, and remind your team: Don’t approve logins you didn’t ask for!

Watch the briefing

And download for your own use below.

If you’d like risk-based briefings and nudges that are hyper-targeted and customized to your organization, try the Fable platform.

The hidden multiplier in human risk

Day 12 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Some risks travel together
  • Measuring the overlap—toxic combinations—lets you see heightened risk
  • Finding and fixing the toxic combinations helps you zap risk efficiently

Not all risk shows up in individualized packages. Sometimes two or more risks travel together, and when they do, they can create toxic combinations. 

We surface this effect in our latest human risk report, where we look at several risk combinations whose co-occurrence is higher than what you’d expect by chance. When the actual overlap divided by the expected overlap (assuming the risks were independent) exceeds 1.0, that’s a toxic combination.

Finding these patterns helps you suss out what risks to tackle first (and how). Money handlers who fall for phishing. Employees with no MFA and sensitive data access. IT admins who reuse passwords. None of these behaviors is rare. What matters is where they cluster.

Traditional security programs miss this because they treat each issue as a separate control gap. One fix here, another there. But eliminating a single weakness doesn’t help much if the surrounding conditions stay the same.

Real progress comes from prioritizing the combinations that multiply exposure. When teams address those first, they reduce risk faster, with less effort.

This concludes our 12 days of riskmas series.

Why one size doesn’t fit all

Day 11 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Generic security campaigns raise awareness but rarely change behavior
  • Campaigns tailored to role, access, and behavior perform dramatically better
  • Relevance and message precision drive action…and stickiness
  • Precision targeting also shortens time to risk reduction

Most security campaigns are built for everyone…and resonate with no one! 

Generic messaging might raise awareness, but it rarely changes behavior. Our human risk report makes the case for a sharper approach: precision targeting.

When you tailor your security campaigns to cohorts based on role, access level, or risky behavior, you get results. Targeted campaigns dramatically outperform general ones because they feel relevant. For example, when developers received a campaign highlighting an issue with PII in observability tools, they paid close attention. The intervention message used their name, mentioned the app, spoke to the specific problem, and told them how to avoid it in the future—all in less than two minutes. We believe that kind of relevance is what led to a 60% reduction in month one and 100% compliance thereafter.

Beyond getting people to take action, precision targeting also gets them to move fast, shortening the path to risk reduction. Instead of blanketing the entire organization with generic guidance, security teams can focus on the small set of people whose actions actually move the needle at a given moment—people with elevated access, repeated risky behavior, or direct exposure to critical systems.

Cohort insights show exactly who is struggling with which behaviors, allowing teams to intervene with specific, relevant guidance when it’s most likely to stick. No more guesswork.

In human risk management, precision targeting isn’t a nice-to-have. It’s the difference between activity and outcomes.

Check us out tomorrow as we deep-dive into fixing the highest-leverage risks first.

Vanity metrics are lying to you

Day 10 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Popular security metrics are easy to track but largely meaningless
  • Real risk is about people’s behavior—auth posture, data handling, etc.
  • Context matters—a phishing click isn’t equally risky for every employee
  • It’s not just about behavior change but also speed and durability

Phishing click rates? Training completions? Snooze-fest! 

These metrics are easy to collect and report on, but also a little embarrassing for any slightly self-aware security executive. That’s because they’re pretty much all noise. In our human risk report, the clearest signal is simple: what matters is risk—real behaviors that increase or reduce exposure.

Measuring human risk means tracking what people actually do. Do they reuse passwords? Do they upload sensitive data to unsanctioned tools? Do they report phishing attempts? And yes, do they click. But whether a click is terrible, simply bad, or meh has a lot to do with a person’s security posture. These measures—not annual training scores—tell you whether your organization has mitigated risk and is getting safer…or is just getting better at compliance theater.

Just as important is speed. How quickly do risky behaviors improve after an intervention? And do those improvements last? The report shows that behavior change isn’t binary. It happens over time, and it can decay just as easily as it improves if teams stop paying attention.

When organizations move beyond vanity metrics, priorities shift. Instead of chasing engagement, they focus on outcomes. Instead of asking “Did they finish the training?” they ask “Did the risk actually go down?” That’s the difference between measuring effort and measuring impact.

If you want durable security improvement, measure what matters: risk.

Come back in a few days for a look at targeting with precision.

Toxic combinations: where human risk multiplies

Day 9 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Risks that travel together are toxic combinations
  • Risk lift measures how often paired risks co-occur versus random chance
  • In one case, money handlers who fail phishing tests have a risk lift of nearly 2x
  • Targeting overlapping risks can deliver outsized security gains

There will be math. You’ve been warned.

Some risks are dangerous on their own. Others become even more hazardous when they collide. In our human risk report, we focus on toxic combinations: pairs of risky behaviors or exposures that occur together far more often than chance would predict. These overlaps are where security programs tend to lose control quietly, and where attackers find their easiest paths in.

To measure this effect, we look at what we’re calling “risk lift of toxic combinations” (if you think of a more clever name, we’re all ears). In simple terms, it compares how often two risks actually co-occur versus how often they would if they were unrelated. The math is straightforward: P(A∩B) / (P(A)×P(B)). Anything higher than 1.0 means the co-occurrence is higher than expected—and therefore toxic, meaning the risks amplify overall exposure.

We took several real-world examples from our anonymized data set, finding in one case that money-handlers who failed phishing simulations show a lift of 1.98—nearly double the overlap you’d expect by chance. That pairing alone signals a dangerous mix of access and susceptibility. In another case, employees with sensitive data access and no multi-factor authentication register a lift of 1.17. And a third example shows IT administrators who reuse passwords coming in at 1.13. Each number may look modest, but together they show how weaknesses that travel together stack the risk.

This is the hidden cost of treating risks as independent checkboxes. A phishing failure here, weak authentication there. On paper, each might seem manageable. In reality, the overlap is what matters. That’s where exposure accelerates and where breaches are most likely to begin.

The upside is clarity. Toxic combinations tell security teams exactly where to act. Instead of broad, blunt controls, leaders can target the people and behaviors that deliver the biggest risk reduction for the least effort. Fix the overlaps—not just the outliers—and the payoff compounds fast.

See, that math wasn’t so bad, was it?

Tune in tomorrow for a fun little review about measuring risk.

Who are your AI swashbucklers?

Day 8 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • People are uploading real work into generative AI apps
  • In one company, the tech team leads, with legal/compliance a distant second
  • Code dominates uploaded content (60%), followed by documents (26%)

Who’s sailing closest to the edge with generative AI? That’s the question security leaders are asking as AI tools slip into everyday workflows. In this slice of our human risk report, we examine which cohorts inside a single organization uploaded the most content to generative AI tools over a six-month period, and what kind of data they shared. The goal wasn’t to point fingers, but to understand behavior at scale—because that’s where risk lives.

This early picture is revealing. When teams use generative AI, they don’t just experiment with prompts or harmless examples. They upload real work. Real artifacts. Things like board decks (ai ai ai), financial models, and code, code, and more code. 

As our customers ingest richer telemetry from security tools, this view will sharpen. With tools like SASE, leaders can distinguish between uploads to sanctioned versus unsanctioned AI applications, and Fable will be able to target cohorts of employees who only upload content to unsanctioned applications, where the risk is significant. Or they’ll be able to refine even further and only target those who upload content to an unsanctioned application when it triggers a DLP violation. So stay tuned on this topic.

So who’s uploading the most content today? In one customer environment, the technology team led every other group by a wide margin, with an average of 129 uploads per person over a six-month period. That may not be surprising—engineers are often early adopters—but the second-place finisher raises eyebrows. Legal and compliance teams ranked next (with an average of 22 uploads), underscoring how quickly AI has permeated even the most risk-aware functions.

The content itself tells an equally important story. Code accounted for 60% of uploads, followed by documents at 26%. Media made up 5%, with the remaining 9% falling into a mixed “other” category. Each file type carries its own exposure, from intellectual property leakage to regulatory risk. Together, they paint a clear picture: generative AI is already embedded in critical workflows.

This is where security programs must evolve. The question is no longer whether employees are using generative AI, but rather how, where, and with what data. Organizations that can map human behavior to AI usage in real time won’t just reduce risk, but gain the clarity needed to let people move fast but also help them stay out of dangerous waters.

Check us out tomorrow for a look at toxic combinations.

What’s the half-life of a security campaign?

Day 7 of 12 days of riskmas (or, if you prefer, risk-mukah or the non-denominational risk-ivus)

The TL;DR

  • Behavior change can fade after your security campaign
  • The behavior decay interval measures how long improvements actually last
  • Point-in-time metrics hide slow drift back to risky habits
  • We think relevant, clear guidance drove lasting change in one example campaign
  • Ongoing monitoring enables timely intervention before risk returns

Does security behavior change actually stick? That’s the question the behavior decay interval is designed to answer. 

The behavior decay interval measures the staying power of a security campaign—how quickly people revert to old habits once the initial attention fades. Without this lens, you may mistake short-term improvement for lasting progress.

In some programs, behavior improves briefly and then tapers off as people get busy and distracted. This kind of decay is easy to miss if you only look at point-in-time metrics. What matters more is the slope: are behaviors holding steady, or slowly drifting back toward a risky threshold? 
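One way to make that slope concrete: assume, purely for illustration, that a campaign’s improvement decays exponentially with some half-life estimated from telemetry. The half-life itself and the 45-day figure below are hypothetical:

```python
def remaining_improvement(days, half_life_days):
    """Fraction of a campaign's behavior improvement still present after
    `days`, assuming simple exponential decay with the given half-life."""
    return 0.5 ** (days / half_life_days)

# Hypothetical campaign whose improvement halves every 45 days:
for d in (0, 45, 90, 180):
    print(d, round(remaining_improvement(d, 45), 3))
```

Plotting this curve against an alert threshold (say, 50% of the original improvement remaining) gives you a concrete trigger for re-intervention, rather than waiting for a point-in-time metric to reveal the drift after the fact.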

At Fable, we care intensely about all of the factors that go into how you change behavior, how quickly you can change behavior, and how long that change lasts.

The visual below shows two campaigns—one where behavior change began to decay, and another where the change held steady (for at least the last 11 or so months, fingers crossed!). In the right-hand example, developers were inadvertently logging PII to an observability tool. Clear, timely guidance explained what went wrong and exactly how to fix it. The result was 60% compliance in the first two months—and full compliance thereafter. Our strong hunch is that the difference wasn’t volume or repetition, but relevance and clarity. That said, we’re looking across all campaigns and will no doubt have more to say on this topic.

It’s not always obvious why one campaign sticks and another fades. That’s why monitoring behavior over time matters. By tracking decay intervals, teams can spot when performance drops below an acceptable threshold and intervene before risky habits fully resurface.

Behavior change isn’t a one-time event—it’s a system. Measure how long improvements last, watch for decay, and be ready to step in when needed. That’s how security programs move from short-term wins to durable, lasting impact.

Check us out tomorrow for a look at AI swashbucklers (it’s really just about uploading content to generative AI, but we wanted to use the word “swashbucklers”).