From headline panic to useful training: Ask “why” first.

TL;DR—

Part two of a six-part blog series on must-ask questions when creating net-new awareness training.

  • Always start with “Why this training, now?”
  • Possible answers include:
    • Trending headline response
    • New threat intelligence or security research
    • Risky behavior patterns
    • Recent security incidents or near-misses
  • See “Five must-ask questions for security training that changes employee behavior” for more questions Fable Security asks our clients before creating short-yet-impactful briefings!

Stopping to ask “why?” can actually shorten your time to effective security training.

While this question may sound obvious, “Why this, now?” is the question we most often ask our clients after receiving a custom security training request.

Depending on how clients answer, they sometimes don’t actually need a training course. 

Instead, a reassuring “we’re covered!” nudge to employees panicking over the latest trending threat might work just as well as a custom briefing.

Other times, there’s no action that a specific employee cohort could take to mitigate the relevant threat. 

(For example, how often are your customer service staff patching NetScaler gateways, despite the breathless headlines about CitrixBleed 2 exploitations?)

And, we can sometimes adapt pre-existing security awareness briefings to discuss specific attacks that reuse common tricks with a new coat of paint. 

The “EvilAI” campaign, for instance, reuses the same infrastructure and general attack pattern as other malvertising and SEO poisoning campaigns; its lures are just reskinned for anyone looking for a Gen AI-related productivity app.

How your “why” changes your security training approach.

So, what’s your reason to want a new security awareness briefing or training module? Did you:

  • See a headline somewhere?
    • If so, did you want to create a briefing to proactively warn your employees before attackers try the technique on them?
    • Or, did you want to send out a notice to reassure your executives and employees that your organization is currently protected against the threat?
  • Read a threat intel report that made your stomach drop?
    • If so, consider what specifically your general employees could do to spot a new phishing lure or threat, versus what only your IT administrators or security personnel need to know.
  • Notice a risky behavior pattern?
    • If so, is that pattern currently trending upwards?
    • Or, are you being proactive to keep the behavior from getting worse?
  • Recently deal with an internal security incident or near-miss?
    • If so, how many details could you include to reassure the recipients that it’s handled, while keeping the briefing realistic to avoid future incidents?
    • Extra credit if you can include screenshots of the phishing lure, malicious pop-up, or any other artifact someone might see on the front end of an attack!

If this training request was prompted by an external blog or report, we’ll usually ask to see it. Not because we want to copy it, of course, but because it helps anchor the training in reality.

After all, people are much more likely to change their behavior when faced with concrete evidence of actual impacts, rather than hypothetical “this could happen” bad vibes.

We’re diving deep into all five questions we ask our clients and why, so sign up to get each blog as it comes out.

(And, if you want more ideas on fostering a positive, employee-friendly security culture, check out page 17 of “Modern Human Risk Management for Dummies”.)

Five must-ask questions for security training that changes employee behavior

TL;DR—

  • Spinning up security awareness training ideas is easy; packaging them to change behavior—not check boxes—is hard.
  • To create impactful micro-trainings that change user behavior, you must answer these five simple questions:
    • Why this training, now?
    • Who are you trying to reach? (Hint: not everyone!)
    • What can people do?
    • Are you worried about an attack or a behavior?
    • What will people see?

Build security answers—not more questions or fear.

Most security teams don’t struggle with finding awareness training topics. 

After all, there’s no shortage of scary headlines, threat intel write-ups, or “everyone should know this” moments in our daily news feeds—let alone what you’re seeing on the backend during incidents or what you need for compliance.

The harder part is turning all that noise into a single briefing or communication that this unique group of people understands, instead of dismissing or panicking over.

Impactful security training relies on these five questions.

Here at Fable Security, when our clients request custom briefings, we slow things down and ask these five questions. Not to be difficult—but because these answers shape everything from context and tone, to examples and screenshots. 

  1. Why do you actually need to produce this training or send out this notice right now?
  2. Who specifically needs to see it?
  3. What do you want people to do differently after receiving your training?
  4. Are you worried about this specific attack, or this type of attack?
  5. What would someone see or experience on the front end of this attack?

Miss even one of these questions, and your well-intentioned awareness training will just turn into background radiation instead of changing employee behavior.

Over the next few weeks, we’ll be releasing deep dives into each of these questions—so sign up to get each as they’re released.

For now, though, take a deep breath and ask yourself: Why this training, now, to these people?

(And, for ten more questions to ask yourself when determining the value of your human risk management program, turn to page 25 of “Modern Human Risk Management for Dummies”.)

Emerging threat: Facebook ads push fake Windows 11 update to steal passwords, crypto

TL;DR—

  • Attackers are buying Facebook ads to promote a “$0” fake Windows 11 Pro license download that—if run—steals browser-saved passwords, session tokens, and cryptowallet data.
  • These ads and the fake Microsoft landing page are especially well made, leveraging:
    • Trust-building security language
    • Increased urgency, and 
    • Realistic domains that mimic Microsoft’s naming conventions.
  • To avoid falling for this and similar phishing attacks, always download updates from official sources and use an adblocker.
  • Check out “One ish, two ish: How to prevent modern phishing” for more about malvertising lures and other social engineering attacks, and scroll to the bottom for a short video briefing you can download and share with your employees!

Real Facebook ads to fake Windows page to malicious install

Security researchers at Malwarebytes Labs uncovered a new “malvertising” attack—that is, online paid ads that spread malware instead of Etsy shop links—that uses real Facebook ads to promote a “$0” Windows 11 Pro update.

When victims click the link from a personal or work device, they’ll reach an extremely realistic (but fake!) Microsoft page.

(courtesy of Malwarebytes Labs)

Victims’ only two clues that the page isn’t legitimate are:

  1. A domain that follows Microsoft naming convention (“25h2” for the second half of 2025, for example) but isn’t actually a Microsoft download page, including:
    1. ms-25h2-download[.]pro
    2. ms-25h2-update[.]pro
    3. ms25h2-download[.]pro
    4. ms25h2-update[.]pro

  2. If they do click the “Download Now” button, the package actually downloads from GitHub—not Microsoft!
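As a defensive illustration, here’s a hedged sketch of how a security team might flag lookalike domains that borrow Microsoft’s release naming. The allowlist and regex below are illustrative assumptions, not a complete detection rule:

```python
# Hypothetical sketch: flag domains that mimic Microsoft's release naming
# (e.g. "25h2") but aren't Microsoft-owned. Allowlist and pattern are
# illustrative only, not an exhaustive detection rule.
import re

LEGITIMATE_DOMAINS = {"microsoft.com", "www.microsoft.com"}

# Matches "ms" plus a Windows release tag like 24h2/25h2, with or without
# hyphens, followed by the bait words seen in this campaign.
LOOKALIKE_PATTERN = re.compile(r"^ms-?\d{2}h[12]-?(download|update)\.", re.IGNORECASE)

def is_suspicious(domain: str) -> bool:
    """Return True for domains that mimic Microsoft naming but aren't Microsoft."""
    domain = domain.lower().strip(".")
    if domain in LEGITIMATE_DOMAINS or domain.endswith(".microsoft.com"):
        return False
    return bool(LOOKALIKE_PATTERN.match(domain))

# The four domains reported in this campaign all trip the check:
for d in ["ms-25h2-download.pro", "ms-25h2-update.pro",
          "ms25h2-download.pro", "ms25h2-update.pro"]:
    print(d, is_suspicious(d))
```

The point isn’t the specific regex; it’s that “follows the vendor’s naming convention” and “is actually the vendor’s domain” are two separate checks, and attackers count on readers conflating them.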

The package’s installer checks for security researcher tools—immediately stopping if it detects any—but otherwise deploys information-stealing malware to take the victim’s:

  • Logins saved in the victim’s browser; 

  • Cryptocurrency wallet files; and 

  • Session cookies, which can be used to enter a victim’s personal or corporate cloud accounts later.

Why these Facebook ad lures work

(courtesy of Malwarebytes Labs)

At first glance, there are no real red flags… until you look a little closer.

  • Compromised accounts: Notice that these are real Facebook accounts promoting the Windows 11 Pro license upgrade. At first glance, this increases the legitimacy of the lure… however, neither a university nor a saloon would typically promote technology upgrades.

  • Security-based packaging: We’re seeing more and more attackers mixing security-based language into their lures to encourage victims to trust the lure. For example, the university-based ad has the phrases:
    • “Protect and Secure your PC” 
    • “No Data Loss”

  • Urgency: A very common advertising—and social engineering!—tactic is urgency: the more someone can make you think you have to act now, the less likely you’ll evaluate whether you should take an action. For example:
    • The university-based lure has phrases like “Don’t lose your files” (to scare you into downloading right away) and “No Cost Today Only” (so you don’t wait).
    • The saloon-based lure says the offer is “For Presidents’ Day” (putting a natural timer on the alleged free upgrade).
    • Both lures promise a “quick” and “fast” upgrade.

Employees most at risk of falling for this (and other) malvertising campaigns

This specific campaign’s domains and hash files are relatively simple to block and set detections for, all things considered.

However, this attack exemplifies a broader trend: attackers are experimenting with malvertising lures across multiple platforms.

Its technical sophistication—from evading research tools, to leveraging trusted distribution applications, to Microsoft-influenced domain masquerade attempts—means that criminals have invested into this campaign’s toolbox, and will very likely reuse this strategy with different lures, formats, and malware configurations.

With that in mind, the types of employees most at risk of falling for this specific attack (and ones like it) include:

  • Users on Windows OS endpoints, specifically for this attack; 

  • Employees who don’t use password managers and/or store credentials in browsers or online password keepers; 

  • Individuals who do not have ad blockers installed and have visited Facebook; and

  • People who have cryptocurrency wallets (and have visited crypto-related websites during work hours)—again, for this campaign, though the format can be applied for more corporate-related secret harvesting.

How to avoid buying into the fake Windows 11 update and similar malvertising messages

  • Only download updates from official sources! As good as the downloader page looked, it’s not real.

  • Use an adblocker. Again, criminals like to use paid advertisements online so their malware reaches those who are most likely to click it. If you don’t see any online ads, then you won’t see their malicious lures, either.

  • Don’t save logins in your browser, and use a password manager instead wherever possible.

  • Double-check the promoting profile. Criminals love to steal real companies’ profiles and advertising budgets to spread their malware. If it wouldn’t make sense for that sort of organization to promote the alleged product or service, then it’s probably bad! 

To learn more about malvertising attacks, check out Fable Security’s free ebook, “One ish, two ish: How to prevent modern phishing”—no email required!

Emerging threat: Attackers combo voice and email phishing for a credential knock-out

TL;DR—

  • Scammers now combine email spearphishing messages with a follow-up voice phishing call “from” an IT staff member.
  • The helpful scammer walks the victim through any multifactor authentication (MFA), one-time passwords (OTPs), or other security challenges to steal Google, Microsoft, Okta, or cryptocurrency credentials.
  • While the current recommendation is to roll out “phishing resistant” MFA tools such as YubiKeys, Fable Security recommends organizations send out reminder microtrainings on social engineering tactics to specific cohorts of likely vulnerable employees.
  • Check out “One ish, two ish: How to prevent modern phishing” for more about modern phishing lures and social engineering attacks like this one.

Okta: Dark web “phishing as a service” kits let scammers email and text victims to avoid MFA

In January 2026, Okta security researchers published details of a new attack format based on pre-made “phishing-as-a-service” (PhaaS) kits for sale on dark web forums.

Scammers can now:

  1. Buy one of these PhaaS kits;
  2. Research an organization’s employees and technology stack; and
  3. Create extremely realistic phishing emails “from” a known member of the organization’s IT support staff.

Then, the scammer will follow up their email lure with an actual phone call—a voice phishing, or “vishing”, attack—to the same victim.

Still posing as a member of the organization’s IT help desk, the scammer will walk a victim through a fake login page and ask for whatever one-time passwords (OTPs), multi-factor authentication (MFA) codes, or other authentication challenges may pop up.

The scammer can even tailor their shared, on-screen steps to match what the victim is seeing on their own screen in real-time!

To the victim, it feels like a legitimate support interaction—not a threat—until it’s too late… and the scammer has their corporate account for worse attacks. 

Scammers can plant spyware, steal intellectual property or cryptocurrency, and even infect other corporate devices with malware, ransomware, or wiperware.

The Fable Security team highly encourages any Okta customers to download the complete threat advisory, which contains known indicators of compromise (IOCs) and other details of known exploited attacks.

Start securing your humans from combo phishing attacks—without YubiKeys

Based on initial reporting and the level of effort required to research and target employees—even with a dark web “as a service” platform coding up their emails and landing pages for them!—Fable threat analysts believe with moderate confidence that larger organizations with publicly available branding guidelines will be most at risk from this phishing combination in the next 3-6 months.

As for what these targeted organizations can do, current recommendations from Okta researchers suggest investing in YubiKeys. However, this solution can be expensive to purchase and time-consuming to roll out—particularly for organizations with employees who already don’t care for MFA applications.

Therefore, while your security team invests in long-term infrastructure to combat growing phishing attempts, Fable suggests your awareness team send out targeted refresher briefings on spotting social engineering techniques—including vishing and email phishing red flags. 

For example, you might send out social engineering reminders to:

  • Employees with high access permissions to critical applications who are not IT help desk staff or system administrators; 
  • Employees likely to answer calls during work hours; or 
  • Employees who have previously clicked on a phishing simulation and either have high access permissions or have not enrolled in MFA. 

Make sure your briefings emphasize:

  • Pausing before clicking or responding to any “suspicious” communications, even if they look legitimate;
  • NEVER sending authentication codes to anyone, for any reason; and 
  • Following current processes for interacting with and accepting IT support. 

When in doubt, they should report the message and ask their security team for advice.

If you’re curious about other types of phishing lures, check out Fable Security’s free ebook, “One ish, two ish: How to prevent modern phishing”—no email required!

Emerging threat: LastPass “Backup Recommended” phishing email

TL;DR—

  • Over a US holiday weekend, attackers sent out urgent LastPass-themed “backup recommended” phishing emails from “mail-lastpass[.]com” to trick victims into revealing their master passwords.
    • Per the latest reports, LastPass itself was NOT compromised and did not leak customer data or credentials.
  • This particular phishing lure combines many effective phishing tactics, such as timing, urgency, and security-specific reassurances.
  • To avoid falling for this and similar phishing attacks, NEVER click, download, or reply to suspicious emails; instead, reach out using the “last known good” contact information.
  • Check out “One ish, two ish: How to prevent modern phishing” for more about modern phishing lures and other social engineering attacks.

The MLK “Backup Recommended” LastPass phishing email

Password manager vendor LastPass received reports that over the weekend of January 19, 2026, attackers sent branded phishing emails to LastPass customers, pretending that an important “recommended backup” needed to happen within the next 24 hours.

On clicking the link, victims were taken to a realistic—but fake—login page for LastPass, where they were prompted to enter their master password.

With both their email and the master password—and assuming multifactor authentication (MFA) wasn’t set up—an attacker then gains access to the victim’s entire LastPass vault, including every saved login, payment card, and secure note.

Why the LastPass phishing lure works

(courtesy of LastPass)

This phishing lure features many extremely effective social engineering tactics, including:

  • Send timing: Attackers sent these lures on a US holiday weekend—right before Martin Luther King, Jr. Day—when victims are distracted and security teams typically understaffed.
  • Language choice:
    • Notice the red alert box at the very top, as well as additional urgency triggers—specifically the “action required” within a short time period. The urgency started even before the email was opened, with subject lines like:
      • LastPass Infrastructure Update: Secure Your Vault Now
      • Protect Your Passwords: Backup Your Vault (24-Hour Window)
      • Important: LastPass Maintenance & Your Vault Security
    • Throughout the email’s written message, attackers repeated security-specific reassurances—such as an “ongoing commitment to security” checklist—to mask malicious intent.
  • Plausible packaging:
    • LastPass is a respected and personally important brand for its victims, increasing the chance they click the email.
    • The sender domain “sounds right” at first glance: “mail-lastpass[.]com”.
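One way to make the “sounds right” check mechanical is to verify the sender’s domain against an exact allowlist rather than eyeballing it. This is a minimal sketch (the allowlist is an assumption for illustration, not LastPass’s actual sending infrastructure):

```python
# Hypothetical sketch: verify an email sender's domain against an exact
# allowlist instead of eyeballing it. "mail-lastpass.com" *sounds* right,
# but it is neither lastpass.com nor a subdomain of it.
from email.utils import parseaddr

KNOWN_GOOD = {"lastpass.com"}  # illustrative allowlist

def sender_domain_ok(from_header: str) -> bool:
    """True only if the From: address is on a known-good domain."""
    _, address = parseaddr(from_header)
    if "@" not in address:
        return False
    domain = address.rsplit("@", 1)[1].lower()
    # Exact match or a legitimate subdomain (e.g. support.lastpass.com).
    return any(domain == good or domain.endswith("." + good) for good in KNOWN_GOOD)

print(sender_domain_ok("LastPass <alerts@lastpass.com>"))       # legitimate domain
print(sender_domain_ok("LastPass <alerts@mail-lastpass.com>"))  # lookalike domain
```

Note the design choice: the check requires a literal dot before the known-good domain, so hyphenated lookalikes like “mail-lastpass.com” fail even though they visually contain the brand name.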

How to avoid getting hooked by the MLK LastPass lure and similar phishing messages

  • DO NOT REPLY TO, DOWNLOAD, CLICK, OR CALL ANYTHING in a suspicious message!
    • After all, if the email is real, you can always come back to it later!
  • Confirm the message by reaching out through a known-good channel, like going to the sender’s website directly or sending a new email to customer support.
    • In this case, you could open the LastPass application itself to see if there was a maintenance banner, as well as find legitimate contact information for their help desk to verify the message.
  • Remember that no password manager company—or financial institution or any other store or vendor!—will ever ask for your password.

While this lure didn’t contain a direct ask for the password, many similar phishing emails—and voice phishing (“vishing”) or SMS phishing (“smishing”) messages—will ask for either your authentication codes or your password so they can “enter it for you”… and steal it instead.

If you’re curious about other types of phishing lures, check out Fable Security’s free ebook, “One ish, two ish: How to prevent modern phishing”—no email required!

We wrote the book on modern human risk management

The TL;DR

  • Cybersecurity has modernized nearly everywhere, except in human risk.
  • We wrote this book to set the bar for modern human risk management.
  • Programs must be data-driven, targeted, timely, outcomes-focused, and enterprise-grade.
  • Modern human risk management delivers: employee engagement, fewer incidents, fast threat response, and metrics that tie behavior change directly to business impact.

Over the past decade, cybersecurity has grown up. We’ve taken advantage of AI and automation to make enormous strides in malware detection, vulnerability management, secure software development, and more. Engineers now score risk continuously, automate remediation, and harden systems at scale. But there is one attack surface that largely remains untouched: people. While organizations fortify software and infrastructure, they continue to manage human risk with static training and phishing simulations that feel like they’re from the 1990s.

We wrote Modern Human Risk Management for Dummies to close that gap. The book treats human risk as a first-class security discipline, not a side program. It explains how AI-driven threats have reshaped the human attack surface, why traditional awareness efforts fail to change behavior, and what security teams must do differently if they want to reduce risk rather than merely count phishing clicks and training completions.

The book centers on five non-negotiables in modern human risk management: data-driven decision-making, highly targeted interventions, timely delivery, outcomes-focused measurement, and enterprise-grade execution. Instead of broadcasting generic content, security teams need to respond to real behavioral signals and intervene with precision as soon as they detect risk, meeting people in the tools they already use. Teams that follow these principles see the difference quickly: employee engagement, fewer incidents, timely threat response, and metrics that tie behavior change directly to business impact.

We wrote this book for practitioners—CISOs, GRC leaders, and security awareness teams—who understand the threat landscape and want something better than checkbox programs. If you’re ready to bring the human layer into the modern security stack and turn behavior from a chronic liability into a measurable control, this book is a great place to start.

Download the ebook.

Get your copy.

If you’d like risk-based briefings and nudges that are hyper-targeted and customized to your organization, try the Fable platform.

From annual training to real impact: Pennymac’s modern approach to security awareness

The TL;DR

  • Pennymac moved beyond annual training to an ongoing security behavior program
  • Fable delivers role-specific messaging and interventions based on real user risk
  • Short-form video dramatically outperforms traditional email training
  • Increased video engagement correlated with faster OS patching and reduced vulnerabilities
  • Pennymac was able to close the loop, measuring whether security behavior changed

As social engineering threats evolve, and grow more convincing with AI, traditional security awareness training is no longer enough. In this customer testimonial video, Pennymac CISO Cyrus Tibbs explains why annual refresher courses and generic email training fall short, and how his team uses Fable to deliver timely, role-specific security messaging that keeps pace with a rapidly changing threat landscape.

Cyrus describes a fundamental shift in how attackers operate: instead of breaking systems, they target people. That reality pushed Pennymac to rethink security training as an ongoing, behavioral program that understands individual risk, delivers relevant guidance in the moment, and measures whether behavior actually changes. Rather than relying on one-size-fits-all emails, the team adopted an approach closer to social media marketing: short, direct, actionable messages designed to drive engagement and measurable outcomes.

Using Fable, Pennymac automatically segments employees into cohorts based on role and observed behavior. These include money handlers, privileged infrastructure users, developers, and public-facing roles, each with distinct risk profiles and training needs. By eliminating guesswork around who receives which training, the security team ensures messaging is targeted, timely, and relevant, all without the manual toil.

The impact has been both immediate and measurable. A/B testing revealed dramatic differences in engagement between traditional email instructions and Fable’s AI-generated briefing videos, with employees consistently responding better to video. In one case study focused on OS patching, Pennymac integrated Fable with its vulnerability management system and tracked outcomes from video delivery through patch completion, finding a clear correlation between video engagement and reduced vulnerabilities.

Today, Fable has become Pennymac’s default platform for driving organizational change, not just security training. Cyrus notes that Fable’s automation and targeting capabilities free up significant staff time, while employees consistently respond positively to the short-form video format. The result is a security awareness program that scales with the business, adapts to real risk, and earns employee attention.

Genesys security is preparing employees for attacks of tomorrow

The TL;DR

  • The Genesys security team replaced checkbox training with modern, short-form briefings that more closely resemble social media content
  • Engagement increased immediately, even after failed simulations
  • Employees reporting suspicious activity rose by double digits
  • Upon a request from incident response, the team delivered custom, threat-specific training in about a day using Fable

Traditional security awareness training often feels like a checkbox—something employees rush through and quickly forget. In this customer testimonial video, the Genesys security team shares how they set out to change that dynamic, using Fable to deliver crisp, short-form training that mirrors the content employees already engage with on platforms like TikTok and Instagram Reels.

Featuring insights from Marlene Galvan, Portfolio Coordinator and Security Awareness Lead, and Jonathan Chow, CISO, the video explores what happened after Genesys moved away from traditional, long-form training and adopted a more modern, human approach to security awareness.

The shift in employee response was immediate. Engagement increased, and for the first time, employees began offering positive feedback, even after failing a simulation. Instead of frustration or embarrassment, many appreciated the briefings for being short, relevant, and delivered in a positive, non-punitive tone.

That engagement quickly translated into measurable results. Genesys saw double-digit percentage point increases in employees reporting suspicious activity—one of the clearest indicators of an effective security awareness program. Not long after launch, the incident response team proactively requested a targeted phishing training video for a specific threat. Using Fable, Genesys delivered a custom video in about a day, tailored precisely to the behavior they needed to change.

The video also highlights how Fable’s AI-powered platform enables rapid, flexible content creation, including company-specific details, custom graphics, and topic-focused messaging that feels relevant rather than generic. By prioritizing short, targeted content, Genesys found that security awareness became easier to absorb, and far more likely to stick. As the team puts it, Fable helps make the “medicine go down easier.”

Finally, Marlene and Jonathan emphasize the partnership itself as a key factor in their success. Rather than a traditional vendor relationship, the Fable team operates as an extension of Genesys’s internal crew, collaborating, exchanging ideas, and working toward a shared mission. The result is a security awareness program that doesn’t just reduce risk, but actively strengthens security culture across the organization.

Risk-based targeting isn’t role-based targeting (and the difference matters)

The TL;DR

  • Most “risk-based targeting” is really just role-based targeting with assumed risk.
  • True risk-based targeting responds to observed behavior.
  • Security teams have a finite attention budget, and wasted training erodes impact.
  • Targeting the few who actually cause risk drives better outcomes and trust.

A hot topic in human risk management is risk-based targeting. Everyone knows one-size-fits-all security training is yesteryear, and there’s even a fair body of evidence that it may have the opposite of its intended effect. Lots of vendors claim to target risk, but few actually do it. What they really mean is role-based targeting.

To be clear, role-based training is a good thing. It shows employees what “good” looks like at their company, and—delivered in a relevant and specific way—serves as an excellent training starting point. Lots of our customers brief, say, the finance team on an emerging social engineering threat targeting them. Or deploy a particular type of phishing simulation to just developers based on their familiar tools. But if your human risk management vendor tells you this is “risk-based targeting,” I’d say what they’re really talking about is just role-based targeting with assumed risk layered on top—not based on actual observed risk. The distinction may sound academic, but in practice it has real consequences for effectiveness, trust, and attention.

Here’s an example of assumed risk: engineers receive training on securing API keys or following cloud storage best practices. These are reasonable guesses, and they’re not wrong. Any solid security content library should absolutely include this material. The problem is that the targeting itself is static. It’s driven by who someone is in the org chart, not by what they’ve actually done. Risk is inferred, not observed. And the bigger question: how many developers do you know who can tolerate training on every potential topic they might encounter before training fatigue sets in?

Security awareness leaders know the truth: they have a finite amount of attention to make use of. Every unnecessary alert, notification, or, yes—training module—spends just a little bit of that budget. So, anything they send better pack a punch. At one customer, a financial technology company, the security team detected a specific data-handling behavior: their Splunk instance was showing PII violations. They traced them back to Datadog, and then to a bad parser, which about 150 of their nearly 1,000-person engineering team was using. Instead of broadcasting a generic warning to the entire engineering organization, they targeted the 150 with a 90-second Fable video briefing. It was crisp, to the point, highly specific, named the tools, named the violation, and gave a precise call-to-action. The result: those violations dropped by 60% within a month, then to 0% in the months following, with zero recidivism. The other benefit? All the people who weren’t logging PII didn’t get the briefing. The company interrupted fewer people, preserving others’ attention for issues that were genuinely relevant to them.

Also note the qualitative difference in how these messages land. Sending content about a risk someone might encounter (“Make sure you protect personal data”) feels generic and easy to tune out, especially if it doesn’t map to anything concrete in the recipient’s day-to-day work. By contrast, content that reflects an observed behavior (“You inadvertently logged sensitive data in cleartext to Datadog”) is specific, credible, and hard to ignore. It moves security guidance from the abstract into the real world, where learning actually sticks.

Role-based training is valuable, and there’s a place for it in your human risk management content line-up. It gives employees that “wanted poster,” so they’re reminded what behaviors to steer clear of, but true risk-based targeting starts with behavior, not assumptions. When we anchor targeting in what’s actually happening in the environment, we respect people’s time, deliver highly-specific guidance, increase the impact of our interventions, and build trust that security messages are sent for a reason. In a world where attention is scarce, that makes all the difference.

Beware Microsoft 365 secure authentication requests

The TL;DR

  • Attackers use OAuth “device code” phishing to trick victims into approving unauthorized access to their real Microsoft accounts.
  • The attack uses a real Microsoft login page as victims “re-authorize” their session… but approve the attacker’s session, instead.
  • Attackers can then do and see everything the victim is allowed to do and see—leading to sensitive mailbox access, proprietary data theft, and business email compromise.
  • Urge your people to never approve logins they didn’t personally ask for!
  • Scroll down for a free, 2-minute Fable video briefing you can use.

Threat actors can bypass passwords and multi-factor authentication (MFA) controls to access Microsoft 365 accounts, and stage future attacks from them, using the popular OAuth "device code" phishing technique.

Instead of stealing credentials, OAuth device code phishing lures trick their victims into approving attackers’ access using legitimate Microsoft login pages.

For this lure, there are no grammar mistakes or strange URLs your employees can spot: just an urgent and unexpected "reauthorization" request that innocently displays the real login page…

… granting an unseen threat actor access to the victim's Microsoft 365 account for as long as the stolen session token remains valid and the victim doesn't need to log in again.

Here’s how OAuth device code phishing lures generally work 

  1. Attackers send a phishing message asking the target to enter a short code (a one-time password, or OTP) at a real Microsoft URL, claiming they need to "reauthenticate" their current session.
    1. Some attacks, for example, have used the legitimate login page at microsoft.com/devicelogin.
  2. By entering the code, victims aren't reauthorizing their own session: they're actually authorizing the attacker's access.
    1. The system is only supposed to grant these session tokens after an employee enters their username, password, and other credentials, guaranteeing the user's identity and authorization.
    2. However, attackers manipulate the flow so the victim approves the attacker's session instead of their own. The system then assumes this second session was also established with the victim's credentials.
  3. The attacker can then access the person's Microsoft account (including email, contacts, and proprietary business information) until the session token expires.
    1. Remember: the system thinks the attacker is the authorized user, since they hold a "real" session token. So the attacker can see and do anything the victim is allowed to see or do!
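Under the hood, the steps above abuse the standard OAuth 2.0 Device Authorization Grant (RFC 8628). The sketch below shows, in Python, the two requests an attacker builds against Microsoft's documented identity platform endpoints; nothing is actually sent, and the client ID shown is the widely cited Microsoft Office first-party ID often reported in these campaigns (treat it as illustrative):

```python
# Sketch of the two OAuth device-code requests abused in this lure.
# Endpoints and parameter names follow RFC 8628 and Microsoft's identity
# platform documentation; this builds the requests but sends nothing.

DEVICE_CODE_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/devicecode"
TOKEN_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/token"

def build_device_code_request(client_id: str, scope: str) -> dict:
    """Step 1: the attacker asks Microsoft for a device_code/user_code pair.
    The short user_code is the 'OTP' the phishing message hands to the victim."""
    return {"client_id": client_id, "scope": scope}

def build_token_poll_request(client_id: str, device_code: str) -> dict:
    """Steps 2-3: the attacker polls the token endpoint. Once the victim enters
    the user_code at microsoft.com/devicelogin and signs in (MFA included),
    the poll returns tokens for the *attacker's* session."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "client_id": client_id,
        "device_code": device_code,
    }

# Illustrative only: a first-party client ID frequently reported in
# device-code phishing write-ups, and a placeholder device_code.
poll = build_token_poll_request(
    client_id="d3590ed6-52b3-4102-aeff-aad2292ab01c",
    device_code="<device_code returned by step 1>",
)
print(poll["grant_type"])
```

The key point for defenders: the victim only ever sees the legitimate `microsoft.com/devicelogin` page, because the attacker's requests happen elsewhere.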

Security researchers have tracked a rise in these campaigns since they first appeared during the COVID-19 pandemic in 2020-2021, with a sharp ramp-up in late 2025:

2020-2021: Researchers first see the modern OAuth device code phishing lure used in "high-profile BEC [business email compromise] incidents" in "sophisticated phishing campaigns," often using COVID-related messages to increase legitimacy and urgency. (Sophos)

Microsoft device login page used in an OAuth phishing attack / Secureworks

February 2025: Microsoft discusses how attackers targeted specific employees with text message lures (“smishing”) over Signal, WhatsApp, and Telegram messaging platforms to encourage victims to authorize the attacker’s session on Microsoft 365 accounts. (Microsoft)

An example of an early "smishing" lure and social engineering attempt used as part of a targeted OAuth attack / Microsoft
An early example of an OTP used as part of an OAuth phishing attack / Microsoft

May 2025: Researchers continue to demonstrate the wide range of OAuth device code phishing attacks available, including setting up proofs of concept (PoCs) of how the attack technically works across lure formats. (Logpoint)

A researcher’s demonstration of how Microsoft redirects to a legitimate-appearing permissions request of an “unverified” application, as part of an OAuth device code phishing lure attack / Logpoint

November 2025: Cloud security researchers see more OAuth device code bypass attempts in their own security product across their customer base: 98 suspicious successful authentication attempts, six malicious device registrations, and seven Windows registrations following device code authentications in the last three months. (Wiz)

December 2025: Email security researchers detail rising use of the OAuth device code phishing lure by both nation-state and financially motivated threat actors, now that low-code / no-code versions of the attack are for sale on the dark web. One phishing email used a fake document about bonuses and benefits to encourage victims to click. (Proofpoint)

A phishing email used to trick victims into triggering the OTP for an OAuth session token theft / Proofpoint
A phishing lure landing page, redirecting victims to a legitimate Microsoft authentication page so the victim can use the real OTP to authenticate the attacker’s session / Proofpoint

How to prevent initial access via OAuth device code phishing lures

In an OAuth device code attack, there's no fake login page and there are no obvious red flags you can train your teams to watch for: just a convincingly urgent request to "re-authorize" or "secure" their account.

That's why awareness and timing matter! Employees should never enter a device code unless they personally tried to log in moments before, and they should treat any unexpected code request as phishing.
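Awareness can be paired with detection. As one hedged sketch: Microsoft Entra sign-in logs (the Graph `signIn` resource) expose an `authenticationProtocol` field that can equal `"deviceCode"`, so a SOC can flag those sign-ins and confirm each one was user-initiated. Field names below assume records shaped like that resource; your export pipeline may differ:

```python
# Hedged detection sketch: flag device-code sign-ins in exported Entra
# sign-in logs. Assumes records shaped like the Microsoft Graph `signIn`
# resource, where authenticationProtocol can be "deviceCode".

def flag_device_code_signins(records: list[dict]) -> list[dict]:
    """Return sign-in events that used the device code flow, so analysts
    can verify the named user actually initiated each one."""
    return [
        r for r in records
        if r.get("authenticationProtocol") == "deviceCode"
    ]

# Toy data with hypothetical users, for illustration only.
events = [
    {"userPrincipalName": "alice@example.com", "authenticationProtocol": "oauth2"},
    {"userPrincipalName": "bob@example.com", "authenticationProtocol": "deviceCode"},
]
suspects = flag_device_code_signins(events)
print([e["userPrincipalName"] for e in suspects])  # → ['bob@example.com']
```

Device code sign-ins are rare for most workforces, so even a simple filter like this keeps the review queue short.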

How Fable can help you right now

Here’s a super-short and free downloadable video showing exactly how this attack works, and how employees can watch out for it. We designed this briefing specifically to help anyone recognize this threat before it turns into a real incident. 

Download it, share it, and remind your team: Don’t approve logins you didn’t ask for!

Watch the briefing

And download for your own use below.

If you’d like risk-based briefings and nudges that are hyper-targeted and customized to your organization, try the Fable platform.