TL;DR

Part 2 of a multi-part blog series on the data behind Fable’s Human Risk Report Vol. 2, covering Q4 2025 threat analysis.

  • Last quarter’s threat landscape analysis surfaced ten rising cyber threats outpacing security control deployments.
    • The control struggles aren’t because defenders aren’t trying, but because some attacks are specifically designed to route around technical controls.
  • The counterintuitive finding: in each case, a trained, behavior-aware employee isn’t just a smaller attack surface. People become an active defense layer and compensating control for struggling technical controls.
  • We’re highlighting four of the most compelling threats below.
    • The full analysis — including all ten threats and the human behaviors that counter them — is in Fable’s Human Risk Report Vol. 2 (download at the bottom!)

What security vendors hate admitting: They can’t stop everything.

Here’s something worth saying plainly: technical security controls are critically important.

Deploy them. Maintain them. Keep deploying more of them. 

But also? 

There is a growing category of cyber-related attacks that technical controls cannot stop.

It’s not because the controls are bad or security teams aren’t trying—far from it!—but because the attacks are specifically engineered to abuse legitimate infrastructure, trusted identities, and authorized users to do their dirty work.

In those cases, your last viable defense layer is an employee who knows what they’re looking at… and what they can do to stop it.

In our Human Risk Report (HRR) Vol. 2, we included a Q4 2025 threat landscape analysis that identified ten rising threats on track to outpace security control deployments. 

What struck me wasn’t just the threat mechanics, but rather how consistently the most effective countermeasure in each case came back to human behavior—not as a stopgap while better tech gets built, but as a genuinely irreplaceable compensating control.

Four of them are worth unpacking here.

BEC: The social engineering attack security controls can’t stop

Business email compromise is, statistically, the most efficient attack in the modern threat landscape — and that’s not an exaggeration.

Per the 2025 Microsoft Digital Defense Report, BEC attacks represented just 2% of attempted attacks last year, but accounted for 21% of all successful ones. 

Ransomware, for comparison, only made up 16% of successful attacks—despite receiving substantially more attention and security investment.

Why is BEC so effective? Because it’s a pure social engineering attack. There’s no malware to detect, no malicious link to block, no payload to sandbox. 

The attacker tricks an authorized employee into intentionally moving money, sharing credentials, or bypassing a control—often by impersonating an executive, a vendor, or an IT team member with legitimate-sounding urgency.

No legacy email security gateway catches that. No EDR flags it. The technical controls work exactly as designed, because—from their perspective?—nothing unusual happened.

The human solution here isn’t complicated, but it requires investment to actually work: create an out-of-band verification policy, train employees on it, and enforce it. And critically, don’t punish employees who pump the brakes on a wire transfer request just because it came from the CEO’s email on a Friday afternoon.

That pause is the control working. Treat it that way.

Because when CrowdStrike’s 2026 threat report says that 83% of their incidents were caused by “malware-less” infections? We’ve got more attacks like this one coming.

MFA bypass: Expensive tech, simple human fix

In January 2026, the ShinyHunters threat group demonstrated a bypass technique that compromised authentication apps and tokens across 100+ organizations.

Researchers noted there’s essentially no substitute for FIDO2/phishing-resistant authentication in a zero-trust network architecture—the expensive, difficult-to-deploy gold standard that, per Gartner, most ZTNA organizations expect to cover well under 75% of their environment.

So the expensive control exists, it’s hard to roll out, and it won’t cover everything even when you do.

Here’s the human fix, and it costs almost nothing to teach: no legitimate IT or security team member will ever ask an employee for a one-time password or authentication code. 

Full stop. 

If someone asks—over email, over the phone, over Slack, in a ticket, by singing telegram—that’s the attack. 

The employee who knows that and refuses is a more reliable control than the authentication layer the attacker just bypassed.

This is a case where training isn’t the fallback. For a meaningful portion of your user base, it’s the primary defense.

Shadow AI: The data violation no DLP tool sees coming

Shadow AI — employees connecting unauthorized generative AI tools to work systems — was one of last quarter’s top risk drivers across Fable customer environments. 

One survey found 51% of employees had connected unauthorized AI tools to work systems. That number should be read alongside the accelerating adoption of both sanctioned and unsanctioned AI tools for tasks involving sensitive data: summarizing documents, analyzing spreadsheets, drafting communications.

The challenge for technical controls is structural. Data loss prevention (DLP) tools are trained to evaluate what content is — not whether it’s appropriate to share in a given context. They’re notoriously hard to tune, with an average 47% false positive rate that makes security teams reluctant to act aggressively on alerts. 

Meanwhile, the employee uploading a contract summary to an unsanctioned AI tool isn’t doing it maliciously—they’re trying to do their job faster.

The human layer here does two things technical controls can’t. 

  1. Data sensitivity labeling. Employees who understand what’s sensitive, and can actually classify it, enable the downstream controls to work better.
  2. Sanctioned-tool awareness. Understanding which tools are sanctioned and why isn’t a compliance checkbox; it’s the decision point that happens before any DLP alert ever fires.

(We’ve written separately about how agentic AI tools create a related problem—once an AI agent is acting on sensitive data autonomously, the only reliable way to limit exposure is not putting that data in the tool in the first place. That post is here.)

The quantum distraction — and what attackers are actually doing

We’ll keep this one short, because the point is the brevity.

Quantum computing gets a lot of airtime in security conversations as an impending threat to encryption. It is definitely a real long-term concern, and should be budgeted for if you’re in the middle of U.S. federal contract renewals or otherwise store sensitive data that nation-state-level spies want.

However, it is almost certainly not your most pressing problem right now.

Here’s what is: last year, 85% of targeted usernames in data incidents appeared in previous credential leaks. The attackers getting into your systems don’t need to crack your encryption.

They’re logging in with passwords your employees reused from a breach three years ago.

Don’t believe me? The latest intel on the March 2026 Stryker-Handala incident says that the script kiddies just bought admin team credentials to break into the Stryker environment.

No hacking. No password spraying. Just years-old purchased admin credentials used to log straight into the system, after which the attackers told Intune to wipe it all.

Look, quantum-resistant cryptography is a real investment category. 

But teaching employees to use a password manager correctly — randomized, long, unique, updated after a breach — is a cheaper, faster, and more immediately impactful control for the attacks actually happening at scale right now.
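For teams that want to show, not just tell, here’s a minimal sketch of what “randomized, long, unique” actually means, using Python’s standard `secrets` module (the function name and the 20-character default are illustrative choices, not a recommendation from the report):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation.

    Uses the `secrets` module, which draws from the OS cryptographic
    random source, rather than `random`, whose output is predictable
    and unsuitable for credentials.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Every call yields an independent, high-entropy password, which is
# exactly what a password manager does on the employee's behalf.
print(generate_password())
```

The point of the demo isn’t that employees should run scripts; it’s that a 20-character random string from a 90-plus-symbol alphabet is effectively unguessable, while a reused password from a three-year-old breach is already in the attacker’s hands.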

Don’t let the futuristic threat crowd out the mundane one.

The other six — and why they’re in the report

The remaining threats in our Q4 analysis each have their own human solution worth understanding:

  • Institutional instability as CISA’s funding and program updates became less reliable
  • Phishing-as-a-service kits borrowing legitimate domains to evade email security gateways
  • IOC decay rendering blocked IP lists stale within 31 days (at best)
  • Agentic “vibe coding” introducing critical security flaws through autonomous AI development
  • Online oversharing enabling more targeted social engineering 
  • Phishing lure diversity as QR codes and deepfake voice calls expand the attack surface beyond the inbox

We didn’t walk through all ten here because a list isn’t a thesis. The thesis is this: in each of these cases, a trained employee is your last resort when technical controls fail.

Your human layer can be a purpose-built, compensating control layer that fills in the gaps left by incomplete technical tools.

The full threat landscape analysis, with the complete table of threats, why each one matters, and what human behaviors counter them, is in Fable’s Human Risk Report Vol. 2.

Download it. It’s free, no email required. And if you’re already thinking about where your program sits relative to the behaviors these threats require — this is a good place to start.