TL;DR: 

  • What happened: An infostealer infection at generative AI vendor Context[.]ai ended in the theft of decrypted environment variables across Vercel customer projects in April 2026.
    • One Context[.]ai employee’s Lumma infostealer infection — reportedly picked up while hunting for a Roblox game cheat for personal use — started the supply-chain compromise.
    • A Vercel employee’s “Allow All” OAuth grant to Context[.]ai’s Google Workspace app gave the attacker the path in.
    • Attackers sat on the stolen credentials for over a month before operationalizing them.
  • The social engineering angle: There was no phishing email — the “lure” was a stack of ordinary modern-workplace decisions.
  • What employees can do:
    • Use only AI tools your org has officially approved
    • Keep personal browsing off work-credentialed devices and browser profiles
    • Rotate any credential the moment you’re told it’s exposed — for personal and professional accounts
  • Check out ‘One ish, two ish: How to prevent modern phishing‘ for more on the social engineering patterns that bypass traditional phishing defenses.

Vercel: A supply chain breach that started two layers upstream

The human story underneath this breach is layered, but it isn’t complicated. This attack succeeded because:

  • One person treated their work-credentialed machine like a personal one; and 
  • Another person clicked “Allow All” on an unofficial, unsanctioned third-party AI tool.

On April 19, 2026, a Vercel security bulletin confirmed that “highly sophisticated” attackers exploited an unauthorized instance of a legitimate generative AI tool, Context[.]ai, installed on a Vercel employee’s work device to pivot into their Google Workspace account.

From Google Workspace, the attackers made it into Vercel’s production environment, where they found and stole sensitive environment variables (API keys, tokens, database credentials) across multiple internal and customer projects.
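
For readers who don’t live in this world: environment variables are how a deployed project’s server code reaches everything behind it. Here is a minimal sketch of what that looks like in practice (the variable names and services are hypothetical, not Vercel’s or any customer’s actual configuration):

```typescript
// Hypothetical names and values, for illustration only.
// Server code in a typical hosted project reads its secrets from
// environment variables at runtime; anyone holding a copy of these
// values can reach the same backends directly.
const config = {
  databaseUrl: process.env.DATABASE_URL,       // e.g. a Postgres connection string
  paymentApiKey: process.env.PAYMENT_API_KEY,  // a third-party API secret
  sessionSecret: process.env.AUTH_SECRET,      // used to sign login/session tokens
};

export default config;
```

In other words, exfiltrating these variables is not a stepping stone to the real target; it is the real target.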

But Vercel wasn’t the start of the chain; the attack actually started several steps back.

The day after the Vercel breach, Wiz published a parallel technical analysis framing the attack as a double supply chain attack: 

  1. The Vercel employee had given broad (“Allow All”) permissions when setting up their shadow AI instance of Context[.]ai, letting the attacker pivot from Context[.]ai into Vercel’s broader environment.
  2. Context[.]ai’s own OAuth tokens had been compromised, which was the attackers’ first way into the Vercel employee’s work device.

So how did Context[.]ai first wind up compromised? 

Roblox.

Or, more precisely: employees using professional workstations for personal reasons and accidentally infecting corporate environments in the process.

Roblox personal searches lead to professional compromise at two (and counting) organizations

Researchers at Hudson Rock traced the chain one more layer back, to the actual moment of human compromise in February 2026, when a single Context[.]ai employee with sensitive access privileges was infected with a Lumma information stealer, which stole:

  • Google Workspace credentials
  • Logins for Supabase, Datadog, and Authkit
  • A corporate social account
  • Autofill data containing the Google OAuth Client ID for Context[.]ai itself, which is what ultimately compromised the shadow AI instance on the Vercel employee’s workstation.

According to Hudson Rock’s analysis of the infected machine’s browser history, that user had been actively searching for and downloading Roblox “auto-farm” scripts and executors — a heavily weaponized category of files that threat actors love to hide Lumma inside.

The “lure” was an OAuth consent screen most of us would click through

Unlike a phishing email — which a trained employee can spot — this attack had no moment where someone should have known something was wrong. There was no spoofed sender, no suspicious attachment, no urgency text, no malicious link.

What there was, instead, was a Google OAuth consent screen for a productivity AI tool, asking a Vercel employee to “Allow All” permissions to their Google Workspace account.

That request, on its own, looks identical to the consent prompts every employee sees a dozen times a year — for Notion, Loom, Calendly, and every meeting transcription tool ever built. 

The only red flag — and it’s a soft one — is the sheer breadth of the permissions requested:

  • “Read, compose, send, and permanently delete all your email”
  • “View, create, and delete files in your Drive”
  • “Manage your contacts”
  • And similar full-scope requests across every Workspace API

These are the prompts where an employee — already three meetings deep, just trying to use an AI tool the rest of the team is on — clicks “Allow” without reading the scope list.
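
To make the “breadth” point concrete, here is a hedged sketch of what sits behind that consent screen. The scope URLs are real Google OAuth scopes; the client ID, redirect URI, and the narrow-versus-broad framing are illustrative, not a reconstruction of Context[.]ai’s actual integration:

```typescript
// Illustrative only: contrasting a narrowly scoped Google OAuth request
// with an "everything" request. The scope URLs are real Google Workspace
// scopes; the client ID and redirect URI are placeholders.

const NARROW_SCOPES = [
  "https://www.googleapis.com/auth/gmail.readonly",    // read mail only
  "https://www.googleapis.com/auth/drive.file",        // only files the app itself creates or opens
  "https://www.googleapis.com/auth/calendar.readonly", // view calendar, no edits
];

const BROAD_SCOPES = [
  "https://mail.google.com/",                  // read, send, and permanently delete all mail
  "https://www.googleapis.com/auth/drive",     // view, create, and delete any Drive file
  "https://www.googleapis.com/auth/contacts",  // manage contacts
  "https://www.googleapis.com/auth/calendar",  // full calendar control
];

// Both lists go to the same consent endpoint; only the bullet points the
// user sees (and the blast radius if the vendor is later compromised) differ.
function consentUrl(scopes: string[]): string {
  const params = new URLSearchParams({
    client_id: "EXAMPLE_CLIENT_ID.apps.googleusercontent.com", // placeholder
    redirect_uri: "https://example.com/oauth/callback",        // placeholder
    response_type: "code",
    access_type: "offline",
    scope: scopes.join(" "),
  });
  return `https://accounts.google.com/o/oauth2/v2/auth?${params.toString()}`;
}

console.log(consentUrl(NARROW_SCOPES));
console.log(consentUrl(BROAD_SCOPES));
```

The “Allow” button looks the same either way; the only visible difference is the list of permissions most people never read.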

When a tool with this scope is later compromised — through any path, including its own employee’s personal Roblox download — the attacker doesn’t need to phish the customer. They’ve already been authorized.

This is a near-identical pattern to the one we covered in the axios/UNC1069 npm OAuth-mediated supply chain attack — except here the attacker didn’t need to compromise a developer’s repository. They just needed an over-permissioned OAuth grant sitting in a Google Workspace tenant.

Employees most at risk of falling for similar double supply-chain attacks

This isn’t an attack with one obvious victim role. The vulnerable cohort is anyone whose daily work involves AI productivity tools, OAuth grants, or device-blurring between personal and work — which, in 2026, is most of your workforce.

Vulnerable employee cohorts to double supply-chain attacks like Vercel & Context[.]ai

| Cohort | Risk profile | Details |
| --- | --- | --- |
| Authorized AI users | Users who can grant OAuth scopes (i.e., IT, developers, security) | Daily decision-makers on “should this AI assistant have my Google Workspace?” Their compromise gives attackers production blast radius — exactly the chain that ended at Vercel. |
| Historic shadow IT or shadow AI users | Known gamers | Anyone with a history of installing unsanctioned tools, browser extensions, or personal apps on work devices. The Context[.]ai employee fits this profile precisely — gaming-related downloads on a corporate-credentialed machine. |
| All employees | Company leaders support AI, but official rollout is slow | When leadership says “use AI” but no approved tool exists yet, employees pick their own — usually the smaller, less-vetted vendor with the most permissive OAuth scope and the lowest barrier to “just getting started.” |
| Employees using work devices for personal reasons | History of slow credential rotation | Infostealers grab everything in the active session — corporate creds and personal cookies don’t get separated. This is the toxic combination behind the entire Vercel chain. Per HRR Vol. 2, poor credential rotation remains a top-10 employee behavior risk. Hudson Rock researchers had identified Context[.]ai’s credentials as compromised before they were used — giving employees enough time to make a difference if notifications aren’t ignored. |

How employees can stop the next Vercel-style double supply-chain breach

The defenses here are simple, behavioral, and unglamorous. None of them require security expertise, either, so you can pass them along to every at-risk employee:

  • Use only AI tools your organization has officially approved. 
    • If your company hasn’t named a sanctioned AI assistant yet, that’s not a green light to pick your own — it’s a signal to ask your security team what’s coming and what’s allowed in the meantime. The “small productivity AI” you found is exactly the threat profile attackers are exploiting.
  • Read the OAuth consent scope before you click “Allow.” 
    • If a tool, app, or browser extension wants full read/write access to your email, drive, and calendar for a job that shouldn’t require that access — say no, or escalate to IT. 
    • Legitimate tools usually offer scoped versions; the ones that demand “Allow All” deserve extra scrutiny.
  • Keep personal browsing on personal devices and profiles. 
    • Gaming, shopping, sideloaded apps, browser extensions, and personal Google logins should not share a session, profile, or device with corporate credentials. 
    • Personal-to-professional bleed is the most common infostealer entry point in 2026, full stop.
  • Treat credential rotation prompts as urgent. 
    • When IT or your security team tells you a credential may be exposed, rotate it that day. 
    • Context[.]ai credentials had been compromised for over a month — and Vercel customers paid the cost of that delay.
  • Audit which OAuth apps have access to your Google or Microsoft account every quarter (a scripted, admin-side version of this check is sketched after this list). 
    • If you don’t recognize one, revoke it. 
    • For Google Workspace: Admin Console → Security → API Controls → Manage Third-Party App Access. 
    • For personal Google: myaccount.google.com → Security → Third-party apps with account access.
  • Report it. 
    • If you suspect your work credentials may have been exposed in a personal breach — including through an old infostealer infection on a personal device that you also use for work — tell your security team. They’d much rather rotate a credential they didn’t have to than chase an attacker who already used it. 
    • For US-based fraud, you can also report to the FBI’s IC3.
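
For admins who want to make that quarterly OAuth audit repeatable, here is a hedged sketch using Google’s Admin SDK Directory API via the googleapis Node client. It assumes an admin credential with the admin.directory.user.security scope; the user address is a placeholder, and your org’s auth setup (service account, domain-wide delegation, etc.) will differ:

```typescript
// A sketch, not production code: enumerate the third-party apps a user
// has authorized against their Google Workspace account, and the OAuth
// scopes each app holds.
import { google } from "googleapis";

async function listOAuthGrants(userEmail: string) {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/admin.directory.user.security"],
  });
  const directory = google.admin({ version: "directory_v1", auth });

  // tokens.list returns one entry per authorized app, including its scopes.
  const res = await directory.tokens.list({ userKey: userEmail });
  for (const grant of res.data.items ?? []) {
    console.log(grant.displayText, grant.scopes);
    // Unrecognized or over-broad grants can be revoked with
    // directory.tokens.delete({ userKey: userEmail, clientId: grant.clientId! }).
  }
}

listOAuthGrants("employee@example.com"); // placeholder address
```

The same data is visible manually under the Admin Console path above; scripting it just makes the review schedulable instead of optional.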

Want a deeper read on the social engineering patterns behind supply-chain attacks like this one? Check out ‘One ish, two ish: How to prevent modern phishing‘ for the full breakdown — including how shadow AI, OAuth abuse, and trusted-relationship attacks bypass traditional phishing defenses.