TL;DR:
- Generative AI inverted enterprises’ risk calculations.
- Security teams can’t fully block shadow AI.
- Blocking only pushes employees onto personal accounts your security team can’t control, while your competitors compound productivity gains.
- Modern security teams must enable secure AI adoption, not prevent it:
- Make sanctioned tools easier than unsanctioned ones
- Coach employees in the moment they’re using AI
- Build champions instead of perimeters
- Measure tool adoption instead of training completion
- Fable turns AI policy into behavior change by coaching employees in the moment they reach for the wrong tool and making the secure path the obvious one.
- Speed and security are now the same project.
- Reserve your spot in the upcoming live roundtable discussing how leaders secure and enable AI adoption across the enterprise, with real-world strategies from:
- John Yeoh (CSO, Cloud Security Alliance)
- Steve Tran (VP of Global Security, Iyuno)
- Jacob Berry (Field CISO, Fable)
Secure AI adoption is the steam engine of our generation.
Not the artificial general intelligence (AGI) fantasy. Not an existential risk debate.
Secure AI adoption means the actual, observable, mundane miracle of useful work taking less time. It is happening at every desk in your company right now, whether your IT team has sanctioned it or not.
The steam engine took three decades to remake the factory; AI is doing it in three years. The companies that figure out how to weave AI into their operating fabric are going to compound for a decade. The ones that don’t will compound in the wrong direction.
But the steam age had its boiler explosions. And the risk math has inverted.
We’ll get into exactly how Fable helps security teams solve this later, by turning policy documents into in-the-moment behavior change and making the secure path the obvious one.
But first, the threat landscape that makes that work urgent.
Blocking AI drives shadow AI while exposing data in new (and legal) ways.
For most CISOs, the risk of not adopting AI now outweighs the risk of adopting it, because the alternative isn’t “no AI.” It’s the AI you can’t see, and the risks we couldn’t have predicted.
I keep a running list of AI risks nobody could have predicted, and it’s longer than I’d like.
July 2025: ChatGPT’s share-link feature became a privacy bug
For example, last summer’s entry was the share-link bug: a UI toggle in ChatGPT’s “Share” feature that quietly let thousands of private conversations get indexed by Google.
People shared private information, then clicked a button they didn’t understand. A search engine indexed everything for anyone to find: strategy docs, internal complaints, secret drafts employees would never want searchable.
The cause? A share setting individual users had toggled without understanding it. That fundamental gap in user education and training exposed countless private conversations – without a single malicious hack or malware installation.
That risk was unthinkable in January and a news story by August. Six months.
February 2026: Generative AI negates attorney-client privilege
And then this February, a federal judge in the Southern District of New York ruled that sharing confidential information with a public AI tool waives attorney-client privilege and work-product protection.
That is, privilege evaporates at the moment of paste.
Every internal legal review, every M&A process, every board investigation conducted under privilege is now compromised through one careless AI chatbot prompt.
Shadow AI is the biggest enterprise security risk of 2026
Shadow AI happens when employees use any personal or unsanctioned generative AI tool (ChatGPT, Claude, Gemini, Copilot, AI notetakers, AI-enabled browsers, OpenClaw, anything) for work purposes. It moves that work outside the approved perimeter of IT and security.
It is the dominant AI risk in 2026 for one reason: blindness at scale. Your data loss prevention (DLP) controls cannot see a personal phone. Your cloud access security broker (CASB) dashboard cannot see a personal browser logged into a personal Google account.
The volume is staggering:
- Employees are three times more likely to use generative AI tools for their daily work than their leaders expect, per McKinsey research.
- 47% of generative AI users at the average organization still use personal AI accounts, not corporate-managed licenses, based on Netskope’s 2026 “Cloud and Threat” report.
- The average organization discovers 223 generative AI data policy violations a month, per the same Netskope research.
- 13% of all organizations reported breaches of their AI models and applications last year, according to IBM’s 2025 “Cost of a Data Breach” report; organizations with a high shadow AI presence saw $670,000 higher average breach costs than orgs with little or no shadow AI.
In other words:
- Leaders are flying blind on how much shadow AI is already happening.
- Half of all generative AI users are working on personal accounts you can’t see, stop, or train on.
- Hundreds of times a month, somebody pastes data they shouldn’t into a tool that’s likely not secured.
- Organizations that accidentally incentivize shadow AI – by blocking generative AI tools or failing to enable the tools they’ve purchased – are paying roughly $670,000 more per breach.
We are operating in a threat landscape where the next Samsung-style incident might come from a feature release, a court ruling, or a regulatory reclassification. The attack surface widens every time a vendor ships an update or an employee picks up a generative AI tool, agent, or browser they’ve never been trained on.
So the CISO sitting in the middle of this has been handed a real puzzle. The old question was “should we use AI?” That ship sailed.
The new question is harder, but more honest: how do we enable AI adoption that doesn’t blow up in our faces?
That question has two halves.
1. How do I get more of my employees to use the AI tools we already bought?
Most security teams are helping roll out Copilot licenses, AI notetakers, and internal data models. The org paid for these tools. Vetted them.
And now, we watch half the workforce keep using ChatGPT on their personal phones anyway.
2. How do I get those same employees to avoid risky, unsupervised AI tools?
That is, how can security teams incentivize employees to ditch tools that:
- Train on user inputs
- Retain data forever
- Share a parent company with a state actor
- Index a debug session into Google
It’s the same employee. Same week. Two different problems.
Employees use unsanctioned AI tools for one reason: speed
If you sit with the CISO’s question of controlled AI enablement for a minute, it stops feeling like two problems and starts feeling like one.
Both are behavior problems which boil down to a single human truth:
People use AI for the same reason they use anything else at work. It makes them faster.
- The early adopter on your team isn’t using ChatGPT because she’s rebellious. She’s using it because the report that took her four hours now takes forty minutes, and she has a kid’s recital at six.
- The shadow AI user isn’t malicious. He tried Copilot, found it slower than the tool he uses at home, and quietly switched back.
Productivity is gravity. It will always pull people toward the path of least resistance.
Historically, this productivity problem is where security teams have made the move that backfires: put a wall in front of the easy path. Block the consumer tool.
Then, the team watches in real time as the path of least resistance simply routes around their careful controls: onto a personal device, a personal browser, a personal account.
We can’t policy our way out of a behavior problem.
Secure AI adoption is a behavior problem, not a policy problem
If productivity is gravity, then secure adoption isn’t a wall; it is the downhill path. Build the right slope, and the secure behavior is the one that takes the least effort.
(Behavioral economics calls this “choice architecture.” The design of the environment shapes behavior more than the information you give people about it.)
Four levers actually move behavior at scale.
1. Make the sanctioned (and secure) AI tool easier than the unsanctioned shadow AI option.
Single sign-on. Pre-installed. Available in the place the work is already happening.
A tool that takes three clicks will lose to a tool that takes one. Every time.
2. Coach secure AI in context, not on a calendar.
Avoid generic LMS modules on “AI security.”
Real, role-specific guidance: here is how legal uses AI without exposing client data; here is how engineering uses it without leaking source.
Deploy coaching that arrives in the moment an employee is about to do the risky thing, not six weeks later in a quarterly training they’ve already tuned out.
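To make “in the moment” concrete, here is a minimal sketch of one way such a nudge could be wired up: a browser extension that notices navigation to an unsanctioned tool and surfaces the sanctioned alternative. The domain list, messages, and URLs are hypothetical placeholders, not Fable’s implementation.

```typescript
// Minimal sketch of in-the-moment coaching as a Chrome extension background
// script (the manifest would need the "tabs" and "notifications" permissions).
// All domains, messages, and URLs below are hypothetical placeholders.

const UNSANCTIONED_DOMAINS = ["chat.example-ai.com", "notes.example-ai.app"];
const SANCTIONED_ALTERNATIVE = "https://copilot.yourcompany.example";

chrome.tabs.onUpdated.addListener((tabId, changeInfo, tab) => {
  if (changeInfo.status !== "loading" || !tab.url) return;

  const host = new URL(tab.url).hostname;
  if (!UNSANCTIONED_DOMAINS.some((d) => host === d || host.endsWith("." + d))) {
    return;
  }

  // Coach, don't just block: explain the risk and point to the sanctioned path.
  chrome.notifications.create({
    type: "basic",
    iconUrl: "icon.png",
    title: "Heads up: unsanctioned AI tool",
    message:
      "Data pasted here isn't covered by company protections. " +
      `The sanctioned alternative is one click away: ${SANCTIONED_ALTERNATIVE}`,
  });
});
```

The design choice that matters is the tone: the nudge explains why and offers the alternative at the moment of use, instead of a silent block that teaches the employee to switch devices.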
3. Build AI champions, not security perimeters.
Sometimes, we still have to tell someone no. (The boiler might still explode.)
So, we have to make it a real conversation. Explain why. Give them an alternative.
Do not just block the URL and walk away. If you do, they will find the same URL on their personal phone, and now you’ve lost both your visibility into the risk and the relationship.
4. Measure sanctioned AI adoption, not training completion.
A 100% completion rate on an AI policy module tells you nothing about whether anyone is actually using sanctioned tools.
Track which tools your employees are reaching for, not which boxes they ticked. Adoption rate is the only metric that maps to risk.
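As a sketch of what that measurement can look like, assume you have telemetry events recording which AI tools employees touch (from SSO logs or a managed browser); adoption rate then falls out as the share of AI-active users seen on sanctioned tools. The event shape and tool names are illustrative assumptions.

```typescript
// Sketch: compute sanctioned-AI adoption from usage telemetry.
// The event shape and tool names are illustrative assumptions.

interface AiUsageEvent {
  userId: string;
  tool: string; // e.g. "copilot", "chatgpt-personal"
}

const SANCTIONED = new Set(["copilot", "gemini-enterprise"]);

function adoptionRate(events: AiUsageEvent[]): number {
  const aiUsers = new Set<string>();         // anyone using any AI tool
  const sanctionedUsers = new Set<string>(); // users seen on a sanctioned tool

  for (const e of events) {
    aiUsers.add(e.userId);
    if (SANCTIONED.has(e.tool)) sanctionedUsers.add(e.userId);
  }

  // Share of AI-active users who reach for sanctioned tools at all.
  return aiUsers.size === 0 ? 0 : sanctionedUsers.size / aiUsers.size;
}

// Example: two of three AI-active users are on sanctioned tools.
const rate = adoptionRate([
  { userId: "a", tool: "copilot" },
  { userId: "b", tool: "chatgpt-personal" },
  { userId: "c", tool: "gemini-enterprise" },
]);
console.log(`Sanctioned AI adoption: ${(rate * 100).toFixed(0)}%`); // 67%
```

The absolute number will undercount shadow AI on personal devices, which stays invisible by definition; the trend is the signal that the secure path is winning.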
We designed Fable’s platform to enable secure AI adoption from the very beginning.
This tracking, training, and behavior measurement? It’s the work we do at Fable.
Most AI policies are 12-page PDFs that exist to satisfy auditors, not to change how a developer works on a Tuesday afternoon.
We turn that PDF into a one-minute video the intended developer will actually finish.
We coach employees in the moment they’re using AI, in language specific to their role. And when someone reaches for an unsanctioned tool, we surface the sanctioned alternative in the context they’re already working in.
We help the secure path become the obvious one.
Today, roughly 1 in 3 Fable customers is running at least one secure AI adoption use case in production: AI policy rollout, shadow AI intervention, or driving adoption of a sanctioned tool like Copilot or Gemini Enterprise.
(One CISO who deployed Fable for AI risk reduction described the result as “more effective” with “faster employee behavior change” than previous efforts, which is exactly what we designed the platform to do.)
The slowest organization to enable secure AI will become the most exposed one
Throughout the history of enterprise security, the simplest way to reduce risk was to reduce exposure.
Fewer endpoints. Fewer tools. Fewer privileges. Less surface area.
The most secure organization, many believed, was the one moving the slowest.
AI is the first security problem in my career where the math doesn’t hold: the slowest organization is now the most exposed one.
Because that organization’s employees have already given up the sanctioned path and are using personal accounts at home.
We can’t fight AI gravity. We can only choose where the behavior slope points.
Speed and security can no longer be in tension. They must become the same project.
The CISO who internalizes that creed will become something the role has rarely been allowed to be: An accelerant. A driver of how the business actually does its work, not a tax on it.
We don’t get many windows like this one. The ones we do get reward the leaders who realize, before anyone else does, that the rules just changed.
Register for our live roundtable
Learn how to adapt to the identity shift AI is forcing on security leaders: moving from the department of no to the department of how.
Join Jacob Berry (Field CISO, Fable Security), John Yeoh (CSO, Cloud Security Alliance), and Steve Tran (VP of Global Security, Iyuno) for a live roundtable on what that shift actually looks like in practice.
