Check out the Fable human behavior data lakehouse—the foundational element for
accurate, explainable, AI-assisted human risk calculation and analysis.
We just released the Fable human behavior data lakehouse. It’s the foundational element, empowering security teams with accurate, explainable, AI-assisted risk calculation and analysis. Watch Dr. Sanny Liao, co-founder and CTO; Kaushik Devireddy, product manager; and Sean Coyne, customer advisor, for a 15-minute explainer and demo on what we built.
What security integrations do you have?
We’ve built integrations with your core enterprise applications that give you insight into human behavior—directory, human resources (e.g., Workday), access (e.g., Okta), workspace (e.g., Microsoft 365, Google Workspace), and security (e.g., CrowdStrike, Netskope). We ingest and synthesize these human behavior signals, combining them to identify risk—especially where you may not see it in any one tool.
How do you deal with data quality issues and normalization?
We use a layered data pipeline modeled on the Databricks Medallion architecture. At the bronze level, we keep raw, unmodified data so there’s always traceability back to the source. At the silver level, we normalize and enrich—joining events across identity, endpoint, and other domains into a consistent schema so behavior data becomes provider-agnostic. At the gold level, we surface analysis-ready human risk data so you can calculate metrics and take action.
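The bronze/silver/gold flow can be sketched as below. This is a minimal illustration, not our actual pipeline; all field names and providers are hypothetical.

```python
# Bronze: raw, unmodified events as received from each provider.
bronze = [
    {"source": "okta", "user": "alice@example.com", "eventType": "user.session.start"},
    {"source": "crowdstrike", "UserName": "alice@example.com", "event": "DetectionSummaryEvent"},
]

# Silver: normalize provider-specific fields into one consistent schema,
# keeping a pointer back to the raw record for traceability.
def to_silver(event):
    user = event.get("user") or event.get("UserName")
    action = event.get("eventType") or event.get("event")
    return {"user": user, "action": action, "provider": event["source"], "raw": event}

silver = [to_silver(e) for e in bronze]

# Gold: analysis-ready aggregation, e.g. event counts per user across providers.
gold = {}
for e in silver:
    gold[e["user"]] = gold.get(e["user"], 0) + 1

print(gold)  # {'alice@example.com': 2}
```

The point of the silver layer is that downstream analysis never has to know that Okta calls the field `user` while CrowdStrike calls it `UserName`.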
How explainable are your AI-generated responses?
Explainability is key to human risk management. Every AI-assisted query response, risk score, or suggested intervention comes with human-readable context—which tools were called and for what purpose, what those calls returned, and how the outputs were used in the following step. For example, if an employee gets flagged for phishing risk, you see the underlying factors—like repeated risky clicks and elevated privileges—drawn from key data sources.
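A trace like the one described above might be structured and rendered as follows. This is an illustrative sketch only; the tool names, fields, and values are hypothetical.

```python
# Hypothetical explanation object attached to a phishing-risk flag.
explanation = {
    "finding": "Elevated phishing risk for alice@example.com",
    "steps": [
        {"tool": "email_security_query",
         "purpose": "count risky link clicks over 90 days",
         "output": {"risky_clicks": 4}},
        {"tool": "directory_lookup",
         "purpose": "check privilege level",
         "output": {"admin": True}},
    ],
    "conclusion": "Repeated risky clicks combined with elevated privileges raised the score.",
}

def render(exp):
    # Turn the structured trace into human-readable lines:
    # finding first, one line per tool call, then the conclusion.
    lines = [exp["finding"]]
    for step in exp["steps"]:
        lines.append(f"- {step['tool']}: {step['purpose']} -> {step['output']}")
    lines.append(exp["conclusion"])
    return "\n".join(lines)

print(render(explanation))
```

Keeping the trace structured (rather than free text) means the same record can feed both a human-readable summary and an audit log.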
What kind of guardrails do you have to deal with hallucinations?
Our LLM responses incorporate checks and guardrails. All AI outputs are grounded in our structured data lakehouse—so responses are generated from facts, not guesses. The AI agent generates queries rather than interpreting the data on its own, ensuring that answers are derived deterministically from the data. We also equip the agent with tools to grade query and output confidence. Our goal is to give you results that are trustworthy, auditable, and safe to use in a security setting.
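The query-generation pattern can be sketched as below: the model proposes a structured query, a guardrail validates it against the known schema, and the answer is computed deterministically from the data rather than written by the model. All names here are illustrative assumptions, not our actual implementation.

```python
# Toy "gold" dataset the agent is allowed to query.
records = [
    {"user": "alice", "risky_clicks": 4},
    {"user": "bob", "risky_clicks": 0},
]

ALLOWED_FIELDS = {"user", "risky_clicks"}

def validate(query):
    # Guardrail: reject queries referencing fields outside the known schema,
    # so a hallucinated field name fails fast instead of producing an answer.
    return query["field"] in ALLOWED_FIELDS

def run_query(query):
    if not validate(query):
        raise ValueError("query references unknown field")
    # Deterministic evaluation against the data: same query, same answer.
    return [r["user"] for r in records if r[query["field"]] > query["gt"]]

# Imagine the LLM emitted this structured query from a natural-language
# question like "Who clicked risky links?":
proposed = {"field": "risky_clicks", "gt": 0}
print(run_query(proposed))  # ['alice']
```

Because the model only emits the query, the final answer is reproducible and auditable: anyone can re-run the same query against the same data and get the same result.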
How do you keep my data private and not expose it to foundation models?
We’re deeply committed to our customers’ data privacy and security. All of our agentic workflows involving sensitive customer data run through Amazon Bedrock, the service we use to access foundation models. Bedrock follows strict data privacy and isolation guidelines, so customer data remains secure and encrypted in the Amazon ecosystem.
What they’re saying about Fable—the latest mentions from media and industry voices.
Fable to take on multi-billion-dollar companies with AI-generated security training targeted at employees who need it.
New funding: Fable Security raised $31 million from investors Greylock Partners and Redpoint Ventures.
Fable has emerged from stealth with a solution designed to detect risky behaviors and educate employees.