AI Usage in Fable

Last updated: February 11, 2026

Overview

Fable is an AI-powered Human Risk Platform that improves employee security behaviors to reduce security incidents. This document describes where and how AI is used within the Fable platform, the intended purpose of each use, and the safeguards in place.

AI-Powered Capabilities

Text-to-Video Content Generation

Fable uses AI to rapidly generate video-based security awareness briefings. This is the primary AI capability within the platform.

How it works: Fable uses Claude, running on AWS Bedrock within our VPC, for prompt-based script generation; this capability is exposed through the "Request a Briefing" flow in our Catalog. Fable additionally uses a third-party text-to-video AI service to convert written security briefing scripts into engaging video content. This technology is used solely for content generation: no large language models (LLMs) are used to process, analyze, or make decisions on customer data.
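As an illustration only, a prompt-based script-generation call against Claude on Bedrock might be structured as below. The model ID, prompt wording, and function names are hypothetical and are not Fable's actual implementation; the sketch only shows the shape of the flow, in which the prompt contains no customer data.

```python
# Hypothetical model ID for illustration; Fable's actual model choice is internal.
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

def build_briefing_prompt(topic: str, audience: str = "all employees") -> str:
    """Assemble the script-generation prompt. The only inputs are the
    requested briefing topic and target audience, never customer data."""
    return (
        f"Write a short security awareness briefing script about '{topic}' "
        f"for {audience}. Keep it under 300 words and end with one clear action."
    )

def generate_briefing_script(topic: str) -> str:
    """Call Claude via the Bedrock Runtime Converse API from inside the VPC."""
    import boto3  # requires AWS credentials; shown here for the call shape only

    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user",
                   "content": [{"text": build_briefing_prompt(topic)}]}],
    )
    # The generated script then goes to the text-to-video service and on to
    # human review; it is never delivered to employees directly.
    return response["output"]["message"]["content"][0]["text"]
```

The point of the sketch is the separation of concerns: the LLM sees only a content-generation prompt, and its output enters the review pipeline rather than any decision-making path.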

Intended purpose: Traditional security awareness training is often generic, outdated, and time-consuming to produce. Text-to-video AI enables Fable to create relevant, timely content at scale — covering emerging threats where technical controls may be lacking — without the lead time or cost of traditional video production.

Phishing Simulation Template Generation

Fable supports AI-assisted creation of phishing templates from a prompt or an uploaded reference email. This capability uses Claude running on AWS Bedrock within our VPC to generate the resulting template file.

What AI Is Not Used For

Fable does not use AI to make business decisions on behalf of customers. AI is scoped exclusively to content creation and personalization. Fable's platform identifies at-risk employees using explainable, rule-based heuristics and assigns them relevant security briefings.
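A minimal sketch of what explainable, rule-based targeting can look like. The rule names, fields, and thresholds below are invented for illustration and are not Fable's published heuristics; the point is that every assignment can be traced to a named rule rather than an opaque model score.

```python
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    failed_phish_tests: int = 0
    overdue_briefings: int = 0
    handles_payments: bool = False

# Each rule is (human-readable reason, predicate). Every match is reportable,
# which is what makes the heuristic explainable rather than a black box.
RULES = [
    ("failed a recent phishing simulation", lambda e: e.failed_phish_tests > 0),
    ("has overdue security briefings", lambda e: e.overdue_briefings >= 2),
    ("works in a payment-handling role", lambda e: e.handles_payments),
]

def risk_reasons(employee: Employee) -> list[str]:
    """Return the rules the employee matched (empty list = not at risk)."""
    return [reason for reason, predicate in RULES if predicate(employee)]

alice = Employee("alice", failed_phish_tests=1, handles_payments=True)
print(risk_reasons(alice))
# → ['failed a recent phishing simulation', 'works in a payment-handling role']
```

Because the output is a list of reasons, a security team can audit exactly why each employee was assigned a briefing.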

Customer Data and AI Training

No customer data is used to train or fine-tune any AI models, whether Fable's own systems or any third-party models, including HeyGen. No customer data is shared with LLMs. This eliminates risks related to training data integrity, bias introduced through external data ingestion, and unintended data exposure.

Human Oversight

Every AI-generated briefing passes through a two-layer human review process before reaching employees:

  1. Fable content review — Fable's content managers review all AI-generated output for accuracy, appropriateness, and alignment with content standards.

  2. Customer approval — Only briefings that have been explicitly pre-approved by the customer's security team are delivered to employees. Customers have full access to the Fable platform to select, edit, approve, and target content. No content reaches employees without this approval.

This process provides 100% human-review coverage of AI-generated content.
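The two review layers above can be pictured as a simple delivery gate. The type and field names here are illustrative, not Fable's API; the sketch only encodes the rule that content reaches employees when, and only when, both layers have signed off.

```python
from dataclasses import dataclass

@dataclass
class Briefing:
    title: str
    fable_reviewed: bool = False      # layer 1: Fable content review
    customer_approved: bool = False   # layer 2: customer security team approval

def deliverable(briefing: Briefing) -> bool:
    """A briefing may reach employees only if both layers have signed off."""
    return briefing.fable_reviewed and briefing.customer_approved

draft = Briefing("MFA fatigue attacks")
assert not deliverable(draft)    # raw AI output is never sent
draft.fable_reviewed = True
assert not deliverable(draft)    # Fable review alone is not enough
draft.customer_approved = True
assert deliverable(draft)        # both layers signed off: deliverable
```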

Governance and Quality Control

Fable evaluates the performance of AI outputs through:

  1. Human-in-the-loop review of generated content.

  2. User and reviewer feedback loops to surface edge cases.

  3. Governance oversight ensuring AI-generated media meets internal content standards and ethical guidelines.

  4. Code-level review of deployed models by Fable's R&D team.

For more detail on Fable's AI governance, see our AI Policy available in the Fable Trust Center.