
Intelligence Engines

Five computation engines that analyze your data and generate actionable intelligence

Overview

AEO Optima runs five computation engines that analyze your snapshot data and generate actionable insights. Each engine answers a specific strategic question about your brand's AI visibility — from identifying which citation sources drive your presence, to predicting when a competitor might overtake you, to tracing the root cause of factual errors.

These engines run daily as part of the analytics maintenance cycle. Their outputs power the Intelligence Center, the narrative strip on the dashboard, and dedicated sections on the Citations, Competitors, Entity, and Trends pages. Each engine writes structured insights with a type, severity, confidence score, and evidence payload.

Citation Impact Attribution

Question answered: "Which citation sources are driving your visibility?"

Citation Impact Attribution correlates the appearance of citation domains with per-model visibility changes. When a new domain first appears in AI responses about your brand, this engine measures whether your visibility on that model increased or decreased in the same and following week.
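The before-and-after comparison can be sketched in a few lines. This is an illustrative simplification, not the actual implementation; all field and function names here are hypothetical:

```python
from datetime import date

def citation_impact(snapshots, domain, first_cited_week):
    """Compare visibility before vs. after a domain first appears.

    snapshots: list of dicts with 'week' (date), 'mentions_brand' (bool),
    and 'citations' (set of cited domains). Illustrative shape only.
    """
    before = [s for s in snapshots if s["week"] < first_cited_week]
    after = [s for s in snapshots if s["week"] >= first_cited_week]

    def visibility(rows):
        # Percentage of snapshots that mention the brand.
        return 100 * sum(r["mentions_brand"] for r in rows) / len(rows) if rows else 0.0

    delta = visibility(after) - visibility(before)
    citation_rate = 100 * sum(domain in s["citations"] for s in snapshots) / len(snapshots)
    return {"domain": domain, "visibility_delta": delta, "citation_rate": citation_rate}
```

Under this sketch, an insight would fire when `abs(visibility_delta)` exceeds the 10% threshold described below.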

What it produces:

  • Domain — The citation source URL domain
  • Model — Which AI model cited this domain
  • Visibility delta — Change in visibility rate from before to after the domain first appeared
  • Citation rate — Percentage of snapshots in the period that cite this domain
  • First cited week — The week the domain first appeared in responses

Insight type: citation_impact

An insight is generated when a domain causes more than a 10% visibility change on any model. Positive impacts highlight sources worth cultivating; negative impacts flag sources that may be diluting your brand presence.

How to act on it:

  • Domains with large positive deltas are your highest-value citation sources. Ensure your content appears on those sites.
  • Domains with negative deltas may be introducing competing or inaccurate information. Review them and consider correction submissions.
  • Track citation rates over time — a source cited in a high percentage of responses has outsized influence on how AI perceives your brand.

Competitor Trajectory Prediction

Question answered: "Is any competitor about to overtake you on a specific prompt?"

This engine fits a linear regression on the last 4 weeks of per-prompt mention rates for your brand and each configured competitor. It then extrapolates to predict whether and when a competitor's mention rate will exceed yours.
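The fit-and-extrapolate step can be sketched with an ordinary least-squares regression. This is an illustrative simplification under the assumption of one rate observation per week; function names are hypothetical:

```python
def fit_line(ys):
    """Least-squares fit of y = intercept + slope * x for x = 0..n-1."""
    n = len(ys)
    x_mean = (n - 1) / 2
    y_mean = sum(ys) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(ys)) / sum(
        (x - x_mean) ** 2 for x in range(n)
    )
    intercept = y_mean - slope * x_mean
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in enumerate(ys))
    ss_tot = sum((y - y_mean) ** 2 for y in ys)
    r_squared = 1 - ss_res / ss_tot if ss_tot else 0.0
    return slope, intercept, r_squared

def crossover_weeks(brand_rates, comp_rates):
    """Weeks until the competitor's fitted line exceeds the brand's, or None."""
    b_slope, b_int, _ = fit_line(brand_rates)
    c_slope, c_int, _ = fit_line(comp_rates)
    if c_slope <= b_slope:
        return None  # competitor is not closing the gap
    # Solve b_int + b_slope * t = c_int + c_slope * t, then measure from
    # the last observed week (x = n - 1).
    t = (b_int - c_int) / (c_slope - b_slope)
    weeks_ahead = t - (len(brand_rates) - 1)
    return weeks_ahead if weeks_ahead > 0 else 0  # 0: already overtaken
```

For example, a brand declining 2 points per week from 60% while a competitor gains 4 points per week from 40% crosses over in well under a week.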

What it produces:

  • Competitor — The competitor name
  • Prompt — The specific prompt being analyzed
  • Brand rate — Current percentage of snapshots mentioning your brand
  • Competitor rate — Current percentage mentioning the competitor
  • Trend slopes — Weekly rate of change for both brand and competitor
  • R-squared — Confidence of the linear fit (0 to 1)
  • Crossover weeks — Predicted number of weeks until the competitor overtakes you (null if never)

Insight type: competitor_trajectory

An insight is generated when crossover is predicted within 14 weeks and the R-squared confidence exceeds 0.5. Severity is based on urgency — a crossover predicted in 2 weeks is critical; one predicted in 12 weeks is informational.

How to act on it:

  • Focus content efforts on prompts where crossover is imminent (fewer weeks remaining).
  • Investigate what the competitor is doing differently for those prompts — check their cited content via Citation Tracking.
  • Prompts with low R-squared may not warrant action yet; the trend is too noisy to be reliable.

Prompt Visibility Decomposition

Question answered: "Which prompts are pulling your visibility down, and how much would fixing them help?"

This engine computes per-prompt visibility over the last 7 days, identifies prompts that are significantly below your project-wide average (more than 20 points below), and projects how much your overall visibility score would improve if each weak prompt were brought up to a conservative 50% target.
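The decomposition can be sketched as follows, assuming for illustration that overall visibility is the simple mean across prompts (names and the exact aggregation are hypothetical):

```python
def decompose(prompt_visibility, target=50.0, gap=20.0):
    """Flag prompts more than `gap` points below the average and project
    the lift to the overall average if each were raised to `target`.

    prompt_visibility: dict of prompt text -> visibility rate (0-100).
    Assumes overall visibility is the mean across prompts (a simplification).
    """
    n = len(prompt_visibility)
    avg = sum(prompt_visibility.values()) / n
    weak = []
    for prompt, vis in prompt_visibility.items():
        if vis < avg - gap:
            # Raising one prompt to `target` moves the mean by (target - vis) / n.
            lift = max(target - vis, 0) / n
            weak.append({"prompt": prompt, "visibility": vis, "lift_if_fixed": lift})
    return avg, sorted(weak, key=lambda w: -w["lift_if_fixed"])
```

The projected lift total is then simply the sum of the individual `lift_if_fixed` values.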

What it produces:

  • Prompt text — The underperforming prompt
  • Visibility — Current visibility rate (0-100) for this prompt
  • Models not mentioning — Which AI models are not mentioning your brand for this prompt
  • Models mentioning — Which AI models are mentioning your brand
  • Lift if fixed — Points of overall visibility improvement if this prompt reaches 50%
  • Projected lift total — Sum of all potential lift across all weak prompts

Insight type: prompt_weakness

An insight is generated when 3 or more prompts are more than 20 points below the project average. The insight includes the total projected lift, giving you a clear picture of what is achievable.

How to act on it:

  • Start with the weak prompt that has the highest "lift if fixed" value — that is your biggest opportunity.
  • Check which specific models are not mentioning your brand. If most models mention you but one does not, investigate that model's training data and citation preferences.
  • Use prompt segmentation to analyze branded, non-branded, and competitor prompts separately.

AI Error Root Cause Tracing

Question answered: "Which citation source is causing AI to state incorrect facts about your brand?"

When AI models produce factual errors about your brand (detected via accuracy checks against your Brand Facts), this engine traces those errors back to their likely citation sources. It groups inaccurate snapshots by the specific fact that was wrong, then ranks citation domains by how frequently they appear in error-containing responses.
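The ranking step can be sketched as a frequency count over error-containing responses. The input shape and function name are illustrative:

```python
from collections import Counter

def trace_error_source(error_snapshots):
    """Rank citation domains by how often they appear in error-containing
    responses for one incorrect fact.

    error_snapshots: list of sets of cited domains, one set per inaccurate
    response (illustrative shape).
    """
    counts = Counter(d for s in error_snapshots for d in s)
    total = len(error_snapshots)
    # (domain, share of error responses citing it), most-cited first.
    return [(domain, n / total) for domain, n in counts.most_common()]
```

In this sketch, the top-ranked domain plays the role of the "likely source domain", and its share of error responses is a rough proxy for the confidence field.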

What it produces:

  • Fact name — The Brand Fact that AI got wrong
  • LLM value — What the AI model stated
  • Actual value — The correct value from your Brand Facts
  • Models affected — Which AI models produced the error
  • Affected response % — Percentage of responses containing this error
  • Likely source domain — The most-cited domain in error-containing responses
  • Confidence — How strongly the citation frequency correlates with the error

Insight type: error_root_cause

How to act on it:

  • The "likely source domain" is your primary target. If an outdated article on a third-party site is being cited by AI models, that article is probably the source of the misinformation.
  • Use Hallucination Corrections to submit corrections for the identified errors.
  • Prioritize errors that affect multiple models — these indicate a widely cited incorrect source.
  • Update your own published content to make the correct fact explicit and easy for AI crawlers to find.

Leading Indicator Detection

Question answered: "Which prompts predict your overall visibility score days in advance?"

This engine performs cross-correlation analysis between each prompt's daily visibility and your project's overall daily visibility, testing time lags from 1 to 14 days. A prompt is a leading indicator if its visibility changes reliably predict your overall score change several days later.
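The lagged-correlation search can be sketched as follows. This is an illustrative simplification with hypothetical names, reusing the thresholds stated on this page (lags 1-14, at least 21 overlapping points):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def best_lag(prompt_daily, overall_daily, max_lag=14):
    """Find the lag (in days) at which prompt visibility best predicts
    overall visibility: correlate prompt[t] with overall[t + lag]."""
    best = (0, 0.0)  # (lag_days, correlation)
    for lag in range(1, max_lag + 1):
        xs = prompt_daily[:-lag]
        ys = overall_daily[lag:]
        if len(xs) < 21:  # minimum overlapping daily data points
            continue
        r = pearson(xs, ys)
        if abs(r) > abs(best[1]):
            best = (lag, r)
    return best
```

A prompt whose best lag is 3 or more days with correlation above 0.6 would qualify as a leading indicator under the thresholds below.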

What it produces:

  • Prompt text — The prompt that acts as a leading indicator
  • Lag days — How many days in advance this prompt predicts overall visibility
  • Correlation — Pearson correlation coefficient at the identified lag
  • Data points — Number of overlapping daily observations used

Insight type: leading_indicator

An insight is generated when the correlation exceeds 0.6 and the lag is 3 or more days. The minimum data requirement is 21 daily data points (3 weeks of captures).

How to act on it:

  • Monitor leading indicator prompts closely — a visibility drop on these prompts signals that your overall score may decline in the coming days.
  • Leading indicators are often your most sensitive prompts. Changes in AI behavior appear here before spreading to other prompts.
  • Use these prompts as an early warning system by setting up Alert Rules for visibility drops on them.

How Insights Are Generated

All five engines run as part of the analytics maintenance cron job, which executes daily at 4:30 AM UTC. The process works as follows:

  1. The cron job identifies all projects with 20 or more snapshots — projects below this threshold do not have enough data for statistically meaningful analysis.
  2. Each engine runs against qualifying projects and produces results.
  3. Results that meet each engine's significance thresholds are written to the insights table.
  4. Each insight record includes:
    • Type — The engine that generated it (citation_impact, competitor_trajectory, prompt_weakness, error_root_cause, leading_indicator)
    • Severity — critical, high, medium, or low, based on the engine's assessment of urgency
    • Confidence — A numeric score reflecting statistical reliability
    • Evidence — A JSONB payload containing the full computation output for that insight
  5. Insights auto-expire after 30 days if not acknowledged. This prevents stale intelligence from cluttering your feed.
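The expiry rule in step 5 amounts to a simple filter. A minimal sketch, with assumed field names rather than the actual schema:

```python
from datetime import datetime, timedelta

def active_insights(insights, now):
    """Keep acknowledged insights, plus unacknowledged ones generated
    within the last 30 days (everything else has auto-expired).

    insights: list of dicts with 'acknowledged' (bool) and 'created_at'
    (datetime) — illustrative field names.
    """
    cutoff = now - timedelta(days=30)
    return [i for i in insights if i["acknowledged"] or i["created_at"] >= cutoff]
```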

Tip: The more snapshots you capture and the longer your capture history, the more reliable these engines become. Consistent daily captures across multiple AI models produce the best results.

Where Insights Appear

Intelligence engine outputs are surfaced across multiple parts of the platform:

  • Dashboard — NarrativeStrip: Natural-language summaries of the most important recent insights
  • Dashboard — GoalPromptCard: Prompt-level intelligence tied to your goals
  • Citations page — CitationImpactSection: Citation Impact Attribution results with domain-level detail
  • Competitors page — TrajectorySection: Competitor Trajectory predictions with crossover timelines
  • Entity page — BrandDossier: Error root cause traces linked to specific brand facts
  • Trends page — LeadingIndicatorsCard: Leading indicator prompts with lag and correlation data
  • Intelligence Center — Discover tab: Complete feed of all insights across all engines, filterable by type and severity

MCP Tools

If you use the MCP API, four tools provide programmatic access to intelligence engine outputs:

  • list_insights — Retrieve insights filtered by type, severity, or date range
  • update_insight — Acknowledge or dismiss an insight
  • get_intelligence_scores — Retrieve the current Intelligence Score values for a project
  • get_intelligence_summary — Get a natural-language summary of the latest intelligence across all engines

Webhook Events

When a new insight is generated, AEO Optima can notify your systems via webhooks:

  • insight.generated — Fired each time an engine produces a new insight that meets its significance threshold

The webhook payload includes the insight type, severity, and evidence, allowing you to route high-severity insights to Slack, email, or other notification channels.
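A minimal receiver sketch for routing by severity. The payload fields (type, severity, evidence) come from this page; the function name and the severity-to-channel mapping are illustrative:

```python
import json

def route_insight(raw_body):
    """Pick a notification channel for an insight.generated webhook.

    The channel mapping here is an example policy, not part of the product.
    """
    payload = json.loads(raw_body)
    severity = payload.get("severity", "low")
    channel = {"critical": "slack", "high": "slack", "medium": "email"}.get(
        severity, "digest"
    )
    return {"channel": channel, "type": payload.get("type"), "severity": severity}
```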

Related Features

  • Intelligence Scores — Intelligence Scores measure your brand's overall AI health; Intelligence Engines explain why scores change
  • Citation Tracking — Citation Impact Attribution builds on the same citation data
  • Competitors — Competitor Trajectory uses your configured competitor list
  • Brand Facts — Error Root Cause Tracing requires Brand Facts to identify inaccuracies
  • Alert Rules — Set alerts on leading indicator prompts for early warning
  • Analytics & Trends — All engines feed into the analytics narrative and trend analysis
