Intelligence Engines
Five computation engines that analyze your data and generate actionable intelligence
Overview
AEO Optima runs five computation engines that analyze your snapshot data and generate actionable insights. Each engine answers a specific strategic question about your brand's AI visibility — from identifying which citation sources drive your presence, to predicting when a competitor might overtake you, to tracing the root cause of factual errors.
These engines run daily as part of the analytics maintenance cycle. Their outputs power the Intelligence Center, the narrative strip on the dashboard, and dedicated sections on the Citations, Competitors, Entity, and Trends pages. Each engine writes structured insights with a type, severity, confidence score, and evidence payload.
Citation Impact Attribution
Question answered: "Which citation sources are driving your visibility?"
Citation Impact Attribution correlates the appearance of citation domains with per-model visibility changes. When a new domain first appears in AI responses about your brand, this engine measures whether your visibility on that model increased or decreased in that week and the week after.
What it produces:
| Field | Description |
|---|---|
| Domain | The citation source URL domain |
| Model | Which AI model cited this domain |
| Visibility delta | Change in visibility rate from before to after the domain first appeared |
| Citation rate | Percentage of snapshots in the period that cite this domain |
| First cited week | The week the domain first appeared in responses |
Insight type: `citation_impact`
An insight is generated when a domain is associated with more than a 10% visibility change on any model. Positive impacts highlight sources worth cultivating; negative impacts flag sources that may be diluting your brand presence.
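The before/after comparison is easy to picture in code. The following is an illustrative TypeScript sketch, not the engine's actual implementation; the `WeeklyRate` shape and the sample data are assumptions:

```typescript
// Illustrative shape: one model's weekly visibility, with a flag for
// whether the domain under analysis was cited that week.
interface WeeklyRate {
  week: string;          // ISO week label, e.g. "2024-W18"
  visibility: number;    // 0-100 visibility rate on this model
  domainCited: boolean;  // did the domain appear in citations this week?
}

// Compare mean visibility before vs. from the domain's first cited week.
function citationImpact(weeks: WeeklyRate[]): number | null {
  const firstIdx = weeks.findIndex((w) => w.domainCited);
  if (firstIdx <= 0) return null; // never cited, or no "before" baseline

  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const before = mean(weeks.slice(0, firstIdx).map((w) => w.visibility));
  const after = mean(weeks.slice(firstIdx).map((w) => w.visibility));
  return after - before; // positive: the domain coincided with a gain
}

// Sample data: visibility jumps roughly 20 points once the domain appears,
// clearing the 10% significance threshold, so an insight would fire.
const delta = citationImpact([
  { week: "2024-W15", visibility: 32, domainCited: false },
  { week: "2024-W16", visibility: 35, domainCited: false },
  { week: "2024-W17", visibility: 51, domainCited: true },
  { week: "2024-W18", visibility: 56, domainCited: true },
]);
if (delta !== null && Math.abs(delta) > 10) {
  console.log(`citation_impact candidate: ${delta.toFixed(1)} point delta`);
}
```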
How to act on it:
- Domains with large positive deltas are your highest-value citation sources. Ensure your content appears on those sites.
- Domains with negative deltas may be introducing competing or inaccurate information. Review them and consider correction submissions.
- Track citation rates over time — a source cited in a high percentage of responses has outsized influence on how AI perceives your brand.
Competitor Trajectory Prediction
Question answered: "Is any competitor about to overtake you on a specific prompt?"
This engine fits a linear regression on the last 4 weeks of per-prompt mention rates for your brand and each configured competitor. It then extrapolates to predict whether and when a competitor's mention rate will exceed yours.
What it produces:
| Field | Description |
|---|---|
| Competitor | The competitor name |
| Prompt | The specific prompt being analyzed |
| Brand rate | Current percentage of snapshots mentioning your brand |
| Competitor rate | Current percentage mentioning the competitor |
| Trend slopes | Weekly rate of change for both brand and competitor |
| R-squared | Confidence of the linear fit (0 to 1) |
| Crossover weeks | Predicted number of weeks until the competitor overtakes you (null if never) |
Insight type: `competitor_trajectory`
An insight is generated when crossover is predicted within 14 weeks and the R-squared confidence exceeds 0.5. Severity is based on urgency — a crossover predicted in 2 weeks is critical; one predicted in 12 weeks is informational.
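The crossover math reduces to fitting two lines and solving for their intersection. Here is a hedged TypeScript sketch (the data shapes and sample numbers are illustrative; the engine's own implementation may differ):

```typescript
// Ordinary least-squares fit over weekly points x = 0..n-1.
function linearFit(ys: number[]) {
  const n = ys.length;
  const mx = (n - 1) / 2;
  const my = ys.reduce((a, b) => a + b, 0) / n;
  let sxy = 0, sxx = 0, syy = 0;
  for (let i = 0; i < n; i++) {
    sxy += (i - mx) * (ys[i] - my);
    sxx += (i - mx) ** 2;
    syy += (ys[i] - my) ** 2;
  }
  const slope = sxy / sxx;
  // R-squared of the fit; the engine requires > 0.5 before alerting.
  const r2 = syy === 0 ? 0 : (sxy * sxy) / (sxx * syy);
  return { slope, intercept: my - slope * mx, r2 };
}

// Weeks until the competitor's fitted line crosses above the brand's,
// measured from the latest observed week; null if it never does.
function crossoverWeeks(brand: number[], competitor: number[]): number | null {
  const b = linearFit(brand);
  const c = linearFit(competitor);
  if (c.slope <= b.slope) return null; // competitor is not closing the gap
  const x = brand.length - 1; // latest observed week
  const bNow = b.intercept + b.slope * x;
  const cNow = c.intercept + c.slope * x;
  if (cNow >= bNow) return 0; // already overtaken on the fitted lines
  // Solve bNow + b.slope * t = cNow + c.slope * t for t.
  return (bNow - cNow) / (c.slope - b.slope);
}

// Brand drifting down slightly; competitor gaining ~5 points per week.
console.log(crossoverWeeks([61, 60, 60, 59], [38, 44, 49, 53])); // ≈ 1 week
```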
How to act on it:
- Focus content efforts on prompts where crossover is imminent (fewer weeks remaining).
- Investigate what the competitor is doing differently for those prompts — check their cited content via Citation Tracking.
- Prompts with low R-squared may not warrant action yet; the trend is too noisy to be reliable.
Prompt Visibility Decomposition
Question answered: "Which prompts are pulling your visibility down, and how much would fixing them help?"
This engine computes per-prompt visibility over the last 7 days, identifies prompts that are significantly below your project-wide average (more than 20 points below), and projects how much your overall visibility score would improve if each weak prompt were brought up to a conservative 50% target.
What it produces:
| Field | Description |
|---|---|
| Prompt text | The underperforming prompt |
| Visibility | Current visibility rate (0-100) for this prompt |
| Models not mentioning | Which AI models are not mentioning your brand for this prompt |
| Models mentioning | Which AI models are mentioning your brand |
| Lift if fixed | Points of overall visibility improvement if this prompt reaches 50% |
| Projected lift total | Sum of all potential lift across all weak prompts |
Insight type: `prompt_weakness`
An insight is generated when 3 or more prompts are more than 20 points below the project average. The insight includes the total projected lift, giving you a clear picture of what is achievable.
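The lift arithmetic is straightforward if you assume the overall score is the mean of per-prompt visibilities (an assumption made for this sketch; the actual aggregation may be weighted differently):

```typescript
// Per-prompt visibility (0-100) over the last 7 days; sample data.
const promptVisibility: Record<string, number> = {
  "best crm for startups": 72,
  "crm with email automation": 65,
  "affordable crm tools": 18,
  "crm for remote teams": 12,
};

const TARGET = 50;        // conservative target for a fixed prompt
const GAP_THRESHOLD = 20; // weak = more than 20 points below the average

const values = Object.values(promptVisibility);
const average = values.reduce((a, b) => a + b, 0) / values.length; // 41.75

// If the overall score is the mean of per-prompt rates, raising one weak
// prompt to the target lifts the overall score by (target - current) / n.
for (const [prompt, vis] of Object.entries(promptVisibility)) {
  if (average - vis > GAP_THRESHOLD) {
    const lift = (TARGET - vis) / values.length;
    console.log(`"${prompt}": ${vis} -> lift if fixed: +${lift.toFixed(1)} pts`);
  }
}
// "affordable crm tools": 18 -> lift if fixed: +8.0 pts
// "crm for remote teams": 12 -> lift if fixed: +9.5 pts
```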
How to act on it:
- Start with the weak prompt that has the highest "lift if fixed" value — that is your biggest opportunity.
- Check which specific models are not mentioning your brand. If most models mention you but one does not, investigate that model's training data and citation preferences.
- The engine supports prompt segmentation, so you can analyze branded, non-branded, and competitor prompts separately.
AI Error Root Cause Tracing
Question answered: "Which citation source is causing AI to state incorrect facts about your brand?"
When AI models produce factual errors about your brand (detected via accuracy checks against your Brand Facts), this engine traces those errors back to their likely citation sources. It groups inaccurate snapshots by the specific fact that was wrong, then ranks citation domains by how frequently they appear in error-containing responses.
What it produces:
| Field | Description |
|---|---|
| Fact name | The Brand Fact that AI got wrong |
| LLM value | What the AI model stated |
| Actual value | The correct value from your Brand Facts |
| Models affected | Which AI models produced the error |
| Affected response % | Percentage of responses containing this error |
| Likely source domain | The most-cited domain in error-containing responses |
| Confidence | How strongly the citation frequency correlates with the error |
Insight type: `error_root_cause`
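A minimal sketch of the grouping-and-ranking step, assuming error-flagged snapshots carry the fact name and their cited domains (the `ErrorSnapshot` shape and the confidence formula here are illustrative, not the engine's actual code):

```typescript
// Illustrative shape: a snapshot that failed an accuracy check.
interface ErrorSnapshot {
  factName: string;       // the Brand Fact the response got wrong
  citedDomains: string[]; // domains cited in that response
}

// Rank domains by the share of error-containing responses that cite them.
function likelySources(snapshots: ErrorSnapshot[], factName: string) {
  const matching = snapshots.filter((s) => s.factName === factName);
  const counts = new Map<string, number>();
  for (const snap of matching) {
    for (const domain of new Set(snap.citedDomains)) {
      counts.set(domain, (counts.get(domain) ?? 0) + 1);
    }
  }
  return [...counts.entries()]
    .map(([domain, n]) => ({ domain, confidence: n / matching.length }))
    .sort((a, b) => b.confidence - a.confidence);
}

const ranked = likelySources(
  [
    { factName: "pricing", citedDomains: ["oldreview.example", "blog.example"] },
    { factName: "pricing", citedDomains: ["oldreview.example"] },
    { factName: "founding_year", citedDomains: ["wiki.example"] },
  ],
  "pricing",
);
console.log(ranked[0]); // { domain: "oldreview.example", confidence: 1 }
```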
How to act on it:
- The "likely source domain" is your primary target. If an outdated article on a third-party site is being cited by AI models, that article is probably the source of the misinformation.
- Use Hallucination Corrections to submit corrections for the identified errors.
- Prioritize errors that affect multiple models — these indicate a widely-cited incorrect source.
- Update your own published content to make the correct fact explicit and easy for AI crawlers to find.
Leading Indicator Detection
Question answered: "Which prompts predict your overall visibility score days in advance?"
This engine performs cross-correlation analysis between each prompt's daily visibility and your project's overall daily visibility, testing time lags from 1 to 14 days. A prompt is a leading indicator if its visibility changes reliably predict your overall score change several days later.
What it produces:
| Field | Description |
|---|---|
| Prompt text | The prompt that acts as a leading indicator |
| Lag days | How many days in advance this prompt predicts overall visibility |
| Correlation | Pearson correlation coefficient at the identified lag |
| Data points | Number of overlapping daily observations used |
Insight type: `leading_indicator`
An insight is generated when the correlation exceeds 0.6 and the lag is 3 or more days. The minimum data requirement is 21 daily data points (3 weeks of captures).
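A sketch of the lag search, assuming plain Pearson correlation over aligned daily series (illustrative TypeScript; the engine's exact windowing may differ):

```typescript
// Pearson correlation between two equal-length series.
function pearson(a: number[], b: number[]): number {
  const n = a.length;
  const ma = a.reduce((s, v) => s + v, 0) / n;
  const mb = b.reduce((s, v) => s + v, 0) / n;
  let cov = 0, va = 0, vb = 0;
  for (let i = 0; i < n; i++) {
    cov += (a[i] - ma) * (b[i] - mb);
    va += (a[i] - ma) ** 2;
    vb += (b[i] - mb) ** 2;
  }
  return va === 0 || vb === 0 ? 0 : cov / Math.sqrt(va * vb);
}

// Test lags 1..14: correlate the prompt series against the overall series
// shifted `lag` days later, keeping the strongest lag with enough overlap.
function bestLeadingLag(prompt: number[], overall: number[]) {
  let best = { lag: 0, correlation: 0 };
  for (let lag = 1; lag <= 14; lag++) {
    const n = Math.min(prompt.length, overall.length) - lag;
    if (n < 21) continue; // the 21-point minimum noted above
    const r = pearson(prompt.slice(0, n), overall.slice(lag, lag + n));
    if (Math.abs(r) > Math.abs(best.correlation)) best = { lag, correlation: r };
  }
  return best;
}

// Synthetic check: the overall series mirrors the prompt series 5 days later,
// so the search should report lag = 5 with correlation ≈ 1.
const prompt = Array.from({ length: 40 }, (_, i) => 50 + 20 * Math.sin(i / 5));
const overall = prompt.map((_, i) => (i < 5 ? 50 : prompt[i - 5]));
console.log(bestLeadingLag(prompt, overall)); // { lag: 5, correlation: 1 }
```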
How to act on it:
- Monitor leading indicator prompts closely — a visibility drop on these prompts signals that your overall score may decline in the coming days.
- Leading indicators are often your most sensitive prompts. Changes in AI behavior appear here first before spreading to other prompts.
- Use these prompts as an early warning system by setting up Alert Rules for visibility drops on them.
How Insights Are Generated
All five engines run as part of the analytics maintenance cron job, which executes daily at 4:30 AM UTC. The process works as follows:
- The cron job identifies all projects with 20 or more snapshots — projects below this threshold do not have enough data for statistically meaningful analysis.
- Each engine runs against qualifying projects and produces results.
- Results that meet each engine's significance thresholds are written to the insights table.
- Each insight record includes (see the sketch after this list):
  - Type — The engine that generated it (`citation_impact`, `competitor_trajectory`, `prompt_weakness`, `error_root_cause`, `leading_indicator`)
  - Severity — `critical`, `high`, `medium`, or `low`, based on the engine's assessment of urgency
  - Confidence — A numeric score reflecting statistical reliability
  - Evidence — A JSONB payload containing the full computation output for that insight
- Insights auto-expire after 30 days if not acknowledged. This prevents stale intelligence from cluttering your feed.
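A TypeScript sketch of that record shape is below. The field names are illustrative; consult the API reference for the actual schema:

```typescript
// Illustrative shape of an insight record; actual column names may differ.
type InsightType =
  | "citation_impact"
  | "competitor_trajectory"
  | "prompt_weakness"
  | "error_root_cause"
  | "leading_indicator";

type Severity = "critical" | "high" | "medium" | "low";

interface Insight {
  type: InsightType;
  severity: Severity;
  confidence: number;                // statistical reliability score
  evidence: Record<string, unknown>; // full computation output (JSONB)
  createdAt: string;                 // insights expire 30 days after this
  acknowledgedAt: string | null;     // null until acknowledged
}
```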
Tip: The more snapshots you capture and the longer your capture history, the more reliable these engines become. Consistent daily captures across multiple AI models produce the best results.
Where Insights Appear
Intelligence engine outputs are surfaced across multiple parts of the platform:
| Location | What appears |
|---|---|
| Dashboard — NarrativeStrip | Natural-language summaries of the most important recent insights |
| Dashboard — GoalPromptCard | Prompt-level intelligence tied to your goals |
| Citations page — CitationImpactSection | Citation Impact Attribution results with domain-level detail |
| Competitors page — TrajectorySection | Competitor Trajectory predictions with crossover timelines |
| Entity page — BrandDossier | Error root cause traces linked to specific brand facts |
| Trends page — LeadingIndicatorsCard | Leading indicator prompts with lag and correlation data |
| Intelligence Center — Discover tab | Complete feed of all insights across all engines, filterable by type and severity |
MCP Tools
If you use the MCP API, four tools provide programmatic access to intelligence engine outputs:
| Tool | Description |
|---|---|
| `list_insights` | Retrieve insights filtered by type, severity, or date range |
| `update_insight` | Acknowledge or dismiss an insight |
| `get_intelligence_scores` | Retrieve the current Intelligence Score values for a project |
| `get_intelligence_summary` | Get a natural-language summary of the latest intelligence across all engines |
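For example, using the MCP TypeScript SDK you might retrieve high-severity insights like this. The transport command and filter argument names are assumptions; check the MCP API reference for the exact parameters:

```typescript
// Sketch only: "aeo-optima-mcp" and the severity filter are hypothetical.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "insights-reader", version: "1.0.0" });
await client.connect(
  new StdioClientTransport({ command: "aeo-optima-mcp" }), // hypothetical launcher
);

const result = await client.callTool({
  name: "list_insights",
  arguments: { severity: "high" }, // hypothetical filter parameter
});
console.log(result.content);
```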
Webhook Events
When a new insight is generated, AEO Optima can notify your systems via webhooks:
| Event | Trigger |
|---|---|
| `insight.generated` | Fired each time an engine produces a new insight that meets its significance threshold |
The webhook payload includes the insight type, severity, and evidence, allowing you to route high-severity insights to Slack, email, or other notification channels.
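A minimal receiver sketch using Node's built-in http module is shown below. The top-level payload fields (`type`, `severity`, `evidence`) are assumptions based on the description above, not a documented schema:

```typescript
import { createServer } from "node:http";

// Assumed payload fields: type, severity, evidence (see the table above).
createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const event = JSON.parse(body);
    if (event.type === "insight.generated") {
      if (event.severity === "critical" || event.severity === "high") {
        // Forward to Slack, email, or your paging system here.
        console.warn("urgent insight:", event.evidence);
      }
    }
    res.writeHead(200).end();
  });
}).listen(3000);
```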
Related Features
| Feature | Connection |
|---|---|
| Intelligence Scores | Intelligence Scores measure your brand's overall AI health; Intelligence Engines explain why scores change |
| Citation Tracking | Citation Impact Attribution builds on the same citation data |
| Competitors | Competitor Trajectory uses your configured competitor list |
| Brand Facts | Error Root Cause Tracing requires Brand Facts to identify inaccuracies |
| Alert Rules | Set alerts on leading indicator prompts for early warning |
| Analytics & Trends | All engines feed into the analytics narrative and trend analysis |