AEO Optima Docs
Features

Intelligence Scores

Six proprietary metrics — developed exclusively for AEO Optima — that measure your brand's health across AI engines using information theory and graph analysis

Overview

Intelligence Scores are six proprietary metrics developed exclusively for AEO Optima that give you a comprehensive, quantitative view of how AI engines perceive, represent, and position your brand. While standard analytics tell you whether your brand appears in AI responses, Intelligence Scores tell you how well your brand is represented — measuring narrative accuracy, cross-platform agreement, competitive positioning, sentiment stability, citation strength, and topical authority.

These scores are computed from your existing snapshot data. Each score runs on a 0–100 scale, is updated every time you visit the Analytics page, and is displayed alongside a confidence indicator based on sample size. Together, the six scores form a radar chart that visualizes your brand's overall AI health at a glance.

Why This Framework Is Novel

Traditional SEO tools measure rankings, traffic, and keyword positions — metrics designed for search engine results pages. AI answer engines don't have pages, rankings, or click-through rates in the traditional sense. They generate free-form text responses that mention, describe, compare, and cite brands in entirely new ways.

AEO Optima's Intelligence Scores are the first framework purpose-built for this new reality:

| Traditional SEO Metric | AEO Intelligence Score | What Changes |
| --- | --- | --- |
| Keyword ranking position | BNCI — Brand Narrative Coherence | Instead of "where do you rank?", measures "does AI accurately tell your story?" |
| Organic traffic share | CMCS — Cross-Model Consistency | Instead of "how much traffic?", measures "do all AI platforms agree about you?" |
| Competitor keyword overlap | MEI — Market Entropy Index | Instead of "who ranks for the same keywords?", measures "how fragmented is the AI attention space?" using Shannon Entropy |
| Brand sentiment (manual) | SDI — Sentiment Drift Index | Instead of static sentiment, tracks narrative evolution over time using KL Divergence |
| Backlink authority | CIPS — Citation Impact Score | Instead of link count, measures your share of AI citations using PageRank-inspired influence scoring |
| Topical authority (estimated) | ETAS — Entity & Topical Authority | Instead of estimated topic relevance, directly measures how strongly AI models associate your brand with specific topics |

Key techniques used:

  • Jensen-Shannon Divergence (JSD) — information-theoretic measure for cross-model consistency
  • Shannon Entropy — measures competitive landscape fragmentation per prompt
  • Kullback-Leibler Divergence — tracks vocabulary and narrative shifts between time periods
  • PageRank-inspired influence scoring — weights citation sources by authority and depth
  • Multi-dimensional topic authority — evaluates breadth, consistency, depth, and citation rate across 10 topic categories

No other AEO platform, SEO tool, or brand monitoring service provides these specific metrics. They are computed entirely from your snapshot data — no external data sources or third-party APIs required.

The Six Scores

BNCI — Brand Narrative Coherence Index

What it measures: How consistently AI platforms describe your brand's core attributes — value propositions, differentiators, and key facts.

How it's calculated: BNCI analyzes your most recent brand-mentioning snapshots against your Brand Facts and produces a weighted composite of four sub-dimensions:

| Sub-dimension | Weight | What it checks |
| --- | --- | --- |
| Value Proposition Alignment | 30% | Do AI responses mention your key value propositions, mission, and vision? |
| Differentiator Accuracy | 30% | Are your unique competitive advantages correctly represented? |
| Attribute Coverage | 25% | What percentage of your Brand Facts appear in AI responses? |
| Competitor Contamination | 15% | Are competitor attributes being wrongly attributed to your brand? (inverted — less contamination = higher score) |

The system uses keyword and phrase overlap analysis to match AI response text against your Brand Facts entries.
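As a rough illustration of the weighting above, the composite could be combined as follows. This is a hedged sketch, not AEO Optima's actual implementation: the four sub-dimension scorers are assumed to exist elsewhere and return values on a 0–100 scale.

```python
# Hypothetical sketch of the BNCI composite. Only the weighting from
# the table above is shown; the sub-dimension scores are assumed inputs.
def bnci(value_prop: float, differentiators: float,
         coverage: float, contamination: float) -> int:
    """Weighted BNCI composite (0-100).

    `contamination` is the raw contamination rate (0-100); it is
    inverted so that less contamination yields a higher score.
    """
    return round(
        0.30 * value_prop                # Value Proposition Alignment
        + 0.30 * differentiators         # Differentiator Accuracy
        + 0.25 * coverage                # Attribute Coverage
        + 0.15 * (100 - contamination)   # inverted Competitor Contamination
    )
```

For example, sub-scores of 80/70/60 with a 20% contamination rate would combine to a BNCI of 72.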

Score interpretation:

| Range | Assessment | What it means |
| --- | --- | --- |
| 80–100 | Excellent | AI platforms consistently and accurately represent your brand story. |
| 60–79 | Good | Most facts and differentiators are present; minor gaps exist. |
| 40–59 | Fair | Significant attributes are missing or inconsistently represented. |
| 20–39 | Poor | AI platforms frequently omit or misrepresent your brand narrative. |
| 0–19 | Very Poor | Your brand story is largely absent or incorrect in AI responses. |

How to improve it:

  • Add comprehensive Brand Facts in Settings — the more facts you provide, the more accurately the system can measure coverage.
  • Publish clear, structured content on your website that states your value propositions explicitly.
  • Use schema markup (Organization, FAQ, HowTo) to make facts machine-readable.
  • Review the "Missing Facts" list in the score detail to identify which attributes AI platforms are not picking up.

Tip: BNCI requires Brand Facts to be configured. If you see a score of 0, go to Settings and add your brand facts first.


CMCS — Cross-Model Consistency Score

What it measures: Whether different AI platforms (ChatGPT, Claude, Gemini, Perplexity, DeepSeek, etc.) agree about your brand.

How it's calculated: CMCS groups your snapshots by AI provider and compares three dimensions across every pair of platforms:

  1. Mention rate — Does each platform mention your brand at the same frequency?
  2. Sentiment distribution — Do platforms assign similar positive/neutral/negative sentiment?
  3. Rank position — Does your brand appear at similar positions across platforms?

The comparison uses Jensen-Shannon Divergence (JSD), an information-theoretic measure of how different two probability distributions are. The final score is:

CMCS = (1 - average JSD across all platform pairs) x 100

A score of 100 means all platforms produce identical distributions. The system also identifies the most divergent platform (the outlier) and the most consistent platform (closest to consensus).
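The pairwise comparison can be sketched as below. This is a minimal illustration, assuming each provider's compared dimensions have already been collapsed into a probability distribution; the function names are illustrative, not the product's actual API.

```python
from itertools import combinations
from math import log2

def kl(p, q):
    """Kullback-Leibler divergence in bits; zero-probability terms in p drop out."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon divergence (base 2), bounded in [0, 1]."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def cmcs(platform_dists):
    """CMCS = (1 - mean pairwise JSD) * 100.

    platform_dists maps each provider to a probability distribution
    over the compared dimensions (mention rate, sentiment, rank).
    """
    pairs = list(combinations(platform_dists.values(), 2))
    avg = sum(jsd(p, q) for p, q in pairs) / len(pairs)
    return round((1 - avg) * 100)
```

With identical distributions the average JSD is 0 and the score is 100; with completely disjoint distributions it falls to 0.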

When enough data exists, the Analytics page displays a Cross-Model Consistency Matrix heatmap showing pairwise JSD values between every platform combination.

Score interpretation:

| Range | Assessment | What it means |
| --- | --- | --- |
| 80–100 | Highly consistent | All AI platforms agree about your brand. |
| 60–79 | Mostly consistent | Minor differences between platforms; one may be an outlier. |
| 40–59 | Moderate divergence | Platforms disagree on some aspects — investigate the outlier. |
| 20–39 | Significant divergence | AI platforms present materially different views of your brand. |
| 0–19 | Highly inconsistent | Each platform tells a different story about your brand. |

How to improve it:

  • Investigate the "most divergent" platform and check what it says differently.
  • Ensure your content is accessible to all major AI crawlers (check your robots.txt via Crawler Intelligence).
  • Publish consistent messaging across all channels — inconsistent source material leads to inconsistent AI output.
  • Capture snapshots from at least 3 different AI providers for meaningful comparison.

Tip: CMCS requires snapshots from at least 2 different AI providers. If you only capture from one model, this score cannot be computed.


MEI — Market Entropy Index

What it measures: How fragmented or concentrated the competitive landscape is within AI responses to your prompts. In other words: does one brand dominate AI answers, or do many brands share the space equally?

How it's calculated: For each prompt, MEI builds a distribution of brand mentions (your brand + competitors + "other") and computes Shannon Entropy, normalized to a 0–100 scale. The overall MEI is the average entropy across all analyzed prompts.
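A sketch of the per-prompt entropy calculation. One assumption here is that normalization divides by the maximum possible entropy for the brands actually mentioned; the real normalization may differ.

```python
from math import log2

def prompt_entropy(mention_counts):
    """Normalized Shannon entropy (0-100) of one prompt's brand-mention
    distribution: 0 = one brand dominates, 100 = perfectly even split."""
    total = sum(mention_counts.values())
    if total == 0:
        return 0.0
    probs = [c / total for c in mention_counts.values() if c > 0]
    if len(probs) < 2:
        return 0.0  # zero or one mentioned brand: fully concentrated
    h = -sum(p * log2(p) for p in probs)
    return round(100 * h / log2(len(probs)), 1)
```

A prompt where one brand takes all mentions scores 0 ("Locked"); an even split across brands scores 100 ("Fragmented").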

Each prompt receives a market assessment:

| Entropy Range | Assessment | Meaning |
| --- | --- | --- |
| 0–19 | Locked | One brand dominates — hard to break in. |
| 20–39 | Competitive | A few brands share the space — defensible positions exist. |
| 40–69 | Open | Multiple brands mentioned — growth opportunity available. |
| 70–100 | Fragmented | Many brands share equal space — differentiation is essential. |

Important: MEI uses inverted coloring in the UI. A lower MEI is better for your competitive position (it means you dominate). In the radar chart, MEI is displayed as 100 - raw score so that higher = more favorable.

Score interpretation:

  • Low MEI (0–39): Your brand dominates AI responses for these prompts. Maintain your position through continued content investment.
  • Mid MEI (40–69): This is the "sweet spot" for opportunity. There is room to grow your share without the market being impossibly fragmented.
  • High MEI (70–100): The market is highly fragmented. Focus on differentiating content and building topical authority to stand out.

How to improve it (lower your MEI):

  • Focus content strategy on prompts currently in the "Open" (40–70) range — these are your best growth opportunities.
  • Avoid wasting resources on "Locked" prompts where a competitor dominates unless you have a specific strategy.
  • The MEI Bubble Chart on the Analytics page visualizes each prompt's entropy, making it easy to identify opportunity zones.

SDI — Sentiment Drift Index

What it measures: How much the AI narrative about your brand is shifting over time. It tracks changes in word frequency and sentiment between consecutive time periods.

How it's calculated: SDI divides your snapshot history into time periods and performs three analyses between each consecutive pair:

  1. Word frequency analysis — Extracts the top 50 meaningful words from brand-mentioning sentences in each period, filtering out stop words and low-frequency terms.
  2. KL Divergence — Computes Kullback-Leibler Divergence between the word frequency distributions of consecutive periods. Higher divergence = more change in the vocabulary AI uses about your brand.
  3. Sentiment shift — Tracks how the average sentiment score changes between periods.

The period grouping auto-adapts to your selected window:

  • Window < 60 days → weekly periods. A 30-day view yields ~4 weekly groups, enough for meaningful drift analysis even on short ranges.
  • Window ≥ 60 days → monthly periods. Preserves the multi-month narrative signal that's the natural cadence for slower brand-perception shifts.

The raw KL Divergence is normalized to a 0–100 scale via a sigmoid curve so typical values spread across the full range instead of clustering near zero:

SDI = round(100 × (1 − e^(−4 × KL)))

The overall SDI is the average across all period pairs. The system also identifies rising words (terms gaining frequency) and declining words (terms losing frequency), giving you concrete visibility into how the narrative is changing.
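The per-period-pair computation can be sketched as follows. This is a simplified illustration: additive smoothing over the shared vocabulary, the smoothing constant, and the direction of the KL comparison (current against previous) are all assumptions.

```python
from collections import Counter
from math import exp, log

def sdi_pair(prev_words, curr_words, eps=1e-9):
    """Drift between two consecutive periods' word lists (0-100).

    Computes KL(current || previous) over the combined vocabulary with
    additive smoothing, then maps it through 100 * (1 - e^(-4 * KL)).
    """
    vocab = set(prev_words) | set(curr_words)
    prev, curr = Counter(prev_words), Counter(curr_words)

    def dist(counts):
        # Smoothed probability distribution over the shared vocabulary.
        total = sum(counts.values()) + eps * len(vocab)
        return {w: (counts[w] + eps) / total for w in vocab}

    p, q = dist(curr), dist(prev)
    kl = sum(p[w] * log(p[w] / q[w]) for w in vocab)
    return round(100 * (1 - exp(-4 * kl)))
```

Identical vocabularies give a drift of 0; a complete vocabulary replacement saturates toward 100.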

The Sentiment Pulse card on the Intelligence Center combines the drift index with the sentiment direction to give a clearer headline: "Major shift · positive" when vocabulary changed dramatically toward a more favorable framing, "Stable" when neither shifted, and so on. The numbers behind the headline (drift index 0–100 + average sentiment shift in points) appear in the subline.

Important: Like MEI, SDI uses inverted coloring in the UI. A lower SDI is better (stable reputation). In the radar chart, SDI is displayed as 100 - raw score.

Score interpretation:

| Range | Assessment | What it means |
| --- | --- | --- |
| 0–14 | Stable | AI narrative about your brand is consistent — no significant shifts. |
| 15–39 | Minor drift | Small narrative changes detected — normal for active brands. |
| 40–69 | Moderate drift | Narrative is evolving noticeably. Review content strategy and rising/declining words. |
| 70–100 | Major drift | Significant narrative shift. Investigate immediately — this could indicate a PR event, market change, or content problem. The shift is positive if average sentiment also rose, negative if it fell. |

How to improve it (lower your SDI):

  • Publish consistent messaging over time — avoid frequent brand repositioning.
  • Monitor the "Rising Words" and "Declining Words" lists to understand what is changing.
  • If you intentionally changed your positioning, a temporary SDI spike is expected and healthy.
  • SDI requires at least 2 periods of snapshot data and 10+ brand-mentioning snapshots to produce meaningful results. With weekly grouping that means ~14 days of capture; with monthly grouping it means snapshots that span at least two calendar months.

Tip: A stable (low) SDI is generally good, but some drift is natural. A sudden spike may indicate a significant event worth investigating.


CIPS — Citation Impact & Positioning Score

What it measures: Your brand's share of all citations in AI responses, plus the influence and authority of sources citing you.

How it's calculated: CIPS analyzes all citations (URLs) found in your snapshots and builds a citation graph:

  1. Citation frequency — Counts how often each domain is cited across all snapshots.
  2. Source classification — Categorizes each source as earned media, brand-owned, social, academic, or directory.
  3. Influence scoring — Uses a PageRank-inspired algorithm with weighted citation depth:
    • Direct citations: 1.0x weight
    • Second-order influence (estimated): 0.5x weight
    • Source type authority multiplier (academic: 1.5x, earned media: 1.3x, social: 0.8x)
  4. Brand citation rate — The percentage of all citations that reference your own website.

The displayed score is brandCitationRate x 100, representing your share of all AI citations as a percentage.
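The weighting scheme above can be sketched like this. The source-type keys and field names are assumed for illustration, and second-order counts are taken as given rather than estimated from a citation graph.

```python
# Multipliers from the text; types not listed there (brand-owned,
# directory) are assumed to default to 1.0x. Key names are assumptions.
TYPE_MULTIPLIER = {"academic": 1.5, "earned_media": 1.3, "social": 0.8}

def influence(direct: int, second_order: int, source_type: str) -> float:
    """Weighted influence of one citation source: direct citations at
    1.0x, estimated second-order influence at 0.5x, then scaled by the
    source-type authority multiplier."""
    base = 1.0 * direct + 0.5 * second_order
    return base * TYPE_MULTIPLIER.get(source_type, 1.0)

def cips_display(brand_citations: int, total_citations: int) -> float:
    """Displayed CIPS: the brand's share of all citations, as a percentage."""
    if total_citations == 0:
        return 0.0
    return round(100 * brand_citations / total_citations, 1)
```

For example, an academic source with 10 direct and 4 second-order citations carries an influence of 18.0, and 8 brand citations out of 100 total displays as a CIPS of 8.0.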

The system also identifies:

  • Citation hubs — Sources cited 5+ times (high-impact, authoritative sources)
  • Citation gaps — Sources that cite competitors but not you, ranked by priority (competitorMentionCount x sourceAuthority x recencyBoost)

Score interpretation:

| Range | Assessment | What it means |
| --- | --- | --- |
| 10+ | Strong | Your website is frequently cited as an authoritative source. |
| 5–9 | Moderate | Your content is cited, but there is room to grow. |
| 1–4 | Emerging | Your brand has some citation presence — focus on building authority. |
| 0 | Absent | AI responses do not cite your website. |

Note: CIPS scores tend to be lower than the other five scores because AI citations are spread across many domains, so any single site's share is naturally small. A CIPS of 8 (meaning 8% of all AI citations go to your site) is a strong result.

How to improve it:

  • Create original, authoritative content that AI models cite as a source.
  • Review the citation gaps list — these are sources citing competitors but not you. Pursue coverage on those platforms.
  • Invest in earned media (press, industry publications) — these carry higher authority multipliers.
  • Ensure your content is crawlable and well-structured so AI models can cite specific pages.

ETAS — Entity & Topical Authority Score

What it measures: How strongly AI models recognize your brand as an authority across specific topics such as product quality, innovation, pricing, customer support, market position, trust, expertise, scale, integration, and content/SEO.

How it's calculated: ETAS evaluates your brand across 10 pre-defined topic categories using four weighted dimensions:

| Dimension | Weight | What it measures |
| --- | --- | --- |
| Breadth | 30% | How many sub-topic patterns match within brand-mentioning responses. |
| Consistency | 30% | Whether your brand is associated with the topic across multiple AI providers and over time. |
| Depth | 20% | How much of each AI response is dedicated to discussing your brand in the topic context (measured by brand-sentence share). |
| Citation Rate | 20% | How often AI responses cite your content when discussing the topic. |

Each topic receives a composite score:

ETAS(topic) = 0.30 x breadth + 0.30 x consistency + 0.20 x depth + 0.20 x citation_rate

The overall ETAS is the average score across all topics that have data.

Topic assessments:

| Score Range | Assessment | What it means |
| --- | --- | --- |
| 80–100 | Dominant | AI models strongly associate your brand with this topic. |
| 60–79 | Strong | Your brand has clear authority — maintain and expand. |
| 40–59 | Moderate | Some presence, but room for improvement. |
| 20–39 | Weak | Minimal authority — consider targeted content investment. |
| 0–19 | Absent | AI models do not associate your brand with this topic. |

The Analytics page shows the count of strong topics, weak topics, and gap topics (topics your prompts cover but where your brand is never mentioned).

How to improve it:

  • Focus content strategy on weak topics and gap topics where you want to build authority.
  • Publish in-depth, authoritative content per topic — breadth across sub-topics matters.
  • Ensure content is cited by creating original research, data, or frameworks for each topic area.
  • Capture snapshots from multiple AI providers to improve the consistency dimension.

Score Comparison at a Glance

| Score | Full Name | Measures | Scale | Higher is... |
| --- | --- | --- | --- | --- |
| BNCI | Brand Narrative Coherence Index | How accurately AI tells your brand story | 0–100 | Better |
| CMCS | Cross-Model Consistency Score | Agreement between AI platforms | 0–100 | Better |
| MEI | Market Entropy Index | Market fragmentation in AI responses | 0–100 | Lower = you dominate |
| SDI | Sentiment Drift Index | Narrative stability over time | 0–100 | Lower = more stable |
| CIPS | Citation Impact & Positioning Score | Your share of AI citations | 0–100 | Better |
| ETAS | Entity & Topical Authority Score | Strength of topical authority | 0–100 | Better |

Note on the radar chart: On the Score Radar displayed in the Analytics page, MEI and SDI are inverted (100 - raw score) so that all six axes follow the same convention: higher = more favorable for your brand.

Data Requirements

Each score has minimum data thresholds to produce meaningful results:

| Score | Minimum Data Required |
| --- | --- |
| BNCI | At least 1 Brand Fact configured and brand-mentioning snapshots |
| CMCS | Snapshots from at least 2 different AI providers |
| MEI | At least 5 snapshots with 3+ snapshots per prompt |
| SDI | At least 10 brand-mentioning snapshots spanning 2+ time periods (weeks or months, depending on the window) |
| CIPS | Snapshots containing citation URLs |
| ETAS | At least 5 snapshots with prompt and response text |

If a score cannot be computed due to insufficient data, it displays "Insufficient data" instead of a number.

Confidence Indicators

Each score card shows a confidence indicator based on sample size. Larger sample sizes produce more reliable scores. As a general guideline:

  • Low confidence — Fewer than 20 snapshots analyzed. Scores may shift significantly as more data is captured.
  • Medium confidence — 20–50 snapshots. Scores are directionally reliable.
  • High confidence — 50+ snapshots. Scores are stable and trustworthy.
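One possible mapping of these guideline bands to labels. Since the stated 20–50 and 50+ bands overlap at exactly 50, this sketch assumes a count of 50 falls in the high band.

```python
def confidence(snapshots_analyzed: int) -> str:
    """Map analyzed-snapshot count to the guideline confidence bands.
    A count of exactly 50 is treated as high confidence (assumption:
    the document's 20-50 and 50+ bands overlap at 50)."""
    if snapshots_analyzed >= 50:
        return "high"
    if snapshots_analyzed >= 20:
        return "medium"
    return "low"
```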

Where to Find Them

Intelligence Scores are displayed on the Analytics page under the "Intelligence Scores" section. Navigate to Dashboard > Analytics and scroll to the score cards.

The section includes:

  • Six score cards — One per metric, showing the numeric score, description, abbreviation, and confidence indicator.
  • Score Radar — A radar chart plotting all six scores for a visual overview of brand health.
  • Cross-Model Consistency Matrix — A heatmap of pairwise JSD values between AI platforms (visible when CMCS data is available).
  • Market Entropy Landscape — A bubble chart showing per-prompt entropy, snapshot count, and brand count (visible when MEI data is available).

Plan Requirements

Intelligence Scores are computed from your existing snapshot data and are available on all plans. However, to build up the data needed for meaningful scores, you will benefit from higher-tier plan features:

| Capability | Plan Required |
| --- | --- |
| Basic intelligence scores (all 6) | All plans |
| Snapshots from multiple AI providers (helps CMCS) | All plans |
| Higher daily snapshot limits (helps all scores) | Starter and above |
| Advanced AI Insights (Entity Analysis, Shopping, Multi-Language) | Professional and above |
| Scheduled reports with intelligence score data | Professional and above |

Related Features

| Feature | Connection to Intelligence Scores |
| --- | --- |
| Analytics & Trends | Intelligence Scores appear on the Analytics page alongside visibility charts |
| Entity Analysis | ETAS and BNCI complement Entity Clarity by measuring topic-level and narrative-level accuracy |
| Citation Tracking | CIPS builds on the same citation data displayed in Citation Tracking |
| Sentiment Analysis | SDI extends sentiment analysis by tracking how sentiment shifts over time |
| Competitors | MEI uses competitor mention data to measure market fragmentation |
| Snapshots | All six scores are computed from your captured snapshots |