Intelligence Scores
Six proprietary metrics — developed exclusively for AEO Optima — that measure your brand's health across AI engines using information theory and graph analysis
Overview
Intelligence Scores are six proprietary metrics developed exclusively for AEO Optima that give you a comprehensive, quantitative view of how AI engines perceive, represent, and position your brand. While standard analytics tell you whether your brand appears in AI responses, Intelligence Scores tell you how well your brand is represented — measuring narrative accuracy, cross-platform agreement, competitive positioning, sentiment stability, citation strength, and topical authority.
These scores are computed from your existing snapshot data. Each score runs on a 0–100 scale, is updated every time you visit the Analytics page, and is displayed alongside a confidence indicator based on sample size. Together, the six scores form a radar chart that visualizes your brand's overall AI health at a glance.
Why This Framework Is Novel
Traditional SEO tools measure rankings, traffic, and keyword positions — metrics designed for search engine results pages. AI answer engines don't have pages, rankings, or click-through rates in the traditional sense. They generate free-form text responses that mention, describe, compare, and cite brands in entirely new ways.
AEO Optima's Intelligence Scores are the first framework purpose-built for this new reality:
| Traditional SEO Metric | AEO Intelligence Score | What Changes |
|---|---|---|
| Keyword ranking position | BNCI — Brand Narrative Coherence | Instead of "where do you rank?", measures "does AI accurately tell your story?" |
| Organic traffic share | CMCS — Cross-Model Consistency | Instead of "how much traffic?", measures "do all AI platforms agree about you?" |
| Competitor keyword overlap | MEI — Market Entropy Index | Instead of "who ranks for the same keywords?", measures "how fragmented is the AI attention space?" using Shannon Entropy |
| Brand sentiment (manual) | SDI — Sentiment Drift Index | Instead of static sentiment, tracks narrative evolution over time using KL Divergence |
| Backlink authority | CIPS — Citation Impact Score | Instead of link count, measures your share of AI citations using PageRank-inspired influence scoring |
| Topical authority (estimated) | ETAS — Entity & Topical Authority | Instead of estimated topic relevance, directly measures how strongly AI models associate your brand with specific topics |
Key techniques used:
- Jensen-Shannon Divergence (JSD) — information-theoretic measure for cross-model consistency
- Shannon Entropy — measures competitive landscape fragmentation per prompt
- Kullback-Leibler Divergence — tracks vocabulary and narrative shifts between time periods
- PageRank-inspired influence scoring — weights citation sources by authority and depth
- Multi-dimensional topic authority — evaluates breadth, consistency, depth, and citation rate across 10 topic categories
No other AEO platform, SEO tool, or brand monitoring service provides these specific metrics. They are computed entirely from your snapshot data — no external data sources or third-party APIs required.
The Six Scores
BNCI — Brand Narrative Coherence Index
What it measures: How consistently AI platforms describe your brand's core attributes — value propositions, differentiators, and key facts.
How it's calculated: BNCI analyzes your most recent brand-mentioning snapshots against your Brand Facts and produces a weighted composite of four sub-dimensions:
| Sub-dimension | Weight | What it checks |
|---|---|---|
| Value Proposition Alignment | 30% | Do AI responses mention your key value propositions, mission, and vision? |
| Differentiator Accuracy | 30% | Are your unique competitive advantages correctly represented? |
| Attribute Coverage | 25% | What percentage of your Brand Facts appear in AI responses? |
| Competitor Contamination | 15% | Are competitor attributes being wrongly attributed to your brand? (inverted — less contamination = higher score) |
The system uses keyword and phrase overlap analysis to match AI response text against your Brand Facts entries.
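The weighted composite above can be sketched in a few lines. This is an illustration only, assuming each sub-dimension has already been scored 0–100 by the overlap analysis; the function and key names are placeholders, not AEO Optima's actual API:

```python
# Illustrative BNCI composite. Weights follow the table above; the
# sub-dimension scores are assumed to come from the keyword/phrase
# overlap analysis, already on a 0-100 scale.
BNCI_WEIGHTS = {
    "value_proposition_alignment": 0.30,
    "differentiator_accuracy": 0.30,
    "attribute_coverage": 0.25,
    "competitor_contamination": 0.15,  # assumed already inverted upstream
}

def bnci(sub_scores: dict[str, float]) -> float:
    """Weighted composite of the four BNCI sub-dimensions (each 0-100)."""
    return sum(BNCI_WEIGHTS[name] * score for name, score in sub_scores.items())

score = bnci({
    "value_proposition_alignment": 80,
    "differentiator_accuracy": 70,
    "attribute_coverage": 60,
    "competitor_contamination": 90,
})
# 0.30*80 + 0.30*70 + 0.25*60 + 0.15*90 = 73.5
```

Because the two 30% terms dominate, gaps in value-proposition coverage or differentiator accuracy pull the composite down faster than missing attributes do.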
Score interpretation:
| Range | Assessment | What it means |
|---|---|---|
| 80–100 | Excellent | AI platforms consistently and accurately represent your brand story. |
| 60–79 | Good | Most facts and differentiators are present; minor gaps exist. |
| 40–59 | Fair | Significant attributes are missing or inconsistently represented. |
| 20–39 | Poor | AI platforms frequently omit or misrepresent your brand narrative. |
| 0–19 | Very Poor | Your brand story is largely absent or incorrect in AI responses. |
How to improve it:
- Add comprehensive Brand Facts in Settings — the more facts you provide, the more accurately the system can measure coverage.
- Publish clear, structured content on your website that states your value propositions explicitly.
- Use schema markup (Organization, FAQ, HowTo) to make facts machine-readable.
- Review the "Missing Facts" list in the score detail to identify which attributes AI platforms are not picking up.
Tip: BNCI requires Brand Facts to be configured. If you see a score of 0, go to Settings and add your brand facts first.
CMCS — Cross-Model Consistency Score
What it measures: Whether different AI platforms (ChatGPT, Claude, Gemini, Perplexity, DeepSeek, etc.) agree about your brand.
How it's calculated: CMCS groups your snapshots by AI provider and compares three dimensions across every pair of platforms:
- Mention rate — Does each platform mention your brand at the same frequency?
- Sentiment distribution — Do platforms assign similar positive/neutral/negative sentiment?
- Rank position — Does your brand appear at similar positions across platforms?
The comparison uses Jensen-Shannon Divergence (JSD), an information-theoretic measure of how different two probability distributions are. The final score scales inversely with the average pairwise divergence: a score of 100 means all platforms produce identical distributions. The system also identifies the most divergent platform (the outlier) and the most consistent platform (closest to consensus).
When enough data exists, the Analytics page displays a Cross-Model Consistency Matrix heatmap showing pairwise JSD values between every platform combination.
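The pairwise comparison can be illustrated with a small JSD computation. This is a sketch, not the production implementation; the mapping from divergence to the 0–100 scale is an assumption consistent with "a score of 100 means identical distributions":

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence in bits; assumes aligned supports."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon Divergence (base 2), bounded in [0, 1]."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]  # midpoint distribution
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Sentiment distributions (positive / neutral / negative) for two platforms:
platform_a = [0.6, 0.3, 0.1]
platform_b = [0.5, 0.4, 0.1]
divergence = jsd(platform_a, platform_b)   # near 0 -> strong agreement
consistency = 100 * (1 - divergence)       # assumed mapping to 0-100
```

With base-2 logarithms JSD never exceeds 1, which is what makes the linear mapping to a 0–100 score well defined.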
Score interpretation:
| Range | Assessment | What it means |
|---|---|---|
| 80–100 | Highly consistent | All AI platforms agree about your brand. |
| 60–79 | Mostly consistent | Minor differences between platforms; one may be an outlier. |
| 40–59 | Moderate divergence | Platforms disagree on some aspects — investigate the outlier. |
| 20–39 | Significant divergence | AI platforms present materially different views of your brand. |
| 0–19 | Highly inconsistent | Each platform tells a different story about your brand. |
How to improve it:
- Investigate the "most divergent" platform and check what it says differently.
- Ensure your content is accessible to all major AI crawlers (check your robots.txt via Crawler Intelligence).
- Publish consistent messaging across all channels — inconsistent source material leads to inconsistent AI output.
- Capture snapshots from at least 3 different AI providers for meaningful comparison.
Tip: CMCS requires snapshots from at least 2 different AI providers. If you only capture from one model, this score cannot be computed.
MEI — Market Entropy Index
What it measures: How fragmented or concentrated the competitive landscape is within AI responses to your prompts. In other words: does one brand dominate AI answers, or do many brands share the space equally?
How it's calculated: For each prompt, MEI builds a distribution of brand mentions (your brand + competitors + "other") and computes Shannon Entropy, normalized to a 0–100 scale. The overall MEI is the average entropy across all analyzed prompts.
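A minimal sketch of the per-prompt entropy computation. Normalizing by the uniform-distribution maximum is an assumption, and the brand names are placeholders:

```python
import math

def market_entropy(mention_counts: dict[str, int]) -> float:
    """Shannon entropy of one prompt's brand-mention distribution, 0-100."""
    total = sum(mention_counts.values())
    probs = [c / total for c in mention_counts.values() if c > 0]
    if len(probs) <= 1:
        return 0.0  # one brand owns every mention: fully locked
    h = -sum(p * math.log2(p) for p in probs)  # Shannon entropy in bits
    h_max = math.log2(len(probs))              # uniform-distribution maximum
    return 100 * h / h_max

market_entropy({"YourBrand": 18, "CompetitorA": 1, "CompetitorB": 1})  # concentrated -> low
market_entropy({"YourBrand": 5, "CompetitorA": 5, "CompetitorB": 5})   # even split -> ~100
```

Concentration drives the score down, an even split drives it up, matching the "Locked" to "Fragmented" assessments below.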
Each prompt receives a market assessment:
| Entropy Range | Assessment | Meaning |
|---|---|---|
| 0–19 | Locked | One brand dominates — hard to break in. |
| 20–39 | Competitive | A few brands share the space — defensible positions exist. |
| 40–69 | Open | Multiple brands mentioned — growth opportunity available. |
| 70–100 | Fragmented | Many brands share equal space — differentiation is essential. |
Important: MEI uses inverted coloring in the UI. A lower MEI is better for your competitive position (it means you dominate). In the radar chart, MEI is displayed as `100 - raw score` so that higher = more favorable.
Score interpretation:
- Low MEI (0–39): Your brand dominates AI responses for these prompts. Maintain your position through continued content investment.
- Mid MEI (40–69): This is the "sweet spot" for opportunity. There is room to grow your share without the market being impossibly fragmented.
- High MEI (70+): The market is highly fragmented. Focus on differentiating content and building topical authority to stand out.
How to improve it (lower your MEI):
- Focus content strategy on prompts currently in the "Open" (40–69) range — these are your best growth opportunities.
- Avoid wasting resources on "Locked" prompts where a competitor dominates unless you have a specific strategy.
- The MEI Bubble Chart on the Analytics page visualizes each prompt's entropy, making it easy to identify opportunity zones.
SDI — Sentiment Drift Index
What it measures: How much the AI narrative about your brand is shifting over time. It tracks changes in word frequency and sentiment between consecutive time periods.
How it's calculated: SDI divides your snapshot history into time periods and performs three analyses between each consecutive pair:
- Word frequency analysis — Extracts the top 50 meaningful words from brand-mentioning sentences in each period, filtering out stop words and low-frequency terms.
- KL Divergence — Computes Kullback-Leibler Divergence between the word frequency distributions of consecutive periods. Higher divergence = more change in the vocabulary AI uses about your brand.
- Sentiment shift — Tracks how the average sentiment score changes between periods.
The period grouping auto-adapts to your selected window:
- Window < 60 days → weekly periods. A 30-day view yields ~4 weekly groups, enough for meaningful drift analysis even on short ranges.
- Window ≥ 60 days → monthly periods. Preserves the multi-month narrative signal that's the natural cadence for slower brand-perception shifts.
The raw KL Divergence is normalized to a 0–100 scale via a sigmoid curve so that typical values spread across the full range instead of clustering near zero.
The overall SDI is the average across all period pairs. The system also identifies rising words (terms gaining frequency) and declining words (terms losing frequency), giving you concrete visibility into how the narrative is changing.
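The word-frequency drift for one pair of periods can be sketched as follows. The add-one smoothing and the sigmoid parameters (midpoint, steepness) are illustrative assumptions; the product's actual constants are not documented here:

```python
import math
from collections import Counter

def kl_divergence(prev_words: list[str], curr_words: list[str]) -> float:
    """KL(current || previous) over word frequencies, with add-one
    smoothing so words absent from one period don't yield infinities."""
    vocab = set(prev_words) | set(curr_words)
    p_counts, q_counts = Counter(curr_words), Counter(prev_words)
    p_total = len(curr_words) + len(vocab)  # +1 smoothing per vocab word
    q_total = len(prev_words) + len(vocab)
    kl = 0.0
    for w in vocab:
        p = (p_counts[w] + 1) / p_total
        q = (q_counts[w] + 1) / q_total
        kl += p * math.log2(p / q)
    return kl

def drift_index(kl: float, midpoint: float = 1.0, steepness: float = 3.0) -> float:
    """Sigmoid normalization of raw KL to 0-100 (parameters assumed)."""
    return 100 / (1 + math.exp(-steepness * (kl - midpoint)))
```

Identical vocabularies give a KL of 0 and a drift index near the bottom of the scale; a large vocabulary shift pushes the sigmoid toward 100.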
The Sentiment Pulse card on the Intelligence Center combines the drift index with the sentiment direction to give a clearer headline: "Major shift · positive" when vocabulary changed dramatically toward a more favorable framing, "Stable" when neither shifted, and so on. The numbers behind the headline (drift index 0–100 + average sentiment shift in points) appear in the subline.
Important: Like MEI, SDI uses inverted coloring in the UI. A lower SDI is better (stable reputation). In the radar chart, SDI is displayed as `100 - raw score`.
Score interpretation:
| Range | Assessment | What it means |
|---|---|---|
| 0–14 | Stable | AI narrative about your brand is consistent — no significant shifts. |
| 15–39 | Minor drift | Small narrative changes detected — normal for active brands. |
| 40–69 | Moderate drift | Narrative is evolving noticeably. Review content strategy and rising/declining words. |
| 70–100 | Major drift | Significant narrative shift. Investigate immediately — this could indicate a PR event, market change, or content problem. The shift is positive if average sentiment also rose, negative if it fell. |
How to improve it (lower your SDI):
- Publish consistent messaging over time — avoid frequent brand repositioning.
- Monitor the "Rising Words" and "Declining Words" lists to understand what is changing.
- If you intentionally changed your positioning, a temporary SDI spike is expected and healthy.
- SDI requires at least 2 periods of snapshot data and 10+ brand-mentioning snapshots to produce meaningful results. With weekly grouping that means ~14 days of capture; with monthly grouping it means snapshots that span at least two calendar months.
Tip: A stable (low) SDI is generally good, but some drift is natural. A sudden spike may indicate a significant event worth investigating.
CIPS — Citation Impact & Positioning Score
What it measures: Your brand's share of all citations in AI responses, plus the influence and authority of sources citing you.
How it's calculated: CIPS analyzes all citations (URLs) found in your snapshots and builds a citation graph:
- Citation frequency — Counts how often each domain is cited across all snapshots.
- Source classification — Categorizes each source as earned media, brand-owned, social, academic, or directory.
- Influence scoring — Uses a PageRank-inspired algorithm with weighted citation depth:
- Direct citations: 1.0x weight
- Second-order influence (estimated): 0.5x weight
- Source type authority multiplier (academic: 1.5x, earned media: 1.3x, social: 0.8x)
- Brand citation rate — The percentage of all citations that reference your own website.
The displayed score is `brandCitationRate x 100`, representing your share of all AI citations as a percentage.
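A simplified sketch of the citation weighting and the displayed score. The academic, earned-media, and social multipliers follow the values above; the brand-owned and directory multipliers, the domains, and the data layout are illustrative assumptions:

```python
# Source-type authority multipliers; academic / earned_media / social come
# from the documented values, brand_owned and directory are assumed defaults.
AUTHORITY = {"academic": 1.5, "earned_media": 1.3, "social": 0.8,
             "brand_owned": 1.0, "directory": 1.0}

def influence(source_type: str, is_direct: bool) -> float:
    """PageRank-inspired weight: citation depth x source-type authority."""
    depth_weight = 1.0 if is_direct else 0.5  # second-order influence: 0.5x
    return depth_weight * AUTHORITY[source_type]

citations = [  # (domain, source_type, cites_your_own_site) -- sample data
    ("journal.example.edu", "academic", False),
    ("news.example.com", "earned_media", False),
    ("yourbrand.example", "brand_owned", True),
    ("forum.example.net", "social", False),
]

brand_rate = sum(1 for _, _, own in citations if own) / len(citations)
cips = brand_rate * 100  # 1 of 4 citations is your own site -> 25.0
```

The influence weights feed the hub and gap analysis; the headline CIPS number itself is just the brand's citation share.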
The system also identifies:
- Citation hubs — Sources cited 5+ times (high-impact, authoritative sources)
- Citation gaps — Sources that cite competitors but not you, ranked by priority (`competitorMentionCount x sourceAuthority x recencyBoost`)
Score interpretation:
| Range | Assessment | What it means |
|---|---|---|
| 10+ | Strong | Your website is frequently cited as an authoritative source. |
| 5–9 | Moderate | Your content is cited, but there is room to grow. |
| 1–4 | Emerging | Your brand has some citation presence — focus on building authority. |
| 0 | Absent | AI responses do not cite your website. |
Note: CIPS scores tend to be lower than the other five scores because citations are spread across many domains, so any single site's share is naturally small. A CIPS of 8 (meaning 8% of all AI citations go to your site) is a strong result.
How to improve it:
- Create original, authoritative content that AI models cite as a source.
- Review the citation gaps list — these are sources citing competitors but not you. Pursue coverage on those platforms.
- Invest in earned media (press, industry publications) — these carry higher authority multipliers.
- Ensure your content is crawlable and well-structured so AI models can cite specific pages.
ETAS — Entity & Topical Authority Score
What it measures: How strongly AI models recognize your brand as an authority across specific topics such as product quality, innovation, pricing, customer support, market position, trust, expertise, scale, integration, and content/SEO.
How it's calculated: ETAS evaluates your brand across 10 pre-defined topic categories using four weighted dimensions:
| Dimension | Weight | What it measures |
|---|---|---|
| Breadth | 30% | How many sub-topic patterns match within brand-mentioning responses. |
| Consistency | 30% | Whether your brand is associated with the topic across multiple AI providers and over time. |
| Depth | 20% | How much of each AI response is dedicated to discussing your brand in the topic context (measured by brand-sentence share). |
| Citation Rate | 20% | How often AI responses cite your content when discussing the topic. |
Each topic receives a composite score of `0.30 x breadth + 0.30 x consistency + 0.20 x depth + 0.20 x citation rate`.
The overall ETAS is the average score across all topics that have data.
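The per-topic composite and the overall average can be sketched like this, assuming the four dimension scores are pre-computed on a 0–100 scale (topic names and values are illustrative):

```python
# ETAS dimension weights, matching the table above.
ETAS_WEIGHTS = {"breadth": 0.30, "consistency": 0.30,
                "depth": 0.20, "citation_rate": 0.20}

def topic_score(dims: dict[str, float]) -> float:
    """Composite score for one topic from its four 0-100 dimensions."""
    return sum(ETAS_WEIGHTS[name] * value for name, value in dims.items())

def etas(topics: dict[str, dict[str, float]]) -> float:
    """Overall ETAS: the average composite across topics that have data."""
    scores = [topic_score(dims) for dims in topics.values()]
    return sum(scores) / len(scores)

etas({
    "innovation": {"breadth": 80, "consistency": 70, "depth": 60, "citation_rate": 40},
    "pricing":    {"breadth": 40, "consistency": 50, "depth": 30, "citation_rate": 20},
})
# innovation: 65, pricing: 37 -> overall (65 + 37) / 2 = 51
```

Topics without data are simply excluded from the average rather than scored as 0, which is why gap topics are reported separately.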
Topic assessments:
| Score Range | Assessment | What it means |
|---|---|---|
| 80–100 | Dominant | AI models strongly associate your brand with this topic. |
| 60–79 | Strong | Your brand has clear authority — maintain and expand. |
| 40–59 | Moderate | Some presence, but room for improvement. |
| 20–39 | Weak | Minimal authority — consider targeted content investment. |
| 0–19 | Absent | AI models do not associate your brand with this topic. |
The Analytics page shows the count of strong topics, weak topics, and gap topics (topics your prompts cover but where your brand is never mentioned).
How to improve it:
- Focus content strategy on weak topics and gap topics where you want to build authority.
- Publish in-depth, authoritative content per topic — breadth across sub-topics matters.
- Ensure content is cited by creating original research, data, or frameworks for each topic area.
- Capture snapshots from multiple AI providers to improve the consistency dimension.
Score Comparison at a Glance
| Score | Full Name | Measures | Scale | Higher is... |
|---|---|---|---|---|
| BNCI | Brand Narrative Coherence Index | How accurately AI tells your brand story | 0–100 | Better |
| CMCS | Cross-Model Consistency Score | Agreement between AI platforms | 0–100 | Better |
| MEI | Market Entropy Index | Market fragmentation in AI responses | 0–100 | Lower = you dominate |
| SDI | Sentiment Drift Index | Narrative stability over time | 0–100 | Lower = more stable |
| CIPS | Citation Impact & Positioning Score | Your share of AI citations | 0–100 | Better |
| ETAS | Entity & Topical Authority Score | Strength of topical authority | 0–100 | Better |
Note on the radar chart: On the Score Radar displayed in the Analytics page, MEI and SDI are inverted (`100 - raw score`) so that all six axes follow the same convention: higher = more favorable for your brand.
Data Requirements
Each score has minimum data thresholds to produce meaningful results:
| Score | Minimum Data Required |
|---|---|
| BNCI | At least 1 Brand Fact configured and brand-mentioning snapshots |
| CMCS | Snapshots from at least 2 different AI providers |
| MEI | At least 5 snapshots with 3+ snapshots per prompt |
| SDI | At least 10 brand-mentioning snapshots across 2+ months |
| CIPS | Snapshots containing citation URLs |
| ETAS | At least 5 snapshots with prompt and response text |
If a score cannot be computed due to insufficient data, it displays "Insufficient data" instead of a number.
Confidence Indicators
Each score card shows a confidence indicator based on sample size. Larger sample sizes produce more reliable scores. As a general guideline:
- Low confidence — Fewer than 20 snapshots analyzed. Scores may shift significantly as more data is captured.
- Medium confidence — 20–50 snapshots. Scores are directionally reliable.
- High confidence — 50+ snapshots. Scores are stable and trustworthy.
Where to Find Them
Intelligence Scores are displayed on the Analytics page under the "Intelligence Scores" section. Navigate to Dashboard > Analytics and scroll to the score cards.
The section includes:
- Six score cards — One per metric, showing the numeric score, description, abbreviation, and confidence indicator.
- Score Radar — A radar chart plotting all six scores for a visual overview of brand health.
- Cross-Model Consistency Matrix — A heatmap of pairwise JSD values between AI platforms (visible when CMCS data is available).
- Market Entropy Landscape — A bubble chart showing per-prompt entropy, snapshot count, and brand count (visible when MEI data is available).
Plan Requirements
Intelligence Scores are computed from your existing snapshot data and are available on all plans. However, to build up the data needed for meaningful scores, you will benefit from higher-tier plan features:
| Capability | Plan Required |
|---|---|
| Basic intelligence scores (all 6) | All plans |
| Snapshots from multiple AI providers (helps CMCS) | All plans |
| Higher daily snapshot limits (helps all scores) | Starter and above |
| Advanced AI Insights (Entity Analysis, Shopping, Multi-Language) | Professional and above |
| Scheduled reports with intelligence score data | Professional and above |
Related Features
| Feature | Connection to Intelligence Scores |
|---|---|
| Analytics & Trends | Intelligence Scores appear on the Analytics page alongside visibility charts |
| Entity Analysis | ETAS and BNCI complement Entity Clarity by measuring topic-level and narrative-level accuracy |
| Citation Tracking | CIPS builds on the same citation data displayed in Citation Tracking |
| Sentiment Analysis | SDI extends sentiment analysis by tracking how sentiment shifts over time |
| Competitors | MEI uses competitor mention data to measure market fragmentation |
| Snapshots | All six scores are computed from your captured snapshots |