AI Analysis
Run LLM-powered deep analysis on your recent snapshots to surface sentiment drivers, content gaps, opportunities, and comprehensive rollups.
Overview
AI Analysis runs a second-pass LLM over the recent snapshots of a project to surface insights your dashboard charts can't reveal at a glance. It complements the automated per-snapshot analysis (brand detection, sentiment, citations) with a project-wide pass that reasons across many snapshots at once.
The key difference from Insights: Insights are extracted from individual snapshot responses. AI Analysis generates new findings by running your project's own LLM over a window of recent data and asking it targeted questions about patterns across snapshots.
When to Use It
- Explaining a visibility dip or spike. The chart shows the what; AI Analysis explains the why — which factors drove the change and what to do about it.
- Finding content gaps before your competitors do. Runs across many snapshots at once to surface topics you aren't covering that AI engines treat as adjacent to your brand.
- Prioritizing a backlog of action items. Scores opportunities by impact and effort so you can pick quick wins first.
- Stakeholder reports. Comprehensive analysis produces an executive summary you can paste straight into a deck.
How to Run an Analysis
1. Open any snapshot from the Snapshots list (click into the detail page).
2. Click AI Analyze in the page header, next to the Re-run button.
3. Pick an analysis type from the dropdown inside the dialog:
   - Sentiment Drivers
   - Content Gaps
   - Opportunities
   - Comprehensive
4. Click Run Analysis and wait 15–90 seconds; deeper analysis types take longer (see Performance & Cost below).
5. Results populate in the same panel and remain in your project's AI Analysis history indefinitely.
Tip: The button analyzes the whole project's recent snapshots, not just the one you're currently viewing. The snapshot you clicked is only the entry point into the dialog, not the sole input to the analysis.
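If you drive the product programmatically rather than through the UI, the same flow can be sketched in a few lines. This is a hypothetical illustration: the endpoint path, payload, and response shape are assumptions, not a documented API.

```typescript
// Hypothetical sketch of the flow above; the endpoint path,
// payload, and response shape are assumptions, not a documented API.
type AnalysisType =
  | "sentiment_drivers"
  | "content_gaps"
  | "opportunities"
  | "comprehensive";

async function runAnalysis(projectId: string, type: AnalysisType) {
  // The snapshot you clicked is only the entry point; the request
  // targets the whole project's recent snapshots.
  const res = await fetch(`/api/projects/${projectId}/ai-analysis`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ type }),
  });
  if (!res.ok) throw new Error(`Analysis failed: ${res.status}`);
  return res.json(); // results also persist in the AI Analysis history
}
```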
Analysis Types
Sentiment Drivers
Identifies the factors pushing your brand's overall sentiment positive or negative across recent snapshots. Each driver comes with:
- A direction (positive, neutral, negative)
- A strength score (0–10)
- A recommendation explaining how to reinforce or counter the driver
Use this when: sentiment is moving and you want to know why. Cheapest type to run — start here.
Content Gaps
Surfaces topics your brand should be known for but isn't — subjects that AI engines repeatedly discuss in response to your prompts without mentioning you. Each gap includes:
- The topic the AI is covering
- An estimated impact (low / medium / high)
- A suggested content angle to close the gap
Use this when: competitors are winning on adjacent topics and you want a prioritized content backlog.
Opportunities
Produces a scored list of moves to improve visibility, each with:
- A title describing the opportunity
- A score ranking it against the others
- An effort rating (low / medium / high) so you can pick quick wins
- A description explaining the expected return
Use this when: you have bandwidth for a few actions and want to know which will move the needle fastest.
Comprehensive
Runs all three of the above in one pass and produces an executive summary that ties them together. Uses the most tokens and costs the most. Run it when:
- You're building a stakeholder report
- Sentiment Drivers alone isn't giving you enough context
- You want drivers, gaps, and opportunities to reference each other (e.g., "this gap explains this driver")
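Taken together, the four types return the shapes described above. A minimal TypeScript sketch; the field names are assumptions, only the fields themselves come from this page:

```typescript
// Minimal sketch of the result shapes described above.
// Field names are assumptions; the fields themselves come
// from the type descriptions on this page.
type Direction = "positive" | "neutral" | "negative";
type Level = "low" | "medium" | "high";

interface SentimentDriver {
  direction: Direction;
  strength: number;        // 0-10
  recommendation: string;  // how to reinforce or counter the driver
}

interface ContentGap {
  topic: string;           // the topic the AI is covering
  impact: Level;           // estimated impact
  suggestedAngle: string;  // content angle to close the gap
}

interface Opportunity {
  title: string;
  score: number;           // ranking against the other opportunities
  effort: Level;           // low effort = quick win
  description: string;     // expected return
}

// Comprehensive runs all three and ties them together.
interface ComprehensiveResult {
  executiveSummary: string;
  drivers: SentimentDriver[];
  gaps: ContentGap[];
  opportunities: Opportunity[];
}
```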
Where Results Are Stored
Every analysis creates a row in your project's AI Analysis history, accessible from the Insights & Actions page. Results never expire — revisit them anytime without paying quota again.
Cost tracking: AI Analysis runs are tracked in your usage records under the ai_analysis source type, separate from regular snapshot costs. This means you can see exactly how much AI Analysis is costing the project in Cost Tracking without conflating it with snapshot capture spend.
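Because the source type is distinct, isolating this spend programmatically is a one-line filter. A hypothetical sketch, assuming a simple record shape:

```typescript
// Hypothetical: sum AI Analysis spend from usage records.
// The UsageRecord shape is assumed; only the "ai_analysis"
// source type comes from this page.
interface UsageRecord {
  sourceType: "snapshot" | "ai_analysis";
  costUsd: number;
}

const aiAnalysisSpend = (records: UsageRecord[]): number =>
  records
    .filter((r) => r.sourceType === "ai_analysis")
    .reduce((sum, r) => sum + r.costUsd, 0);
```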
Which LLM Does the Analysis Use?
AI Analysis picks from your project's active LLM configurations in this priority order:
- Anthropic (Claude)
- OpenAI (GPT)
- Google (Gemini)
- Perplexity
The first active config in priority order wins. To influence which model runs analyses, ensure your preferred model has a config on the project and is marked active.
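The selection rule is a first-match scan over a fixed provider order. A minimal sketch, assuming a config shape like the following:

```typescript
// Sketch of the provider-priority selection described above.
// The provider order comes from this page; the config shape
// and function name are assumptions.
const PROVIDER_PRIORITY = ["anthropic", "openai", "google", "perplexity"] as const;

interface LlmConfig {
  provider: (typeof PROVIDER_PRIORITY)[number];
  active: boolean;
}

function pickAnalysisConfig(configs: LlmConfig[]): LlmConfig | undefined {
  for (const provider of PROVIDER_PRIORITY) {
    const match = configs.find((c) => c.provider === provider && c.active);
    if (match) return match; // first active config in priority order wins
  }
  return undefined; // no active config: analysis cannot run
}
```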
Plan Limits
| Plan | AI Analysis on snapshot detail | Analyses per day |
|---|---|---|
| Free | Not available | 0 |
| Starter | Not available on snapshot detail (Sentiment Drivers can be run from the Insights & Actions page) | 3 |
| Professional | All types available | 20 |
| Enterprise | All types available | 100 |
Why is Starter restricted on the snapshot detail surface? Starter users can still run Sentiment Drivers analyses from the dedicated Insights & Actions page. Gating the snapshot detail surface is a consistency choice: the AI Analyze button either offers the full type selector or nothing at all, which keeps the upgrade story clean.
Performance & Cost
Analyses run synchronously — the dialog waits for the LLM to finish, then renders results. Expected durations:
- Sentiment Drivers: 15–30 seconds, ~2,000–4,000 tokens
- Content Gaps: 20–40 seconds, ~3,000–6,000 tokens
- Opportunities: 20–40 seconds, ~3,000–6,000 tokens
- Comprehensive: 40–90 seconds, ~8,000–15,000 tokens
Cost depends on the LLM config's pricing — a comprehensive run on Claude Opus 4.6 typically costs $0.05–$0.10.
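To sanity-check a quoted cost, multiply token counts by your model's per-token rates. A back-of-envelope sketch; the rates below are placeholders, not actual provider pricing:

```typescript
// Back-of-envelope cost estimate for a comprehensive run.
// The token range comes from this page; the per-million-token
// rates are placeholder assumptions, not actual provider pricing.
const INPUT_RATE_PER_M = 3;   // USD per 1M input tokens (placeholder)
const OUTPUT_RATE_PER_M = 15; // USD per 1M output tokens (placeholder)

function estimateCost(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_RATE_PER_M +
    (outputTokens / 1_000_000) * OUTPUT_RATE_PER_M
  );
}

// e.g., a comprehensive run at the top of its range:
// ~12,000 input + ~3,000 output tokens
console.log(estimateCost(12_000, 3_000).toFixed(3)); // ≈ $0.081
```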
Troubleshooting
The AI Analyze button is disabled with a lock tooltip. Your plan doesn't include AI Analysis on the snapshot detail surface. Upgrade to Professional or Enterprise to unlock it. Owners can do this in Settings → Billing; other roles can request an upgrade from Settings → Organization.
"Analysis failed" with no clear error. Most likely your project's active LLM configuration can't reach its provider — check your API keys in Settings → API Keys, then try again. The failure is recorded in the history so you can see the error message.
The analysis ran but the dialog closed without results. Results are still stored; open the dialog again and the history list will show the most recent run. If the status is completed, click View to expand it.
Results aren't showing up in my usage records. AI Analysis only writes a usage record if the LLM call succeeded and returned token counts. If the analysis failed early (e.g., before the LLM call), no usage record is created. That's intentional: we don't charge for failed analyses.
Related Features
- Snapshots — the raw data AI Analysis reasons over
- Insights — automated per-snapshot insights (complementary, not a replacement)
- Cost Tracking — where AI Analysis costs appear in your dashboards
- Trends — visibility charts that AI Analysis helps explain