
LLM Configuration

Configure which AI models your project queries when capturing snapshots. Understand supported providers, model capabilities, and recommended configurations.

Overview

LLM Configuration controls which AI models AEO Optima queries when capturing snapshots. Each enabled model will be sent every active prompt in your project, and the responses are analyzed for brand mentions, rank position, sentiment, and citations.

Different AI models draw on different training data, have different strengths, and produce different answers to the same question. Monitoring across multiple models gives you a comprehensive view of your brand's AI visibility rather than a single-engine perspective.
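The fan-out described above — every active prompt sent to every enabled model, with each response analyzed — can be sketched as follows. This is an illustrative sketch only; `query_model` and the analysis fields are hypothetical names, not AEO Optima's actual API.

```python
# Illustrative sketch of the snapshot fan-out: each enabled model
# receives every active prompt (names below are hypothetical).
enabled_models = ["gpt-4o", "claude-sonnet-4"]
active_prompts = ["best CRM for startups", "top project management tools"]

def query_model(model: str, prompt: str) -> str:
    # Placeholder for the real LLM call.
    return f"response from {model} to {prompt!r}"

snapshot = []
for model in enabled_models:
    for prompt in active_prompts:
        response = query_model(model, prompt)
        # Each response is then analyzed for brand mentions,
        # rank position, sentiment, and citations.
        snapshot.append({"model": model, "prompt": prompt, "response": response})

print(len(snapshot))  # one response per (model, prompt) pair -> 4
```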


Dynamic Model Registry

AEO Optima maintains a dynamic model registry that is automatically synchronized daily with the latest releases from all supported providers. This means:

  • New models become available without any manual updates
  • Model metadata (capabilities, context window, pricing) stays current
  • You always have access to the latest AI models from each provider

When you open the LLM Configuration page, models are loaded from this registry and grouped by provider. Each model shows its display name, context window, and whether it is recommended.
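The provider grouping on that page can be pictured with a short sketch. The record fields below (`provider`, `name`, `context_window`, `recommended`) are illustrative, not the registry's actual schema.

```python
from collections import defaultdict

# Hypothetical model records as they might come back from the registry;
# field names are illustrative, not AEO Optima's actual schema.
registry = [
    {"provider": "OpenAI", "name": "GPT-4o", "context_window": 128_000, "recommended": True},
    {"provider": "OpenAI", "name": "GPT-4o Mini", "context_window": 128_000, "recommended": False},
    {"provider": "Anthropic", "name": "Claude Sonnet 4", "context_window": 200_000, "recommended": True},
]

# Group models by provider, as the LLM Configuration page does.
by_provider = defaultdict(list)
for model in registry:
    by_provider[model["provider"]].append(model["name"])

print(dict(by_provider))
# {'OpenAI': ['GPT-4o', 'GPT-4o Mini'], 'Anthropic': ['Claude Sonnet 4']}
```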


Supported Providers

AEO Optima supports models from the following providers (and more as the registry is updated):

| Provider | Example Models | Best For |
| --- | --- | --- |
| OpenAI | GPT-4o, GPT-4o Mini, o1, o3 | The most widely used AI platform. Essential to track as it represents the largest share of AI search traffic. |
| Anthropic | Claude Opus 4, Claude Sonnet 4, Claude Haiku 3.5 | Growing market share with strong reasoning capabilities. Increasingly used for research and professional queries. |
| Google | Gemini 2.5 Pro, Gemini 2.0 Flash | Deeply integrated with the Google ecosystem. Important for brands that rely on Google's platforms. |
| Perplexity | Sonar Pro, Sonar | Combines real-time web search with AI generation and provides source citations. Valuable for tracking citation-based visibility. |
| DeepSeek | DeepSeek Chat, DeepSeek Reasoner | Strong reasoning and analytics capabilities at budget-friendly pricing. |
| Mistral AI | Mistral Large, Mistral Medium | EU-based provider with efficient models. Important for European market coverage. |
| xAI | Grok 3, Grok 3 Mini | Fast, concise responses. Cost-efficient option for broader coverage. |
| Meta | Llama 4 variants | Open-source foundation models widely deployed by third parties. |
| Microsoft | Copilot (GPT-powered) | AI assistant integrated into Microsoft 365 and Edge. Captures enterprise audience visibility. |
| Qwen | Qwen 2.5 (Alibaba) | Strong multilingual capabilities, valuable for Asian market coverage. |
| OpenRouter | All models via unified gateway | Provides single-API access to models across all providers. |

Note: The specific models available may change as providers release new versions. The model registry updates daily, so you'll always see the latest options.


Choosing Which Models to Enable

Minimum Recommendation

Enable at least two models from different providers. A common starting configuration is:

  • One OpenAI model (e.g., GPT-4o) — The most popular AI assistant, representing the largest user base
  • One additional model — Choose based on your audience and industry

This gives you cross-engine comparison data without excessive API usage.

Comprehensive Coverage

For the most complete picture of your AI visibility, enable models from all major providers:

  1. OpenAI — GPT-4o or GPT-4o Mini
  2. Anthropic — Claude Sonnet 4 or Claude Haiku 3.5
  3. Google — Gemini 2.5 Pro or Gemini 2.0 Flash
  4. Perplexity — Sonar Pro or Sonar

This configuration ensures you are tracking your brand across every major AI platform users are likely to encounter.

Model Selection Considerations

| Factor | Guidance |
| --- | --- |
| Audience | Which AI tools do your customers use most? Prioritize those models. |
| Coverage vs. cost | More models mean broader coverage but more API calls per snapshot. Start with 2–3 and expand as needed. |
| Model size | Larger models tend to produce more detailed responses. Smaller models are faster and more cost-efficient. |
| Use case | If citation tracking matters, include Perplexity. If reasoning depth matters, include Claude. If mainstream coverage matters, include GPT-4o. |

How Model Configuration Affects Snapshots

When you capture a snapshot, AEO Optima sends each active prompt to each enabled model. The total number of API calls per snapshot is:

Total calls = Number of active prompts × Number of enabled models

For example, if you have 10 prompts and 4 models enabled, each snapshot will make 40 API calls. Keep this in mind when configuring your model list and snapshot schedule.
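The worked example above can be checked with a trivial helper (a hypothetical function, not part of AEO Optima):

```python
# Hypothetical helper illustrating snapshot API-call volume.
def snapshot_api_calls(active_prompts: int, enabled_models: int) -> int:
    """Total LLM requests made per snapshot capture."""
    return active_prompts * enabled_models

print(snapshot_api_calls(10, 4))  # 40 calls, matching the example above
```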


Enabling and Disabling Models

To manage your model configuration:

  1. Navigate to Settings in the sidebar
  2. Open the LLM Configuration section
  3. Browse models grouped by provider — each model shows its name, context window, and a "Recommended" badge if applicable
  4. Toggle individual models on or off
  5. Changes take effect on the next snapshot capture

Disabling a model does not delete any previously captured data from that model. Historical snapshots from disabled models remain available in your analytics.


Using Your Own API Keys

By default, all enabled providers use AEO Optima's managed API to process requests. This means you do not need any API keys to get started.

If you prefer to use your own API keys, AEO Optima supports a 3-tier routing system:

  1. Direct BYOK Key — If you have a native provider key (e.g., an OpenAI key), it is used first for that provider's models
  2. Gateway BYOK Key — If you have an OpenRouter key, it is used as a fallback for any provider without a direct key
  3. Platform Managed API — If no BYOK keys are configured, the platform's managed API handles the request
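The three tiers above form a simple fallback chain, sketched below. Function and key names are illustrative assumptions, not AEO Optima's actual implementation.

```python
from typing import Optional

# Hypothetical sketch of the 3-tier key routing described above.
def resolve_api_key(provider: str,
                    direct_keys: dict[str, str],
                    openrouter_key: Optional[str]) -> tuple[str, str]:
    """Return the (tier, key) used for a request to `provider`."""
    if provider in direct_keys:               # Tier 1: direct BYOK key
        return ("direct", direct_keys[provider])
    if openrouter_key is not None:            # Tier 2: gateway BYOK key
        return ("gateway", openrouter_key)
    return ("managed", "platform-managed")    # Tier 3: platform managed API

# With only an OpenAI key configured, OpenAI models route directly
# while every other provider falls through to the managed API.
keys = {"openai": "sk-example"}
print(resolve_api_key("openai", keys, None))     # ('direct', 'sk-example')
print(resolve_api_key("anthropic", keys, None))  # ('managed', 'platform-managed')
```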

You can manage your keys in Settings > API Keys. See API Keys for details.

Tip: You do not need your own API keys to use AEO Optima. The managed API works out of the box. BYOK is available for teams that want direct provider access or have existing API agreements.
