Engines field guide
AI search is five engines. They behave differently.
What we observed about each engine across the mapou Visibility Index panel: 740 brands, 36 segments, 10 buyer personas, and 74,000 first-party verdicts. Pick the engine your buyer uses, then read its field guide.
The full methodology is open at /methodology. Each observation is sourced to a specific finding in State of AI Search. Updated monthly.
Five engines at a glance
| Engine | Personality | Cite rate | Brands per answer | Persona stability |
|---|---|---|---|---|
| Claude | The broad recommender | 22% | 5.8 | 74% |
| ChatGPT | The confident citer | 21% | 5.5 | 68% |
| Grok | Narrow scope | 16% | 4.4 | 69% |
| Gemini | The tight gatekeeper | 16% | 4.9 | 66% |
| Perplexity | The contextual discussant | 13% | 5.5 | 73% |
Cite rate = share of (brand × prompt) cells where the engine actively recommended a tracked brand. Brands per answer = mean cited brands per response. Persona stability = mean top-3 overlap with baseline persona under 10 buyer personas (higher = more stable). Source: Finding 07 and Finding 09.
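The three metric definitions above can be sketched in code. A minimal illustration with hypothetical toy data (the verdict labels, brand names, and function names are ours, not the panel's):

```python
def cite_rate(cells):
    """Share of (brand x prompt) cells where the engine actively cited the brand.
    Each cell holds one verdict label: "cited", "mentioned", or "absent"."""
    return sum(v == "cited" for v in cells) / len(cells)

def brands_per_answer(answers):
    """Mean number of cited brands per response; each answer is a brand list."""
    return sum(len(a) for a in answers) / len(answers)

def top3_overlap(baseline, persona):
    """Persona stability: share of the baseline top 3 still in the persona top 3."""
    return len(set(baseline[:3]) & set(persona[:3])) / 3

# Toy data, not panel data.
cells = ["cited", "mentioned", "absent", "cited"]
print(cite_rate(cells))  # 0.5
print(top3_overlap(["acme", "globex", "initech"],
                   ["acme", "initech", "umbrella"]))  # 2 of 3 held
```

The same three functions, run per engine over every (brand × prompt) cell and every persona, would reproduce the columns of the table above.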
The broad recommender
Claude
| Cite rate | Brands / answer | Mention share | Persona stability | Persona-stable leaders |
|---|---|---|---|---|
| 22% | 5.8 | 18% | 74% | 2 / 22 |
What we observed
- Recommends 5.8 brands per answer on average, the highest of the five. Claude prefers menu-style answers over single-brand picks.
- Most stable engine under persona signals (74% mean top-3 overlap vs baseline). Persona signals reorder Claude's top 5 less than they reorder ChatGPT or Gemini.
- Most resilient at the Comparison phase (69% leader-held under the professional persona, the most stable cell in the entire panel). Feature-comparison rankings on Claude are particularly durable.
- Different brands hold across personas on Claude than on other engines: Carvana on used auto, Lowe's on home retailers. Neither matches the ChatGPT or Gemini leader.
Content patterns we observed in cited brands
Long-form, structured comparison content with explicit feature lists. Claude's broad-recommender bias rewards content that frames brands as part of a category set, not as standalone winners.
When to prioritize this engine
Professional or research-heavy: Claude is the right second engine after ChatGPT. Skip Claude if your buyer mix is impulse-driven.
The confident citer
ChatGPT
| Cite rate | Brands / answer | Mention share | Persona stability | Persona-stable leaders |
|---|---|---|---|---|
| 21% | 5.5 | 12% | 68% | 4 / 36 |
What we observed
- Second-highest cite rate of the five engines (21% in our panel) and the lowest mention-but-not-cited gap. ChatGPT actively recommends rather than hedges.
- Most stable engine for commerce queries when buyer signals are absent. Baseline rankings are durable.
- The kingmaker for default-choice brand discovery. If a brand holds #1 on ChatGPT, it usually holds across personas too. Chime in fintech is the only ChatGPT segment leader that survived all 10 buyer personas in our panel.
- ChatGPT does NOT change its mind on premium-coded categories (luxury goods, hotels, makeup) without a strong premium signal. Generic shopping framing produces the 'safe default' answer, not the premium answer.
Content patterns we observed in cited brands
Strong entity authority and consistent third-party citations. The brands ChatGPT cites tend to have well-structured entity data on their primary domains and dense external mentions on aggregator sites.
When to prioritize this engine
Mass-market or convenience: ChatGPT alone is sufficient for most discovery decisions. Add other engines only after ChatGPT is solid.
Narrow scope
Grok
| Cite rate | Brands / answer | Mention share | Persona stability | Persona-stable leaders |
|---|---|---|---|---|
| 16% | 4.4 | 17% | 69% | 2 / 22 |
What we observed
- Smallest mean answer set of the five engines (4.4 brands per answer). Grok prefers naming a few brands and committing.
- Cite rate (16%, tied with Gemini; only Perplexity is lower) sits near the bottom of the five engines, but the brands Grok DOES cite tend to be category-defining.
- Surprisingly persona-stable for commerce queries (69% mean top-3 overlap, on par with ChatGPT's 68%). Grok's persona behavior is more like ChatGPT's than its small answer set would suggest.
- Locks in on the home retailer (Home Depot) and used auto retailer (CarMax) baseline leaders across all 10 buyer personas in our panel, the most stable engine for those two specific segments.
Content patterns we observed in cited brands
Brand authority signals that read as 'category default'. Grok's narrow-scope picks tend to be the brands consumers would name without thinking. Short answer set means top-of-mind matters most.
When to prioritize this engine
Skew toward retail-aligned or category-defining buyer behavior: Grok is the right fourth engine to add after ChatGPT/Claude/Gemini. For long-tail or niche categories, Grok adds less.
The tight gatekeeper
Gemini
| Cite rate | Brands / answer | Mention share | Persona stability | Persona-stable leaders |
|---|---|---|---|---|
| 16% | 4.9 | 18% | 66% | 1 / 22 |
What we observed
- Cite rate of 16% (tied with Grok; only Perplexity is lower) and a mean answer set of 4.9 brands. Gemini is one of the most selective recommenders in the panel.
- Most personalization-sensitive engine (66% mean top-3 overlap, 8 percentage points below Claude). The same buyer signal moves Gemini's rankings the most.
- Different baseline leaders than the other engines in 6 of 36 segments. Gemini's defaults are not the panel's defaults.
- Premium signal has its biggest effect on Gemini at Discovery (35% leader-held, among the lowest cells in our panel). If your buyer is premium-coded, your Gemini Discovery rankings are wide open.
Content patterns we observed in cited brands
High Google-aligned signals: Knowledge Graph entries, structured data on the primary domain, and presence in Google's AI Overviews retrieval set. Gemini's training data weights Google's surface signals heavily.
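One piece of those Google-aligned signals a brand directly controls is on-domain structured data. A minimal, illustrative schema.org Organization snippet, generated here in Python (all values are placeholders; this is not a claim about exactly what Gemini ingests):

```python
import json

# Illustrative schema.org Organization markup. Placeholder values only;
# structured data is one signal among many, not a citation guarantee.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com",
    "sameAs": [
        # Cross-references that support entity reconciliation.
        "https://en.wikipedia.org/wiki/Example",
    ],
}
markup = json.dumps(org, indent=2)
print(markup)  # embed in a <script type="application/ld+json"> tag
```

The `sameAs` links matter most here: they tie the domain to an existing entity, which is the kind of Knowledge Graph alignment described above.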
When to prioritize this engine
Mobile, Android, or Google-account-connected: Gemini is your largest blind spot if you've ignored it. Premium and luxury brands have the most Gemini upside.
The contextual discussant
Perplexity
| Cite rate | Brands / answer | Mention share | Persona stability | Persona-stable leaders |
|---|---|---|---|---|
| 13% | 5.5 | 25% | 73% | 4 / 22 |
What we observed
- Highest mention share of the five engines (25% in our panel). Perplexity hedges: it names brands without explicitly recommending them. Every other engine's mention share is 18% or below.
- Web-grounded retrieval matters here. Brands with strong recent press, partner integrations, or sustained social signal punch above their weight.
- Surprisingly stable under persona signals (73% top-3 overlap), comparable to Claude. The intuition that web grounding amplifies personalization was wrong in our data.
- Has the cleanest baseline-leader story in some categories: on home retailers, Lowe's held across every persona we tested. Where Perplexity locks in, it locks in hard.
Content patterns we observed in cited brands
Recent, authoritative third-party citations. A category review article published in the last 90 days moves Perplexity citations more than the same content published 18 months ago.
When to prioritize this engine
Research-driven or news-aware: Perplexity is the strongest candidate for buyers who fact-check before purchase. Premium retail and B2B are the highest-leverage categories here.
Cross-engine pattern
Pick the engine your buyer uses. Then optimize for the persona that matches your buyer mix.
The most volatile (persona × phase) cell in our panel was premium × evaluation at 32% leader-held: when a premium-coded buyer asks AI which brand to actually buy, the rankings reshuffle hardest. The most stable cell was professional × comparison at 69%: feature-comparison rankings hold under professional buyer signals. See Finding 10 for the full persona × phase heatmap.
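The leader-held rate behind that heatmap can be sketched in a few lines. A minimal illustration, assuming per-segment ranked brand lists keyed by (persona, phase, segment); all names and data here are hypothetical:

```python
# Hypothetical rankings: (persona, phase, segment) -> brands in rank order.
rankings = {
    ("baseline", "evaluation", "fintech"): ["chime", "varo"],
    ("premium", "evaluation", "fintech"): ["varo", "chime"],
    ("baseline", "comparison", "fintech"): ["chime", "varo"],
    ("professional", "comparison", "fintech"): ["chime", "varo"],
}

def leader_held(rankings, persona, phase):
    """Share of segments where the baseline #1 brand stays #1 under the persona."""
    held = total = 0
    for (p, ph, segment), ranked in rankings.items():
        if p != "baseline" or ph != phase:
            continue
        persona_ranked = rankings.get((persona, phase, segment))
        if persona_ranked is None:
            continue  # segment not measured under this persona
        total += 1
        held += ranked[0] == persona_ranked[0]
    return held / total if total else 0.0

print(leader_held(rankings, "premium", "evaluation"))      # leader reshuffled
print(leader_held(rankings, "professional", "comparison"))  # leader held
```

Averaging this rate over all segments for each (persona, phase) pair yields one cell of the heatmap; the 32% and 69% cells above are that statistic at panel scale.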
Use this guide
Three ways to act on this guide.
1. Find your category on /research to see which engines cite which brands in your space.
2. Open the Persona Explorer to see how your brand's citation rate moves under each buyer persona.
3. Run a free AI visibility check to find out which engine cites you most and which is your biggest blind spot.