
mapou Visibility Index · Consumer electronics → Smartphones · May 2026

Which smartphones does AI cite most?

When buyers ask AI assistants questions like "What are the best smartphone options to consider, including flagship, budget, foldable, and gaming phones that are available unlocked in the US?" or "Best affordable smartphones under $300?", a small set of smartphone brands gets cited every time. Most never are. This report measures which.

How we measured. MVI is a 0–100 score per brand: 0 means AI never cites you in smartphones; 100 means it cites you in every prompt. We tested 20 brands across 5 AI assistants (ChatGPT, Perplexity, Gemini, Claude, Grok) using 20 fixed prompts, reused every monthly run for replicability.

How we describe AI visibility

  • Stage 1, First encounter. The brand is discovered and cited occasionally in AI answers for buyer-intent prompts.
  • Stage 2, Repeat use. The brand is cited regularly enough that it feels familiar and reliably present across prompts and engines.
  • Stage 3, Default choice. The brand is the go-to recommendation in AI answers within its segment, often appearing first or most consistently.
Read the methodology → · Run your own check →

Bottom line

Samsung leads smartphones on AI search visibility with an MVI of 76, sitting firmly in the default-choice tier and ahead of the field; Apple and Google Pixel are tied for second at MVI 64.

  • Who AI cites most

    Samsung is cited in 75 of 100 prompt-engine pairs (75%). 95% confidence interval 67-83.

  • Concentration

    The top 3 brands (Samsung, Apple, Google Pixel) capture 57% of all citations in this segment. 13 of 20 tracked brands are cited in fewer than 1 in 10 prompt-engine pairs.

  • Where the field sits

    Of 20 brands tested: 1 in default-choice, 2 in repeat-use, 3 in first-encounter, 14 not yet cited. Overall, AI cites a brand from this segment in 17% of buyer-intent prompt-engine pairs.

  • Engine asymmetry

Samsung is cited in 100% of ChatGPT prompts but 0% on Perplexity: visibility is engine-specific, not universal.

  • Notable absence

Honor is not yet cited: zero citations across all 100 prompt-engine pairs. A recognizable brand that AI is not yet surfacing.

Analyst note

Samsung leads with an MVI of 76 while 14 brands go unmentioned.

Samsung holds the top position in the smartphone segment with an MVI of 76, backed by full citation on both ChatGPT and Claude (100% each). Apple and Google Pixel are tied at MVI 64, both in the repeat-use tier. The presence of 14 invisible brands shows a clear visibility gap.

Risk: Apple has zero visibility on Perplexity and risks losing ground there, while Samsung performs well across multiple engines.

Headline finding

Samsung leads smartphones on AI search visibility with MVI 76, sitting firmly in the default-choice tier.

  • Average MVI: 18
  • Default choice: 1 of 20 brands
  • Repeat use: 2
  • First encounter: 3
  • Not yet cited: 14

Citation rate per engine

How often each engine cites a brand from this category as a recommendation, averaged across all 20 brands tested.

  • ChatGPT: 25% · 13 / 20 brands cited at least once
  • Perplexity: 0% · 0 / 20 brands cited at least once
  • Gemini: 18% · 10 / 20 brands cited at least once
  • Claude: 23% · 10 / 20 brands cited at least once
  • Grok: 18% · 10 / 20 brands cited at least once

Phase strength across the category

Which buyer-intent phases are easiest vs hardest to win in smartphones. Citation rate averaged across all brands tested. Phase weights are part of the MVI formula.

  • Discovery (weight 30%): 20% · Top: Samsung
  • Filtered discovery (weight 25%): 16% · Top: Samsung
  • Comparison (weight 25%): 15% · Top: Samsung
  • Evaluation (weight 20%): 16% · Top: Samsung

Who wins which buyer phase

Top 12 brands by MVI mapped against the four buyer-intent phases. Each cell shows the brand's citation rate for that phase, color-coded so the visual pattern tells the story: a brand strong across all four phases reads as a horizontal orange band; a brand strong only at Discovery but weak at Evaluation reads as a left-heavy gradient. This maps the segment's findings onto the tracked panel.

Brand | MVI | Discovery | Filtered | Comparison | Evaluation
Samsung | 76 | 75% | 77% | 78% | 75%
Apple | 64 | 60% | 60% | 65% | 73%
Google Pixel | 64 | 62% | 68% | 55% | 73%
ASUS ROG Phone | 37 | 52% | 33% | 20% | 40%
Motorola | 36 | 48% | 27% | 38% | 28%
OnePlus | 35 | 38% | 25% | 40% | 35%
Xiaomi | 22 | 28% | 13% | 25% | 20%
REDMAGIC | 9 | 20% | 3% | 3% | 10%
Nothing | 8 | 20% | 3% | 5% | 0%
Fairphone | 3 | 0% | 13% | 0% | 0%
Sony Xperia | 3 | 2% | 3% | 5% | 0%
Nokia HMD | 2 | 2% | 3% | 0% | 3%

How to read it. Strong horizontal band = durable brand, AI cites it across the entire funnel. Left-heavy gradient = brand with awareness (Discovery) but weak recommendation (Evaluation), the demand-leak pattern from Finding 03. Right-heavy gradient = brand AI considers in Comparison and Evaluation but does not surface in initial Discovery, the anti-leak pattern. Color steps: dark orange ≥75%, orange ≥50%, peach ≥30%, light ≥10%, beige >0%, neutral 0%.
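The color steps in the legend form a simple threshold ladder. A minimal sketch of the bucketing, using the legend's own names (the function name is ours, not mapou's):

```python
def heat_color(rate: float) -> str:
    """Map a phase citation rate (0-100) to the heatmap legend's color steps."""
    if rate >= 75:
        return "dark orange"
    if rate >= 50:
        return "orange"
    if rate >= 30:
        return "peach"
    if rate >= 10:
        return "light"
    if rate > 0:
        return "beige"
    return "neutral"

# Samsung's Comparison cell (78%) sits in the top band.
print(heat_color(78))
```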

For CMOs in smartphones

What this report means for your smartphones portfolio

Each bullet is a category-specific decision derived from this month's data, with the mapou service that operationalizes it.

01

Concentration risk

In smartphones, AI effectively recommends 7.3 of 20 tracked brands. The top brand captures 21% of citations. The next 2 capture another 17%. Visibility is concentrated but not winner-takes-all.

How mapou helps: GEO & Citation Architecture restructures your entity data so you can break into the top set.

02

Engine fragmentation

Engines disagree on the smartphones leaderboard. Mean cross-engine agreement is only 0.54 (1.0 = perfect agreement, 0 = independent). Optimizing for ChatGPT will not necessarily improve your Claude or Gemini visibility. You need engine-specific strategy.

How mapou helps: AI Visibility Audit maps your position separately on each of the 5 engines.

03

Persona-stable category

Smartphones is unusually stable. Samsung holds the #1 spot across all 5 buyer personas tested. Top-3 overlap is 93%. The baseline MVI is a usable proxy for buyer-specific visibility, unlike most categories in our panel.

How mapou helps: AI Visibility Audit benchmarks you against the stable leader to identify the closest gap.

04

Visibility tier landscape

In smartphones, 1 of 20 tracked brands clears MVI 75 (default-choice tier). 14 are below MVI 25 (not yet cited). The strategy differs at each tier: if you are below 25, you need foundational visibility infrastructure before tactical optimization.

How mapou helps: AI Visibility Audit identifies your tier; GEO & Citation Architecture moves you up.

The mapou Visibility Index

What is MVI?

The mapou Visibility Index (MVI) is a 0-100 proprietary score combining four weighted dimensions: Discovery (open recommendations, 30%), Filtered Discovery (budget, persona, use-case, 25%), Comparison (head-to-head authority, 25%), and Evaluation (decision-criteria authority, 20%).
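The published weights can be sanity-checked by combining per-phase citation rates into a score. This is a minimal sketch assuming a plain weighted average; the full proprietary formula (including the half-weighting of mentions) is not published:

```python
# Phase weights as published: Discovery 30%, Filtered 25%, Comparison 25%, Evaluation 20%.
WEIGHTS = {"discovery": 0.30, "filtered": 0.25, "comparison": 0.25, "evaluation": 0.20}

def mvi(rates: dict) -> int:
    """Combine per-phase citation rates (0-100) into a 0-100 score."""
    return round(sum(WEIGHTS[phase] * rates[phase] for phase in WEIGHTS))

# Samsung's heatmap rates: Discovery 75, Filtered 77, Comparison 78, Evaluation 75.
print(mvi({"discovery": 75, "filtered": 77, "comparison": 78, "evaluation": 75}))
```

Under these weights Samsung's heatmap rates combine to 76, matching its published MVI.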

Citations count fully; mentions count at half weight. Engines are equally weighted (no market-share gymnastics). Wilson 95% confidence intervals are shown alongside every score. The same 20 prompts run every month so MVI deltas are paired comparisons, not noise.
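The Wilson score interval has a standard closed form. A sketch for a citation proportion (published bounds may differ by a point from this depending on rounding or continuity-correction conventions):

```python
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for k successes out of n trials."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - margin, center + margin

# The leader is cited in 75 of 100 prompt-engine pairs.
lo, hi = wilson_ci(75, 100)
print(round(lo * 100), round(hi * 100))
```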

How to read this ranking

  • Default choice (MVI 75+). AI's go-to recommendation in smartphones. The tier other brands are competing into.
  • Repeat use (50–74). Cited often enough to feel reliably present across prompts and engines. One signal away from default.
  • First encounter (25–49). Discovered and cited occasionally, but visibility is inconsistent. The brand is real to AI, not yet trusted.
  • Not yet cited (0–24). AI does not surface this brand for buyer-intent prompts in smartphones. Effectively invisible in AI-driven discovery.

Full methodology →

Ranked by MVI score (Wilson 95% CI shown). The Spread column shows the gap between each brand's best and worst engine: under 15pp is durable; 50pp+ is engine-dependent. Per-engine columns show the count of prompts where each engine cited the brand as a recommendation (out of 20). Read each column as a signal: when ChatGPT cites you but Gemini doesn't, your gap is engine-specific. When all five miss you, the gap is foundational.
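The Spread column follows directly from the per-engine citation rates; a minimal sketch:

```python
def spread_pp(per_engine_rates: list[float]) -> float:
    """Gap in percentage points between a brand's best and worst engine."""
    return max(per_engine_rates) - min(per_engine_rates)

# Leader: ChatGPT 100, Perplexity 0, Gemini 90, Claude 100, Grok 85 -> 100pp spread.
print(spread_pp([100, 0, 90, 100, 85]))
```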

# · MVI · 95% CI · Spread · Per-engine (ChatGPT / Perplexity / Gemini / Claude / Grok) · Tier

#1 · MVI 76 (95% CI 67–83) · Spread 100pp · Default choice
  Comparison (15/20) · open recommendation (4/5)
  Per engine: ChatGPT 20/20 (100%) · Perplexity 0/20 (0%) · Gemini 18/20 (90%) · Claude 20/20 (100%) · Grok 17/20 (85%)

#2 · MVI 64 (95% CI 54–72) · Spread 85pp · Repeat use
  Evaluation (13/20) · top-brands lists (4/5) · Discovery (18/30) · emerging brands (0/5)
  Per engine: ChatGPT 17/20 (85%) · Perplexity 0/20 (0%) · Gemini 12/20 (60%) · Claude 14/20 (70%) · Grok 15/20 (75%)

#3 · MVI 64 (95% CI 55–73) · Spread 80pp · Repeat use
  Evaluation (13/20) · open recommendation (4/5) · Comparison (9/20) · emerging brands (0/5)
  Per engine: ChatGPT 15/20 (75%) · Perplexity 0/20 (0%) · Gemini 13/20 (65%) · Claude 16/20 (80%) · Grok 15/20 (75%)

#4 · MVI 37 (95% CI 29–47) · Spread 55pp · First encounter
  Discovery (15/30) · discovery recommendation request (4/5) · Comparison (4/20) · popular brands (0/5)
  Per engine: ChatGPT 11/20 (55%) · Perplexity 0/20 (0%) · Gemini 9/20 (45%) · Claude 8/20 (40%) · Grok 9/20 (45%)

#5 · MVI 36 (95% CI 27–45) · Spread 60pp · First encounter
  Discovery (14/30) · top-brands lists (4/5) · Filtered Discovery (8/30) · current-year picks (0/5)
  Per engine: ChatGPT 8/20 (40%) · Perplexity 0/20 (0%) · Gemini 5/20 (25%) · Claude 12/20 (60%) · Grok 8/20 (40%)

#6 · MVI 35 (95% CI 25–44) · Spread 60pp · First encounter
  Comparison (7/20) · comparison alternative to leader (3/5) · Filtered Discovery (7/30) · filtered persona beginner (0/5)
  Per engine: ChatGPT 12/20 (60%) · Perplexity 0/20 (0%) · Gemini 5/20 (25%) · Claude 12/20 (60%) · Grok 1/20 (5%)

#7 · MVI 22 (95% CI 15–31) · Spread 50pp · Not yet cited
  Discovery (8/30) · popular brands (4/5) · Filtered Discovery (3/30) · current-year picks (0/5)
  Per engine: ChatGPT 10/20 (50%) · Perplexity 0/20 (0%) · Gemini 5/20 (25%) · Claude 2/20 (10%) · Grok 3/20 (15%)

#8 · MVI 9 (95% CI 5–17) · Spread 20pp · Not yet cited
  Discovery (6/30) · open recommendation (3/5) · Comparison (0/20) · current-year picks (0/5)
  Per engine: ChatGPT 2/20 (10%) · Perplexity 0/20 (0%) · Gemini 4/20 (20%) · Claude 0/20 (0%) · Grok 3/20 (15%)

#9 · MVI 8 (95% CI 4–15) · Spread 25pp · Not yet cited
  Discovery (6/30) · emerging brands (4/5) · Evaluation (0/20) · open recommendation (0/5)
  Per engine: ChatGPT 1/20 (5%) · Perplexity 0/20 (0%) · Gemini 1/20 (5%) · Claude 5/20 (25%) · Grok 1/20 (5%)

#10 · MVI 3 (95% CI 2–10) · Spread 5pp · Not yet cited
  Filtered Discovery (4/30) · filtered values driven (4/5) · Discovery (0/30) · open recommendation (0/5)
  Per engine: ChatGPT 1/20 (5%) · Perplexity 0/20 (0%) · Gemini 1/20 (5%) · Claude 1/20 (5%) · Grok 1/20 (5%)

#11 · MVI 3 (95% CI 1–8) · Spread 10pp · Not yet cited
  Comparison (1/20) · filtered use case specific (1/5)
  Per engine: ChatGPT 2/20 (10%) · Perplexity 0/20 (0%) · Gemini 0/20 (0%) · Claude 0/20 (0%) · Grok 0/20 (0%)

#12 · MVI 2 (95% CI 1–7) · Spread 0pp · Not yet cited
  Filtered Discovery (0/30)
  Per engine: ChatGPT 0/20 (0%) · Perplexity 0/20 (0%) · Gemini 0/20 (0%) · Claude 0/20 (0%) · Grok 0/20 (0%)

#13 · MVI 2 (95% CI 1–7) · Spread 5pp · Not yet cited
  Discovery (1/30) · emerging brands (1/5)
  Per engine: ChatGPT 1/20 (5%) · Perplexity 0/20 (0%) · Gemini 0/20 (0%) · Claude 0/20 (0%) · Grok 0/20 (0%)

#14 · MVI 2 (95% CI 1–7) · Spread 5pp · Not yet cited
  Discovery (2/30) · popular brands (2/5)
  Per engine: ChatGPT 1/20 (5%) · Perplexity 0/20 (0%) · Gemini 0/20 (0%) · Claude 1/20 (5%) · Grok 0/20 (0%)

#15 · MVI 1 (95% CI 0–5) · Spread 0pp · Not yet cited
  Evaluation (0/20)
  Per engine: ChatGPT 0/20 (0%) · Perplexity 0/20 (0%) · Gemini 0/20 (0%) · Claude 0/20 (0%) · Grok 0/20 (0%)

#16 · MVI 1 (95% CI 0–6) · Spread 0pp · Not yet cited
  Evaluation (0/20)
  Per engine: ChatGPT 0/20 (0%) · Perplexity 0/20 (0%) · Gemini 0/20 (0%) · Claude 0/20 (0%) · Grok 0/20 (0%)

#17 · MVI 1 (95% CI 0–5) · Spread 0pp · Not yet cited
  Discovery (0/30)
  Per engine: ChatGPT 0/20 (0%) · Perplexity 0/20 (0%) · Gemini 0/20 (0%) · Claude 0/20 (0%) · Grok 0/20 (0%)

#18 · MVI 1 (95% CI 0–5) · Spread 0pp · Not yet cited
  Comparison (0/20)
  Per engine: ChatGPT 0/20 (0%) · Perplexity 0/20 (0%) · Gemini 0/20 (0%) · Claude 0/20 (0%) · Grok 0/20 (0%)

#19 · MVI 1 (95% CI 0–5) · Spread 0pp · Not yet cited
  Discovery (0/30)
  Per engine: ChatGPT 0/20 (0%) · Perplexity 0/20 (0%) · Gemini 0/20 (0%) · Claude 0/20 (0%) · Grok 0/20 (0%)

#20 · MVI 0 (95% CI 0–4) · Spread 0pp · Not yet cited
  Per engine: ChatGPT 0/20 (0%) · Perplexity 0/20 (0%) · Gemini 0/20 (0%) · Claude 0/20 (0%) · Grok 0/20 (0%)

Strategic insights for smartphones

Five derived metrics computed from the same data, surfacing how this segment behaves on AI search. See the State of AI Search for cross-segment comparison.

  • Engine agreement: 0.54 · partial agreement
  • Effective brands: 7.3 / 20 · moderately concentrated · top 2 take 38%
  • Top demand-leak brand: Motorola · +21pp Discovery vs Evaluation
  • Top mention-only brand: Nokia HMD · 100% of visibility is mention-only
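mapou does not publish how "effective brands" is computed. One common way to derive an effective number from citation shares is the inverse Simpson index (1 divided by the sum of squared shares); this is an illustrative assumption, not the confirmed formula:

```python
def effective_brands(counts: list[float]) -> float:
    """Effective number of brands via the inverse Simpson index (assumed metric)."""
    total = sum(counts)
    shares = [c / total for c in counts]
    return 1 / sum(s * s for s in shares)

# Two equally cited brands count as 2.0 effective brands;
# one dominant brand pulls the number toward 1.
print(round(effective_brands([50, 50]), 1))
print(round(effective_brands([90, 5, 5]), 1))
```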

Kingmaker engine by funnel phase

  • Discovery: ChatGPT · 100pp spread
  • Filtered: ChatGPT · 100pp spread
  • Comparison: ChatGPT · 100pp spread
  • Evaluation: ChatGPT · 100pp spread

For each phase, the engine where the gap between most-cited and least-cited brand is widest, i.e. where positioning matters most. Win that engine, win that phase.

Brands cited most across the category

Aggregated across every (brand × prompt × engine) combination tested. The most-cited brands here are the names AI consistently surfaces when buyers ask about smartphones.

Samsung ×1606 · Google Pixel ×1384 · Apple ×1285 · Motorola ×704 · ASUS ROG Phone ×703 · OnePlus ×702 · Xiaomi ×402 · REDMAGIC ×117 · Nothing ×115 · Fairphone ×78 · Nokia HMD ×51 · Realme ×50 · Oppo ×39 · Sony Xperia ×31 · TCL ×31

Emerging brands AI is citing in smartphones

Brand names AI engines surfaced in smartphone prompts that are not currently on the mapou tracked panel, ranked by mention count and engine breadth. These are panel candidates: brands AI considers part of the category even though we are not yet measuring them.

Brand · Mentions · Engines · Slots

TCL · 31 mentions · 4 of 5 engines · 5 of 20 slots
  What AI said (ChatGPT response, discovery emerging prompt): "...a top choice for flagship seekers. **Budget Phones:** 1. **TCL** - The TCL 30 series offers excellent value with decent specifications and a vibrant display, making it a great..."

Teracube · 12 mentions · 2 of 5 engines · 1 of 20 slots
  What AI said (Gemini response, filtered values driven prompt): "Keep an eye on emerging players too! Companies like **Teracube** (currently offering repairable, long-warranty phones) could expand their ethical sourcing and manufacturing,..."

Shiftphone · 8 mentions · 2 of 5 engines · 1 of 20 slots
  What AI said (ChatGPT response, filtered values driven prompt): "...nt to repairability, and they use recycled materials. 3. **Shiftphone**: Based in Germany, Shiftphone promotes modularity and repairability. Their phones are designed to last longer and..."

Black Shark · 5 mentions · 2 of 5 engines · 4 of 20 slots
  What AI said (ChatGPT response, filtered persona pro prompt): "...obust cooling, making it the go-to for serious gamers. 2. **Black Shark 6**: Known for its gaming-focused features and competitive pricing, it provides a great gaming experience without the..."

Method. Aggregated across the canonical run for smartphones. For every (panel brand × prompt × engine) we record the brand names the analyzer extracted (capped at 6 per response), then drop names that match the tracked panel or its aliases, plus a denylist of generic category terms. Threshold to qualify: at least 3 mentions across at least 2 of 5 engines. Some entries may be tracked elsewhere on mapou but not in this segment, in which case AI considers them cross-category competitors. Reviewed monthly to inform panel additions.
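The qualification filter described above can be sketched as follows. The panel, denylist, and input names here are illustrative stand-ins, not mapou's actual lists:

```python
from collections import defaultdict

PANEL = {"samsung", "apple", "google pixel"}            # tracked brands (illustrative subset)
DENYLIST = {"smartphone", "flagship", "android phone"}  # generic category terms (illustrative)

def qualify(extracted):
    """extracted: iterable of (brand_name, engine) pairs pulled from AI responses.
    Returns names with >= 3 mentions across >= 2 engines, excluding panel/denylist."""
    mentions = defaultdict(int)
    engines = defaultdict(set)
    for name, engine in extracted:
        key = name.lower()
        if key in PANEL or key in DENYLIST:
            continue
        mentions[key] += 1
        engines[key].add(engine)
    return sorted(k for k in mentions if mentions[k] >= 3 and len(engines[k]) >= 2)

print(qualify([("TCL", "chatgpt"), ("TCL", "gemini"), ("TCL", "claude"),
               ("Samsung", "chatgpt"), ("Teracube", "gemini"), ("Teracube", "gemini")]))
```

TCL qualifies (3 mentions, 3 engines); Teracube does not (only 2 mentions); Samsung is dropped as a panel brand.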

How smartphones rankings shift by buyer persona

The MVI score is calibrated to a generic shopping-assistant prompt. But buyers don't arrive generically. We re-ran the same 20 canonical prompts five more times, each with a different buyer-persona signal in the system prompt: budget-conscious, premium, working professional, first-time, values-driven. Top-3 overlap with baseline: 93%. Leader holds across all personas: yes.

Brand | Baseline | Budget | Premium | Pro | First-time | Values
Samsung | 100% | 95% | 95% | 95% | 100% | 100%
Apple | 85% | 40% | 85% | 80% | 85% | 70%
Google Pixel | 85% | 65% | 95% | 80% | 80% | 85%
OnePlus | 55% | 60% | 60% | 50% | 40% | 40%
ASUS ROG Phone | 55% | 40% | 55% | 50% | 50% | 45%

Each cell is the citation rate (out of 20 canonical prompts) for that brand under that persona, ChatGPT only. Cells are tinted green when a brand gains 5+ percentage points vs baseline, orange when it loses 5+. Strong tints flag a 20+ percentage-point swing. Top 8 baseline brands shown; full per-persona data is in data/research/persona-robustness/2026-05-07-1625/. The full methodology is on the State of AI Search page.
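The tinting rule described above can be written as a small delta bucketing; a sketch (function name is ours):

```python
def tint(baseline: float, persona: float) -> str:
    """Map a persona-vs-baseline delta (percentage points) to the cell tint:
    green for a 5+pp gain, orange for a 5+pp loss, 'strong' at a 20+pp swing."""
    delta = persona - baseline
    strength = "strong " if abs(delta) >= 20 else ""
    if delta >= 5:
        return strength + "green"
    if delta <= -5:
        return strength + "orange"
    return "neutral"

# Apple under the budget persona drops from 85% to 40%: a 45pp swing.
print(tint(85, 40))
```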

The 20-prompt taxonomy

Every brand in this report is tested against the same 20 canonical prompts, spanning the four MVI dimensions (Discovery, Filtered Discovery, Comparison, Evaluation). The prompt set is fixed at methodology v1.0 and reused every monthly run, so MVI deltas are paired comparisons, not noise.

The exact prompt templates and phase-weighting formula are part of mapou's proprietary methodology, shared with paying clients alongside custom benchmarks for their specific brand.

See the framework →

Methodology v1.0. MVI is mapou's proprietary 0-100 visibility score across 5 AI engines and 4 buyer-intent dimensions. 95% Wilson confidence intervals. Equal engine weighting. See the framework →

Run yours

Want to see your brand on this leaderboard? Run a free visibility check on your own brand. We'll show you exactly which prompts you're missing and which engines are losing you the most ground.