mapou Visibility Index · Beauty → Skincare · May 2026
Which skincare brands does AI cite most?
When buyers ask AI assistants questions like “What are the best skincare brands and products to consider for cleansers, serums, moisturizers, treatments, retinol, sunscreen, and masks in 2026?” or “Best affordable skincare brands for cleansers, serums, moisturizers, treatments, retinol, sunscreen, and masks under $30?”, a small set of skincare brands gets cited every time. Most don't. This report measures which ones.
How we measured. MVI is a 0–100 score per brand: 0 means AI never cites you in skincare, 100 means it cites you in every prompt. We tested 20 brands across 5 AI assistants (ChatGPT, Perplexity, Gemini, Claude, Grok) using 20 fixed prompts, reused every monthly run for replicability.
How we describe AI visibility
- Stage 1, First encounter. The brand is discovered and cited occasionally in AI answers for buyer-intent prompts.
- Stage 2, Repeat use. The brand is cited regularly enough that it feels familiar and reliably present across prompts and engines.
- Stage 3, Default choice. The brand is the go-to recommendation in AI answers within its segment, often appearing first or most consistently.
Bottom line
CeraVe leads skincare on AI search visibility with MVI 68, sitting firmly in the repeat-use tier, in a tight race with La Roche-Posay (MVI 64).
Who AI cites most
CeraVe is cited in 66 of 100 prompt-engine pairs (66%); the 95% confidence interval is 57–75%.
Concentration
The top 3 brands (CeraVe, La Roche-Posay, The Ordinary) capture 42% of all citations in this segment. 9 of 20 tracked brands are cited in fewer than 1 in 10 prompt-engine pairs.
Where the field sits
Of 20 brands tested: 3 in repeat-use, 5 in first-encounter, 12 not yet cited. Overall, AI cites a brand from this segment in 22% of buyer-intent prompt-engine pairs.
Engine asymmetry
CeraVe is cited in 90% of Claude prompts but in 0% of Perplexity prompts: visibility is engine-specific, not universal.
Phase flip
La Roche-Posay outperforms CeraVe on Filtered Discovery prompts; the overall MVI hides this category-specific strength.
Notable absence
Rhode is not yet cited: 0 citations across all 100 prompt-engine pairs. A recognizable brand that AI is not yet surfacing.
Analyst note
CeraVe leads with 68 MVI, but no brand reaches the leader tier in skincare.
CeraVe is the top brand in skincare with an MVI of 68, followed by La Roche-Posay at 64. Both sit in the repeat-use tier: reliably present, but not yet default choices. CeraVe has high visibility on ChatGPT and Claude, scoring 80 and 90. The Ordinary is third with an MVI of 57, performing well in discovery but weaker in comparison phases. Twelve of twenty brands are not visible, indicating a concentrated field.
Risk: Neutrogena risks falling further behind, with a low MVI of 39 and no visibility on Perplexity.
Headline finding
CeraVe leads skincare on AI search visibility with MVI 68, sitting firmly in the repeat-use tier.
Average MVI
24
Default choice
0
Of 20 brands
Repeat use
3
First encounter
5
Not yet cited
12
Citation rate per engine
How often each engine cites a brand from this category as a recommendation, averaged across all 20 brands tested.
ChatGPT
27%
14 / 20 brands cited at least once
Perplexity
0%
0 / 20 brands cited at least once
Gemini
25%
16 / 20 brands cited at least once
Claude
33%
17 / 20 brands cited at least once
Grok
23%
14 / 20 brands cited at least once
Phase strength across the category
Which buyer-intent phases are easiest vs hardest to win in skincare. Citation rate averaged across all brands tested. Phase weights are part of the MVI formula.
Discovery · 30%
27%
Top: CeraVe
Filtered discovery · 25%
16%
Top: CeraVe
Comparison · 25%
22%
Top: CeraVe
Evaluation · 20%
22%
Top: CeraVe
Who wins which buyer phase
Top 12 brands by MVI mapped against the four buyer-intent phases. Each cell shows the brand's citation rate for that phase, color-coded so the visual pattern tells the story: a brand strong across all four phases reads as a horizontal orange band; a brand strong only at Discovery but weak at Evaluation reads as a left-heavy gradient. These are the segment's findings mapped against the panel.
| Brand | MVI | Discovery | Filtered | Comparison | Evaluation |
|---|---|---|---|---|---|
| CeraVe | 68 | 70% | 47% | 78% | 80% |
| La Roche-Posay | 64 | 65% | 48% | 70% | 75% |
| The Ordinary | 57 | 68% | 43% | 50% | 65% |
| Neutrogena | 39 | 40% | 33% | 40% | 45% |
| Paula's Choice | 39 | 47% | 23% | 38% | 48% |
| SkinCeuticals | 33 | 50% | 23% | 30% | 25% |
| La Mer | 28 | 27% | 28% | 30% | 28% |
| EltaMD | 27 | 40% | 13% | 20% | 35% |
| Drunk Elephant | 23 | 38% | 20% | 20% | 5% |
| Cetaphil | 18 | 17% | 10% | 23% | 25% |
| Augustinus Bader | 16 | 27% | 20% | 5% | 10% |
| Vanicream | 16 | 13% | 0% | 30% | 20% |
How to read it. Strong horizontal band = durable brand, AI cites it across the entire funnel. Left-heavy gradient = brand with awareness (Discovery) but weak recommendation (Evaluation), the demand-leak pattern from Finding 03. Right-heavy gradient = brand AI considers in Comparison and Evaluation but does not surface in initial Discovery, the anti-leak pattern. Color steps: dark orange ≥75%, orange ≥50%, peach ≥30%, light ≥10%, beige >0%, neutral 0%.
For CMOs in skincare
What this report means for your skincare portfolio
Each bullet is a category-specific decision derived from this month's data, with the mapou service that operationalizes it.
Concentration risk
In skincare, AI effectively recommends 11.4 of 20 tracked brands. The top brand captures 14% of citations; the next 2 capture another 28%. Visibility is concentrated but not winner-takes-all.
How mapou helps: GEO & Citation Architecture restructures your entity data so you can break into the top set.
Engine fragmentation
Engines disagree on the skincare leaderboard. Mean cross-engine agreement is only 0.41 (1.0 = perfect agreement, 0 = independent). Optimizing for ChatGPT will not necessarily improve your Claude or Gemini visibility. You need engine-specific strategy.
How mapou helps: AI Visibility Audit maps your position separately on each of the 5 engines.
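For readers who want to sanity-check a cross-engine agreement figure like the 0.41 on their own data, one standard construction is the mean pairwise Spearman rank correlation between per-engine leaderboards. This is an illustrative assumption, not mapou's published formula, and every name below is ours:

```python
from itertools import combinations

def rank(values):
    """Ranks (1-based), with ties assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman rank correlation of two equal-length score lists."""
    ra, rb = rank(a), rank(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    if va == 0 or vb == 0:
        # an engine that cites nothing (or everything equally)
        # carries no ranking signal; treat as zero agreement
        return 0.0
    return cov / (va * vb) ** 0.5

def mean_agreement(per_engine_scores):
    """Mean pairwise Spearman across each engine's brand scores."""
    pairs = list(combinations(per_engine_scores, 2))
    return sum(spearman(a, b) for a, b in pairs) / len(pairs)
```

The zero-variance guard matters here: Perplexity cites no skincare brand at all this month, so its all-zero column would otherwise divide by zero.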
Persona-volatile category
Skincare rankings shift meaningfully by buyer persona. CeraVe is the baseline #1 brand, but loses the top spot under at least one buyer signal (budget, premium, professional, first-time, values-driven). Top-3 overlap with baseline is only 60% across personas. Your baseline visibility number is incomplete.
How mapou helps: Persona-Tuned MVI computes the visibility number for your actual buyer mix.
Visibility tier landscape
In skincare, 0 of 20 tracked brands clear MVI 75 (default-choice tier). 12 are below MVI 25 (not yet cited). The strategy differs at each tier. If you are below 25, you need foundational visibility infrastructure before tactical optimization.
How mapou helps: AI Visibility Audit identifies your tier; GEO & Citation Architecture moves you up.
The mapou Visibility Index
What is MVI?
The mapou Visibility Index (MVI) is a 0-100 proprietary score combining four weighted dimensions: Discovery (open recommendations, 30%), Filtered Discovery (budget, persona, use-case, 25%), Comparison (head-to-head authority, 25%), and Evaluation (decision-criteria authority, 20%).
Citations count fully; mentions count at half weight. Engines are equally weighted (no market-share gymnastics). Wilson 95% confidence intervals are shown alongside every score. The same 20 prompts run every month so MVI deltas are paired comparisons, not noise.
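For analysts who want to reproduce the arithmetic, the scoring described above can be sketched as follows. The phase weights are taken from the MVI definition in this report; the function and variable names are illustrative, and the exact prompt templates remain proprietary:

```python
import math

# Phase weights from the MVI definition above
PHASE_WEIGHTS = {
    "discovery": 0.30,
    "filtered_discovery": 0.25,
    "comparison": 0.25,
    "evaluation": 0.20,
}

def phase_rate(citations, mentions, opportunities):
    """Citations count fully; mentions count at half weight."""
    return (citations + 0.5 * mentions) / opportunities

def mvi(phase_rates):
    """Weighted 0-100 score from per-phase rates in [0, 1]."""
    return 100 * sum(PHASE_WEIGHTS[p] * r for p, r in phase_rates.items())

def wilson_ci(successes, n, z=1.96):
    """Wilson 95% confidence interval for a citation proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half
```

For a brand cited in 66 of 100 prompt-engine pairs, `wilson_ci(66, 100)` returns roughly (0.56, 0.75), in line with the interval reported for the segment leader.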
How to read this ranking
- Default choice (MVI 75+). AI's go-to recommendation in skincare. The tier other brands are competing into.
- Repeat use (50–74). Cited often enough to feel reliably present across prompts and engines. One signal away from default.
- First encounter (25–49). Discovered and cited occasionally, but visibility is inconsistent. The brand is real to AI, not yet trusted.
- Not yet cited (0–24). AI does not surface this brand for buyer-intent prompts in skincare. Effectively invisible in AI-driven discovery.
Ranked by MVI score (Wilson 95% CI shown). The Spread column shows the gap between each brand's best and worst engine: a spread under 15pp is durable, while 50pp+ is engine-dependent. Per-engine columns show the count of prompts where each engine cited the brand as a recommendation (out of 20). Read each column as a signal: when ChatGPT cites you but Gemini doesn't, your gap is engine-specific. When all five miss you, the gap is foundational.
| # | Brand | MVI | 95% CI | Spread | ChatGPT | Perplexity | Gemini | Claude | Grok | Tier |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | ▲ Evaluation (16/20) · open recommendation (4/5) ▼ Filtered Discovery (14/30) · premium (0/5) | 68 | 57–75 | 90pp | 16 | 0 | 15 | 18 | 17 | Repeat use |
| 2 | ▲ Evaluation (15/20) · open recommendation (4/5) ▼ Filtered Discovery (12/30) · emerging brands (0/5) | 64 | 53–72 | 85pp | 15 | 0 | 14 | 14 | 17 | Repeat use |
| 3 | ▲ Discovery (20/30) · open recommendation (4/5) ▼ Filtered Discovery (13/30) · emerging brands (0/5) | 57 | 47–66 | 80pp | 15 | 0 | 11 | 14 | 16 | Repeat use |
| 4 | ▲ Evaluation (9/20) · evaluation decision criteria (4/5) ▼ Filtered Discovery (10/30) · emerging brands (0/5) | 39 | 30–49 | 75pp | 15 | 0 | 2 | 12 | 8 | First encounter |
| 5 | ▲ Evaluation (9/20) · top-brands lists (4/5) ▼ Filtered Discovery (7/30) · premium (0/5) | 39 | 29–48 | 70pp | 9 | 0 | 14 | 4 | 10 | First encounter |
| 6 | ▲ Discovery (15/30) · filtered persona pro (4/5) ▼ Filtered Discovery (7/30) · emerging brands (0/5) | 33 | 25–43 | 60pp | 11 | 0 | 12 | 8 | 2 | First encounter |
| 7 | ▲ Comparison (1/20) · premium (4/5) | 28 | 20–37 | 10pp | 2 | 0 | 1 | 2 | 1 | First encounter |
| 8 | ▲ Discovery (12/30) · open recommendation (4/5) ▼ Filtered Discovery (4/30) · popular brands (0/5) | 27 | 19–36 | 40pp | 8 | 0 | 7 | 5 | 7 | First encounter |
| 9 | ▲ Discovery (11/30) · open recommendation (4/5) ▼ Evaluation (1/20) · discovery recommendation request (0/5) | 23 | 15–32 | 35pp | 7 | 0 | 2 | 7 | 6 | Not yet cited |
| 10 | ▲ Evaluation (5/20) · comparison alternative to leader (2/5) ▼ Filtered Discovery (3/30) · emerging brands (0/5) | 18 | 11–26 | 65pp | 0 | 0 | 1 | 13 | 3 | Not yet cited |
| 11 | ▲ Discovery (8/30) · premium (4/5) ▼ Comparison (1/20) · discovery recommendation request (0/5) | 16 | 11–26 | 50pp | 1 | 0 | 5 | 10 | 1 | Not yet cited |
| 12 | ▲ Comparison (6/20) · comparison attribute specific (3/5) ▼ Filtered Discovery (0/30) · open recommendation (0/5) | 16 | 9–22 | 40pp | 0 | 0 | 5 | 8 | 1 | Not yet cited |
| 13 | ▲ Comparison (5/20) · comparison attribute specific (3/5) ▼ Evaluation (0/20) · open recommendation (0/5) | 10 | 4–16 | 25pp | 2 | 0 | 0 | 5 | 1 | Not yet cited |
| 14 | ▲ Discovery (5/30) · top-brands lists (2/5) ▼ Filtered Discovery (1/30) · emerging brands (0/5) | 9 | 5–16 | 25pp | 1 | 0 | 2 | 5 | 1 | Not yet cited |
| 15 | ▲ Discovery (6/30) · top-brands lists (2/5) ▼ Evaluation (0/20) · emerging brands (0/5) | 9 | 5–17 | 20pp | 4 | 0 | 1 | 4 | 0 | Not yet cited |
| 16 | ▲ Discovery (3/30) · filtered persona beginner (2/5) | 6 | 3–12 | 25pp | 1 | 0 | 5 | 0 | 0 | Not yet cited |
| 17 | ▲ Discovery (3/30) · popular brands (2/5) | 5 | 2–11 | 15pp | 0 | 0 | 3 | 2 | 0 | Not yet cited |
| 18 | ▲ Comparison (1/20) · filtered persona beginner (1/5) | 2 | 1–7 | 10pp | 0 | 0 | 0 | 2 | 0 | Not yet cited |
| 19 | ▲ Filtered Discovery (0/30) | 1 | 0–6 | 0pp | 0 | 0 | 0 | 0 | 0 | Not yet cited |
| 20 |  | 0 | 0–4 | 0pp | 0 | 0 | 0 | 0 | 0 | Not yet cited |
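The Spread column can be reproduced from the per-engine counts, assuming it is simply the best-engine citation rate minus the worst, in percentage points (a minimal sketch; the function name is ours):

```python
def engine_spread_pp(per_engine_counts, prompts_per_engine=20):
    """Gap between a brand's best and worst engine, in percentage points."""
    rates = [100 * c / prompts_per_engine for c in per_engine_counts]
    return max(rates) - min(rates)

# Leader row from the table: ChatGPT 16, Perplexity 0, Gemini 15,
# Claude 18, Grok 17 -> rates 80, 0, 75, 90, 85 -> spread 90pp
```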
Strategic insights for skincare
Five derived metrics computed from the same data, surfacing how this segment behaves on AI search. See the State of AI Search for cross-segment comparison.
Engine agreement
0.41
Partial agreement
Effective brands
11.4 / 20
Moderately concentrated · top 2 take 28%
Top demand-leak brand
Drunk Elephant
+33pp Discovery vs Evaluation
Top mention-only brand
Dr. Dennis Gross
100% of visibility is mention-only
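A standard way to compute an "effective brands" number like the 11.4 above is the exponential of the Shannon entropy of citation shares (the inverse Herfindahl index is a common alternative). mapou's exact formula is proprietary, so treat this as an illustrative sketch:

```python
import math

def effective_brands(citation_counts):
    """Exponential of Shannon entropy of citation shares.

    Equals N when all N brands are cited equally; approaches 1
    when a single brand captures nearly all citations.
    """
    total = sum(citation_counts)
    shares = [c / total for c in citation_counts if c > 0]
    entropy = -sum(s * math.log(s) for s in shares)
    return math.exp(entropy)
```

Four equally cited brands give exactly 4.0; a 97/1/1/1 split gives about 1.2, which is why a 20-brand panel can collapse to a much smaller effective count.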
Kingmaker engine by funnel phase
Discovery
Claude
100pp spread
Filtered discovery
Grok
83pp spread
Comparison
ChatGPT
100pp spread
Evaluation
ChatGPT
100pp spread
For each phase, the engine where the gap between most-cited and least-cited brand is widest, i.e. where positioning matters most. Win that engine, win that phase.
On the record for skincare
Pre-registered claim for the next monthly run.
Each of these is a falsifiable, dated prediction we'll grade green or red against next month's data. The full set across all segments lives on the State of AI Search page.
Skincare stays concentrated, CeraVe retains top-1
Pending. On the next monthly run, the skincare segment will retain an effective-brand count at or below 12.0 (currently 11.4 of 20 tracked) and CeraVe will retain the top-1 MVI position. Concentration is structural in this category.
Why we expect this: Skincare is a concentrated segment (the top 2 brands take 28 percent of MVI share). CeraVe leads with MVI 68 in a category where the runner-up sits at 64. Categories this concentrated rarely de-concentrate in 30 days; the citation infrastructure that produces the leader pattern is the same infrastructure that would have to change.
Brands cited most across the category
Aggregated across every (brand × prompt × engine) combination tested. The most-cited brands here are the names AI consistently surfaces when buyers ask about skincare.
Emerging brands AI is citing in skincare
Brand names AI engines surfaced for skincare prompts that are not currently on the mapou tracked panel. Ranked by mention count and engine breadth. These are panel candidates, brands AI considers part of the category even though we are not yet measuring them.
Method. Aggregated across the canonical run for skincare. For every (panel brand × prompt × engine) we record the brand names the analyzer extracted (capped at 6 per response), then drop names that match the tracked panel or its aliases, plus a denylist of generic category terms. Threshold to qualify: at least 3 mentions across at least 2 of 5 engines. Click any row to see the AI quote that surfaced the brand. Some entries may be tracked elsewhere on mapou but not in this segment, in which case AI considers them cross-category competitors. Reviewed monthly to inform panel additions.
How skincare rankings shift by buyer persona
The MVI score is calibrated to a generic shopping-assistant prompt. But buyers don't arrive generically. We re-ran the same 20 canonical prompts five more times, each with a different buyer-persona signal in the system prompt: budget-conscious, premium, working professional, first-time, values-driven. Top-3 overlap with baseline: 60%. Leader holds across all personas: no. CeraVe loses the #1 spot to a different brand under at least one persona.
| Brand | Baseline | Budget | Premium | Pro | First-time | Values |
|---|---|---|---|---|---|---|
| CeraVe | 85% | 95% | 5% | 75% | 90% | 10% |
| La Roche-Posay | 80% | 50% | 25% | 70% | 60% | 0% |
| Neutrogena | 80% | 90% | 5% | 70% | 80% | 5% |
| The Ordinary | 70% | 95% | 10% | 60% | 85% | 60% |
| SkinCeuticals | 55% | 5% | 45% | 55% | 5% | 10% |
Each cell is the citation rate (out of 20 canonical prompts) for that brand under that persona, ChatGPT only. Cells are tinted green when a brand gains 5+ percentage points vs baseline, orange when it loses 5+. Strong tints flag a 20+ percentage-point swing. Top 5 baseline brands shown; full per-persona data is in data/research/persona-robustness/2026-05-07-1625/. The full methodology is on the State of AI Search page.
The 20-prompt taxonomy
Every brand in this report is tested against the same 20 canonical prompts, spanning the four MVI dimensions (Discovery, Filtered Discovery, Comparison, Evaluation). The prompt set is fixed at methodology v1.0 and reused every monthly run, so MVI deltas are paired comparisons not noise.
The exact prompt templates and phase-weighting formula are part of mapou's proprietary methodology, shared with paying clients alongside custom benchmarks for their specific brand.
Methodology v1.0. MVI is mapou's proprietary 0-100 visibility score across 5 AI engines and 4 buyer-intent dimensions. 95% Wilson confidence intervals. Equal engine weighting. See the framework →
Run yours
Want to see your brand on this leaderboard? Run a free visibility check on your own brand. We'll show you exactly which prompts you're missing and which engines are losing you the most ground.