mapou.ai
Free check

mapou Visibility Index · Toys → Toys · May 2026

Which toy brands does AI cite most?

When buyers ask AI assistants questions like "What are the best toy options for kids in 2026?" or "Best affordable educational toys for kids under $30?", a small set of toy brands gets cited every time. Most don't. This report measures which.

How we measured. MVI is a 0–100 score per brand: 0 means AI never cites you in toys, 100 means it cites you in every prompt. We tested 20 brands across 5 AI assistants (ChatGPT, Perplexity, Gemini, Claude, Grok) using 20 fixed prompts, reused every monthly run for replicability.

How we describe AI visibility

  • Stage 1, First encounter. The brand is discovered and cited occasionally in AI answers for buyer-intent prompts.
  • Stage 2, Repeat use. The brand is cited regularly enough that it feels familiar and reliably present across prompts and engines.
  • Stage 3, Default choice. The brand is the go-to recommendation in AI answers within its segment, often appearing first or most consistently.
Read the methodology → · Run your own check →

Bottom line

LEGO leads toys in AI search visibility with an MVI of 52, firmly in the repeat-use tier and well ahead of the field; Melissa & Doug is second at MVI 40.

  • Who AI cites most

LEGO is cited in 49 of 100 prompt-engine pairs (49%); 95% confidence interval 42–61.

  • Concentration

    The top 3 brands (LEGO, Melissa & Doug, Magna-Tiles) capture 59% of all citations in this segment. 16 of 20 tracked brands are cited in fewer than 1 in 10 prompt-engine pairs.

  • Where the field sits

    Of 20 brands tested: 1 in repeat-use, 2 in first-encounter, 17 not yet cited. Overall, AI cites a brand from this segment in 10% of buyer-intent prompt-engine pairs.

  • Engine asymmetry

LEGO is cited in 80% of Claude prompts but in none on Grok: visibility is engine-specific, not universal.

  • Phase flip

Melissa & Doug outperforms LEGO on Evaluation prompts; the overall MVI hides this phase-specific strength.

  • Notable absence

Lalo is not yet cited: 0 citations across all 100 prompt-engine pairs. A recognizable brand AI is not yet surfacing.

Analyst note

LEGO leads with an MVI of 52, but no brand achieves default-choice status in this segment.

Visibility is strongest on Claude (80) and weakest on Grok (0). Melissa & Doug and Magna-Tiles trail with MVIs of 40 and 31, respectively. Most brands, 17 of 20, remain not yet cited. This points to a narrow field where only a few brands gain traction across engines.

Risk: LEGO's absence on Grok (0% citation rate) is an exposure if Grok's influence in the segment grows.

Headline finding

LEGO leads toys on AI search visibility with MVI 52, sitting firmly in the repeat-use tier.

Average MVI

11

Default choice

0

Of 20 brands

Repeat use

1

First encounter

2

Not yet cited

17

Citation rate per engine

How often each engine cites a brand from this category as a recommendation, averaged across all 20 brands tested.

ChatGPT

15%

15 / 20 brands cited at least once

Perplexity

9%

10 / 20 brands cited at least once

Gemini

14%

13 / 20 brands cited at least once

Claude

13%

11 / 20 brands cited at least once

Grok

0%

0 / 20 brands cited at least once
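The per-engine cards are plain averages over the panel. As a cross-check, the ChatGPT prompt counts from the ranking table further down reproduce the 15% figure; a minimal sketch, assuming the straightforward definition:

```python
def engine_citation_rate(per_brand_counts, prompts_per_brand=20):
    """Average citation rate for one engine across all tracked brands.

    per_brand_counts: prompts (out of 20) where the engine cited each brand.
    """
    total_slots = len(per_brand_counts) * prompts_per_brand
    return sum(per_brand_counts) / total_slots

# ChatGPT per-brand prompt counts from this month's toys leaderboard
chatgpt = [15, 15, 3, 8, 1, 0, 6, 1, 1, 1, 1, 2, 0, 3, 1, 1, 0, 0, 1, 0]
print(f"{engine_citation_rate(chatgpt):.0%}")  # 15%
```

The 20 counts sum to 60 citations out of 400 brand-prompt slots, which is where the 15% comes from.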

Phase strength across the category

Which buyer-intent phases are easiest vs hardest to win in toys. Citation rate averaged across all brands tested. Phase weights are part of the MVI formula.

Discovery · 30%

11%

Top: LEGO

Filtered discovery · 25%

10%

Top: LEGO

Comparison · 25%

9%

Top: LEGO

Evaluation · 20%

10%

Top: Melissa & Doug

Who wins which buyer phase

Top 12 brands by MVI mapped against the four buyer-intent phases. Each cell shows the brand's citation rate for that phase, color-coded so the visual pattern tells the story: a brand strong across all four phases reads as a horizontal orange band; a brand strong only at Discovery but weak at Evaluation reads as a left-heavy gradient. This is the segment's phase-by-phase picture across the panel.

Brand | MVI | Discovery | Filtered | Comparison | Evaluation
LEGO | 52 | 47% | 53% | 63% | 45%
Melissa & Doug | 40 | 27% | 37% | 40% | 65%
Magna-Tiles | 31 | 20% | 43% | 30% | 33%
Fisher-Price | 17 | 33% | 7% | 10% | 15%
Hot Wheels | 8 | 12% | 3% | 5% | 10%
Lovevery | 8 | 13% | 10% | 0% | 5%
VTech | 8 | 13% | 3% | 5% | 10%
Funko | 7 | 7% | 7% | 15% | 0%
Squishmallows | 7 | 13% | 0% | 13% | 0%
Mattel | 6 | 13% | 0% | 3% | 5%
LeapFrog | 5 | 8% | 7% | 5% | 0%
Play-Doh | 5 | 3% | 13% | 3% | 0%

How to read it. Strong horizontal band = durable brand, AI cites it across the entire funnel. Left-heavy gradient = brand with awareness (Discovery) but weak recommendation (Evaluation), the demand-leak pattern from Finding 03. Right-heavy gradient = brand AI considers in Comparison and Evaluation but does not surface in initial Discovery, the anti-leak pattern. Color steps: dark orange ≥75%, orange ≥50%, peach ≥30%, light ≥10%, beige >0%, neutral 0%.

For CMOs in toys

What this report means for your toys portfolio

Each bullet is a category-specific decision derived from this month's data, with the mapou service that operationalizes it.

01

Concentration risk

In toys, AI effectively recommends 7.4 of 20 tracked brands. The top brand captures 25% of citations; the next two capture another 34%. Visibility is concentrated but not winner-takes-all.

How mapou helps: GEO & Citation Architecture restructures your entity data so you can break into the top set.

02

Engine fragmentation

Engines disagree on the toys leaderboard. Mean cross-engine agreement is only 0.33 (1.0 = perfect agreement, 0 = independent). Optimizing for ChatGPT will not necessarily improve your Claude or Gemini visibility. You need engine-specific strategy.

How mapou helps: AI Visibility Audit maps your position separately on each of the 5 engines.
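mapou does not publish the exact form of the agreement statistic. One statistic consistent with the "1.0 = perfect agreement, 0 = independent" scale is a mean pairwise rank correlation of the engines' brand rankings; the sketch below uses Spearman's rho, which is our assumption, not a confirmed implementation:

```python
from itertools import combinations

def rank(values):
    # Average ranks, 1 = highest score; tied values share their mean rank.
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(a, b):
    # Spearman's rho; returns 0 if either list is constant (no ranking signal).
    ra, rb = rank(a), rank(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    if va == 0 or vb == 0:
        return 0.0
    return cov / (va * vb) ** 0.5

def mean_engine_agreement(per_engine_scores):
    # Mean pairwise rank correlation across engines' brand score lists.
    pairs = list(combinations(per_engine_scores, 2))
    return sum(spearman(a, b) for a, b in pairs) / len(pairs)
```

With five engines that is ten pairwise correlations averaged together; an engine that cites nothing (Grok this month) carries no ranking information, so its pairs contribute zero and pull the mean toward "independent".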

03

Persona-volatile category

Toys rankings shift meaningfully by buyer persona. LEGO is the baseline #1 brand, but loses the top spot under at least one buyer signal (budget, premium, professional, first-time, values-driven). Top-3 overlap with baseline is only 80% across personas. Your baseline visibility number is incomplete.

How mapou helps: Persona-Tuned MVI computes the visibility number for your actual buyer mix.

04

Visibility tier landscape

In toys, 0 of 20 tracked brands clear MVI 75 (default-choice tier). 17 are below MVI 25 (not yet cited). The strategy differs at each tier. If you are below 25, you need foundational visibility infrastructure before tactical optimization.

How mapou helps: AI Visibility Audit identifies your tier; GEO & Citation Architecture moves you up.

The mapou Visibility Index

What is MVI?

The mapou Visibility Index (MVI) is a 0-100 proprietary score combining four weighted dimensions: Discovery (open recommendations, 30%), Filtered Discovery (budget, persona, use-case, 25%), Comparison (head-to-head authority, 25%), and Evaluation (decision-criteria authority, 20%).

Citations count fully; mentions count at half weight. Engines are equally weighted (no market-share gymnastics). Wilson 95% confidence intervals are shown alongside every score. The same 20 prompts run every month so MVI deltas are paired comparisons, not noise.
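The rollup described above is simple enough to sketch. The phase weights and half-weight mentions are quoted from the methodology text, and plugging LEGO's published phase rates (47/53/63/45) into the weighted sum lands on the published MVI of 52. The Wilson interval below is the textbook score interval; the published 42–61 band for LEGO is presumably computed on the weighted score rather than the raw 49/100 rate, so the raw example comes out slightly different:

```python
from math import sqrt

PHASE_WEIGHTS = {"discovery": 0.30, "filtered": 0.25,
                 "comparison": 0.25, "evaluation": 0.20}

def phase_rate(citations, mentions, slots):
    # Citations count fully; mentions count at half weight.
    return (citations + 0.5 * mentions) / slots

def mvi(phase_rates):
    # phase_rates: fraction in [0, 1] per phase, keyed like PHASE_WEIGHTS.
    return 100 * sum(PHASE_WEIGHTS[p] * r for p, r in phase_rates.items())

def wilson_ci(successes, n, z=1.96):
    # Wilson 95% score interval, as quoted alongside every MVI score.
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# LEGO's published phase rates: Discovery 47%, Filtered 53%,
# Comparison 63%, Evaluation 45%
lego = {"discovery": 0.47, "filtered": 0.53,
        "comparison": 0.63, "evaluation": 0.45}
print(round(mvi(lego)))  # 52
```

The same rollup reproduces Melissa & Doug (27/37/40/65 → 40) and Magna-Tiles (20/43/30/33 → 31), so the published phase table and MVI column are internally consistent.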

How to read this ranking

  • Default choice (MVI 75+). AI's go-to recommendation in toys. The tier other brands are competing into.
  • Repeat use (50–74). Cited often enough to feel reliably present across prompts and engines. One signal away from default.
  • First encounter (25–49). Discovered and cited occasionally, but visibility is inconsistent. The brand is real to AI, not yet trusted.
  • Not yet cited (0–24). AI does not surface this brand for buyer-intent prompts in toys. Effectively invisible in AI-driven discovery.

Full methodology →
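The four bands are plain thresholds on the score; a minimal mapping, with band edges taken from the list above:

```python
def tier(mvi_score):
    # Band edges from the "How to read this ranking" list.
    if mvi_score >= 75:
        return "Default choice"
    if mvi_score >= 50:
        return "Repeat use"
    if mvi_score >= 25:
        return "First encounter"
    return "Not yet cited"

# Matches this month's leaderboard: LEGO 52, Melissa & Doug 40, Fisher-Price 17
print(tier(52), tier(40), tier(17))
```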

Ranked by MVI score (Wilson 95% CI shown). The Spread column shows the gap between each brand's best and worst engine: under 15pp is durable; 50pp+ is engine-dependent. Per-engine columns show the count of prompts (out of 20) where each engine cited the brand as a recommendation. Read each column as a signal: when ChatGPT cites you but Gemini doesn't, your gap is engine-specific. When all five miss you, the gap is foundational.

# · Brand · MVI · 95% CI · Spread · Per-engine (ChatGPT · Perplexity · Gemini · Claude · Grok) · Tier
1
Comparison (11/20) · filtered use case specific (4/5)
Evaluation (9/20) · filtered values driven (0/5)
AI says educational toys · STEM-focused · creativity and problem-solving · durable building sets
MVI 52 · 95% CI 42–61 · Spread 80pp · ChatGPT: 15/20 prompts (75%) · Claude: 16/20 prompts (80%) · Gemini: 11/20 prompts (55%) · Grok: 0/20 prompts (0%) · Perplexity: 7/20 prompts (35%) · Repeat use
2
Evaluation (13/20) · discovery recommendation request (4/5)
Discovery (8/30) · open recommendation (0/5)
AI says educational toys · durable materials · developmentally appropriate · safe and non-toxic
MVI 40 · 95% CI 31–50 · Spread 75pp · ChatGPT: 15/20 prompts (75%) · Claude: 8/20 prompts (40%) · Gemini: 8/20 prompts (40%) · Grok: 0/20 prompts (0%) · Perplexity: 8/20 prompts (40%) · First encounter
3
Filtered Discovery (13/30) · premium (3/5)
Discovery (6/30) · popular brands (0/5)
AI says magnetic building blocks · educational STEM toy · develops spatial reasoning · affordable quality
MVI 31 · 95% CI 23–41 · Spread 60pp · ChatGPT: 3/20 prompts (15%) · Claude: 12/20 prompts (60%) · Gemini: 11/20 prompts (55%) · Grok: 0/20 prompts (0%) · Perplexity: 5/20 prompts (25%) · First encounter
4
Discovery (10/30) · top-brands lists (4/5)
Filtered Discovery (2/30) · open recommendation (0/5)
AI says developmental toys · infants to preschoolers · interactive playsets · learning through play
MVI 17 · 95% CI 11–26 · Spread 40pp · ChatGPT: 8/20 prompts (40%) · Claude: 3/20 prompts (15%) · Gemini: 3/20 prompts (15%) · Grok: 0/20 prompts (0%) · Perplexity: 3/20 prompts (15%) · Not yet cited
5
Discovery (3/30) · top-brands lists (2/5)
AI says diverse car models · starter toy · sturdy construction · interactive play
MVI 8 · 95% CI 4–14 · Spread 20pp · ChatGPT: 1/20 prompts (5%) · Claude: 1/20 prompts (5%) · Gemini: 4/20 prompts (20%) · Grok: 0/20 prompts (0%) · Perplexity: 1/20 prompts (5%) · Not yet cited
6
Discovery (4/30) · top-brands lists (2/5)
Comparison (0/20) · open recommendation (0/5)
AI says subscription play kits · age-appropriate toys · developmental milestones · expert-designed
MVI 8 · 95% CI 4–15 · Spread 30pp · ChatGPT: 0/20 prompts (0%) · Claude: 2/20 prompts (10%) · Gemini: 6/20 prompts (30%) · Grok: 0/20 prompts (0%) · Perplexity: 0/20 prompts (0%) · Not yet cited
7
Discovery (4/30) · top-brands lists (2/5)
AI says interactive learning toys · infants and toddlers · educational focus · durable design
MVI 8 · 95% CI 4–15 · Spread 30pp · ChatGPT: 6/20 prompts (30%) · Claude: 0/20 prompts (0%) · Gemini: 0/20 prompts (0%) · Grok: 0/20 prompts (0%) · Perplexity: 2/20 prompts (10%) · Not yet cited
8
Comparison (3/20) · head-to-head comparison (3/5)
Evaluation (0/20) · open recommendation (0/5)
AI says pop culture collectibles · appeals to older kids · teens and adults · vast array of figures
MVI 7 · 95% CI 3–14 · Spread 15pp · ChatGPT: 1/20 prompts (5%) · Claude: 3/20 prompts (15%) · Gemini: 3/20 prompts (15%) · Grok: 0/20 prompts (0%) · Perplexity: 0/20 prompts (0%) · Not yet cited
9
Discovery (4/30) · current-year picks (3/5)
Filtered Discovery (0/30) · top-brands lists (0/5)
AI says collectible plush · soft and squishy · variety of characters · popular with kids
MVI 7 · 95% CI 3–13 · Spread 10pp · ChatGPT: 1/20 prompts (5%) · Claude: 2/20 prompts (10%) · Gemini: 1/20 prompts (5%) · Grok: 0/20 prompts (0%) · Perplexity: 2/20 prompts (10%) · Not yet cited
10
Discovery (4/30) · top-brands lists (3/5)
Filtered Discovery (0/30) · open recommendation (0/5)
AI says educational toys · iconic brands · licensed character toys · multinational leader
MVI 6 · 95% CI 2–12 · Spread 10pp · ChatGPT: 1/20 prompts (5%) · Claude: 1/20 prompts (5%) · Gemini: 1/20 prompts (5%) · Grok: 0/20 prompts (0%) · Perplexity: 2/20 prompts (10%) · Not yet cited
11
Discovery (2/30) · top-brands lists (1/5)
AI says educational toys · interactive learning · early childhood focus · problem-solving skills
MVI 5 · 95% CI 2–12 · Spread 15pp · ChatGPT: 1/20 prompts (5%) · Claude: 0/20 prompts (0%) · Gemini: 3/20 prompts (15%) · Grok: 0/20 prompts (0%) · Perplexity: 1/20 prompts (5%) · Not yet cited
12
Filtered Discovery (3/30) · filtered use case specific (3/5)
Evaluation (0/20) · open recommendation (0/5)
AI says creative play · fine motor skills · tactile learning · artistic expression
MVI 5 · 95% CI 2–12 · Spread 10pp · ChatGPT: 2/20 prompts (10%) · Claude: 1/20 prompts (5%) · Gemini: 1/20 prompts (5%) · Grok: 0/20 prompts (0%) · Perplexity: 0/20 prompts (0%) · Not yet cited
13
Discovery (4/30) · open recommendation (1/5)
Comparison (0/20)
AI says educational toys · interactive play · durable audio players · screen-free engagement
MVI 5 · 95% CI 2–12 · Spread 25pp · ChatGPT: 0/20 prompts (0%) · Claude: 0/20 prompts (0%) · Gemini: 0/20 prompts (0%) · Grok: 0/20 prompts (0%) · Perplexity: 5/20 prompts (25%) · Not yet cited
14
Filtered Discovery (2/30) · filtered use case specific (2/5)
AI says art supplies · creative expression · educational toys · multi-use
MVI 4 · 95% CI 2–11 · Spread 15pp · ChatGPT: 3/20 prompts (15%) · Claude: 1/20 prompts (5%) · Gemini: 0/20 prompts (0%) · Grok: 0/20 prompts (0%) · Perplexity: 0/20 prompts (0%) · Not yet cited
15
Comparison (1/20) · top-brands lists (1/5)
MVI 2 · 95% CI 1–7 · Spread 5pp · ChatGPT: 1/20 prompts (5%) · Claude: 0/20 prompts (0%) · Gemini: 1/20 prompts (5%) · Grok: 0/20 prompts (0%) · Perplexity: 0/20 prompts (0%) · Not yet cited
16
Comparison (1/20) · comparison attribute specific (1/5)
MVI 2 · 95% CI 0–6 · Spread 5pp · ChatGPT: 1/20 prompts (5%) · Claude: 0/20 prompts (0%) · Gemini: 0/20 prompts (0%) · Grok: 0/20 prompts (0%) · Perplexity: 0/20 prompts (0%) · Not yet cited
17
Evaluation (1/20) · filtered persona beginner (1/5)
MVI 2 · 95% CI 1–7 · Spread 10pp · ChatGPT: 0/20 prompts (0%) · Claude: 0/20 prompts (0%) · Gemini: 2/20 prompts (10%) · Grok: 0/20 prompts (0%) · Perplexity: 0/20 prompts (0%) · Not yet cited
18
Evaluation (0/20)
MVI 1 · 95% CI 0–5 · Spread 0pp · ChatGPT: 0/20 prompts (0%) · Claude: 0/20 prompts (0%) · Gemini: 0/20 prompts (0%) · Grok: 0/20 prompts (0%) · Perplexity: 0/20 prompts (0%) · Not yet cited
19
Evaluation (1/20) · evaluation decision criteria (1/5)
MVI 1 · 95% CI 0–5 · Spread 5pp · ChatGPT: 1/20 prompts (5%) · Claude: 0/20 prompts (0%) · Gemini: 0/20 prompts (0%) · Grok: 0/20 prompts (0%) · Perplexity: 0/20 prompts (0%) · Not yet cited
20
MVI 0 · 95% CI 0–4 · Spread 0pp · ChatGPT: 0/20 prompts (0%) · Claude: 0/20 prompts (0%) · Gemini: 0/20 prompts (0%) · Grok: 0/20 prompts (0%) · Perplexity: 0/20 prompts (0%) · Not yet cited
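The Spread column in the table above is just best-engine minus worst-engine citation rate, in percentage points; a minimal sketch, checked against row 1's per-engine rates:

```python
def engine_spread(per_engine_rates):
    # per_engine_rates: citation rate (%) on each of the 5 engines.
    return max(per_engine_rates) - min(per_engine_rates)

# Row 1: ChatGPT 75, Perplexity 35, Gemini 55, Claude 80, Grok 0
row1 = [75, 35, 55, 80, 0]
print(f"{engine_spread(row1)}pp")  # 80pp
```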

Strategic insights for toys

Five derived metrics computed from the same data, surfacing how this segment behaves on AI search. See the State of AI Search for cross-segment comparison.

Engine agreement

0.33

High disagreement

Effective brands

7.4 / 20

Moderately concentrated · top 2 take 44%

Top demand-leak brand

Fisher-Price

+18pp Discovery vs Evaluation

Top mention-only brand

Play-Doh

43% of visibility is mention-only

Kingmaker engine by funnel phase

discovery

Claude

83pp spread

filtered

ChatGPT

83pp spread

comparison

ChatGPT

75pp spread

evaluation

ChatGPT

100pp spread

For each phase, the engine where the gap between most-cited and least-cited brand is widest, i.e. where positioning matters most. Win that engine, win that phase.
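Two of the five derived metrics are easy to sketch. Demand-leak is Discovery citation rate minus Evaluation citation rate (Fisher-Price: 33 - 15 = +18pp, matching the card above). "Effective brands" behaves like an inverse-HHI count over citation shares; that formula is our assumption, not a published one, and the counts in the second example are illustrative:

```python
def demand_leak(discovery_rate, evaluation_rate):
    # Positive = visible in Discovery but dropped at Evaluation (in pp).
    return discovery_rate - evaluation_rate

def effective_brands(citation_counts):
    # Inverse-HHI: 1 / sum of squared citation shares. (Assumed formula.)
    total = sum(citation_counts)
    shares = [c / total for c in citation_counts]
    return 1 / sum(s * s for s in shares)

print(demand_leak(33, 15))  # 18 (Fisher-Price, +18pp)
# Illustrative: equal citations across 4 brands -> 4.0 effective brands
print(effective_brands([10, 10, 10, 10]))
```

The metric is useful because it discounts long-tail brands: a segment where one brand takes most citations scores close to 1 effective brand no matter how many names appear.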

Brands cited most across the category

Aggregated across every (brand × prompt × engine) combination tested. The most-cited brands here are the names AI consistently surfaces when buyers ask about toys.

LEGO ×977 · Melissa & Doug ×797 · Magna-Tiles ×524 · Osmo ×361 · Fisher-Price ×264 · Hape ×213 · VTech ×212 · Lovevery ×187 · Snap Circuits ×186 · Fat Brain Toys ×143 · Green Toys ×117 · Tonies ×112 · HABA ×107 · Jellycat ×101 · Squishmallows ×96

Emerging brands AI is citing in toys

Brand names AI engines surfaced for toys prompts that are not currently on the mapou tracked panel. Ranked by mention count and engine breadth. These are panel candidates: brands AI considers part of the category even though we are not yet measuring them.

Brand · Mentions · Engines · Slots
Osmo · 361 mentions · 4 of 5 engines · 18 of 20 slots

ChatGPT response, discovery open prompt

...ty Toys**: Expect to see more AR-based toys. Companies like Osmo are pioneering fun and educational interactive play that blends physical and digital experiences. 5. **Creative Play...

Hape · 213 mentions · 4 of 5 engines · 16 of 20 slots

ChatGPT response, discovery recommendation request prompt

...creativity, fine motor skills, and problem-solving. 2. **Shape Sorters**: The **Melissa & Doug Shape Sorting Cube** is a classic choice. It helps toddlers learn about shapes and...

Snap Circuits · 186 mentions · 4 of 5 engines · 17 of 20 slots

Claude response, discovery popular prompt

...*Bee-Bot** lead in coding education for younger children. **Snap Circuits** remains popular for electronics learning among older kids. ## Classic Reimagined **Melissa & Doug** maintains...

Fat Brain Toys · 143 mentions · 4 of 5 engines · 14 of 20 slots

Perplexity response, discovery top brands prompt

...y brands for kids and infants in 2026 include Fisher-Price, Fat Brain Toys, VTech, Lamaze, Manhattan Toy, Tonies, and Goliath.**[1][2][5][8] These brands dominate based on 2026 Toy Fair...

Green Toys · 117 mentions · 4 of 5 engines · 13 of 20 slots

ChatGPT response, discovery open prompt

...e Toys**: Eco-friendly options are on the rise. Brands like Green Toys and PlanToys focus on sustainability, creating toys from recycled or natural materials. 4. **Augmented Reality...

HABA · 107 mentions · 2 of 5 engines · 9 of 20 slots

Perplexity response, discovery recommendation request prompt

...able from 18 months, grows with child[1]. - **Magnatiles or HABA Magnetic Building Blocks**: Promote problem-solving and creativity; perfect for open-ended construction[1][3]. -...

Jellycat · 101 mentions · 4 of 5 engines · 7 of 20 slots

Gemini response, discovery current year prompt

...stainable materials and personalized options. Brands like **Jellycat** will continue with their unique, super-soft designs, and custom plushies based on children's drawings could become...

Manhattan Toy · 93 mentions · 4 of 5 engines · 7 of 20 slots

Perplexity response, discovery top brands prompt

...n 2026 include Fisher-Price, Fat Brain Toys, VTech, Lamaze, Manhattan Toy, Tonies, and Goliath.**[1][2][5][8] These brands dominate based on 2026 Toy Fair picks, awards finalists, and expert...

K'NEX · 89 mentions · 4 of 5 engines · 9 of 20 slots

ChatGPT response, discovery open prompt

...rent trajectories. 1. **STEM Toys**: Brands like LEGO and K'NEX are continually innovating with sets that promote coding, engineering, and robotics. The LEGO Boost set or K'NEX...

KiwiCo · 86 mentions · 4 of 5 engines · 12 of 20 slots

ChatGPT response, discovery popular prompt

...ocusing on early learning skills and interactive fun. 5. **KiwiCo**: Known for its subscription boxes, KiwiCo delivers monthly projects that inspire creativity and critical thinking...

PlanToys · 83 mentions · 4 of 5 engines · 9 of 20 slots

ChatGPT response, discovery open prompt

Brands like Green Toys and PlanToys focus on sustainability, creating toys from recycled or natural materials. 4. **Augmented Reality Toys**: Expect to...

ThinkFun · 68 mentions · 3 of 5 engines · 9 of 20 slots

ChatGPT response, discovery popular prompt

...nking in kids of all ages, focusing on STEM concepts. 6. **ThinkFun**: This brand specializes in games and puzzles that encourage logical thinking and problem-solving, making learning...

Tegu · 57 mentions · 4 of 5 engines · 10 of 20 slots

ChatGPT response, filtered values driven prompt

...ducational toys that inspire creativity and learning. 4. **Tegu** - Tegu blocks are made from sustainably sourced hardwood and are designed to encourage open-ended play. They also...

Learning Resources · 55 mentions · 4 of 5 engines · 13 of 20 slots

Gemini response, filtered budget low prompt

For elementary school-aged kids, **Learning Resources** has some winners. Their **Counting Cans** (around $20) are perfect for early math, and their **Gears! Gears! Gears!...

Thames & Kosmos · 55 mentions · 3 of 5 engines · 13 of 20 slots

Gemini response, comparison premium vs value prompt

...offer a quick lesson, premium STEM toys from brands like **Thames & Kosmos** or **KiwiCo** often provide deeper learning opportunities, encouraging critical thinking, problem-solving, and a...

Grimm's · 54 mentions · 3 of 5 engines · 10 of 20 slots

Claude response, filtered persona pro prompt

...cks** - Develop spatial reasoning and fine motor skills - **Grimm's Wooden Blocks** - Open-ended building promotes creativity and problem-solving - **Montessori-inspired puzzles** -...

Sphero · 51 mentions · 3 of 5 engines · 6 of 20 slots

Claude response, discovery open prompt

...ding skills and creativity - **Coding robots** like Dash or Sphero – Making programming fun and accessible - **Marble runs and engineering kits** – Great for problem-solving ## Active...

Gund · 49 mentions · 2 of 5 engines · 4 of 20 slots

ChatGPT response, discovery current year prompt

...e designs and sizes, making them perfect for cuddling. 2. **Gund Baby Animated Flappy the Elephant** - This interactive plush sings and flaps its ears, captivating infants and...

PicassoTiles · 44 mentions · 3 of 5 engines · 10 of 20 slots

Gemini response, discovery recommendation request prompt

...kills as your toddler builds all sorts of structures. The **PicassoTiles** brand offers a more budget-friendly alternative that's still very durable. For language and imaginative play,...

Mega Bloks · 39 mentions · 4 of 5 engines · 6 of 20 slots

ChatGPT response, discovery recommendation request prompt

...few recommendations: 1. **Building Blocks**: Brands like **Mega Bloks** and **Duplo** offer larger blocks that are perfect for little hands. They encourage creativity, fine motor skills,...

Method. Aggregated across the canonical run for toys. For every (panel brand × prompt × engine) we record the brand names the analyzer extracted (capped at 6 per response), then drop names that match the tracked panel or its aliases, plus a denylist of generic category terms. Threshold to qualify: at least 3 mentions across at least 2 of 5 engines. Click any row to see the AI quote that surfaced the brand. Some entries may be tracked elsewhere on mapou but not in this segment, in which case AI considers them cross-category competitors. Reviewed monthly to inform panel additions.

How toys rankings shift by buyer persona

The MVI score is calibrated to a generic shopping-assistant prompt. But buyers don't arrive generically. We re-ran the same 20 canonical prompts five more times, each with a different buyer-persona signal in the system prompt: budget-conscious, premium, working professional, first-time, values-driven. Top-3 overlap with baseline: 80%. Leader holds across all personas: no. LEGO loses the #1 spot to a different brand under at least one persona.

Brand | Baseline | Budget | Premium | Pro | First-time | Values
LEGO | 80% | 60% | 85% | 75% | 85% | 35%
Melissa & Doug | 70% | 80% | 70% | 70% | 80% | 50%
VTech | 45% | 35% | 10% | 35% | 40% | 0%
Fisher-Price | 20% | 20% | 15% | 35% | 25% | 0%
Crayola | 15% | 15% | 5% | 5% | 10% | 0%

Each cell is the citation rate (out of 20 canonical prompts) for that brand under that persona, ChatGPT only. Cells are tinted green when a brand gains 5+ percentage points vs baseline, orange when it loses 5+. Strong tints flag a 20+ percentage-point swing. Top 5 baseline brands shown; full per-persona data is in data/research/persona-robustness/2026-05-07-1625/. The full methodology is on the State of AI Search page.

The 20-prompt taxonomy

Every brand in this report is tested against the same 20 canonical prompts, spanning the four MVI dimensions (Discovery, Filtered Discovery, Comparison, Evaluation). The prompt set is fixed at methodology v1.0 and reused every monthly run, so MVI deltas are paired comparisons, not noise.

The exact prompt templates and phase-weighting formula are part of mapou's proprietary methodology, shared with paying clients alongside custom benchmarks for their specific brand.

See the framework →

Methodology v1.0. MVI is mapou's proprietary 0-100 visibility score across 5 AI engines and 4 buyer-intent dimensions. 95% Wilson confidence intervals. Equal engine weighting. See the framework →

Run yours

Want to see your brand on this leaderboard? Run a free visibility check on your own brand. We'll show you exactly which prompts you're missing and which engines are losing you the most ground.