AEO • commercial intent
Why ChatGPT Doesn't Mention Your Brand (And How to Fix It)
A diagnostic guide for brands that are invisible in ChatGPT, Claude, and Gemini responses. Covers the 7 most common reasons AI models ignore your brand, with specific fixes for each one.
The invisible brand problem
You search for your product category in ChatGPT. Your competitors show up. You don't. You ask Claude to recommend tools in your space. It lists five options. Yours isn't one of them. You try Perplexity. Same result.
This isn't random. AI models don't forget brands arbitrarily; they systematically deprioritize sites that fail specific technical and content criteria. The good news: the reasons are diagnosable, and most are fixable.
Before spending money on 'AI SEO' services, understand why you're invisible. The cause determines the fix, and most causes don't require expensive solutions.
Reason 1: Your robots.txt blocks AI crawlers
This is the single most common reason for AI invisibility, and the simplest to fix. If your robots.txt file contains 'Disallow: /' under 'User-agent: *', or specifically blocks GPTBot, ClaudeBot, or PerplexityBot, AI crawlers can't read your site.
Many sites added these blocks during the 2023-2024 wave of AI crawler opt-out discussions without understanding the trade-off. Blocking AI crawlers doesn't prevent AI models from mentioning you (they have training data from before the block), but it prevents them from updating their knowledge of your site and deprioritizes you in retrieval-based systems like Perplexity.
Check: Visit yourdomain.com/robots.txt and look for any rules that block bots broadly or AI crawlers specifically. Fix: Remove blocks for GPTBot, ClaudeBot, PerplexityBot, and Google-Extended. If you're using a CMS plugin that manages robots.txt, make changes through the plugin's interface so they persist across updates.
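As a quick sanity check, Python's standard-library `urllib.robotparser` can tell you which of these crawlers a given robots.txt blocks. A minimal sketch (the crawler names are the documented user-agent tokens; paste your own robots.txt content into `blocked_crawlers`):

```python
from urllib.robotparser import RobotFileParser

# The AI crawler user-agent tokens discussed above.
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def blocked_crawlers(robots_txt: str, path: str = "/") -> list[str]:
    """Return the AI crawlers this robots.txt blocks for `path`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [ua for ua in AI_CRAWLERS if not parser.can_fetch(ua, path)]

# A blanket 'Disallow: /' blocks every crawler, AI or otherwise:
print(blocked_crawlers("User-agent: *\nDisallow: /\n"))
# → ['GPTBot', 'ClaudeBot', 'PerplexityBot', 'Google-Extended']
```

Run it against your live file by fetching yourdomain.com/robots.txt and passing the text in; an empty list means none of the four crawlers are blocked for that path.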
Reason 2: Your CDN or WAF blocks bot traffic
Even if robots.txt is clean, your infrastructure might block AI crawlers before they reach your content. Cloudflare's Bot Fight Mode, AWS WAF rules, and similar security configurations often challenge or block requests from unfamiliar user agents.
AI crawlers don't solve CAPTCHAs. If your CDN presents a JavaScript challenge or CAPTCHA to PerplexityBot, it fails and moves on. Your site becomes invisible to that platform.
Check: Use a command-line tool to fetch your homepage with each AI crawler's user agent string and verify you get a 200 response with actual content. Fix: Whitelist known AI crawler user agents in your CDN or WAF settings. Cloudflare users should add firewall rules that allow GPTBot, ClaudeBot, and PerplexityBot to pass without challenge.
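The check above can be scripted with nothing but the standard library. A minimal sketch (yourdomain.com is a placeholder, and the bare crawler names stand in for each vendor's full user-agent string; check vendor docs for the exact current values):

```python
import urllib.request

# Crawler names as they appear in User-Agent headers; consult each vendor's
# documentation for the full current strings before whitelisting in a CDN/WAF.
AI_AGENTS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

def build_request(url: str, agent: str) -> urllib.request.Request:
    """Build a GET request that identifies as the given crawler."""
    return urllib.request.Request(url, headers={"User-Agent": agent})

def check_access(url: str, agent: str) -> tuple[int, int]:
    """Fetch `url` as `agent`; return (HTTP status, body size in bytes).

    A 403/503, or a tiny body that is really a JavaScript challenge page,
    means the crawler is blocked before it reaches your content.
    """
    with urllib.request.urlopen(build_request(url, agent), timeout=10) as resp:
        return resp.status, len(resp.read())

# Usage (hits the live site, so point it at your own domain):
# for agent in AI_AGENTS:
#     print(agent, check_access("https://yourdomain.com/", agent))
```

If any agent returns a non-200 status, or a body far smaller than what a normal browser receives, your CDN or WAF is intercepting that crawler.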
Reason 3: Your content is marketing, not information
AI models don't cite marketing copy. They cite facts. When your homepage says 'the world's most powerful platform for modern teams,' ChatGPT has nothing useful to extract. When your competitor's homepage says 'a project management tool with Gantt charts, time tracking, and resource allocation, starting at $12/user/month,' ChatGPT can quote that in a comparison.
This is the hardest reason to accept because the marketing copy that converts human visitors is exactly the content that AI models ignore. AI models need: what your product does (in concrete terms), who it's for (specific use cases, not 'everyone'), what it costs (actual numbers), and how it compares to alternatives (honest positioning).
Check: Read your homepage and top landing pages. Count the factual statements (specific features, prices, numbers, comparisons) versus the promotional statements (superlatives, vague benefits, emotional appeals). If the ratio is below 50% factual, AI models probably can't extract useful information. Fix: Add a section to your key pages with specific, quotable facts. This doesn't mean removing your marketing copy — it means supplementing it with machine-extractable information.
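One rough way to approximate this audit is to count the share of sentences carrying a concrete fact signal, such as a number or price. The sketch below is a crude heuristic to flag pages worth a closer read, not a substitute for reading the copy:

```python
import re

def factual_ratio(text: str) -> float:
    """Rough share of sentences containing a concrete fact signal
    (a digit: a price, a count, a date). A heuristic, not a real audit."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    factual = sum(1 for s in sentences if re.search(r"\d", s))
    return factual / len(sentences)

homepage_copy = ("The world's most powerful platform for modern teams. "
                 "Gantt charts, time tracking, and resource allocation. "
                 "Plans start at $12 per user per month.")
print(round(factual_ratio(homepage_copy), 2))  # 1 of 3 sentences → 0.33
```

A score well below 0.5 on a product page suggests there is little for an AI model to extract and quote.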
Reason 4: No structured data for AI to parse
When AI models crawl your site, JSON-LD structured data gives them a clean, unambiguous source of facts. Without it, they have to guess from your HTML — and guessing is unreliable.
A product page with Product schema that includes name, price, availability, and reviews gives AI models everything they need to recommend your product confidently. Without schema, the AI has to parse the page visually, which often fails when pages have dynamic pricing, multiple variants, or prices hidden behind interaction elements.
Check: View your page source and search for 'application/ld+json'. If you find nothing, or if the schema is incomplete (missing price, missing reviews, missing description), your structured data needs work. Fix: Add Organization schema to your homepage, Product or SoftwareApplication schema to product pages, and FAQPage schema to any page with Q&A content.
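A minimal Product schema along these lines might look like the following (all names and values are placeholders to adapt to your own product):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "ExampleApp",
  "description": "Project management tool with Gantt charts, time tracking, and resource allocation.",
  "offers": {
    "@type": "Offer",
    "price": "12.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "128"
  }
}
</script>
```

Validate the result with a schema testing tool before shipping; incomplete or malformed JSON-LD is little better than none.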
Reason 5: Your brand lacks independent mentions
Training-based AI models (ChatGPT, Claude, Gemini) build their knowledge from the broader web, not just your own site. If your brand is only mentioned on your own website and nowhere else, AI models don't have enough signal to include you in recommendations.
This is analogous to backlinks in SEO, but broader. AI models learn about brands from: review sites (G2, Capterra, TrustPilot), industry publications, comparison articles by third parties, social media discussions, forum mentions (Reddit, HN, Stack Overflow), and directory listings.
Check: Search your brand name in quotes on Google. If the results are almost entirely your own site, you have a mention gap. Fix: This isn't a quick fix — it requires building genuine presence through PR, guest contributions, partnership announcements, and encouraging customer reviews on third-party platforms. Focus on 2-3 high-authority platforms relevant to your industry.
Reason 6: You have no llms.txt file
llms.txt is an emerging standard that gives AI models a curated entry point to your site — a Markdown file at yourdomain.com/llms.txt that describes what your site is, what's most important, and where to find it.
Without llms.txt, AI crawlers have to discover your site structure by following links from your homepage. They might miss important pages, crawl low-priority pages, or get stuck in pagination. With llms.txt, you control the narrative: 'here's what we do, here are our best products, start here.'
Check: Visit yourdomain.com/llms.txt. If it 404s, you don't have one. Fix: Create a Markdown file with your brand name as H1, a one-paragraph description, and links to your 10-20 most important pages organized by category (products, docs, policies). Serve it at /llms.txt and also at /.well-known/llms.txt as a fallback location.
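A minimal llms.txt following that structure might look like the sketch below (brand name, URLs, and descriptions are all placeholders; the layout follows the emerging llms.txt convention of an H1 name, a blockquote summary, and H2 link sections):

```markdown
# ExampleApp

> Project management tool with Gantt charts, time tracking, and resource
> allocation for small engineering teams. Plans start at $12/user/month.

## Products
- [ExampleApp Cloud](https://yourdomain.com/cloud): hosted plan with all features
- [Pricing](https://yourdomain.com/pricing): current plans and per-seat costs

## Docs
- [Getting started](https://yourdomain.com/docs/start): setup in under 10 minutes

## Policies
- [Security](https://yourdomain.com/security): compliance and data handling
```

Keep the descriptions factual and short; the file is a curated index for machines, not a landing page.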
Reason 7: Your site is too new or too small
Training-based AI models have knowledge cutoff dates. If your brand launched after the model's training data was collected, it literally doesn't know you exist. ChatGPT's training data has a cutoff, and brands that emerged after that date won't appear in its base knowledge.
This is less of an issue for retrieval-based systems (Perplexity, ChatGPT Browse) because they search the live web. But for direct ChatGPT or Claude queries without web browsing, a new brand is invisible by definition.
Fix: For retrieval-based platforms, ensure your site is crawlable and content-rich (fixes 1-6 above). For training-based models, the only fix is time — your brand will be included in future training data if it has sufficient web presence. In the meantime, focus on building the independent mentions (Reason 5) that will ensure you're included when the next training cycle runs.
Execution Checklist
- Check robots.txt for blocks on GPTBot, ClaudeBot, PerplexityBot, and broad bot blocks.
- Test CDN/WAF access by fetching your site with AI crawler user agent strings.
- Audit your key pages: count factual vs. promotional statements. Aim for 50%+ factual.
- View page source and verify JSON-LD structured data exists with complete fields.
- Search your brand name in quotes on Google and check for independent third-party mentions.
- Create and publish llms.txt at your domain root with your top 10-20 pages.
- Run a free AI visibility scan to identify all technical blockers at once.
FAQ
How long does it take to start appearing in ChatGPT after fixing these issues?
It depends on which issue you're fixing. Retrieval-based fixes (robots.txt, CDN access, llms.txt) can show results in Perplexity within days. For ChatGPT and Claude's base knowledge (no browsing), changes only take effect when the model is retrained, which can take weeks to months. The fastest path to AI visibility is Perplexity, followed by ChatGPT Browse, then base model training.
My competitors are smaller than me but show up in AI answers. Why?
Size doesn't determine AI visibility — content quality and technical access do. A smaller competitor with clean robots.txt, complete structured data, specific product descriptions, and an llms.txt file will outperform a larger brand that blocks crawlers, uses vague marketing copy, and has no structured data. AI models optimize for extractable information, not brand size.
Should I pay for an AI SEO service to fix this?
Most of the fixes described here are one-time technical changes that any web developer can implement. Editing robots.txt, adding structured data, and creating llms.txt don't require specialized services. Where external help is genuinely useful: ongoing SOMV (Share of Model Voice) monitoring across platforms, content strategy for AI-optimized content, and building third-party brand mentions through PR and partnerships.