The pages winning AI citations are not the most authoritative on the web; they are the most topically focused and written in natural language. Two new studies analyzing millions of queries, prompts, and web pages reach the same conclusion: when it comes to AI search, answering the exact question matters more than traditional authority signals.

These findings have significant implications for how executives should think about content strategy, AI search investment, and the growing class of vendors promising to “get you ranked on ChatGPT.”

AI Search Is Downstream Of Google Search

The most practically useful insight from these studies is that ChatGPT and other AI search providers are not parallel systems requiring a parallel strategy. As the research demonstrates, these platforms operate downstream of traditional search. Rank at the top of Google and you are almost certainly near the top of ChatGPT's retrieval list for that same query.

The AirOps study of 354,000 web pages makes this concrete:

  • Pages in the first position are cited 58% of the time
  • By position ten, that drops to 14%
  • Pages cited consistently across multiple query runs had a median retrieval rank of 2.5
  • Pages never cited had a median rank of 13

The Ahrefs study of 1.4 million ChatGPT prompts reinforces the point: 88% of all cited URLs come from the general web search index.

The research also shows that ChatGPT draws on sites like Reddit, Wikipedia, and other major publishers to understand topics and build context, then turns around and cites someone else. It learns from the crowd and credits the institution.

Answer The Question, Not The Topic

If a page is ranking, the deciding factor becomes how precisely its content matches the specific question being asked. The difference at this stage is simple: the short keyword queries the web has run on for years, such as “auto repair near me” or “med spa in Los Angeles,” are now submitted as natural-language questions carrying far more semantic information. For each query, AI answer engines generate sub-queries (a process called query fan-out) and look for pages that answer each one directly.
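As a rough illustration of the mechanic (not any vendor's actual pipeline), query fan-out can be sketched as expanding one natural-language query into narrower sub-queries, then scoring each page on how many of those sub-queries it answers directly. The hand-written expansions, example headings, and 50% word-overlap threshold below are all invented for illustration; real engines use LLM-driven expansion and learned relevance models.

```python
import re

# Toy sketch of "query fan-out". Real answer engines expand the user's
# question with an LLM; the expansions and threshold here are invented.

def tokens(text: str) -> set[str]:
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def fan_out(query: str) -> list[str]:
    """Hand-written stand-in for the LLM-driven expansion step."""
    topic = query.lower().rstrip("?")
    return [f"what is {topic}",
            f"best options for {topic}",
            f"cost of {topic}"]

def answers_directly(page_headings: list[str], sub_query: str) -> bool:
    """Naive proxy for 'a heading answers this sub-query directly':
    some heading shares at least half of the sub-query's words."""
    sub = tokens(sub_query)
    return any(len(sub & tokens(h)) / len(sub) >= 0.5 for h in page_headings)

def coverage(page_headings: list[str], query: str) -> float:
    """Fraction of fanned-out sub-queries this page answers."""
    subs = fan_out(query)
    return sum(answers_directly(page_headings, s) for s in subs) / len(subs)

focused = ["What Is a Med Spa in Los Angeles", "Cost of a Med Spa in Los Angeles"]
generic = ["Ultimate Guide to Wellness"]
print(coverage(focused, "med spa in Los Angeles"))  # 1.0
print(coverage(generic, "med spa in Los Angeles"))  # 0.0
```

The point of the sketch is structural: a page wins not by mentioning the topic but by directly answering each sub-query the engine generates.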

The Ahrefs data shows that cited pages had meaningfully higher semantic similarity to those internal sub-queries than non-cited pages. The AirOps data supports this as well: pages whose headings closely matched the user’s query were cited 41% of the time versus 30% for weaker matches, and lower still for pages with little or no semantic match to the user’s natural-language query.
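A toy version of the heading-to-query similarity the studies measured can make the gap concrete. The studies used embedding models; the bag-of-words cosine similarity below is a deliberate simplification, and the query and headings are invented examples.

```python
import math
import re
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity: a simplified stand-in for the
    embedding-based semantic similarity the studies actually measured."""
    va = Counter(re.findall(r"[a-z0-9]+", a.lower()))
    vb = Counter(re.findall(r"[a-z0-9]+", b.lower()))
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = math.sqrt(sum(v * v for v in va.values())) * \
           math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

query = "how much does laser hair removal cost in los angeles"
close_heading = "How Much Does Laser Hair Removal Cost in Los Angeles?"
loose_heading = "Our Complete Guide to Cosmetic Treatments"

# A heading phrased the way the question is asked scores near 1.0;
# a topically adjacent but generic heading scores near 0.
print(cosine_sim(query, close_heading))
print(cosine_sim(query, loose_heading))
```

Under any similarity measure, the heading that mirrors the question outscores the heading that merely gestures at the topic, which is the pattern behind the 41% versus 30% citation gap.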

Breadth Is Not As Valuable As Depth

Researchers assumed the domains with the highest authority would show up most often. Instead, publishers like Forbes, Reddit, TechRadar, Vogue, and Consumer Reports appeared less reliably than expected. The analysts believe that covering many topics across many verticals, while rarely owning any particular question, has harmed those publishers’ ability to be cited consistently. The always-cited pages belong to a different category: narrowly focused resources that surface for fewer queries but win consistently when they do.

  • Pages covering 26–50% of related subtopics outperformed pages covering 100% of them
  • Optimal article length for citations is 500 to 2,000 words; pages over 5,000 words underperform pages under 500
  • Health-focused publishers like Healthline and WebMD achieved citation rates above 46%

What this means is that the "ultimate guide" format, which tries to cover every angle, is less valuable in AI search. Spreading attention across too many questions correlated with less consistent citation. A focused page built around one question consistently outperformed comprehensive guides built around a topic. A focused site that covers its subject in depth from multiple angles becomes the aggregator of topical authority, which the research shows AI prefers.

Wikipedia represents the outer boundary of this principle. It achieves a 59.2% citation rate despite ranking at a median position of 24 in search results. Researchers attribute this to the density of its content: exhaustive documentation within each topic it covers. The AirOps researchers noted that this strategy is not replicable for most publishers, who would be better served by maintaining a narrow focus and building depth within that focus.

Own The Next Wave Of AI Search

Search has changed. Not just functionally, but behaviorally. Users arrive with full sentences, specific context, and follow-up questions—all in one chat. AI systems are built to match that. The difference is less about the technical nature of the site or page and more about how well you can respond to natural-language queries with authoritative answers. AI search rewards depth more consistently, and penalizes the "cover everything" approach more directly than traditional search ever did.

Ranking gets you cited today. Building topical depth gets you embedded in the model’s knowledge graph before the next search is even made.

Create content for natural-language queries. Match titles and URLs to how people actually ask questions. Build content that is the clearest possible answer to one specific question. Do that with sustained focus in a defined subject area and you are not just optimizing for today’s results, you are building topical authority that shapes future training data. The structured, topically authoritative content an organization publishes today becomes the automatic answer two and three years from now as models are trained on newer data.

The unwritten truth illuminated by this research is that SEO is not dead; it just got bigger.