There is a version of this post that opens with a dramatic declaration about the death of SEO as we know it. This is not that post.
Agentic search is real. AI Overviews, Perplexity, ChatGPT with browsing — these are not experimental features any more. They are where a growing number of queries go to live, and they do change some things. But the changes that matter are mostly at the edges of what SEOs should already be doing, not the core. The fundamentals have not moved as much as the LinkedIn posts would have you believe.
Here is an honest breakdown.
What has actually changed
New crawlers, new robots.txt decisions
GPTBot, PerplexityBot, ClaudeBot, anthropic-ai — these crawlers are now visiting your site. (For a broader look at how robots.txt decisions go wrong at scale, see Robots.txt Mistakes.) If you have a blanket Disallow: / under a wildcard User-agent: *, you may be blocking them without realising it. Whether you want to block them is a legitimate business decision. But it should be a deliberate one, not an accidental byproduct of a robots.txt rule someone wrote in 2018.
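To make the decision explicit rather than inherited, spell out a rule per crawler. A sketch (the user-agent tokens are the ones these companies document; the allow and disallow choices below are placeholders for whatever your business actually decides):

```
# Deliberate per-crawler decisions, not a leftover wildcard rule
User-agent: GPTBot
Disallow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: anthropic-ai
Disallow: /

# Everyone else, including traditional search crawlers
User-agent: *
Allow: /
```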
The practical implication: audit your robots.txt. Know which AI crawlers are hitting your site. Decide whether you want to be included in their training data or retrieval pools.
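If you are not sure which of these crawlers are already visiting, your access logs will tell you. A minimal sketch in Python, assuming a standard access log at a path you would need to adjust for your own setup:

```python
from collections import Counter

# Substrings of documented AI crawler user agents; extend as new ones appear
AI_BOTS = ["GPTBot", "PerplexityBot", "ClaudeBot", "anthropic-ai"]

counts = Counter()
with open("/var/log/nginx/access.log") as log:  # assumed path
    for line in log:
        for bot in AI_BOTS:
            if bot in line:
                counts[bot] += 1
                break

for bot, hits in counts.most_common():
    print(f"{bot}: {hits} requests")
```

A spike from a bot you thought you had blocked is usually the robots.txt accident described above.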
Brand visibility in AI answers is not the same as ranking
You can rank on page one and not appear in a single AI-generated answer. You can have strong authority on a topic and be completely absent from Perplexity's citations. The mechanisms are different. AI systems select sources they can summarise confidently and that have strong topical signals — not necessarily the pages Google ranks first.
This is new territory. It does not replace traditional SEO work, but it adds a layer: are you easy to cite? Is your content structured in a way that a language model can extract a clear, citable answer from?
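What "easy to cite" means in practice is still settling, but one pattern that holds up: a specific question as the heading, answered directly in the first sentence. A hypothetical sketch (the content here is invented for illustration):

```html
<!-- Harder to cite: vague heading, hedged opening -->
<h2>Some thoughts on page speed</h2>
<p>There are many ways to think about this, and it depends on
your situation...</p>

<!-- Easier to cite: specific question, direct answer first, detail after -->
<h2>How fast should a product page load?</h2>
<p>Aim for under two seconds. Slower pages convert worse, and an answer
engine cannot quote a page that never commits to a number.</p>
```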
Zero-click is more prevalent, not new
AI Overviews answer queries directly in the SERP. This increases zero-click behaviour, particularly for informational queries. This is an acceleration of something that has been happening for years with featured snippets — not a category shift.
The response is the same as it has always been: if zero-click queries are a significant part of your organic traffic, your strategy needs to account for that. Either optimise to be the cited source, or redirect effort toward queries where the click still happens.
Clarity matters more
Vague content, thin content, and pages that say three contradictory things at once have always been a problem. The problem is bigger now. Language models struggle to summarise content that lacks a clear position. If your pages hedge everything, qualify every claim to the point of meaninglessness, and avoid saying anything direct, they will be skipped.
This is not a new principle. It is an existing one with higher stakes.
The SEOs who discover this earliest are usually the ones who notice their informational content has dropped in AI Overviews while their transactional pages are fine. Clean, specific, direct content gets cited. Content that covers all angles without committing to any of them doesn't.
What has not changed
Crawlability and indexability are still the entry requirements
AI Overviews is built on top of Google's existing index. If Googlebot cannot crawl and index your pages, you will not appear in AI Overviews. Full stop. The same technical prerequisites that have always mattered — clean canonicals, correct hreflang, no crawl blocks on important content, functional internal linking — are the foundation of AI search visibility too.
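A crude spot-check for the most common self-inflicted wound, a stray noindex, takes a few lines. This is a sketch, not an audit; the URL is a placeholder, and a real check belongs in a crawler and Search Console:

```python
# Spot-check one URL for noindex signals (illustrative only)
import urllib.request

url = "https://example.com/important-page"  # placeholder
req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(req) as resp:
    x_robots = resp.headers.get("X-Robots-Tag", "") or ""
    body = resp.read().decode("utf-8", errors="replace").lower()

# Naive string match; a real crawler parses the HTML properly
if "noindex" in x_robots.lower() or 'name="robots" content="noindex"' in body:
    print("noindex signal found: this page cannot surface in AI Overviews")
else:
    print("no obvious noindex signal")
```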
Structured data still works
Structured data still works for what it has always worked for: helping Google understand your content and qualify your pages for rich results. Whether LLMs use schema markup directly during inference is less clear — the honest answer is we don't know yet. What we do know is that well-structured content is easier to parse and summarise, with or without explicit markup.
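For concreteness, the kind of markup this refers to: a minimal JSON-LD Article block (the values are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Agentic search and SEO: what actually changed",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2025-01-15"
}
</script>
```

Whether a retrieval pipeline reads this is the open question; that Google's rich result systems do is not.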
Authority signals still matter
Large language models are trained on web data. The sources they weight most are sources that have demonstrated expertise and earned citations over time. Backlinks, topical authority, E-E-A-T signals — these are the same signals that have always defined authority. LLMs did not invent a new definition of credibility.
Technical architecture still matters
Site migrations, hreflang at scale, crawl efficiency, parameter handling — none of this has become less important. If anything, a technically clean site is a better signal to AI systems trying to understand what your domain is actually about.
Content quality has always been the filter
The SEOs who will struggle most with agentic search are the same SEOs who were already struggling with thin content and poor information architecture. The ones who have been building genuinely useful, well-structured content for specific audiences are, for the most part, fine.
The short version
If your site is technically clean, your content is clear and specific, your structured data is implemented correctly, and your authority is real — agentic search is not the crisis it is being sold as.
If your strategy has been built on quantity over quality, on thin content at scale, or on technical tricks rather than genuine expertise — agentic search is accelerating problems that were already there.
The fundamentals did not change. The margin for ignoring them got smaller.