For e-commerce SEOs with "agentic optimisation" on the roadmap and nothing in the brief beyond the buzzword (yet). And for the developers expected to build it anyway.
I recently wrote a strategic overview of digital shelf optimisation in agentic commerce for Intrepid Digital. It covers the market shift, the framework, and why this matters for e-commerce leadership. If you want the "what's happening and why should I care" version, start there.
This post is the practitioner companion. No frameworks. No strategic narratives. This is about what to actually check, fix, and add in your product data: schema markup, Merchant Center feeds, PDP content, and the alignment layer between them. That way, AI shopping agents can find, evaluate, and transact on your products.
Because the uncomfortable reality is this: most of what agents need from your product data is the same structured data you should already have in place for Google Shopping. The problem isn't that agentic commerce requires something fundamentally new. The problem is that most sites never got the existing layer right.
If you want the full breakdown of how AI shopping agents work, how the user journey changes, and why the traditional marketing funnel doesn't apply, that's covered in What Is Agentic Commerce? This post starts where that one ends: with your data.
The three layers agents evaluate — and where most sites break
Your product data reaches AI systems through three distinct layers. If you've worked on e-commerce SEO, these are familiar. What's different is that agents cross-reference all three simultaneously, and inconsistencies between them create a trust problem that didn't exist when a human was the evaluator.
| Layer | What it is | Who processes it | Common failure |
|---|---|---|---|
| PDP content | The visible product page: title, description, specs, images, reviews | Humans, crawlers, LLMs reading page content | Rich on story, thin on structured attributes |
| Product feed | GMC feed, ChatGPT product feed, marketplace feeds | Google Shopping, AI agent discovery systems | Truncated titles, missing GTINs, stale prices |
| Schema markup | JSON-LD structured data in the page `<head>` | Google rich results, AI agents evaluating on-page data | Hardcoded, missing variants, no shipping/returns |
When all three layers agree (the PDP says £89, the feed says £89, the schema says £89), there's no ambiguity. The agent has high confidence. When they disagree (the PDP says £89, the feed says £79 because of yesterday's sale price that didn't update, and the schema says £99 from hardcoding at launch that was never touched), the agent has three conflicting signals and no way to determine which is correct.
A human might land on the PDP and see the current price. The agent, evaluating from feed and schema data before it ever renders your page, may simply move on.
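That cross-referencing logic can be sketched in a few lines. This is a conceptual illustration of the trust problem, not any agent's actual scoring code; the function name and behaviour are my own assumptions:

```python
from decimal import Decimal

def distinct_prices(pdp: str, feed: str, schema: str) -> set:
    """Collect the distinct price values an agent sees across the three
    layers. One element means the signals agree; more than one means the
    agent has conflicting data and no way to pick the correct value."""
    return {Decimal(p) for p in (pdp, feed, schema)}

distinct_prices("89.00", "89.00", "89.00")  # one value: high confidence
distinct_prices("89.00", "79.00", "99.00")  # three values: conflict
```

Using `Decimal` rather than string comparison means `"89"`, `"89.0"`, and `"89.00"` count as agreement; the real-world equivalent is normalising prices before comparing layers.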
There's also a layer that sits across all three: trust signals. Reviews, ratings, verified purchase indicators, certifications, and seller reputation. Whether these are collected on your own site or aggregated from third-party platforms, agents treat them as confidence indicators when evaluating the reliability of your product data. A product with 2,000 verified reviews and a 4.6 rating sends a different signal than one with no reviews at all, even if the structured data is otherwise identical. Agents weigh these signals during the comparison stage, and they're available as structured data: `aggregateRating` and `review` in your on-page schema, review feeds in Merchant Center, and seller ratings across platforms. If your competitors have strong review signals and you don't, that's a competitive gap that clean data alone won't close.
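In on-page schema, those review signals live on the Product node. A minimal sketch (names and values are illustrative, not from a real catalogue):

```json
{
  "@context": "https://schema.org/",
  "@type": "Product",
  "name": "Example Product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": 4.6,
    "reviewCount": 2000
  },
  "review": [
    {
      "@type": "Review",
      "reviewRating": { "@type": "Rating", "ratingValue": 5 },
      "author": { "@type": "Person", "name": "Verified Buyer" },
      "reviewBody": "Exactly as described."
    }
  ]
}
```

As with price and availability, the rating and count here should come from the same source as the reviews displayed on the page, not a hardcoded value.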
Product schema audit for AI agents
If you've implemented Product schema on your e-commerce site, you're ahead of most. But "implemented" and "complete" are different things, and the gap between them is where agent visibility is lost.
Here's the audit I'd run. Check each property against your actual markup (not what your CMS documentation says it outputs, but what actually appears in the rendered HTML). Use the Rich Results Test or view source.
Product-level properties: the baseline
| Property | Agent impact | What I typically find |
|---|---|---|
| `name` | Primary identification. Must match the feed title closely. | Usually present, but often a marketing title rather than the descriptive title in the feed |
| `sku` | Feed-to-schema matching. Must match the feed `id` or use the same GTIN. | Missing in ~40% of implementations I audit |
| `gtin13` | Universal product identifier. How agents confirm product identity across sources. | Often missing entirely, or at parent level instead of variant level |
| `brand.name` | Brand filtering and constraint matching | Usually present, but sometimes set to the retailer name instead of the product brand |
| `description` | Semantic understanding. Agents parse this for attributes, use cases, compatibility. | Either missing or a duplicate of the meta description (too short to be useful) |
| `image` | Product representation. Multiple aspect ratios recommended. | Present, but often a single image URL rather than an array |
Offer-level properties: where money lives
| Property | Agent impact | What I typically find |
|---|---|---|
| `price` + `priceCurrency` | The constraint agents match most frequently. Must be the active, current price. | Usually present. Often stale if hardcoded. |
| `availability` | Agents filter out out-of-stock items. Must reflect real-time inventory. | Hardcoded to `InStock` on ~60% of sites, regardless of actual stock |
| `priceValidUntil` | Signals how current the price data is. Agents use this for confidence. | Almost always missing |
| `itemCondition` | New/refurbished/used filtering | Missing on most sites. Defaults to ambiguous. |
| `shippingDetails` | Delivery cost and speed. Increasingly a comparative signal for agents. | Missing on ~80% of e-commerce sites |
| `hasMerchantReturnPolicy` | Return conditions. Part of the trust evaluation. | Missing on ~90% of sites |
Look at that last column. Prices and names are usually there. But SKUs, GTINs, shipping, returns, and condition (the attributes that agents use to compare and rank) are consistently absent. This is the gap: not missing schema entirely, but missing the properties that move you from "technically present" to "competitively positioned."
If your schema hardcodes `InStock` for a product that's actually out of stock, you're not just misleading Google. You're creating a data integrity failure that agents will detect when they cross-reference your feed (which does reflect real inventory) against your markup (which doesn't). Mismatches like this erode trust at the domain level, not just the product level.
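The audit above can be partly automated. Here's a minimal sketch: the property lists mirror the two tables, and the function checks a Product node already parsed from a page's JSON-LD (via a library like extruct, or any HTML tag parser). The function name, property lists, and sample data are my own; this is an illustration, not a published agent specification:

```python
# Property lists mirror the audit tables above (illustrative, not a spec).
PRODUCT_PROPS = ["name", "sku", "description", "image", "brand"]
OFFER_PROPS = ["price", "priceCurrency", "availability", "priceValidUntil",
               "itemCondition", "shippingDetails", "hasMerchantReturnPolicy"]

def audit_product(node: dict) -> dict:
    """Report missing product- and offer-level properties in a parsed
    JSON-LD Product node."""
    missing = [p for p in PRODUCT_PROPS if p not in node]
    if not any(k in node for k in ("gtin", "gtin12", "gtin13", "gtin14")):
        missing.append("gtin")                    # any GTIN variant counts
    offers = node.get("offers") or {}
    if isinstance(offers, list):                  # some sites emit a list
        offers = offers[0] if offers else {}
    return {
        "product": missing,
        "offer": [p for p in OFFER_PROPS if p not in offers],
    }

# A typical "technically present" implementation: baseline fields only.
node = {
    "@type": "Product", "name": "Trail Shoe", "sku": "TS-9-BLU",
    "description": "Waterproof to 10,000mm, seam-sealed",
    "image": ["https://example.com/ts-9-blu.jpg"],
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {"@type": "Offer", "price": "99.00", "priceCurrency": "GBP",
               "availability": "https://schema.org/InStock"},
}
report = audit_product(node)
# report["offer"] lists exactly the comparison-stage properties that are
# missing: priceValidUntil, itemCondition, shippingDetails, returns policy
```

Run across your top SKUs, a report like this makes the "technically present vs competitively positioned" gap concrete and countable.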
Variants: the biggest opportunity gap
I wrote about this in detail in the Product vs ProductGroup post, and the Product Variants schema guide covers the full implementation. Here's why it matters specifically for agentic commerce.
When a user tells an agent "blue running shoes, size 9," the agent needs to find a specific variant (not a product family). If your schema only describes the parent product with an AggregateOffer price range, the agent can't confirm that the blue version exists, that it comes in size 9, or what it costs. It can tell the user you sell running shoes somewhere in the £89–£129 range. That's a weaker signal than a competitor whose schema says "Blue, Size 9, £99, InStock."
The ProductGroup + variant Product structure gives agents what they need: individual variants with their own SKU, GTIN, price, availability, colour, size, and image. Each variant is independently matchable against user constraints.
The critical rules haven't changed from the variants guide, but they're worth restating in this context because agent systems are less forgiving than Google's traditional Shopping pipeline:
- Every dimension in `variesBy` must appear as a property on every variant. If you declare `variesBy: color, size` but a variant is missing its `size` value, agents can't classify it.
- Every variant must have its own `offers` object. Missing `offers` on one variant can invalidate the entire ProductGroup in some agent implementations.
- `variesBy` must use full schema.org URIs (e.g., `https://schema.org/color`, not `"Color"`). Plain text is technically valid schema, but it doesn't trigger Shopping swatches and agents may not interpret it as a filterable dimension.
- Keep `OutOfStock` variants in schema with the correct availability value. Removing them entirely means agents lose track of the full variant set.
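Put together, the rules above look like this. The SKUs, GTINs, and prices are placeholder values for illustration:

```json
{
  "@context": "https://schema.org/",
  "@type": "ProductGroup",
  "name": "Trail Running Shoe",
  "productGroupID": "TS-PARENT",
  "variesBy": ["https://schema.org/color", "https://schema.org/size"],
  "hasVariant": [
    {
      "@type": "Product",
      "sku": "TS-9-BLU",
      "gtin13": "5012345678900",
      "color": "Blue",
      "size": "9",
      "offers": {
        "@type": "Offer",
        "price": "99.00",
        "priceCurrency": "GBP",
        "availability": "https://schema.org/InStock"
      }
    },
    {
      "@type": "Product",
      "sku": "TS-9-RED",
      "gtin13": "5012345678917",
      "color": "Red",
      "size": "9",
      "offers": {
        "@type": "Offer",
        "price": "99.00",
        "priceCurrency": "GBP",
        "availability": "https://schema.org/OutOfStock"
      }
    }
  ]
}
```

Note that both declared dimensions (`color`, `size`) appear on both variants, each variant has its own `offers`, and the out-of-stock red variant stays in the markup with the correct availability value.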
Organization-level markup: the layer most brands skip entirely
This is where I see the biggest gap between what Google's documentation recommends and what e-commerce sites actually implement.
Google supports (and increasingly rewards) Organization-level structured data for shipping policies (ShippingService), return policies (MerchantReturnPolicy), and loyalty programmes (MemberProgram). These are defined once on your Organization page and referenced from individual product offers, rather than duplicated across every product page.
For agents, this is significant because shipping and returns are comparison signals. When a user asks an agent to find a product with free shipping and easy returns, the agent needs structured data to evaluate that. If your competitor's schema includes `shippingRate: 0`, `currency: GBP`, `transitTime: 1-3 days` and yours doesn't mention shipping at all, the agent has no basis to include your product in a "free shipping" filtered result.
ShippingService: define once, reference everywhere
```json
{
  "@context": "https://schema.org/",
  "@type": "Organization",
  "name": "Your Store",
  "url": "https://www.yourstore.com",
  "hasShippingService": {
    "@type": "ShippingService",
    "@id": "https://www.yourstore.com/shipping#standard",
    "shippingConditions": {
      "@type": "ShippingConditions",
      "shippingRate": {
        "@type": "MonetaryAmount",
        "value": 0,
        "currency": "GBP"
      },
      "shippingDestination": {
        "@type": "DefinedRegion",
        "addressCountry": "GB"
      },
      "deliveryTime": {
        "@type": "ShippingDeliveryTime",
        "handlingTime": {
          "@type": "QuantitativeValue",
          "minValue": 0,
          "maxValue": 1,
          "unitCode": "DAY"
        },
        "transitTime": {
          "@type": "QuantitativeValue",
          "minValue": 1,
          "maxValue": 3,
          "unitCode": "DAY"
        }
      }
    }
  }
}
```
Then on each product page, reference it from the Offer:
```json
"shippingDetails": {
  "@type": "OfferShippingDetails",
  "hasShippingService": {
    "@id": "https://www.yourstore.com/shipping#standard"
  }
}
```
MerchantReturnPolicy: same pattern
```json
"hasMerchantReturnPolicy": {
  "@type": "MerchantReturnPolicy",
  "@id": "https://www.yourstore.com/returns#policy",
  "applicableCountry": "GB",
  "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",
  "merchantReturnDays": 30,
  "returnMethod": "https://schema.org/ReturnByMail",
  "returnFees": "https://schema.org/FreeReturn"
}
```
This is not optional for agent competitiveness. It's the difference between your product appearing in a constrained search ("free returns, next-day delivery") and being invisible to it.
Feed alignment: where your data contradicts itself
The GMC feed has always mattered for Shopping performance. In an agentic world, it matters even more: Google's shopping agents draw from your feed data when making recommendations, and OpenAI's product feed specification introduces a parallel channel with its own requirements.
The most common feed-to-schema mismatches I find during audits:
| Mismatch | Why it happens | Agent impact |
|---|---|---|
| Feed price ≠ schema price | Schema is hardcoded; feed updates dynamically from PIM | Agent has two conflicting price signals. Deprioritises or excludes. |
| Feed `id` ≠ schema `sku` | Different systems, different identifiers, no mapping | Agent can't confirm feed product and schema product are the same item. |
| Feed availability ≠ schema availability | Feed reflects real inventory; schema hardcoded to InStock | Trust failure. Agent learns your schema is unreliable. |
| Feed has GTINs; schema doesn't | GTIN added to feed for GMC compliance; never added to schema | Agent can't cross-reference product identity across sources. |
| Feed title ≠ schema name | Feed title optimised for Shopping; schema name from CMS field | Minor but adds ambiguity. Agent may treat as different products. |
| Feed has variant data; schema is parent-only | Feed exports individual variants; schema uses AggregateOffer | Agent can find variants in feed but can't validate them on-page. |
Every one of these is fixable. Most of them are fixable quickly if your schema is dynamically generated from the same data source as your feed. The root cause, in almost every case, is that the feed and the schema are maintained by different teams using different data sources with no cross-validation.
If you haven't run a feed audit recently, the GMC Feed Analyzer will flag the feed-side issues. The schema-side equivalent is running your product URLs through the Rich Results Test and comparing what appears against your feed data field by field.
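The field-by-field comparison can be scripted. A minimal sketch, assuming both sides are already parsed into dicts and values normalised to a common vocabulary (e.g. availability mapped to `in_stock` on both sides before comparing); the field mapping follows the mismatch table above and the function name is my own:

```python
# Feed-field -> schema-field mapping, following the mismatch table above.
FIELD_MAP = {
    "id": "sku",
    "price": "price",
    "availability": "availability",
    "gtin": "gtin13",
    "title": "name",
}

def diff_feed_schema(feed_row: dict, schema_flat: dict) -> dict:
    """Return {feed_field: (feed_value, schema_value)} for every field
    where the two layers disagree. Empty dict = the layers align."""
    return {
        f: (feed_row.get(f), schema_flat.get(s))
        for f, s in FIELD_MAP.items()
        if feed_row.get(f) != schema_flat.get(s)
    }

feed_row = {"id": "TS-9-BLU", "price": "79.00", "availability": "in_stock",
            "gtin": "5012345678900", "title": "Trail Shoe Blue, Size 9"}
schema = {"sku": "TS-9-BLU", "price": "99.00", "availability": "in_stock",
          "gtin13": "5012345678900", "name": "Trail Shoe Blue, Size 9"}
mismatches = diff_feed_schema(feed_row, schema)
# Flags only the price: the feed updated, the hardcoded schema didn't
```

Looping this over your feed export and crawled schema gives you the cross-validation layer that, per the table above, almost no one has between teams.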
The PDP content gap: what agents read vs what you wrote
Your product descriptions were written for humans. Understandably. The problem is that AI agents also read them, and they're looking for different things.
A human reads "Luxurious comfort for all-day wear" and fills in the meaning from context, images, and prior brand experience. An agent reads it and extracts zero structured attributes: no material, no weight, no use case, no compatibility information.
Compare:
What agents can't use
- "Premium quality you can feel"
- "Designed for the modern adventurer"
- "Our most popular style"
- "Perfect for any occasion"
What agents can parse
- "400 thread count, 100% Egyptian cotton"
- "Waterproof to 10,000mm, seam-sealed"
- "Compatible with Shimano SPD-SL cleats"
- "Machine washable at 40°C"
This doesn't mean you need to strip all personality from your PDPs. It means the structured, attribute-rich information needs to be present alongside the brand narrative and ideally surfaced in both the visible content and the structured data layer.
The practical check: for each product, can an agent determine these from your PDP content, feed, or schema?
- What it's made of (material, composition)
- What it's for (use case, activity, compatibility)
- What distinguishes it from similar products (key specifications)
- What constraints it satisfies (size range, weight capacity, certifications)
- What condition it's sold in (new, refurbished)
If any of those are only communicated through brand copy that a machine can't reliably extract, you have a gap.
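One way to surface those attributes in the structured data layer as well as the visible copy is schema.org's `additionalProperty` with `PropertyValue` pairs. This property isn't part of Google's merchant listing feature set, but it's valid schema.org and gives text-parsing agents explicit attribute/value pairs instead of prose (the values below reuse this post's earlier examples):

```json
"additionalProperty": [
  {
    "@type": "PropertyValue",
    "name": "Material",
    "value": "100% Egyptian cotton, 400 thread count"
  },
  {
    "@type": "PropertyValue",
    "name": "Care",
    "value": "Machine washable at 40°C"
  },
  {
    "@type": "PropertyValue",
    "name": "Compatibility",
    "value": "Shimano SPD-SL cleats"
  }
]
```

Treat this as a supplement to, not a substitute for, the named properties (`material`, `size`, `color`) that agents and Google both recognise directly.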
ChatGPT already sources from your Google Shopping feed
This is the data point that changes the entire conversation about "optimising for AI agents."
A 2026 study by Peec AI, published on Search Engine Land, analysed over 43,000 ChatGPT product carousel items and 200,000 organic shopping results from both Google and Bing. The finding: 83% of ChatGPT's product carousel items were strong matches with Google Shopping's top 40 organic results. For Bing, that figure was 11%. Only 70 products across the entire dataset (0.16%) appeared exclusively in Bing and not in Google.
The mechanism is something called a shopping query fan-out. When you ask ChatGPT a product question, it doesn't answer purely from training data. It generates background search queries (fan-outs) to retrieve live product data. Shopping fan-outs are distinct from the regular search fan-outs ChatGPT uses for contextual information: they're shorter (averaging 7 words versus 12 for regular fan-outs), there are fewer of them per prompt (1.16 versus 2.4), and they differ from regular fan-outs 98.3% of the time. They exist specifically to hit shopping indexes for structured product listings.
The shopping index they overwhelmingly hit is Google's.
Let that sink in for a moment. The product carousel in ChatGPT (the thing the entire industry is trying to figure out how to optimise for) is drawing from Google Shopping organic results. The same Google Shopping results that are powered by your Merchant Center feed, your product schema markup, and your PDP content.
This reframes everything. The work you do on your GMC feed (titles, descriptions, GTINs, pricing, availability, product types, variant data) is not just Google Shopping work. It is AI agent visibility work. The feed quality checklist is the same. The schema alignment requirements are the same. The data governance discipline is the same.
60% of matched carousel products came from Google's top 10 organic shopping results. Ranking well in Google Shopping isn't just correlated with ChatGPT visibility; it's the primary pathway to it.
The ACP feed: a parallel channel, not a replacement
That said, OpenAI's Agentic Commerce Protocol (ACP), launched with Stripe in September 2025, does introduce a separate product feed specification. At launch, over one million Shopify merchants were in the onboarding queue. The ACP feed isn't a copy of your GMC feed; it's optimised for LLM-driven product understanding rather than Shopping ad placement.
Key differences from GMC feeds:
- Descriptions matter more. In GMC, descriptions are secondary to titles. In an LLM feed, the description is primary content the agent uses to understand the product's purpose, use case, and differentiators.
- Categorisation may use different taxonomy. GMC uses Google's product category taxonomy. ChatGPT's feed may use different classification systems or rely on the agent to classify from descriptions.
- Attribute granularity is rewarded. Every additional structured attribute (care instructions, compatibility, certifications, warranty) gives the agent more dimensions to match against user constraints.
But here's the nuance the Peec AI data reveals: even with ACP, the carousel discovery layer is still largely powered by Google Shopping. The ACP feed matters for transactional execution (completing the purchase within ChatGPT), but discovery and initial product selection happen upstream through shopping fan-outs against Google's index.
Which means the priority sequence is clear: get your Google Shopping performance right first. That's the gatekeeping layer. Products that don't make it into Google's top 40 for the relevant shopping fan-out query are excluded from the ChatGPT carousel selection pool before any other factors come into play. Then layer on ACP feed optimisation for the transactional side of things.
If your current feed workflow already pulls from a well-structured PIM with rich attribute data, both feeds are manageable. If your PIM is thin and your GMC feed is the richest version of your product data, you have a bigger problem: your canonical product data isn't rich enough for any agent environment, whether Google's or OpenAI's.
Agentic commerce optimization checklist
Here's the sequence I'd follow. Start with your top 20 revenue-driving SKUs. Get them right, then scale the patterns across the catalogue.
Schema markup (per product page)
- Is the schema dynamically generated from product data, or hardcoded?
- Does `price` in schema match the live PDP price and the feed price?
- Does `availability` reflect real-time inventory?
- Is `sku` present, and does it match the feed `id`?
- Is `gtin13` (or the appropriate GTIN) present at variant level?
- For variant products: does `ProductGroup` have `variesBy` with full schema.org URIs?
- Does every variant have its own `Product` with `offers`, `color`/`size`, and `sku`?
- Is `shippingDetails` present on the Offer?
- Is `hasMerchantReturnPolicy` present on the Offer?
- Does `description` in schema contain attribute-rich content (not just a marketing tagline)?
Organization-level markup
- Is `ShippingService` defined at Organization level with a referenceable `@id`?
- Is `MerchantReturnPolicy` defined at Organization level?
- Do product-level Offers reference these policies via `@id`?
Feed alignment
- Do feed `id` values match schema `sku` values?
- Do feed prices match schema prices match PDP prices?
- Do feed availability values match schema availability values?
- Are GTINs present in both feed and schema?
- Are feed titles reasonably consistent with schema `name` values?
- If the feed has variant data, does the schema also have variant-level markup?
PDP content
- Does the product description include explicit material, use case, and specification attributes?
- Can an agent determine what distinguishes this product from competitors without relying on images?
- Are key attributes (size, material, compatibility) in text, not just in images or PDFs?
Governance
- Do schema and feed pull from the same canonical data source?
- When a price changes in the PIM, does the schema update automatically?
- When a product goes out of stock, does the schema reflect it in real time?
- Is there a defined owner for product data integrity across PDP, feed, and schema?
What this means practically
Agentic commerce doesn't require you to throw out your existing structured data work and start over. It requires you to complete it.
The Product schema you implemented two years ago? Still the foundation. The GMC feed your team maintains? Still the primary discovery channel. The PDP content your copywriters produced? Still matters. It just needs to do double duty, serving both human readers and machine evaluators.
What's new is the standard you're measured against. In traditional Shopping, incomplete schema meant you missed some rich result enhancements. In agentic commerce, incomplete data means an agent can't confidently include your product in its recommendation set. The same gaps exist. The consequence of leaving them is larger.
The brands that will perform well in agent-driven commerce are not the ones with the most sophisticated AI strategy. They're the ones with the cleanest, most complete, most consistent product data layer (that's what agents evaluate). And the work to build that layer is unglamorous, specific, and entirely within reach of any e-commerce team that takes structured data seriously.
Which, if you're reading this, you probably already do.
Frequently asked questions
Do I need new schema markup for agentic commerce?
Probably not a full rebuild. Audit and extend instead. If your existing Product, ProductGroup, and Merchant Listing schema is complete and accurate, you're ahead of most merchants. The gap for most sites is in Organization-level policies (shipping, returns), feed-to-schema alignment, and the completeness of variant-level attributes. AI agents process the same structured data Google does; they just have less tolerance for gaps.
What is the difference between a GMC feed and an AI product feed?
A GMC feed is optimised for Google Shopping; it follows Google's attribute specification and is validated against Google's requirements. AI product feeds (like those for ChatGPT) may have different attribute requirements, different categorisation taxonomies, and different expectations for description quality. The core product data is the same, but AI feeds reward richer, more descriptive attribute data because the agent needs to reason about your product, not just list it.
Does structured data affect AI shopping agent recommendations?
Yes. AI shopping agents (whether Google's, OpenAI's, or Amazon's) evaluate products based on structured signals: price, availability, reviews, specifications, shipping, and return policies. Schema markup is one of the primary sources of these signals. If your structured data is incomplete, the agent has less information to work with than competitors whose data is complete. In an environment where the agent makes the selection, data completeness is directly competitive.
Should I optimise for Google or OpenAI shopping agents?
Both. The good news is that the foundation is the same. Complete, accurate, well-structured product data benefits you across all agent environments. The differences are in feed formats and platform-specific protocols (Google's UCP vs OpenAI's ACP), but the underlying product data quality that makes your products agent-eligible is universal. Start with your core data layer and then adapt output formats per platform.
What is the Agentic Commerce Protocol (ACP)?
ACP is an open standard developed by OpenAI and Stripe that enables purchases to be completed directly within ChatGPT. It allows AI agents to discover products, present options to users, and execute transactions without the consumer visiting the merchant's website. At launch, over one million Shopify merchants were in the onboarding queue. Google has introduced a parallel protocol (the Universal Commerce Protocol, or UCP) for its own agent infrastructure.
Sources
- Why Digital Shelf Optimization Is the Most Important Skill in E-commerce Right Now | Intrepid Digital (original strategic framework)
- Buy it in ChatGPT / ACP launch | OpenAI
- Stripe ACP announcement
- Global study: 73% of shoppers using AI in shopping journey | Riskified, October 2025
- Holiday Shopping / AI retail traffic data | Adobe
- ChatGPT sources 83% of carousel products from Google Shopping via shopping query fan-outs | Search Engine Land / Peec AI, 2026