OpenAI's Agentic Commerce Protocol: What to Know
Introduction
OpenAI has released what they're calling an agentic commerce protocol: a standardized framework for how AI agents discover, evaluate, and transact with businesses. This isn't a product announcement or a feature update. It's an infrastructure standard, the equivalent of what HTTP was for the web or what Google's crawling specifications were for search.
The protocol defines how AI agents should interact with businesses at every stage of the transaction process: discovery, evaluation, selection, and action. It specifies the data formats agents expect, the transaction patterns they follow, and the machine-readable signals they process.
For business owners, this protocol matters because it's the first formal specification of what your business needs to provide for AI agents to interact with it. Until now, agent-readiness was a best-guess exercise. Now there's a standard. And the businesses that implement it early will be the ones agents select first.
What the protocol actually standardizes
The agentic commerce protocol covers four interaction stages.
Stage 1: Discovery.
How agents find businesses that match a user's request.
The protocol specifies that agents should discover businesses through: web search results (processed from Bing or other search APIs), structured data on business websites (Schema.org vocabulary), business directory APIs, and booking platform integrations.
What this means for you: The discovery stage uses the same signals that drive current AI recommendations. Citation depth, entity consistency, structured data, and content authority remain the foundation of agent discovery. The protocol formalizes what we've been optimizing for.
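As a concrete sketch, the structured data that feeds the discovery stage is ordinary Schema.org JSON-LD with a specific business type, services, and location. Every name, address, and URL below is a placeholder:

```json
{
  "@context": "https://schema.org",
  "@type": "Dentist",
  "name": "Example Dental Studio",
  "url": "https://www.example.com",
  "telephone": "+1-512-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Austin",
    "addressRegion": "TX",
    "postalCode": "78701"
  }
}
```

Note the specific type (Dentist) rather than the generic LocalBusiness: the more precise the type, the less inference an agent has to do to match you to a user's request.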
Stage 2: Evaluation.
How agents assess whether a business meets the user's criteria.
The protocol specifies that agents should evaluate: service descriptions (from Service schema), pricing data (from Offer schema), availability (from openingHoursSpecification and booking platform data), quality signals (AggregateRating, reviews), and credentials (hasCredential, memberOf).
What this means for you: Comprehensive structured data is now a formally specified requirement, not just a best practice. Each element the protocol lists as an evaluation input should be present in your schema markup.
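A sketch of what those evaluation inputs look like in one Service block, using illustrative values (the service name, price, hours, and rating figures are all placeholders; your markup should reflect real data):

```json
{
  "@context": "https://schema.org",
  "@type": "Service",
  "name": "Teeth Whitening",
  "description": "In-office whitening treatment, approximately 60 minutes.",
  "provider": { "@type": "Dentist", "name": "Example Dental Studio" },
  "areaServed": { "@type": "City", "name": "Austin" },
  "offers": {
    "@type": "Offer",
    "priceCurrency": "USD",
    "price": "250.00"
  },
  "hoursAvailable": {
    "@type": "OpeningHoursSpecification",
    "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
    "opens": "09:00",
    "closes": "17:00"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.8",
    "reviewCount": "132"
  }
}
```

One block like this per core service gives an agent everything the evaluation stage lists: description, price, hours, quality signal, and geography.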
Stage 3: Selection.
How agents choose among evaluated options.
The protocol describes a weighted selection process that considers: relevance to user criteria, quality signals (reviews, ratings, credentials), availability match, pricing match, and transaction capability (can the agent complete the booking or purchase?).
What this means for you: Being selected isn't just about having strong entity signals. It's about having strong entity signals AND operational readiness. A business with 5-star reviews but no online booking may lose to a business with 4.5-star reviews and instant booking capability, because the protocol weights transaction completion.
Stage 4: Action.
How agents complete the transaction.
The protocol specifies standard interaction patterns for: booking appointments (through ReserveAction in schema or booking platform APIs), making purchases (through standard e-commerce checkout flows), submitting inquiries (through contact forms with standard field structures), and requesting quotes (through structured quote-request forms).
What this means for you: Your transaction endpoint (booking page, checkout, contact form) needs to be accessible through standard web patterns. Booking platforms (Calendly, Zocdoc, OpenTable, Shopify) already comply. Custom-built forms may need updates to match the protocol's expected patterns.
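To make the transaction endpoint machine-discoverable, Schema.org's Actions vocabulary lets you point directly at it with potentialAction. A hedged sketch with placeholder URLs:

```json
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "name": "Example Bistro",
  "potentialAction": {
    "@type": "ReserveAction",
    "target": {
      "@type": "EntryPoint",
      "urlTemplate": "https://www.example.com/book",
      "actionPlatform": "https://schema.org/DesktopWebPlatform"
    },
    "result": {
      "@type": "Reservation",
      "name": "Table reservation"
    }
  }
}
```

The urlTemplate tells the agent exactly where the booking flow lives, so it doesn't have to crawl your site guessing which page accepts reservations.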
The protocol compliance checklist
Based on the protocol specifications, here's what your business needs to be fully compliant.
Discovery compliance:
- [ ] 30+ citations across independent, authoritative web sources
- [ ] Entity data consistent across all web mentions
- [ ] Website indexed on both Google and Bing
- [ ] Comprehensive structured data on website (specific business type, services, location)
Evaluation compliance:
- [ ] Service schema for each core service with name, description, duration, and pricing
- [ ] Offer schema with price or priceRange for each service
- [ ] openingHoursSpecification for all operating hours
- [ ] AggregateRating on website (reflecting actual review data)
- [ ] hasCredential for professional certifications
- [ ] memberOf for professional association memberships
- [ ] areaServed with specific geographic coverage
Selection optimization:
- [ ] Reviews on 3+ platforms with recent activity
- [ ] Content authority (published articles demonstrating expertise)
- [ ] Entity consistency score 8+/10 across web sources
- [ ] Online booking or transaction capability (for transaction weighting)
Action compliance:
- [ ] Online booking through a standard platform (Calendly, Zocdoc, OpenTable, etc.) OR
- [ ] E-commerce checkout through a standard platform (Shopify, WooCommerce, etc.) OR
- [ ] Contact form with standard fields (name, email, phone, message) accessible via direct URL
- [ ] potentialAction in schema pointing to the transaction endpoint
- [ ] No CAPTCHA or anti-bot measures on the primary transaction endpoint (agents can't solve CAPTCHAs)
That last point deserves emphasis: if your booking form or contact form has a CAPTCHA, agents cannot interact with it. CAPTCHAs are designed to block bots, and AI agents are, technically, bots. Businesses that want agent-mediated transactions need to find alternative anti-spam measures (honeypot fields, rate limiting, server-side validation) that don't block agent interactions.
What the protocol means for the competitive landscape
The release of a formal protocol creates three competitive dynamics.
Dynamic 1: A new compliance bar.
Until now, there was no standard for "agent-readiness." Businesses could claim to be optimized for AI agents based on their own definitions. The protocol creates an objective compliance standard. Businesses that meet it are agent-accessible. Businesses that don't are agent-blocked.
This is similar to how Google's mobile-friendliness standard created a binary: your site was mobile-friendly or it wasn't. The agentic commerce protocol creates a similar binary for agent accessibility.
Dynamic 2: Early implementers get a head start.
The protocol was just released. Adoption will be gradual. The businesses that implement compliance in the next 3 to 6 months will be agent-accessible before their competitors, capturing agent-mediated transactions in a low-competition environment.
Dynamic 3: Platform convergence.
OpenAI's protocol will likely become the de facto standard that Google, Microsoft, and other AI platforms adopt or adapt. Just as Schema.org became the universal structured data vocabulary (launched jointly by Google, Microsoft, and Yahoo, with Yandex joining soon after), the agentic commerce protocol (or something very similar) will likely become the universal standard for agent-business interactions across all AI platforms.
Implementing compliance now prepares you for all agent platforms, not just OpenAI's.
The CAPTCHA problem (and how to solve it)
This is the most immediately actionable issue for many businesses.
If your contact form, booking page, or checkout includes a CAPTCHA (reCAPTCHA, hCaptcha, image-selection CAPTCHAs), AI agents cannot interact with it. The CAPTCHA blocks the agent the same way it would block a spam bot.
The problem: You need anti-spam protection. You also need agent accessibility. CAPTCHAs solve the first and block the second.
Solutions:
Honeypot fields. Add hidden form fields that humans don't see (they're hidden via CSS) but bots fill in. If the hidden field is filled, the submission is spam. If it's empty, it's legitimate. Well-behaved agents honor field visibility (they render the page or respect CSS and aria-hidden attributes), so they skip the honeypot that crude spam bots auto-fill, making honeypot fields agent-compatible.
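The server-side half of a honeypot is a one-line check. A minimal Python sketch, where the honeypot field name `website_url` is an arbitrary choice:

```python
def is_spam_submission(form_data: dict) -> bool:
    """Flag a submission as spam if the hidden honeypot field was filled.
    Humans (and agents that honor field visibility) leave it empty;
    naive spam bots auto-fill every field they find."""
    honeypot_value = form_data.get("website_url", "")
    return bool(honeypot_value.strip())

# A legitimate submission leaves the honeypot empty:
print(is_spam_submission({"name": "Ana", "email": "ana@example.com", "website_url": ""}))  # False
# A bot that auto-fills every field trips it:
print(is_spam_submission({"name": "x", "email": "x@x.com", "website_url": "http://spam.example"}))  # True
```

Pick a field name that looks attractive to bots (URL- or email-like), and make sure it is hidden with CSS rather than `type="hidden"`, since many bots skip explicitly hidden inputs.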
Rate limiting. Limit the number of form submissions from a single IP address within a time period. This blocks spam bots that submit hundreds of forms but doesn't block a single agent submission.
Server-side validation. Validate form data on the server (checking for valid email format, realistic phone numbers, non-empty required fields) rather than using client-side CAPTCHAs.
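A hedged sketch of server-side validation, with illustrative field names and deliberately loose regexes (real email validation is best done by sending a confirmation message, not by pattern alone):

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")   # loose shape check only
PHONE_RE = re.compile(r"^\+?[\d\s\-().]{7,20}$")        # plausible, not exhaustive

def validate_submission(form_data: dict) -> list:
    """Return a list of validation errors; an empty list means the
    submission passes. Field names here are illustrative."""
    errors = []
    if not form_data.get("name", "").strip():
        errors.append("name is required")
    if not EMAIL_RE.match(form_data.get("email", "")):
        errors.append("email is not a valid address")
    phone = form_data.get("phone", "")
    if phone and not PHONE_RE.match(phone):
        errors.append("phone is not a plausible number")
    if not form_data.get("message", "").strip():
        errors.append("message is required")
    return errors

print(validate_submission({"name": "Ana", "email": "ana@example.com",
                           "phone": "+1 512 555 0100", "message": "Hi"}))  # []
```

Unlike a CAPTCHA, these checks reject garbage data without caring whether the submitter is a human or an agent.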
Token-based verification. Implement invisible verification tokens (like reCAPTCHA v3's invisible scoring) that assess the submission's likelihood of being legitimate without requiring user interaction. Agents that behave like legitimate users (single submission, realistic timing, valid data) can pass these checks, though scoring systems tuned to flag automation may still penalize them, so test agent flows before relying on this approach alone.
The transition from CAPTCHA-based to CAPTCHA-free anti-spam should be a priority for any business preparing for agentic AI.
How protocol-compliant is your business? Run your free AI visibility audit at yazeo.com and assess your entity foundation, structured data implementation, and transaction accessibility. The audit evaluates the discovery and evaluation compliance requirements. The action compliance (booking capability, CAPTCHA status) requires the operational assessment described above.
Key findings
- OpenAI's agentic commerce protocol formally standardizes how AI agents discover, evaluate, select, and transact with businesses.
- The protocol specifies four compliance stages: discovery (entity signals), evaluation (structured data), selection (quality + transaction capability), and action (booking/checkout accessibility).
- CAPTCHAs block agent transactions and need to be replaced with agent-compatible anti-spam measures (honeypot fields, rate limiting, server-side validation).
- The protocol will likely become the cross-platform standard adopted by Google, Microsoft, and other AI platform providers.
- Early implementers get a compliance head start that translates into agent-mediated customer acquisition before competitors are ready.
