How Brands Can Adapt When AI Agents Do the Shopping
Harvard Business Review confirms the AXD Institute's founding thesis: agentic commerce requires a trust layer, not merely a technology layer. The article identifies five failure modes that map directly onto AXD practice frameworks.
The shift from SEO to GEO (generative engine optimisation) signals the end of persuasion-based discovery. Machine customers evaluate structured data, not marketing messages.
HBR draws the analogy to early e-commerce trust infrastructure (SSL, PCI standards), reinforcing the position that trust architecture is structural material, not a brand attribute.
Five failure modes identified - agent misunderstanding, authority overreach, data liability, brand misrepresentation, and recovery failure - provide a diagnostic framework for organisations preparing for agentic commerce.
HBR arrives at what the AXD Institute has argued since its founding: that agentic commerce demands a trust layer, not merely a technology layer. The article identifies five failure modes - agent misunderstanding, authority overreach, data liability, brand misrepresentation, and recovery failure - that map directly onto the AXD practice frameworks of delegation design, consent architecture, agent observability, and failure architecture. The authors' call for 'generative engine optimisation' (GEO) over traditional SEO confirms the shift from persuasion to performance that defines the machine customer era. Most significantly, the article draws the analogy to early e-commerce trust infrastructure (SSL, PCI standards), reinforcing the AXD position that trust is not a brand attribute but structural material.
What does HBR say about the future of brand strategy in agentic commerce?
Harvard Business Review's analysis arrives at a conclusion the AXD Institute has held since September 2024: that agentic commerce is not a technology upgrade but a structural transformation requiring new design disciplines. The article identifies five distinct failure modes that organisations face when AI agents begin acting as customers on behalf of humans.
These failure modes - agent misunderstanding, authority overreach, data liability, brand misrepresentation, and recovery failure - are not speculative risks. They are the operational consequences of deploying autonomous systems without the trust architecture to govern them. Each maps directly onto an AXD practice framework.
How do the five failure modes map to AXD frameworks?
Agent misunderstanding is a delegation design problem: the human's intent was not translated accurately into the agent's action parameters. Authority overreach is a consent horizon failure: the agent exceeded the boundaries of its delegated authority. Data liability is an agent observability gap: the organisation cannot trace what data the agent accessed, processed, or shared.
Brand misrepresentation is a trust architecture failure: the agent presented the brand in ways the organisation did not authorise. Recovery failure is a failure architecture gap: when something goes wrong, there is no designed pathway back to a trustworthy state.
Agent misunderstanding maps to Delegation Design - intent was not accurately translated
Authority overreach maps to Consent Horizon - agent exceeded delegated boundaries
Data liability maps to Agent Observability - no trace of data access or sharing
Brand misrepresentation maps to Trust Architecture - unauthorised brand presentation
Recovery failure maps to Failure Architecture - no designed pathway to recovery
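To make the consent horizon concrete, here is a minimal sketch of how an agent action might be checked against its delegated boundaries. All names (`DelegationScope`, `check_authority`) are hypothetical illustrations, not part of any AXD or HBR specification.

```python
from dataclasses import dataclass, field

@dataclass
class DelegationScope:
    """Boundaries a human grants to a shopping agent (hypothetical model)."""
    max_spend: float                      # consent horizon: spending ceiling
    allowed_categories: set = field(default_factory=set)

@dataclass
class AgentAction:
    category: str
    amount: float

def check_authority(action: AgentAction, scope: DelegationScope) -> str:
    """Classify an agent action against its delegated scope.

    Returns 'authorised' when the action fits the scope, or names the
    failure mode (authority overreach) when it does not.
    """
    if action.category not in scope.allowed_categories:
        return "authority overreach: category outside delegated scope"
    if action.amount > scope.max_spend:
        return "authority overreach: amount exceeds spending ceiling"
    return "authorised"

# A human delegates grocery shopping with a 50.00 limit
scope = DelegationScope(max_spend=50.0, allowed_categories={"groceries"})
print(check_authority(AgentAction("groceries", 30.0), scope))     # authorised
print(check_authority(AgentAction("electronics", 30.0), scope))   # overreach
```

The point of the sketch is that authority overreach is detectable only if the delegation was encoded explicitly in the first place, which is why it maps to delegation design rather than to a runtime patch.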
Why does the shift from SEO to GEO matter?
The authors' call for generative engine optimisation (GEO) over traditional SEO confirms a shift the AXD Institute has been mapping since its founding. Traditional SEO optimises for human attention: rankings, click-through rates, and time on page. GEO optimises for machine evaluation: structured data, factual accuracy, and protocol-level discoverability.
This is not a minor adjustment to marketing strategy. It is a fundamental reorientation of how commercial organisations present themselves. The machine customer does not browse, scroll, or respond to emotional appeals. It evaluates structured data against delegated criteria and transacts accordingly.
What is the trust infrastructure analogy and why does it matter?
HBR's most significant contribution is the analogy to early e-commerce trust infrastructure. When online shopping emerged in the late 1990s, the technology existed before the trust layer. SSL certificates, PCI compliance standards, and payment card security frameworks had to be built before consumers would transact online at scale.
Agentic commerce faces the same structural requirement. The AI capability exists. The protocols are emerging. But the trust infrastructure - the standards, verification mechanisms, and governance frameworks that enable humans to delegate commercial authority to autonomous systems with confidence - is still being designed. This is the work of Agentic Experience Design.
What are the five failure modes HBR identifies for agentic commerce?
HBR identifies five failure modes: agent misunderstanding (the agent misinterprets human intent), authority overreach (the agent exceeds its delegated scope), data liability (the agent accesses or shares data inappropriately), brand misrepresentation (the agent presents the brand in unauthorised ways), and recovery failure (no designed pathway exists when things go wrong). Each maps to an AXD practice framework.
What is generative engine optimisation (GEO)?
Generative engine optimisation (GEO) is the practice of optimising commercial content for AI-driven discovery rather than traditional search engine rankings. Unlike SEO, which targets human attention through rankings and click-through rates, GEO focuses on structured data, factual accuracy, and machine-readable product information that AI agents can evaluate programmatically.
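One widely used form of machine-readable product data is schema.org Product markup serialised as JSON-LD. The sketch below shows the kind of structured record an AI agent can evaluate programmatically; the field values are illustrative, and nothing here is a formal GEO standard.

```python
import json

# A minimal schema.org Product record: structured data an agent can parse,
# in contrast to persuasion-oriented marketing copy. Values are illustrative.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Fairtrade Ground Coffee 227g",
    "sku": "COF-227-FT",
    "offers": {
        "@type": "Offer",
        "price": "4.50",
        "priceCurrency": "GBP",
        "availability": "https://schema.org/InStock",
    },
}

# Serialised as JSON-LD, this is the form typically embedded in a page
# for machine discovery.
print(json.dumps(product, indent=2))
```

Note what is absent: no slogans, no emotional appeal. The agent compares `price`, `availability`, and factual attributes against its delegated criteria, which is the reorientation GEO describes.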
How does agentic commerce compare to early e-commerce?
HBR draws a direct analogy: just as early e-commerce required trust infrastructure (SSL, PCI standards) before consumers would transact online at scale, agentic commerce requires trust architecture before humans will delegate commercial authority to AI agents. The technology capability exists, but the governance and verification frameworks are still being designed.
What should brands do to prepare for AI shopping agents?
Brands should build trust architecture (not just technology), ensure their product data is machine-readable and structured for agent evaluation, design delegation frameworks that specify what agents can and cannot do on behalf of consumers, and implement failure recovery mechanisms for when agent-mediated transactions go wrong.
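The observability and recovery points above can be sketched together: recovery from an agent-mediated failure starts with being able to reconstruct what the agent did. The class and method names below are hypothetical, a minimal illustration rather than a production audit system.

```python
import datetime

class AgentAuditLog:
    """Append-only record of agent actions (an observability sketch)."""

    def __init__(self):
        self.entries = []

    def record(self, agent_id: str, action: str, detail: str) -> dict:
        """Log one agent action with a UTC timestamp."""
        entry = {
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "agent": agent_id,
            "action": action,
            "detail": detail,
        }
        self.entries.append(entry)
        return entry

    def trace(self, agent_id: str) -> list:
        """Recovery begins with reconstruction: everything one agent did."""
        return [e for e in self.entries if e["agent"] == agent_id]

log = AgentAuditLog()
log.record("agent-7", "purchase", "order #123, 4.50 GBP")
log.record("agent-7", "data_access", "read loyalty profile")
print(len(log.trace("agent-7")))  # 2
```

Without such a trace, data liability (what did the agent access?) and recovery failure (what must be unwound?) are unanswerable, which is why the two failure modes are paired here.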
Founder, AXD Institute
Tony Wood is the founder of the AXD (Agentic Experience Design) Institute and the originator of AXD, the design discipline for trust-governed human-agent interaction in agentic AI systems. He works as an Emerging Technologies and Innovation Consultant and Agentic AI Product Specialist at the UK's leading retail bank, and is based in Manchester, United Kingdom.
Return to the full intelligence feed for more curated analysis of the agentic commerce landscape.