Imagine a city that rebuilds itself every morning. Not the buildings - the streets. The pathways between buildings rearrange overnight based on where people actually walked the day before. The shops that were most visited move closer to the residential areas. The park expands on sunny days and contracts on rainy ones. The hospital is always exactly where you need it to be, because the city knows you have an appointment.
This is not a fantasy. It is a description of what happens when autonomous agents assemble interfaces from composable components. The buildings - the functional modules - remain constant. But the arrangement, the pathways, the proximity, the emphasis - all of these are determined dynamically by an agent that understands the user's context, intent, and history.
In screen-based design, the interface was fixed. The designer determined the layout, the navigation, the information hierarchy. Every user saw the same structure. Personalisation, where it existed, was cosmetic - a recommended product here, a greeting there. The architecture was immutable.
In the agentic age, the architecture itself becomes dynamic. The interface is not designed once and deployed - it is assembled in real time from modular components, composed by an agent that understands what the human needs right now. This is composable interface design, and it represents the most significant shift in interface architecture since the invention of responsive design.
What Are Composable Interfaces?
A composable interface is an experience surface assembled from discrete, self-contained components that can be combined, rearranged, and contextualised by an autonomous agent. Each component is a complete unit of functionality - it has its own data, its own interaction logic, its own visual presentation. But it is designed to be combined with other components in configurations that the original designer may never have anticipated.
The concept borrows from software engineering's composable architecture movement - the shift from monolithic applications to microservices, from tightly coupled systems to loosely coupled modules. But composable interfaces apply this principle to the experience layer, not just the technical layer. The question is not "How do we build software from interchangeable parts?" but "How do we build experiences from interchangeable parts?"
Three properties distinguish a composable interface from a traditional one:
Modularity. Each component is self-contained and independently deployable. A balance display component works whether it appears on a dashboard, in a notification, or embedded in a third-party application. It carries its own data connection, its own error handling, its own accessibility features.
Contextual assembly. The arrangement of components is determined by context, not by a fixed layout. The same set of components might be assembled differently for a first-time user and a power user, for a mobile device and a desktop, for a routine check and an urgent alert.
Agent orchestration. The assembly is performed by an autonomous agent, not by the user navigating a fixed structure. The agent decides which components to surface, in what order, with what emphasis, based on its understanding of the human's current needs.
"A composable interface is not a dashboard with widgets. It is an experience that an agent builds for you, from parts that were designed to be combined in ways their creators never fully predicted."
The Monolith Problem
Most digital interfaces today are monoliths. They are designed as complete, fixed experiences - a banking app with a predetermined set of screens, a predetermined navigation structure, a predetermined information hierarchy. The designer decides what goes where. The user navigates the structure the designer built.
But the monolith has three structural weaknesses that become critical in the agentic age.
First, monoliths cannot adapt to individual context. A banking app shows the same structure to a customer checking their balance and a customer in the middle of a mortgage application. The navigation is the same. The information hierarchy is the same. The emphasis is the same. The monolith cannot reshape itself around the user's current intent.
Second, monoliths cannot serve agents. When an autonomous agent needs to surface information to a human, it must work within the monolith's fixed structure. The agent cannot create a bespoke interface for the specific situation - it can only navigate the human to the relevant screen within the existing app. This is like asking a doctor to communicate a diagnosis by pointing at pages in a medical textbook instead of speaking directly to the patient.
Third, monoliths cannot compose across services. A human's financial life spans multiple institutions - a current account here, a mortgage there, investments elsewhere. Each institution has its own monolithic interface. An agent managing the human's overall financial health cannot create a unified view because each monolith is a walled garden.
Four Principles of Composability
Designing for composable interfaces requires a fundamentally different approach to interface design. I propose four principles that govern how composable components should be created, connected, and orchestrated.
I. Self-Describing Components
Every composable component must carry a machine-readable description of what it does, what data it needs, what interactions it supports, and what context it is appropriate for. This description is not documentation for human developers - it is a contract for the agent that will assemble the interface. The agent must be able to read the component's self-description and determine, without human guidance, whether this component is appropriate for the current context.
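A self-description of this kind might be sketched as follows. This is a minimal illustration, not an established standard: all field names (`capability`, `data_requirements`, `suitable_contexts`) and the example component are hypothetical assumptions.

```python
# A minimal sketch of a self-describing component manifest.
# All field names and values are illustrative assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class ComponentManifest:
    component_id: str
    capability: str            # what the component does, in shared vocabulary
    data_requirements: tuple   # data sources it must be able to reach
    interactions: tuple        # interactions it supports
    suitable_contexts: tuple   # contexts it declares itself appropriate for


BALANCE_DISPLAY = ComponentManifest(
    component_id="balance-display",
    capability="show current account balance",
    data_requirements=("accounts-api",),
    interactions=("tap-to-expand",),
    suitable_contexts=("routine-check", "pre-payment-review"),
)


def is_appropriate(manifest: ComponentManifest, context: str) -> bool:
    """The assembling agent reads the manifest and decides, without
    human guidance, whether the component fits the current context."""
    return context in manifest.suitable_contexts
```

The point of the sketch is the contract, not the data structure: the agent never inspects the component's internals, only its declared description.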
II. Contextual Contracts
Components must define the contexts in which they can operate. A "recent transactions" component might specify that it requires a minimum display width of 320 pixels, an authenticated user session, and a data connection to the transaction API. It might also specify that it should not appear alongside a "pending disputes" component because the two share visual patterns that could confuse the user. These contextual contracts allow the assembling agent to make intelligent composition decisions.
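The "recent transactions" example above could be expressed as a checkable contract. This is a sketch under assumed field names; the thresholds and the exclusion rule come from the example, everything else is illustrative.

```python
# Sketch of a contextual contract and the agent-side check.
# Field names are illustrative assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class ContextualContract:
    min_width_px: int
    requires_auth: bool
    required_apis: frozenset
    excluded_neighbours: frozenset  # components it must not sit beside


@dataclass(frozen=True)
class AssemblyContext:
    width_px: int
    authenticated: bool
    reachable_apis: frozenset
    planned_components: frozenset   # components already chosen for this view


def can_place(contract: ContextualContract, ctx: AssemblyContext) -> bool:
    """The assembling agent tests a component's contract against the
    current assembly context before adding it to the composition."""
    return (
        ctx.width_px >= contract.min_width_px
        and (ctx.authenticated or not contract.requires_auth)
        and contract.required_apis <= ctx.reachable_apis
        and not (contract.excluded_neighbours & ctx.planned_components)
    )
```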
III. Graceful Degradation
Because components will be assembled in configurations the designer did not anticipate, every component must degrade gracefully when its ideal conditions are not met. If the display is too narrow, the component should simplify rather than break. If the data connection is slow, the component should show a meaningful loading state rather than a spinner. If a neighbouring component fails, the remaining components should continue to function. Composable interfaces must be resilient by design, not by accident.
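In code, graceful degradation is a chain of fallbacks rather than a single failure mode. A minimal sketch, with the tiers and the 320-pixel threshold carried over from the examples above and everything else assumed:

```python
# Sketch: a transactions component that degrades rather than breaks.
# The rendering tiers and threshold are illustrative assumptions.
def render_transactions(width_px: int, data_ready: bool) -> str:
    if not data_ready:
        # A meaningful loading state, not a bare spinner.
        return "Fetching your latest transactions..."
    if width_px < 320:
        # Too narrow for the ideal layout: simplify, don't break.
        return "compact-list"
    return "full-table"  # ideal presentation
```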
IV. Semantic Interoperability
Components from different sources must be able to work together without custom integration. This requires a shared semantic layer - a common vocabulary for describing financial concepts, user states, interaction patterns, and visual relationships. Without semantic interoperability, composable interfaces become a Tower of Babel: each component speaks its own language, and the agent cannot translate between them.
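One way to picture the shared semantic layer is as a normalisation step: each provider maps its own field names onto the common vocabulary, so the agent never needs per-provider integration code. The vocabulary terms and provider names below are hypothetical.

```python
# Sketch of a shared semantic layer. Vocabulary terms, provider names,
# and field mappings are all illustrative assumptions.
SHARED_VOCABULARY = {"available_balance", "pending_amount", "account_holder"}

PROVIDER_MAPPINGS = {
    "bank-a": {"avail_bal": "available_balance", "pending": "pending_amount"},
    "bank-b": {"balance_available": "available_balance", "holder": "account_holder"},
}


def normalise(provider: str, payload: dict) -> dict:
    """Translate a provider-specific payload into the shared vocabulary,
    dropping fields that have no agreed meaning."""
    mapping = PROVIDER_MAPPINGS[provider]
    out = {}
    for key, value in payload.items():
        term = mapping.get(key)
        if term in SHARED_VOCABULARY:
            out[term] = value
    return out
```

Without the shared vocabulary on the right-hand side of each mapping, the agent would face exactly the Tower of Babel the text describes.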
"Composable interface design is not about building widgets. It is about building a language - a shared vocabulary of experience components that agents can speak fluently."
Agent-Assembled Experiences
The most radical implication of composable interfaces is that the agent becomes the interface designer. Not the human designer - the agent. The human designer creates the components and the rules for composition. The agent applies those rules in real time to assemble an experience tailored to the specific human, the specific moment, the specific context.
This is not personalisation as we have known it. Personalisation in the screen era was cosmetic: "Hello, Tony" at the top of the page, a recommended product based on browsing history, a notification badge showing unread messages. The structure of the experience was unchanged. The architecture was fixed. Only the content varied.
Agent-assembled experiences are structurally dynamic. The architecture itself changes. When you open your banking interface on a Monday morning, the agent might assemble a "weekly financial health" view: your spending summary, upcoming bills, investment performance, and a flag for an insurance renewal that is approaching. When you open the same interface on a Friday evening, the agent might assemble a "weekend readiness" view: your available balance, any pending transactions that might affect it, and a reminder that your savings agent has identified a better-rate account.
The components are the same. The assembly is different. And the assembly is determined not by a designer sitting in a product team meeting, but by an agent that has a model of your financial life, your behavioural patterns, and your current context.
The Context Engine
At the heart of every composable interface system is what I call the Context Engine - the agent subsystem that determines which components to assemble, in what configuration, for what purpose. The Context Engine is not a recommendation algorithm. It is an experience architect that operates in real time.
The Context Engine processes four categories of input:
Temporal context. What time is it? What day? What season? Is this a routine check or an unusual visit? Has the human just received an A2UI alert that prompted this visit? Temporal context determines urgency, relevance, and emotional register.
Behavioural context. What has the human done recently? What patterns characterise their usage? Do they typically check their balance daily or weekly? Do they engage deeply with investment data or glance at the summary? Behavioural context determines depth, complexity, and information density.
Situational context. What is happening in the human's financial life right now? Is a mortgage payment due? Has an investment threshold been triggered? Is there an unusual transaction that needs attention? Situational context determines which components are relevant and which can be deferred.
Environmental context. What device is the human using? What is their connectivity? Are they in a private or public setting? Environmental context determines the form factor, the level of sensitive information displayed, and the interaction modality.
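The four categories above can be combined into a simple scoring pass: the engine tags each candidate component with the contexts it serves, scores it against the current context, and assembles the highest-scoring set. This is a deliberately naive sketch; the weights, tags, and the greedy ranking are illustrative assumptions, not a production design.

```python
# Sketch of a Context Engine scoring pass over the four context
# categories. Weights and tags are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Context:
    temporal: str       # e.g. "monday-morning", "friday-evening"
    behavioural: str    # e.g. "daily-checker", "weekly-glancer"
    situational: set    # e.g. {"mortgage-due", "unusual-transaction"}
    environmental: str  # e.g. "mobile-private", "desktop-public"


def score(component_tags: dict, ctx: Context) -> int:
    """Score one candidate component against the current context.
    Situational matches are weighted highest, as they signal urgency."""
    s = 0
    if ctx.temporal in component_tags.get("temporal", ()):
        s += 1
    if ctx.behavioural in component_tags.get("behavioural", ()):
        s += 1
    s += 2 * len(ctx.situational & set(component_tags.get("situational", ())))
    if ctx.environmental in component_tags.get("environmental", ()):
        s += 1
    return s


def assemble(candidates: dict, ctx: Context, limit: int = 3) -> list:
    """Rank candidate components by contextual fit and keep the top few."""
    ranked = sorted(candidates, key=lambda c: score(candidates[c], ctx), reverse=True)
    return ranked[:limit]
```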
Composability in Banking
Banking is the sector where composable interfaces will have the most transformative impact, because banking is the sector with the most fragmented customer experience. A typical UK consumer has financial relationships with five to seven institutions. Each institution has its own monolithic interface. The customer's financial life is scattered across walled gardens that do not communicate with each other.
Open Banking was supposed to solve this. It created the data pipes - the ability for authorised third parties to access account data across institutions. But Open Banking did not solve the experience problem. The data flows, but the experience remains fragmented. Each institution still presents its own monolithic interface, and the aggregators that attempt to unify the view are themselves monoliths - fixed structures that cannot adapt to individual context.
Composable interfaces, assembled by autonomous agents, complete what Open Banking started. The agent pulls data from multiple institutions, selects the relevant components from each, and assembles a unified financial view that is tailored to the human's current context. The "mortgage health" component comes from the mortgage provider. The "spending analysis" component comes from the current account provider. The "investment performance" component comes from the wealth manager. But the assembly - the arrangement, the emphasis, the narrative - is determined by the agent.
This is not a dashboard. Dashboards are static arrangements of data visualisations. This is a composed experience - a dynamic, contextual, agent-curated view of the human's financial life that changes every time they look at it because their context has changed.
The Design System Imperative
Composable interfaces cannot exist without design systems. A shared library of components is the prerequisite for any kind of composition, whether by human developers or autonomous agents.
But the design systems required for composable interfaces are fundamentally different from the design systems we have today. Current design systems are libraries of components designed to be used by human developers in predetermined layouts. They specify visual properties - colours, typography, spacing, interaction patterns - but they do not specify composition rules. They tell you what a button looks like, but they do not tell an agent when to use a button versus a link versus a gesture.
Composable design systems must include what I call composition grammar - a set of rules that govern how components can be combined. Which components can appear together? What is the maximum number of components in a single view? How should components be prioritised when screen space is limited? What transitions should occur when the composition changes? These rules are not visual specifications - they are architectural specifications that the assembling agent must follow.
The composition grammar must also include emotional coherence rules. A composed interface must feel like a unified experience, not a collection of parts. This means the grammar must specify how components relate to each other emotionally - how an urgent alert component affects the tone of neighbouring components, how a celebratory component (a savings goal achieved) should be positioned relative to a cautionary component (an upcoming large payment).
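A fragment of such a grammar might be expressed as a validation pass the assembling agent runs before rendering. The specific rules below, including the maximum view size, the forbidden pairing, and the rule that an urgent component must lead the view, are illustrative assumptions standing in for a much richer rule set.

```python
# Sketch of a composition grammar check: structural rules plus one
# emotional-coherence rule. All rule values are illustrative assumptions.
MAX_COMPONENTS_PER_VIEW = 5
FORBIDDEN_PAIRS = {frozenset({"recent-transactions", "pending-disputes"})}
TONE = {
    "goal-achieved": "celebratory",
    "large-payment-due": "cautionary",
    "fraud-alert": "urgent",
    "balance-display": "neutral",
}


def validate_composition(components: list) -> list:
    """Return a list of grammar violations; an empty list means the
    composition is admissible."""
    violations = []
    if len(components) > MAX_COMPONENTS_PER_VIEW:
        violations.append("too many components in a single view")
    for pair in FORBIDDEN_PAIRS:
        if pair <= set(components):
            violations.append(f"forbidden pairing: {sorted(pair)}")
    # Emotional coherence: an urgent component sets the register for its
    # neighbours, so it must lead the view rather than sit below the fold.
    tones = [TONE.get(c, "neutral") for c in components]
    if "urgent" in tones and tones[0] != "urgent":
        violations.append("urgent component must appear first")
    return violations
```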
"The design system of the agentic age is not a component library. It is a composition grammar - a set of rules that teach agents how to build experiences that feel coherent, contextual, and human."
Multi-Agent Orchestration
The composable interface challenge becomes exponentially more complex when multiple agents are involved. In a mature agentic ecosystem, a human might have a savings agent, a mortgage agent, an insurance agent, an investment agent, and a tax agent - each operating autonomously, each with its own components to surface, each with its own view of what is important right now.
Without orchestration, this becomes a cacophony. Five agents, each assembling their own components, each competing for the human's attention, each unaware of what the others are surfacing. The result is the notification problem at the architectural level - not just too many interruptions, but too many composed views, each optimised for its own domain but collectively overwhelming.
The solution is what I call the orchestration layer - a meta-agent that sits above the domain agents and manages the overall composition. The orchestration layer receives composition requests from each domain agent, evaluates them against the human's current context and attention budget, and assembles a unified experience that balances the competing demands.
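The arbitration at the heart of the orchestration layer can be sketched as a budgeted selection: each domain agent submits a request, and the meta-agent admits the highest-priority requests that fit within the human's attention budget, deferring the rest. The priority scheme, attention costs, and greedy selection below are illustrative assumptions.

```python
# Sketch of an orchestration layer arbitrating between domain agents.
# Priorities, costs, and the greedy policy are illustrative assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class CompositionRequest:
    agent: str           # e.g. "savings-agent", "mortgage-agent"
    component: str
    priority: int        # higher = more important right now
    attention_cost: int  # rough cost of processing this component


def orchestrate(requests: list, attention_budget: int) -> list:
    """Admit the highest-priority requests that fit the attention
    budget; everything else is deferred, not dropped silently."""
    admitted, spent = [], 0
    for req in sorted(requests, key=lambda r: r.priority, reverse=True):
        if spent + req.attention_cost <= attention_budget:
            admitted.append(req.component)
            spent += req.attention_cost
    return admitted
```

A real orchestration layer would need far more than a greedy knapsack, but the sketch captures the essential move: no domain agent reaches the human directly; every composition request passes through a shared arbiter.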
The orchestration layer is the urban planner of the modular city. Individual architects design the buildings. The urban planner determines how they relate to each other - which buildings are adjacent, which streets connect them, which views are preserved, which noise is contained. Without the urban planner, you get sprawl. With the urban planner, you get a city that works.
Designing the orchestration layer is one of the most challenging problems in agentic experience design. It requires understanding not just individual agent capabilities but the relationships between agents, the conflicts between their priorities, and the human's capacity to process information from multiple autonomous sources simultaneously.
The City That Builds Itself
The city I described at the beginning of this essay - the city that rebuilds itself every morning - is not a utopia. It is a design challenge of extraordinary complexity. A city that rearranges itself must still feel like home. The streets may change, but the landmarks must remain. The layout may adapt, but the character must persist. The experience may be dynamic, but the identity must be stable.
This is the central tension of composable interface design: dynamism versus familiarity. The interface must adapt to context, but it must also feel recognisable. The human must be able to find what they need, even when the arrangement has changed. The experience must feel curated, not random - assembled with intelligence, not assembled by accident.
The designers who solve this tension will draw on a skill that has not traditionally been part of the digital design toolkit: the skill of designing systems that generate experiences, rather than designing experiences directly. The designer does not design the interface the human sees. The designer designs the rules by which the agent assembles the interface the human sees. The designer is the city planner, not the architect of individual buildings.
This is a profound shift in the nature of design work. It requires systems thinking at a level that screen-based design never demanded. It requires the ability to reason about emergence - about how simple rules produce complex, coherent outcomes. It requires comfort with the fact that the designer will never see the final product, because the final product is different for every human, every context, every moment.
The modular city is being built. The components are being designed. The agents are learning to assemble. The question is whether the composition grammar - the rules that govern how the city arranges itself - will be written with the rigour, the empathy, and the architectural vision that the challenge demands. That is the work of composable interface design. That is the work of AXD.
"The designer of the agentic age does not design the interface the human sees. The designer designs the rules by which the agent assembles the interface the human sees. The designer is the city planner, not the architect of individual buildings."
