Agentic Design Language
The canonical interaction design reference for autonomous AI interfaces. Seven interaction domains, each with design rules, signal questions, pattern references, and anti-patterns. This is the vocabulary that makes algorithmic authority legible.
Agent Identity Signals
Who is acting, and what is their authority?
Every agent action must be attributable. When an agent writes, reads, delegates, or decides — the interface must make its identity legible without requiring the user to navigate away. Identity signals are not decorative; they are the org chart of your digital life made visible.
Signal question
Is it always clear which agent is acting, under what authority, and on whose behalf?
Design rules
Show agent identity at point of action, not on demand.
Burying agent attribution in an audit log fails the moment of consequence. The user should see who is acting in the same viewport where the action occurs.
Use --ds-color-agency (red) for active agent states only.
Agency red is a semantic token meaning 'an AI is currently exercising authority here.' Using it decoratively destroys its signal value.
Distinguish agent roles visually: orchestrator, executor, observer.
A multi-agent system has a hierarchy. Orchestrators delegate; executors act; observers audit. Collapsing them into a single 'AI' label loses critical accountability information.
Never show an agent as always-on. Reflect actual execution state.
A spinning indicator on a dormant agent creates false urgency and erodes trust. StatusDot must reflect real-time state.
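The identity rules above can be sketched as a minimal data model. All names here (AgentIdentity, statusDotColor, the role and state unions) are illustrative assumptions, not the design system's actual API; only the token names and the StatusDot concept come from this document.

```typescript
// Hypothetical sketch of an agent identity model. Names are illustrative.
type AgentRole = "orchestrator" | "executor" | "observer";
type ExecutionState = "idle" | "running" | "blocked";

interface AgentIdentity {
  id: string;
  name: string;
  role: AgentRole;        // rendered as a distinct role badge, never a bare "AI"
  onBehalfOf: string;     // whose authority this agent acts under
  state: ExecutionState;  // StatusDot mirrors this; never show a dormant agent as active
}

// Agency red is reserved for agents actively exercising authority.
function statusDotColor(agent: AgentIdentity): string {
  return agent.state === "running"
    ? "var(--ds-color-agency)" // active: the semantic red token
    : "var(--ds-text-muted)";  // dormant or blocked: neutral, no false urgency
}
```

The point of the sketch is the invariant: the rendered state is derived from the agent's actual execution state, so the UI cannot drift into showing an always-on agent.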
Anti-pattern
A dashboard footer that says 'Powered by AI' — with no indication of which model, which agent, what permissions it has, or what it did.
Patterns & components
Token references
--ds-color-agency
--ds-color-temporal
--ds-surface-inverse
Trust & Provenance Display

Where did this come from, and how much should I trust it?
Data lineage is not a compliance checkbox. It is the primary interface for calibrated trust. When an agent surfaces a number, a recommendation, or a synthesis, the user deserves to know: the source, its age, its transformation path, and the model's confidence. Collapsing this into a single 'AI generated' disclaimer is a design failure.
Signal question
Can the user trace any data point back to its source without leaving the current view?
Design rules
Every synthesized output must link to at least one source.
Synthesis without citation is assertion. Users cannot evaluate, correct, or act responsibly on information they cannot trace.
Show data freshness inline, not in a tooltip.
If data is 3 hours old in a fast-moving market, that information is decision-critical. Hiding it behind a hover interaction means most users never see it.
Distinguish retrieval confidence from model confidence.
A model may be very confident in its reasoning about uncertain data. These are different claims and require separate signals.
Use --ds-color-validation (gold) for verification and quality signals.
Gold means 'this has been checked, enriched, or validated.' It is the semantic token for epistemic status.
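A minimal provenance record makes these rules concrete. The shape below is an assumption for illustration (SourceRef, ProvenancedClaim, and freshnessLabel are invented names); what it encodes comes from the rules: every claim carries sources, freshness is computed for inline display, and retrieval confidence is stored separately from model confidence.

```typescript
// Hypothetical provenance shape; field names are assumptions.
interface SourceRef {
  url: string;
  retrievedAt: Date; // drives the inline freshness display, not a tooltip
}

interface ProvenancedClaim {
  text: string;
  sources: SourceRef[];        // must be non-empty: synthesis without citation is assertion
  retrievalConfidence: number; // "did we find the right documents?"
  modelConfidence: number;     // "how sure is the model of its own reasoning?"
}

// Freshness rendered inline next to the data point.
function freshnessLabel(src: SourceRef, now: Date): string {
  const hours = Math.floor((now.getTime() - src.retrievedAt.getTime()) / 3_600_000);
  return hours < 1 ? "just now" : `${hours}h old`;
}
```

Keeping the two confidence fields separate is the design decision: a single merged score would collapse "confident reasoning over uncertain data" into one number and lose exactly the distinction the rule calls out.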
Anti-pattern
A research summary with a single footnote: 'Sources: internet.' No links, no timestamps, no confidence scores.
Patterns & components
Token references
--ds-color-validation
--ds-color-temporal
--ds-text-muted
Multi-Agent Coordination UI
How do agents hand off, coordinate, and report?
When agents work together, the interface must reveal the coordination topology — not hide it. Users need to understand: which agent delegated to which, what each agent was authorized to do, and how their outputs were combined. A black-box pipeline that surfaces only its final output is not transparent; it is opaque with a summary attached.
Signal question
Can a user understand, at a glance, how many agents were involved and what each one did?
Design rules
Render handoff events as explicit state transitions, not silent background actions.
When Agent A delegates to Agent B, that transition has implications for accountability, latency, and cost. It should be observable.
Show the delegation graph when more than two agents are involved.
Linear timelines break down at 3+ agents. A graph makes orchestration topology immediately legible.
Display agent-to-agent communication as first-class events.
An internal API call between agents is no different from a human decision-point in terms of accountability. Both should appear in the audit trail.
Label each agent's scope boundary. What was it allowed to access?
Permission scope is not just a security concern — it is the primary variable a user needs to evaluate how much to trust each agent's output.
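One way to make handoffs and agent-to-agent messages first-class is to model them as a typed event log and derive the delegation graph from it. This is a sketch under assumed names (CoordinationEvent, delegationEdges); the document does not specify an event schema.

```typescript
// Hypothetical event model for the coordination audit trail.
type CoordinationEvent =
  | { kind: "delegation"; from: string; to: string; scope: string[] } // explicit state transition
  | { kind: "message"; from: string; to: string; summary: string }   // agent-to-agent, first-class
  | { kind: "result"; agent: string; summary: string };

// Derive the edges of the delegation graph for rendering when 3+ agents are involved.
function delegationEdges(events: CoordinationEvent[]): Array<[string, string]> {
  const edges: Array<[string, string]> = [];
  for (const e of events) {
    if (e.kind === "delegation") edges.push([e.from, e.to]);
  }
  return edges;
}
```

Because every delegation carries its scope, the same log answers both signal questions at once: who was involved, and what each agent was allowed to access.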
Anti-pattern
A workflow result that says 'processed by 4 agents' with no detail on which agents, what they did, or what data they accessed.
Patterns & components
Token references
--ds-color-agency
--ds-border-structure
--ds-surface-raised
Approval Ceremony Flows
Friction that creates meaning.
Consequential AI actions must feel weighty. The design philosophy is ceremony over efficiency for high-stakes decisions. An approval interaction should communicate: what is being authorized, what the consequences are, which agent is requesting it, and what the user is forfeiting if they decline. Approval UIs that optimize purely for conversion rate have inverted their purpose.
Signal question
Does this approval action feel proportional to the consequence it gates?
Design rules
Show consequence disclosure before, not after, the approval action.
An irreversible action that reveals its consequences only in a post-confirmation screen is a dark pattern regardless of intent.
Name the requesting agent explicitly in every approval prompt.
Approving 'access to your calendar' is meaningless without knowing which agent is requesting it and why.
Calibrate friction to reversibility. Irreversible actions require more ceremony.
A 'send email' confirmation and a 'delete all records' confirmation should not look identical. The visual weight of the interaction signals the weight of the decision.
Provide a 'why this now' explanation in the approval context.
An approval request with no explanation of why the agent needs this permission at this moment will be declined or rubber-stamped. Neither outcome serves the user.
Never auto-approve. Even low-stakes approvals must have a human moment.
The approval ceremony is also a system health check. Auto-approvals erode the human's capacity to notice when the ceremony becomes inappropriate.
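Calibrating friction to reversibility can be expressed as a direct mapping, alongside a request shape that forces every approval prompt to carry the agent name, the consequence, and the "why now" explanation. All names below (Reversibility, Ceremony, ApprovalRequest, ceremonyFor) are illustrative assumptions.

```typescript
// Hypothetical friction calibration: heavier ceremony for less reversible actions.
type Reversibility = "reversible" | "delayed" | "irreversible";
type Ceremony = "confirm" | "typed-confirmation" | "two-step-hold";

function ceremonyFor(reversibility: Reversibility): Ceremony {
  switch (reversibility) {
    case "reversible":   return "confirm";             // light: a send-email level action
    case "delayed":      return "typed-confirmation";  // medium: recoverable within a window
    case "irreversible": return "two-step-hold";       // maximal ceremony: delete-all-records
  }
}

// A prompt cannot be constructed without the fields the rules require.
interface ApprovalRequest {
  agentName: string;    // which agent is asking
  action: string;       // what is being authorized
  consequence: string;  // disclosed before, not after, the approval action
  whyNow: string;       // why this permission is needed at this moment
  reversibility: Reversibility;
}
```

Making the fields required in the type is the enforcement mechanism: the 'AI wants to send this email. [Cancel] [OK]' anti-pattern becomes unconstructable.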
Anti-pattern
A modal that says 'AI wants to send this email. [Cancel] [OK]' — no agent name, no email preview, no explanation of why now.
Patterns & components
Token references
--ds-color-agency
--ds-color-error
--ds-color-validation
Human-in-the-Loop Handoff
The grammar of AI ↔ human transitions.
Human-in-the-loop is not an edge case — it is the default design posture. Every AI action should have a defined handoff pattern: the AI presents, the human reviews, the human decides, the AI executes. When the AI cannot proceed without human input, the interface must make the wait state legible and the return path clear. Ambiguous handoffs leave users unable to act and agents unable to continue.
Signal question
When control transfers between AI and human, is the transition unambiguous to both parties?
Design rules
Make the blocked-waiting state visually distinct from active-processing.
An agent waiting for human approval looks identical to an agent running if both show a spinner. The user cannot tell if they need to act.
Show exactly what human input is required. Not a general 'review needed.'
Vague review requests create decision paralysis. 'Approve the email draft' is actionable. 'AI needs your attention' is not.
Provide a correction path at every AI output, not just error states.
The ability to correct is not just for failures. A user should be able to redirect, refine, or reject any AI output, even a technically correct one.
After human action, confirm the AI has received it and is proceeding.
A human who approves an action and sees no visible response cannot tell if their input was registered. Explicit acknowledgment closes the feedback loop.
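The handoff grammar above amounts to a small state machine: processing, blocked on a named human ask, explicit acknowledgment, then resumption. The states and function below are a sketch under assumed names, not a specified protocol.

```typescript
// Hypothetical handoff state machine; state names are illustrative.
type HandoffState =
  | "agent-processing"    // active computation: shown as activity, not a generic spinner
  | "awaiting-human"      // blocked: visually distinct, with the exact ask named
  | "human-acknowledged"  // explicit confirmation that the input was received
  | "agent-resuming";     // the AI visibly proceeds after acknowledgment

function onHumanDecision(state: HandoffState): HandoffState {
  if (state !== "awaiting-human") {
    // An agent that is processing or resuming has no pending ask to decide on.
    throw new Error(`no human input expected in state "${state}"`);
  }
  // Acknowledge receipt before resuming: this closes the feedback loop.
  return "human-acknowledged";
}
```

Separating "awaiting-human" from "agent-processing" at the type level is what lets the two render differently, which is the first rule's whole point.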
Anti-pattern
An AI that stops and shows 'Awaiting review' with no indication of what is being reviewed, by whom, or what happens when the review is complete.
Patterns & components
Token references
--ds-color-temporal
--ds-color-agency
--ds-surface-raised
LLM Latency & Loading States
Honesty about time and computation.
LLM inference has distinct temporal characteristics that differ from traditional API loading: streaming output, variable latency, token-by-token generation, and context window consumption. Loading states designed for REST APIs are semantically wrong for LLM responses. The design language must represent these characteristics accurately — false immediacy is as dishonest as a fabricated statistic.
Signal question
Does this interface accurately communicate the nature, progress, and cost of computation in flight?
Design rules
Distinguish streaming from loading. Use StreamingDot for token generation, Skeleton for layout reservation.
A skeleton implies a known shape will arrive. Streaming implies content is being generated in real time. These are different temporal experiences and require different signals.
Show reasoning traces when the model is thinking, not just when it responds.
Chain-of-thought reasoning is computation the user paid for. Making it visible improves calibration and builds appropriate trust in the output.
Display token cost and context window consumption inline for power users.
Context window limits and token costs are first-class constraints in LLM workflows. Hiding them produces surprise failures and unexpected bills.
Represent retry and failure states distinctly from initial loading.
A second attempt is semantically different from a first attempt. The RetryLedger pattern exists to make this distinction visible without causing alarm.
Never fake streaming with setTimeout reveals. Reveal content as it arrives.
Simulated streaming is deceptive. It delays the user's ability to act on partial output and misrepresents the model's actual generation pattern.
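Honest streaming means the view updates when a token actually arrives, with no artificial pacing in between. The sketch below assumes an async token stream; `generate` is a stand-in for a real network stream, and `streamToView` is an invented name.

```typescript
// Stand-in for an LLM token stream; in reality tokens arrive over the network.
async function* generate(tokens: string[]): AsyncGenerator<string> {
  for (const t of tokens) yield t;
}

// Reveal content exactly as it arrives: no setTimeout, no simulated cadence.
async function streamToView(
  stream: AsyncGenerator<string>,
  render: (partial: string) => void
): Promise<string> {
  let text = "";
  for await (const token of stream) {
    text += token;
    render(text); // the user can act on partial output immediately
  }
  return text;
}
```

Because rendering is driven by the stream itself, the interface's temporal behavior is the model's temporal behavior, which is the honesty this section asks for.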
Anti-pattern
A full-page spinner that says 'Thinking...' for 8 seconds, then dumps a complete 500-word response with no indication of what was computed or how.
Patterns & components
Token references
--ds-motion-stream
--ds-color-temporal
--ds-type-mono-font
Uncertainty Communication
What the model doesn't know, expressed honestly.
LLMs produce outputs with variable epistemic status: some claims are well-grounded, others are extrapolations, some are fabrications presented as facts. The interface must give the model's uncertainty a visual language — not to undermine trust, but to calibrate it correctly. An interface that presents all outputs with equal confidence trains users to misuse AI.
Signal question
Can a user tell, from the interface alone, how much confidence to place in any given output?
Design rules
Map confidence scores to visual weight, not just color.
Users with color vision deficiencies cannot read a confidence heatmap. Use size, opacity, and layout density as primary uncertainty signals.
Distinguish types of uncertainty: retrieval uncertainty, model uncertainty, and data uncertainty.
'I couldn't find a source' is different from 'my training suggests this but I'm not sure' which is different from 'the source data itself is ambiguous.' Each requires a different response from the user.
Show alternatives when confidence is below threshold.
A low-confidence single answer is worse than presenting three plausible alternatives. Let the user choose rather than defaulting to the model's best guess.
Never suppress low-confidence outputs without surfacing the uncertainty.
Filtering out uncertain outputs creates a false impression of high-confidence results. Show the uncertainty; let the user decide what to do with it.
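Mapping confidence to visual weight rather than color alone, and surfacing alternatives below a threshold, can be sketched as two small functions. The names (weightFor, presentation) and the 0.7 threshold are assumptions for illustration; the document does not fix a threshold.

```typescript
// Hypothetical confidence-to-weight mapping: opacity and size, not color alone,
// so users with color vision deficiencies can still read the signal.
interface VisualWeight {
  opacity: number;  // low confidence fades rather than recoloring
  fontSize: string;
}

function weightFor(confidence: number): VisualWeight {
  const c = Math.min(1, Math.max(0, confidence)); // clamp to [0, 1]
  return {
    opacity: 0.4 + 0.6 * c, // 0.4 at zero confidence, 1.0 at full
    fontSize: c >= 0.7 ? "1rem" : "0.875rem",
  };
}

// Below the threshold, show alternatives instead of a single best guess.
function presentation(candidates: Array<{ text: string; confidence: number }>) {
  const top = candidates[0];
  return top.confidence >= 0.7 ? [top] : candidates.slice(0, 3);
}
```

Note that low-confidence candidates are shown with their uncertainty visible, never silently filtered, which is the last rule's requirement.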
Anti-pattern
A legal document summary with no confidence indicators, presented identically whether the model found 10 directly relevant sources or zero.
Patterns & components
Token references
--ds-color-validation
--ds-color-agency
--ds-text-muted
Related
Agent Identity
Avatar system, role badges, capability indicators
Related
Trust & Provenance
Reasoning chains, approval ceremony, confidence states
Related
LLM Latency States
Streaming, tool-calling, thinking, failure communication
Related
Agentic Playground
Live interactive demos of all core agentic components
Related
Patterns
Full catalog of 86 AI-native pattern components
Related
A2UI Architecture
Protocol design for agent-to-UI surfaces