Design Axioms

Seven Principles

Every component and pattern in this system is a direct implementation of one of these seven axioms. The principles are not aesthetic preferences — they are structural commitments.

01

Visibility Over Invisibility

Make the algorithm present, not hidden. Default to revealing what's happening. The starting assumption is revelation.

In the system

The AgencyIndicator component renders a red orbital animation on any surface an AI agent is actively reading or writing — not in a status bar at the bottom of the screen, but on the exact panel being touched. The user sees the agency event happening where it happens.
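The per-surface state behind this could be modeled as below. This is a hypothetical sketch, not the real AgencyIndicator implementation; the names `AgencyTracker`, `AgencyEvent`, and `surfaceId` are assumptions introduced for illustration.

```typescript
// Hypothetical model of per-surface agency state. A panel queries its own
// surfaceId to decide whether to render the orbital indicator on itself,
// rather than reporting activity to a global status bar.
type AgencyEvent = { surfaceId: string; mode: "reading" | "writing" };

class AgencyTracker {
  private active = new Map<string, "reading" | "writing">();

  // An agent begins touching a surface: the indicator renders there.
  begin(event: AgencyEvent): void {
    this.active.set(event.surfaceId, event.mode);
  }

  // The agent finishes: the animation on that exact panel stops.
  end(surfaceId: string): void {
    this.active.delete(surfaceId);
  }

  // Each panel asks only about itself.
  indicatorFor(surfaceId: string): "reading" | "writing" | null {
    return this.active.get(surfaceId) ?? null;
  }
}

const tracker = new AgencyTracker();
tracker.begin({ surfaceId: "treasury-panel", mode: "reading" });
tracker.indicatorFor("treasury-panel"); // "reading" — indicator shown here
tracker.indicatorFor("settings-panel"); // null — no indicator elsewhere
```

The design point is in the query shape: activity is keyed by surface, so visibility is local to where the agency event happens.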

Why this matters for AI-native design

In conventional software, the system does exactly what you tell it. In AI-native systems, the system can act without explicit instruction. Inference happens silently by default. Visibility isn't optional here — it's the difference between a tool and a black box.

Can a user tell, at a glance, whether AI is active in this interface?
Anti-pattern

An AI feature that only reveals its presence in a settings menu or privacy policy.

02

Ceremony Over Efficiency

Important actions should feel weighty. Granting an AI access to your calendar isn't a checkbox — it's a decision with real consequences. Friction, applied deliberately, creates meaning.

In the system

The ConsentFlow pattern requires a user to explicitly name the scope of agent access — not just 'allow calendar access,' but 'allow this agent to read events from this calendar for the next 30 days.' Each delegation step renders the exact boundary being granted and demands a named confirmation, not a toggle.
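A minimal sketch of that grant shape, assuming a `ConsentScope` record and a `grantConsent` function (both illustrative names, not the actual ConsentFlow API):

```typescript
// Hypothetical consent grant: the scope names the exact boundary, and the
// confirmation must name the agent receiving access — not a toggle.
type ConsentScope = {
  agent: string;                  // who receives access
  resource: string;               // e.g. "calendar:work"
  permission: "read" | "write";   // exact capability
  expiresAt: Date;                // time-bounded, never indefinite
};

function grantConsent(scope: ConsentScope, typedConfirmation: string): ConsentScope {
  if (typedConfirmation !== scope.agent) {
    throw new Error("Confirmation must name the agent receiving access.");
  }
  if (scope.expiresAt.getTime() <= Date.now()) {
    throw new Error("Consent must expire at a future time.");
  }
  return scope;
}

// "Allow this agent to read events from this calendar for the next 30 days."
const scope: ConsentScope = {
  agent: "scheduler-agent",
  resource: "calendar:work",
  permission: "read",
  expiresAt: new Date(Date.now() + 30 * 24 * 60 * 60 * 1000),
};
grantConsent(scope, "scheduler-agent"); // succeeds: named confirmation matches
```

Note what the type makes impossible: there is no way to express an unscoped or unexpiring grant, which is the ceremony encoded structurally.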

Why this matters for AI-native design

AI systems acquire permissions incrementally and often ambiently. Without ceremony, a user may not realize they've granted persistent access until it's exercised weeks later. Ceremony is the UX mechanism for trust calibration — the central problem in agentic system design.

Does this delegation action have appropriate weight? Would a user accidentally trigger it?
Anti-pattern

A single checkbox or instant toggle that grants an agent calendar write access.

03

Auditability as Aesthetic

The ability to trace what happened — who acted, when, why, with what outcome — is not just a compliance requirement. It's a design value. Beautiful systems are inspectable systems.

In the system

The AuditTrail pattern surfaces every agent action as a first-class record: tool invoked, arguments passed, result returned, timestamp, duration. Each row is expandable to show the full call context. The component is designed to be skimmable — tracing what happened should feel as fast as reading a feed.
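The record shape might look like the following. This is a sketch under assumed field names (`AuditRecord`, `summarize`); the real AuditTrail schema may differ.

```typescript
// Hypothetical first-class audit record: every agent action carries
// tool, arguments, result, timestamp, and duration.
type AuditRecord = {
  agent: string;
  tool: string;
  args: unknown;        // full call context, shown on expand
  result: unknown;
  timestamp: string;    // ISO 8601
  durationMs: number;
};

// The collapsed, skimmable row: one line per action, feed-style.
function summarize(r: AuditRecord): string {
  return `${r.timestamp} · ${r.agent} → ${r.tool} (${r.durationMs}ms)`;
}

const record: AuditRecord = {
  agent: "treasury-agent",
  tool: "fetch_balance",
  args: { account: "operating" },
  result: { balance: 120000 },
  timestamp: "2025-06-01T14:03:12Z",
  durationMs: 840,
};
summarize(record); // "2025-06-01T14:03:12Z · treasury-agent → fetch_balance (840ms)"
```

The two-level shape (skimmable summary, expandable full context) is what makes the trail readable as a feed without losing inspectability.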

Why this matters for AI-native design

When an AI makes a decision or takes an action, the reasoning is often opaque and the data pathway invisible. Auditability closes the loop: even when process is hidden, the record of outcome is explicit. In regulated industries — finance, health, legal — this is the difference between a deployable system and an unusable one.

Can a user reconstruct exactly what happened, without leaving the interface?
Anti-pattern

A generated summary with no trace back to source data or agent actions.

04

Temporal Honesty

Represent time accurately. If computation was instant, find an aesthetics of instantaneity. If it took time, show duration. Don't collapse time into a false 'now'.

In the system

The DataTag component stamps every data value with its retrieval timestamp and source. A treasury balance pulled from a cached API call shows '2h ago · Kyriba API' alongside the figure — not a live indicator, not a spinner, an honest age label. There is no 'fresh data' UX lie.
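The age label described above could be computed like this. The function name `ageLabel` is an assumption for illustration; the formatting follows the '2h ago · Kyriba API' example from the text.

```typescript
// Hypothetical age-label formatter: an honest staleness stamp,
// not a live indicator or a spinner.
function ageLabel(retrievedAt: Date, source: string, now: Date = new Date()): string {
  const minutes = Math.floor((now.getTime() - retrievedAt.getTime()) / 60_000);
  if (minutes < 1) return `just now · ${source}`;
  if (minutes < 60) return `${minutes}m ago · ${source}`;
  const hours = Math.floor(minutes / 60);
  if (hours < 24) return `${hours}h ago · ${source}`;
  return `${Math.floor(hours / 24)}d ago · ${source}`;
}

// A treasury balance from a cached API call, two hours old:
ageLabel(
  new Date("2025-06-01T10:00:00Z"),
  "Kyriba API",
  new Date("2025-06-01T12:00:00Z"),
); // "2h ago · Kyriba API"
```

Passing `now` explicitly keeps the label deterministic and testable; in the UI it would default to the current time.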

Why this matters for AI-native design

LLM knowledge has a training cutoff. Retrieved data has a refresh latency. Agent actions have processing time. AI-native interfaces that collapse all this into an instant 'now' are systematically misleading. Temporal honesty surfaces properties — staleness, latency, duration — that traditional software never needed to expose.

Is this interface honest about when data was retrieved, how old it is, and how long processing took?
Anti-pattern

Displaying an AI response with no indication of when the model was trained or when data was retrieved.

05

Sovereignty as Feeling

Your data, your attention, your agency should feel like yours. Not abstractly, in a privacy policy, but perceptually, in the interface. The vault metaphor: substantial, secure, yours.

In the system

The VaultBoundary pattern renders data that belongs to you inside a visually distinct container — a heavier border, a background shift, a lock icon at the corner. When an AI requests access, the vault border animates: the boundary becomes visible exactly at the moment a decision is being made about it.
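One way to sketch the boundary's visual state machine — names (`VaultState`, `boundaryStyle`) and style values are illustrative assumptions, not the pattern's actual implementation:

```typescript
// Hypothetical vault boundary states: the border animates only while
// an access decision is pending, making the boundary visible exactly
// at the moment it matters.
type VaultState = "sealed" | "access-requested" | "open";

function boundaryStyle(state: VaultState): { borderWidth: number; animate: boolean } {
  switch (state) {
    case "sealed":
      return { borderWidth: 2, animate: false }; // heavier border, lock icon
    case "access-requested":
      return { borderWidth: 3, animate: true };  // AI is asking: boundary pulses
    case "open":
      return { borderWidth: 2, animate: false }; // access granted, border settles
  }
}
```

The point of the three-state shape is that the animation is tied to the decision moment, not to ambient AI activity.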

Why this matters for AI-native design

In conventional software, 'your data' is abstract — stored on servers you don't control, governed by terms you haven't read. AI systems that continuously process your data need to make sovereignty perceptual, not just legal. If users can't feel what they own, they can't make meaningful decisions about access.

Does this interface give the user a perceptible sense of what data they own and what AI can access?
Anti-pattern

A permission grant buried in app settings, expressed only as text, never visible in the active interface.

06

Agent Legibility

When AI acts on your behalf, you should always know: who, what, when, and under what authority. This is the org chart for your digital life — not hidden in settings, but visible in the interface.

In the system

In the Nodus task interface, every activity log entry is attributed to a named agent with its authority tier — not just 'AI did this,' but 'Codex (tier 2: code-only) ran a shell command at 14:03:12.' The agent card is always one click away from any attributed action.
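The attribution line could be assembled as below. `Agent`, `Action`, and `attribution` are hypothetical names; the output format follows the Codex example above.

```typescript
// Hypothetical attribution: every log entry names the agent,
// its authority tier, and the timestamp of the action.
type Agent = { name: string; tier: number; authority: string };
type Action = { agent: Agent; verb: string; at: string };

function attribution(a: Action): string {
  return `${a.agent.name} (tier ${a.agent.tier}: ${a.agent.authority}) ${a.verb} at ${a.at}.`;
}

attribution({
  agent: { name: "Codex", tier: 2, authority: "code-only" },
  verb: "ran a shell command",
  at: "14:03:12",
}); // "Codex (tier 2: code-only) ran a shell command at 14:03:12."
```

Keeping the `Agent` object attached to every `Action` is what makes the agent card one click away: the attribution is a reference, not a string.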

Why this matters for AI-native design

Multi-agent pipelines create a maze of delegated authority where no single actor is clearly responsible. This principle directly addresses the accountability gap: in AI-native systems, tracing authority is not a debugging task — it is a core UX requirement. The org chart for your digital life must be visible, not inferred.


Is it always clear which agent is acting, what authority it was given, and who delegated it?
Anti-pattern

A multi-agent pipeline that surfaces only its final output, with no trace of which agents ran or what they were permitted to do.

07

Learning Acceleration

Does this interaction make the system measurably better at future interactions? Every feedback signal, every correction, every retry should close a loop. The system should visibly learn — not just execute. Theory, decision, outcome, delta, update, theory.

In the system

The LearningLedger pattern records each feedback signal — a corrected output, a rejected suggestion, a flagged hallucination — and displays it in the agent's profile alongside a change-in-behavior indicator. The update is not just stored; it is shown. The user can see that their correction was applied.
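A minimal sketch of that ledger, assuming the names `FeedbackSignal` and `LearningLedger` (illustrative, not the real pattern's API):

```typescript
// Hypothetical learning ledger: each feedback signal is recorded, and —
// crucially — each one tracks whether it was actually applied, so the
// profile can show a change-in-behavior indicator rather than a void.
type FeedbackSignal = {
  kind: "correction" | "rejection" | "hallucination-flag";
  detail: string;
  at: string; // ISO 8601
};

class LearningLedger {
  private entries: { signal: FeedbackSignal; applied: boolean }[] = [];

  record(signal: FeedbackSignal): number {
    this.entries.push({ signal, applied: false });
    return this.entries.length - 1; // index, for later confirmation
  }

  // Called when the correction has demonstrably changed behavior.
  markApplied(index: number): void {
    this.entries[index].applied = true;
  }

  appliedCount(): number {
    return this.entries.filter((e) => e.applied).length;
  }

  pendingCount(): number {
    return this.entries.length - this.appliedCount();
  }
}
```

The `applied` flag is the closed loop: a signal that is recorded but never marked applied is visible as pending, which is exactly the evaporation the anti-pattern below describes.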

Why this matters for AI-native design

AI systems that appear to learn but don't close the loop create false confidence. Visible improvement — where the user can see that a correction was applied and behavior changed — creates calibrated trust. This principle makes the learning loop an observable system property, not a background optimization process.

Can a user see evidence that the system learned from prior interactions?
Anti-pattern

A feedback button that says 'Thank you!' but never influences future behavior. The signal evaporates.