

AI & Agent

Five components purpose-built for AI-native interfaces: streaming output, context visibility, model configuration, tool call observability, and generative form assistance. Each integrates with the Claude API and follows the Nodus token system.

DAN-153 · Sprint 26 · claude-sonnet-4-6 · streaming · tool use

Streaming State Machine

IDLE

No active stream. Content may be stale or empty.

STREAMING

Tokens arriving. Cursor blinks. Auto-scroll active.

COMPLETE

Full response received. Copy enabled. Cursor hidden.

ERROR

Stream interrupted. Error message shown in header.

IDLE → STREAMING: stream start
STREAMING → COMPLETE: stream end
STREAMING → ERROR: API error / abort
ERROR → IDLE: reset
COMPLETE → IDLE: reset / new request
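The transition table above can be expressed as a small reducer. A minimal sketch; the event names and the `next` helper are illustrative, not part of the component's API:

```typescript
// Illustrative encoding of the streaming state machine documented above.
type StreamState = "idle" | "streaming" | "complete" | "error";
type StreamEvent =
  | "stream_start"
  | "stream_end"
  | "api_error"
  | "abort"
  | "reset"
  | "new_request";

// Each state lists only its legal outgoing transitions.
const TRANSITIONS: Record<StreamState, Partial<Record<StreamEvent, StreamState>>> = {
  idle:      { stream_start: "streaming" },
  streaming: { stream_end: "complete", api_error: "error", abort: "error" },
  complete:  { reset: "idle", new_request: "idle" },
  error:     { reset: "idle" },
};

function next(state: StreamState, event: StreamEvent): StreamState {
  // Unknown transitions are ignored rather than thrown, so stray events are safe.
  return TRANSITIONS[state][event] ?? state;
}
```

Keeping the table data-driven makes the legal transitions auditable at a glance and prevents ad-hoc state flips scattered through event handlers.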

StreamingArtifact

Renders streamed code, markdown, or text with progressive disclosure, syntax highlighting, diff view, and copy action.

View component →

When to use

  • Displaying AI-generated code as it streams in
  • Showing before/after diffs from an AI edit
  • Rendering markdown output from an agent response

When not to use

  • Static content that will never stream — use a plain code block instead
  • Short single-line values — use DataTag or Badge

Key Props

  • content* (string): grows as the stream progresses
  • kind ("code" | "markdown" | "text")
  • state ("streaming" | "complete" | "error")
  • diff (DiffLine[]): activates diff rendering mode
  • language (string): e.g. 'typescript', 'python'

Integration with Claude API

import { StreamingArtifact } from "@/components/ai";

<StreamingArtifact
  kind="code"
  language="typescript"
  state="streaming"
  content={streamedContent}
  label="Generated function"
/>
Variants: code · markdown · text · diff
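The `streamedContent` value above can be accumulated from Claude's streaming events. A minimal sketch, assuming the Messages API event shapes (`content_block_delta` with a `text_delta`, and `message_stop`); the `reduceStream` helper itself is illustrative, not part of the component:

```typescript
// Minimal accumulator for StreamingArtifact's `content` and `state` props.
// Event shapes follow the Claude Messages streaming format; the helper is
// an illustrative sketch, not part of the component.
type ClaudeStreamEvent =
  | { type: "content_block_delta"; delta: { type: "text_delta"; text: string } }
  | { type: "message_stop" };

function reduceStream(events: ClaudeStreamEvent[]): {
  content: string;
  state: "streaming" | "complete";
} {
  let content = "";
  let state: "streaming" | "complete" = "streaming";
  for (const ev of events) {
    if (ev.type === "content_block_delta" && ev.delta.type === "text_delta") {
      content += ev.delta.text; // append each token delta as it arrives
    } else if (ev.type === "message_stop") {
      state = "complete"; // flips the artifact out of its streaming state
    }
  }
  return { content, state };
}
```

In a live UI you would fold each incoming event into React state instead of batching them, but the reduction logic is the same.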

ContextWindowMeter

Visualizes token usage across a context window as a segmented bar (system/user/assistant/tools), with an overflow warning and legend. Shows a warning state at 80% usage and a critical state at 95%.

View component →

When to use

  • Showing users how much context remains in a Claude API call
  • Debugging prompt token distribution in dev tooling
  • Compact status indicator on agent dashboards

When not to use

  • Showing non-token metrics — use a standard ProgressBar or Meter component
  • General purpose usage bars unrelated to AI context

Key Props

  • segments* (TokenSegment[]): label + tokens per segment
  • limit* (number): total context window size in tokens
  • model (string): displayed in header
  • warnAt (number)
  • compact (boolean)

Integration with Claude API

import { ContextWindowMeter } from "@/components/ai";

<ContextWindowMeter
  model="claude-sonnet-4-6"
  limit={200000}
  segments={[
    { label: "system",    tokens: 1200 },
    { label: "user",      tokens: 8400 },
    { label: "assistant", tokens: 3100 },
    { label: "tools",     tokens: 640 },
  ]}
/>
Variants: default (full legend) · compact (bar + total only)
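The 80% / 95% thresholds can also be computed outside the component, e.g. to gate a request before sending it. A sketch under those documented defaults; `meterStatus` and its parameter names are assumptions, not the component's API:

```typescript
// Illustrative helper matching the warning (80%) and critical (95%)
// thresholds described above. Only the TokenSegment shape comes from
// the component's documented props.
interface TokenSegment {
  label: string;
  tokens: number;
}

function meterStatus(
  segments: TokenSegment[],
  limit: number,
  warnAt = 0.8,
  criticalAt = 0.95,
): "ok" | "warning" | "critical" {
  // Sum all segments, then compare the fill ratio against both thresholds.
  const used = segments.reduce((sum, s) => sum + s.tokens, 0);
  const ratio = used / limit;
  if (ratio >= criticalAt) return "critical";
  if (ratio >= warnAt) return "warning";
  return "ok";
}
```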

ModelSelector

Model comparison dropdown or inline panel for the Claude model family. Each option shows capability tags, context window, latency indicator, and cost tier.

View component →

When to use

  • Letting users choose which Claude model to run a task on
  • Admin panels where operators configure model defaults
  • Side-by-side model comparison in evaluation tooling

When not to use

  • Simple single-model UIs where the model is fixed — use a static label instead
  • Non-Claude model families — props are designed for Anthropic model metadata

Key Props

  • options* (ModelOption[]): id, name, capabilities, costTier, latencyTier, contextWindow
  • value (string): selected model ID
  • onChange ((modelId: string) => void)
  • variant ("dropdown" | "panel")

Integration with Claude API

import { ModelSelector } from "@/components/ai";

const MODELS = [
  {
    id: "claude-sonnet-4-6",
    name: "Claude Sonnet 4.6",
    contextWindow: 200000,
    capabilities: ["tools", "streaming", "files"],
    costTier: "standard",
    latencyTier: "standard",
    badge: "Recommended",
  },
  {
    id: "claude-opus-4-6",
    name: "Claude Opus 4.6",
    contextWindow: 200000,
    capabilities: ["tools", "streaming", "extended-thinking"],
    costTier: "premium",
    latencyTier: "slow",
  },
];

<ModelSelector
  options={MODELS}
  value={selectedModel}
  onChange={setSelectedModel}
  variant="panel"
/>
Variants: dropdown · panel
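Options can be narrowed before they reach the selector, e.g. to only offer tool-capable models for an agent workflow. A hypothetical helper; `ModelOption` here mirrors the prop shape documented above, while `withCapabilities` is an assumption:

```typescript
// Filter ModelSelector options to models that support every required
// capability. Illustrative helper; only the option shape comes from
// the component's documented props.
interface ModelOption {
  id: string;
  name: string;
  contextWindow: number;
  capabilities: string[];
  costTier: string;
  latencyTier: string;
  badge?: string;
}

function withCapabilities(options: ModelOption[], required: string[]): ModelOption[] {
  return options.filter((opt) => required.every((c) => opt.capabilities.includes(c)));
}
```

Filtering upstream keeps the selector itself presentation-only, so the same component works in both end-user and admin surfaces.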

FunctionCallVisualizer

Renders AI tool call execution: name, input JSON, status badge (pending/running/done/error), output preview, duration, call ID. Collapsible detail panel.

View component →

When to use

  • Showing tool use in an agent reasoning trace
  • Audit logs where users need to inspect what a tool was called with
  • Real-time agent dashboards streaming active tool calls

When not to use

  • General API call logs unrelated to AI tool use — use a table or log viewer
  • High-frequency automated calls where rendering hundreds of instances is needed — virtualize

Key Props

  • name* (string): tool / function name
  • input* (Record<string, unknown>): input parameters
  • status* ("pending" | "running" | "done" | "error")
  • output (unknown): available when status is done
  • durationMs (number)
  • callId (string): shown for traceability

Integration with Claude API

import { FunctionCallVisualizer } from "@/components/ai";

<FunctionCallVisualizer
  name="search_documents"
  callId="call_abc123"
  status="done"
  input={{ query: "WCAG 2.2 contrast requirements", limit: 5 }}
  output={{ results: 5, topScore: 0.94 }}
  durationMs={312}
/>
Variants: expanded · collapsed (defaultCollapsed=true)
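A `tool_use` content block from a Claude response maps directly onto these props. The block shape (`id`, `name`, `input`) follows the Messages API; `toVisualizerProps` is an illustrative adapter, not part of the component:

```typescript
// Map a Claude tool_use content block (plus an optional tool result)
// to FunctionCallVisualizer props. Illustrative adapter only.
interface ToolUseBlock {
  type: "tool_use";
  id: string;
  name: string;
  input: Record<string, unknown>;
}

interface VisualizerProps {
  name: string;
  callId: string;
  input: Record<string, unknown>;
  status: "pending" | "running" | "done" | "error";
  output?: unknown;
}

function toVisualizerProps(block: ToolUseBlock, output?: unknown): VisualizerProps {
  return {
    name: block.name,
    callId: block.id, // surfaced for traceability, matching the callId prop
    input: block.input,
    // No output yet means the tool is still executing.
    status: output === undefined ? "running" : "done",
    output,
  };
}
```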

GenerativeFormFill

Form field with AI-assist affordances: ghost-text suggestion, accept/reject controls, Tab-to-accept shortcut, confidence badge, and generating state. WCAG AA compliant.

View component →

When to use

  • Forms where Claude can pre-fill fields from context (e.g. vendor name from invoice)
  • Data entry workflows where AI suggestions reduce manual input
  • Any field where surfacing AI confidence builds trust

When not to use

  • Fields where AI suggestions would create compliance or liability risk if silently accepted
  • Read-only display fields — use GroundingIndicator instead

Key Props

  • suggestion (string): AI-generated ghost text
  • generating (boolean)
  • confidence (number): 0–1, shown in provenance badge
  • model (string): model name shown in badge
  • onAccept ((value: string) => void)
  • onReject (() => void)

Integration with Claude API

import { GenerativeFormFill } from "@/components/ai";

<GenerativeFormFill
  label="Counterparty Name"
  suggestion="Acme Treasury Ltd."
  model="claude-sonnet-4-6"
  confidence={0.91}
  onAccept={(v) => setCounterparty(v)}
  onReject={() => setSuggestion(null)}
/>
Variants: idle · generating · suggestion shown · accepted · rejected
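The Tab-to-accept behavior reduces to a small decision function that keyboard handlers can share. A sketch; `keyAction` and the action names are illustrative, not the component's internals:

```typescript
// Decide what a keypress should do to the current ghost-text suggestion.
// Illustrative only; the real component also manages focus and ARIA state.
type SuggestionAction = "accept" | "reject" | "none";

function keyAction(key: string, hasSuggestion: boolean): SuggestionAction {
  if (!hasSuggestion) return "none"; // nothing to act on, let Tab move focus
  if (key === "Tab") return "accept";
  if (key === "Escape") return "reject";
  return "none"; // any other key keeps the suggestion visible while typing
}
```

Only intercepting Tab while a suggestion is visible matters for accessibility: when there is no ghost text, Tab must keep its normal focus-navigation role.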

Related patterns

For higher-level compositional patterns (agent conversation threads, multi-agent orchestration views, human-in-the-loop approval flows), see the Patterns catalog and the A2UI templates.