AI & Agent
Five components purpose-built for AI-native interfaces: streaming output, context visibility, model configuration, tool call observability, and generative form assistance. Each integrates with the Claude API and follows the Nodus token system.
Streaming State Machine
- idle: No active stream. Content may be stale or empty.
- streaming: Tokens arriving. Cursor blinks. Auto-scroll active.
- complete: Full response received. Copy enabled. Cursor hidden.
- error: Stream interrupted. Error message shown in header.
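The four states form a simple machine; the sketch below shows one plausible set of transitions. Note that only `streaming` appears verbatim in the component examples, so the other state and event names here are illustrative, not a documented API.

```typescript
// Illustrative sketch of the streaming state machine described above.
// Only "streaming" is confirmed by the docs; other names are assumptions.
type StreamState = "idle" | "streaming" | "complete" | "error";
type StreamEvent = "start" | "token" | "finish" | "fail";

function nextState(state: StreamState, event: StreamEvent): StreamState {
  switch (state) {
    case "idle":
      return event === "start" ? "streaming" : state;
    case "streaming":
      if (event === "finish") return "complete"; // copy enabled, cursor hidden
      if (event === "fail") return "error"; // error message shown in header
      return state; // "token" events keep the stream active
    default:
      return state; // complete and error hold until a new stream starts
  }
}
```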
StreamingArtifact
Renders streamed code, markdown, or text with progressive disclosure, syntax highlighting, diff view, and copy action.
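One way to produce the progressively growing `content` value is to accumulate text deltas from the Claude Messages streaming API. The sketch below assumes the standard `content_block_delta` / `text_delta` event shape; the accumulator function itself is hypothetical, not part of the component.

```typescript
// Hypothetical accumulator for StreamingArtifact's streamed content.
// The event shape follows the Claude Messages streaming API's
// content_block_delta events carrying text_delta payloads.
interface TextDeltaEvent {
  type: "content_block_delta";
  delta: { type: "text_delta"; text: string };
}

function accumulate(content: string, event: TextDeltaEvent): string {
  if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
    return content + event.delta.text; // append each token as it arrives
  }
  return content;
}
```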
When to use
- Displaying AI-generated code as it streams in
- Showing before/after diffs from an AI edit
- Rendering markdown output from an agent response
When not to use
- Static content that will never stream — use a plain code block instead
- Short single-line values — use DataTag or Badge
Key Props
- content: Streamed text. Grows as the stream progresses.
- language: Highlighting language, e.g. 'typescript', 'python'.
- Diff mode: Activates diff rendering mode.
Integration with Claude API
import { StreamingArtifact } from "@/components/ai";
<StreamingArtifact
kind="code"
language="typescript"
state="streaming"
content={streamedContent}
label="Generated function"
/>

ContextWindowMeter
Visualizes token usage across a context window. Segmented bar (system/user/assistant/tools), overflow warning, legend. Warning at 80%, critical at 95%.
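The warning and critical levels can be derived from the segment totals. A minimal sketch of that threshold logic, assuming the segment shape from the integration example; the helper name is hypothetical:

```typescript
// Hypothetical helper mirroring ContextWindowMeter's documented thresholds:
// warning at 80% of the context limit, critical at 95%.
interface Segment {
  label: string;
  tokens: number;
}

function usageLevel(
  segments: Segment[],
  limit: number
): "ok" | "warning" | "critical" {
  const used = segments.reduce((sum, s) => sum + s.tokens, 0);
  const ratio = used / limit;
  if (ratio >= 0.95) return "critical";
  if (ratio >= 0.8) return "warning";
  return "ok";
}
```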
When to use
- Showing users how much context remains in a Claude API call
- Debugging prompt token distribution in dev tooling
- Compact status indicator on agent dashboards
When not to use
- Showing non-token metrics — use a standard ProgressBar or Meter component
- General purpose usage bars unrelated to AI context
Key Props
- segments: Array of { label, tokens } pairs, one per segment.
- limit: Total context window size in tokens.
- model: Displayed in header.
Integration with Claude API
import { ContextWindowMeter } from "@/components/ai";
<ContextWindowMeter
model="claude-sonnet-4-6"
limit={200000}
segments={[
{ label: "system", tokens: 1200 },
{ label: "user", tokens: 8400 },
{ label: "assistant", tokens: 3100 },
{ label: "tools", tokens: 640 },
]}
/>

ModelSelector
Model comparison dropdown or inline panel for the Claude model family. Each option shows capability tags, context window, latency indicator, and cost tier.
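Evaluation tooling often narrows the option list to models with a required capability before rendering the selector. A sketch using the option shape from the integration example; the filter helper is hypothetical:

```typescript
// Hypothetical pre-filter over ModelSelector options.
// The option shape mirrors the MODELS array in the integration example.
interface ModelOption {
  id: string;
  name: string;
  contextWindow: number;
  capabilities: string[];
  costTier: string;
  latencyTier: string;
}

function withCapability(options: ModelOption[], capability: string): ModelOption[] {
  return options.filter((o) => o.capabilities.includes(capability));
}
```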
When to use
- Letting users choose which Claude model to run a task on
- Admin panels where operators configure model defaults
- Side-by-side model comparison in evaluation tooling
When not to use
- Simple single-model UIs where the model is fixed — use a static label instead
- Non-Claude model families — props are designed for Anthropic model metadata
Key Props
- options: Array of { id, name, capabilities, costTier, latencyTier, contextWindow }.
- value: Selected model ID.
Integration with Claude API
import { ModelSelector } from "@/components/ai";
const MODELS = [
{
id: "claude-sonnet-4-6",
name: "Claude Sonnet 4.6",
contextWindow: 200000,
capabilities: ["tools", "streaming", "files"],
costTier: "standard",
latencyTier: "standard",
badge: "Recommended",
},
{
id: "claude-opus-4-6",
name: "Claude Opus 4.6",
contextWindow: 200000,
capabilities: ["tools", "streaming", "extended-thinking"],
costTier: "premium",
latencyTier: "slow",
},
];
<ModelSelector
options={MODELS}
value={selectedModel}
onChange={setSelectedModel}
variant="panel"
/>

FunctionCallVisualizer
Renders AI tool call execution: name, input JSON, status badge (pending/running/done/error), output preview, duration, call ID. Collapsible detail panel.
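Tool calls arrive from the Claude API as `tool_use` content blocks, which map naturally onto the component's props. A sketch of that mapping, assuming the standard block shape (`type`, `id`, `name`, `input`); the mapper function and the way status is tracked are illustrative, since execution state and duration live in application code:

```typescript
// Hypothetical mapper from a Claude `tool_use` content block to
// FunctionCallVisualizer props. The block shape follows the Messages API;
// status (and durationMs, output) are application-level state.
interface ToolUseBlock {
  type: "tool_use";
  id: string;
  name: string;
  input: Record<string, unknown>;
}

type CallStatus = "pending" | "running" | "done" | "error";

function toVisualizerProps(block: ToolUseBlock, status: CallStatus) {
  return {
    name: block.name,
    callId: block.id, // shown for traceability
    status,
    input: block.input,
  };
}
```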
When to use
- Showing tool use in an agent reasoning trace
- Audit logs where users need to inspect what a tool was called with
- Real-time agent dashboards streaming active tool calls
When not to use
- General API call logs unrelated to AI tool use — use a table or log viewer
- High-frequency automated calls where rendering hundreds of instances is needed — virtualize
Key Props
- name: Tool / function name.
- input: Input parameters.
- output: Available when status is done.
- callId: Shown for traceability.
Integration with Claude API
import { FunctionCallVisualizer } from "@/components/ai";
<FunctionCallVisualizer
name="search_documents"
callId="call_abc123"
status="done"
input={{ query: "WCAG 2.2 contrast requirements", limit: 5 }}
output={{ results: 5, topScore: 0.94 }}
durationMs={312}
/>

GenerativeFormFill
Form field with AI-assist affordances: ghost-text suggestion, accept/reject controls, Tab-to-accept shortcut, confidence badge, and generating state. WCAG AA compliant.
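The Tab-to-accept shortcut reduces to a small decision: with a pending ghost-text suggestion, Tab accepts and Escape rejects; without one, keys fall through to normal field behavior. A minimal sketch of that logic; the function and action names are hypothetical, and a real handler would also call `preventDefault` on accepted Tab presses to keep focus in the field:

```typescript
// Illustrative decision logic for GenerativeFormFill's keyboard shortcuts.
// Key values follow the standard KeyboardEvent.key strings.
type SuggestionAction = "accept" | "reject" | "none";

function keyToAction(key: string, hasSuggestion: boolean): SuggestionAction {
  if (!hasSuggestion) return "none"; // no ghost text: keys behave normally
  if (key === "Tab") return "accept";
  if (key === "Escape") return "reject";
  return "none";
}
```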
When to use
- Forms where Claude can pre-fill fields from context (e.g. vendor name from invoice)
- Data entry workflows where AI suggestions reduce manual input
- Any field where surfacing AI confidence builds trust
When not to use
- Fields where AI suggestions would create compliance or liability risk if silently accepted
- Read-only display fields — use GroundingIndicator instead
Key Props
- suggestion: AI-generated ghost text.
- confidence: 0–1, shown in provenance badge.
- model: Model name shown in badge.
Integration with Claude API
import { GenerativeFormFill } from "@/components/ai";
<GenerativeFormFill
label="Counterparty Name"
suggestion="Acme Treasury Ltd."
model="claude-sonnet-4-6"
confidence={0.91}
onAccept={(v) => setCounterparty(v)}
onReject={() => setSuggestion(null)}
/>

Related patterns
For higher-level compositional patterns (agent conversation threads, multi-agent orchestration views, human-in-the-loop approval flows), see the Patterns catalog and the A2UI templates.