# agent(config)

Creates an `AgentRunner` from a config object and returns it for chaining. The runner does not execute until you call `.prompt()` or `.stream()`.
## Signature

```ts
function agent(config: AgentConfig): AgentRunner
```

### AgentConfig

```ts
interface AgentConfig {
  /** System prompt — the agent's role and instructions. */
  instructions: string;
  /** Tools the agent can call. Triggers the agentic loop when present. */
  tools?: Tool[];
  /** Schema for structured output — fluent builder function or a Zod schema. */
  schema?: SchemaFn | ZodType;
  /** Model identifier (e.g. 'openai/gpt-4o-mini'). Overrides global default. */
  model?: string;
  /** Provider instance. Overrides global default. */
  provider?: AIProvider;
  /** Maximum agentic loop iterations before throwing. Default: 10. */
  maxIterations?: number;
  /** Sampling temperature (0.0–2.0). */
  temperature?: number;
  /** Maximum output tokens. */
  maxTokens?: number;
}
```

## AgentRunner.prompt&lt;T&gt;(input, history?)
Runs the agent and returns a complete response, or an `InterruptedResponse` if a tool threw an `InterruptError`.
```ts
async prompt<T = unknown>(
  input: string,
  history?: Message[],
): Promise<AgentResponse<T> | InterruptedResponse>
```

### Parameters
| Parameter | Type | Description |
|---|---|---|
| `input` | `string` | The user message / task |
| `history` | `Message[]` | Prior conversation messages (optional) |
### Returns

`AgentResponse<T> | InterruptedResponse`
```ts
interface AgentResponse<T = unknown> {
  text: string;           // Raw text of the final assistant message
  structured: T;          // Parsed JSON (only when schema is set)
  usage: Usage;           // Accumulated token usage across all iterations
  messages: Message[];    // Full conversation including history
  checkpoint: Checkpoint; // Serialisable state — pass to resume() to continue later
}
```

Use `isInterrupted(result)` to narrow the union, or `assertComplete(result)` to throw if interrupted.
## AgentRunner.resume(checkpoint, answer)
Continues a paused run by injecting the user's answer as the tool result and resuming the loop. See Checkpointing for full details.
```ts
async resume<T = unknown>(
  checkpoint: Checkpoint & { pendingToolUseId: string },
  answer: string,
): Promise<AgentResponse<T> | InterruptedResponse>
```

## AgentRunner.stream(input, history?)
Streams the agent's output chunk by chunk.
```ts
async *stream(
  input: string,
  history?: Message[],
): AsyncGenerator<string, StreamedAgentResponse>
```

Yields `string` — text fragments as they arrive.
### Returns (generator return value)

`StreamedAgentResponse`
```ts
interface StreamedAgentResponse {
  text: string;        // Full accumulated text
  usage: Usage;        // Accumulated token usage
  messages: Message[]; // Full conversation history
}
```

## Examples
### Simple prompt
```ts
const response = await agent({
  instructions: 'You are a helpful assistant.',
}).prompt('What is 2 + 2?');

console.log(response.text);  // "4"
console.log(response.usage); // { inputTokens: 20, outputTokens: 3 }
```

### With structured output
```ts
type Result = { answer: number; confidence: number };

const response = await agent({
  instructions: 'Extract the answer and your confidence level.',
  schema: (s) => ({
    answer: s.number().required(),
    confidence: s.number().min(0).max(1).required(),
  }),
}).prompt<Result>('What is the speed of light in km/s?');

console.log(response.structured.answer);     // 299792
console.log(response.structured.confidence); // 0.99
```

### With structured output (Zod)
Pass a Zod schema to get automatic JSON Schema generation and `safeParse` validation of the response. Fields are required unless marked `.optional()`.
```ts
import { z } from 'zod';

const Review = z.object({
  score: z.number().int().min(1).max(10),
  approved: z.boolean(),
  issues: z.array(z.string()),
});

const response = await agent({
  instructions: 'Evaluate the content.',
  schema: Review,
}).prompt<z.infer<typeof Review>>('Rate: "The quick brown fox."');

console.log(response.structured.score);    // 7
console.log(response.structured.approved); // false
console.log(response.structured.issues);   // ["Too short", ...]
```

#### Zod setup

```sh
npm install zod zod-to-json-schema
```

### With tools
```ts
import { WebFetch } from '@daedalus-ai-dev/ai-sdk';

const response = await agent({
  instructions: 'Answer questions using the web.',
  tools: [new WebFetch()],
  maxIterations: 5,
}).prompt('What is the current Node.js LTS version?');
```

### With conversation history
```ts
const history: Message[] = [];

const r1 = await agent({ instructions: 'Be helpful.' }).prompt('My name is Bob.');
history.push(...r1.messages);

const r2 = await agent({ instructions: 'Be helpful.' }).prompt('What is my name?', history);
console.log(r2.text); // "Your name is Bob."
```

### Streaming
```ts
for await (const chunk of agent({
  instructions: 'Write a haiku about TypeScript.',
}).stream('Go.')) {
  process.stdout.write(chunk);
}
```