Multi-Agent Patterns
These patterns come directly from Anthropic's *Building Effective Agents* playbook and are the same ones implemented in Laravel's AI SDK.
When to use agents at all
The key question is not "can I use an agent here?" but "does this task need an agent?"
Start with the simplest approach:
- Single LLM call — one `agent().prompt()` with a good system prompt
- Prompt chaining — break into sequential steps, validate in between
- Routing — classify input, dispatch to specialist
- Full agent loop — let the model decide what tools to call and when
Only add complexity when simpler approaches don't meet the quality bar.
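The second rung of that ladder, prompt chaining, can be sketched as a loop over steps with a deterministic validation gate between them. `callModel` below is a stub standing in for a real model call, and the step names are illustrative, not SDK APIs:

```typescript
// Minimal prompt-chaining sketch. `callModel` is a deterministic stub so
// the control flow — not the model — is the focus here.
type Step = (input: string) => string;

const callModel = (prompt: string): string => `[model output for: ${prompt}]`;

// Each step builds on the previous one; the validator between steps is a
// deterministic gate that can abort the chain early.
function chain(input: string, steps: Step[], validate: (s: string) => boolean): string {
  let current = input;
  for (const step of steps) {
    current = step(current);
    if (!validate(current)) {
      throw new Error(`Chain aborted: invalid intermediate output: ${current}`);
    }
  }
  return current;
}

const outline: Step = (topic) => callModel(`Write an outline about ${topic}`);
const draft: Step = (o) => callModel(`Expand this outline into a draft: ${o}`);

const result = chain("agent patterns", [outline, draft], (s) => s.length > 0);
```

The validator is plain code, so a bad intermediate result fails fast instead of propagating into later steps.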
Pattern comparison
| Pattern | Structure | Use when |
|---|---|---|
| Prompt Chaining | A → B → C | Fixed sequence of steps, each building on the last |
| Routing | Input → Classifier → Specialist | Inputs vary by type or required expertise |
| Parallelization | A + B + C → Merge | Independent sub-tasks that can run simultaneously |
| Orchestrator-Workers | Planner → Tools → Workers | Dynamic planning; the model chooses what to do next |
| Evaluator-Optimizer | Generate → Evaluate → Improve | Quality bar requires iterative refinement |
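Routing, for instance, is just a classifier in front of a map of specialists. The classifier below is a deterministic keyword stub for illustration; in practice it would usually be a small, cheap model call. All names here are illustrative:

```typescript
// Routing sketch: a cheap classifier picks a specialist handler.
type Handler = (input: string) => string;

const specialists: Record<string, Handler> = {
  billing: (q) => `billing agent handles: ${q}`,
  technical: (q) => `technical agent handles: ${q}`,
  general: (q) => `general agent handles: ${q}`,
};

// Stub classifier: keyword rules standing in for a small model call.
function classify(input: string): string {
  if (/refund|invoice|charge/i.test(input)) return "billing";
  if (/error|crash|bug/i.test(input)) return "technical";
  return "general";
}

function route(input: string): string {
  return specialists[classify(input)](input);
}
```

Each specialist gets a focused system prompt and tool set, which usually outperforms one generalist prompt trying to cover every input type.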
Combining patterns
These patterns compose naturally. A common production architecture:
Routing (classify by intent)
└─► Prompt Chaining (fixed workflow for that intent)
├─► Parallelization (run independent reviewers)
└─► Evaluator (polish the final output)
Key principles
1. Prefer deterministic gates over agent decisions. If you can check a condition in code (e.g., "does the score exceed 8?"), do it in code — not by asking the model.
2. Keep context windows lean. Each iteration of the agentic loop costs tokens. Summarise or discard stale history when it is no longer needed.
3. Instrument everything. Collect `response.usage` to track costs per workflow. Log `response.messages` to debug unexpected loops.
4. Cap iterations. Always set `maxIterations` to prevent runaway loops. The default is 10.
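Three of these principles can be seen together in one small refinement loop: a quality gate checked in plain code (1), usage totals accumulated per run (3), and a hard iteration cap (4). Everything below is a deterministic stub sketch — `callModel`, `Usage`, and the scoring function are illustrative, not the SDK's actual API:

```typescript
// Illustrative shapes for a model response and its token usage.
interface Usage { inputTokens: number; outputTokens: number; }
interface ModelResponse { text: string; usage: Usage; }

// Stub model: each call appends detail, so the draft grows on every pass.
const callModel = (prompt: string): ModelResponse => ({
  text: prompt + " + refinement",
  usage: { inputTokens: prompt.length, outputTokens: 13 },
});

// Principle 1: the gate is ordinary code, not a model judgment.
const score = (text: string): number => Math.min(10, text.length / 10);
const passesGate = (text: string): boolean => score(text) > 8;

function refineUntilGood(input: string, maxIterations = 10) {
  const totals: Usage = { inputTokens: 0, outputTokens: 0 };
  let draft = input;
  let iterations = 0;
  // Principle 4: the cap bounds the loop even if the gate never passes.
  while (!passesGate(draft) && iterations < maxIterations) {
    iterations++;
    const response = callModel(draft);
    draft = response.text;
    // Principle 3: accumulate usage so cost per workflow is observable.
    totals.inputTokens += response.usage.inputTokens;
    totals.outputTokens += response.usage.outputTokens;
  }
  return { draft, iterations, totals };
}
```

Because the exit condition lives in code, the loop's behavior is testable without calling a model at all — you can assert exactly when it stops and what it spent.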