Interactive visualization of how LLM agents actually work — the real API message flow between User, Agent, and LLM.
Agents aren't magic. They just assemble context.
`system_prompt + tools[] + messages[]` → LLM API → if `tool_use`: execute → append `tool_result` → repeat. That loop is the entire "intelligence" of an LLM agent.
An interactive sequence diagram showing 6 turns of a real agent session with actual JSON payloads:
| Turn | What happens | Key insight |
|---|---|---|
| 1 | User request → Agent assembles context → LLM API call | system + tools[] + messages[] — that's all the LLM receives |
| 2 | Tool result fed back → LLM decides next action | tool_result goes in as "role": "user" — there is no "tool" role |
| 3 | Test failure → self-correction | Error logs in context → LLM can reason about failures |
| 4 | MCP tool call | MCP tools are just mixed into tools[] — the LLM doesn't know MCP exists |
| 5 | Skill invocation | A skill is just a prompt template injected into the user message |
| 6 | Loop termination | No tool_use in response = agent stops the loop |
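The turn-2 insight above is easiest to see in the raw message structure. A minimal sketch of the `messages[]` array after one tool call (field names follow the Anthropic Messages API; the tool ID and contents here are made up):

```javascript
// After one tool call, the conversation history holds three messages.
// Note: the tool_result comes back with "role": "user" — there is no "tool" role.
const messages = [
  { role: "user", content: "Run the test suite." },
  {
    role: "assistant",
    content: [
      { type: "tool_use", id: "toolu_01", name: "bash", input: { command: "npm test" } }
    ]
  },
  {
    role: "user", // tool results are wrapped in an ordinary user message
    content: [
      { type: "tool_result", tool_use_id: "toolu_01", content: "2 passed, 0 failed" }
    ]
  }
];
```

The `tool_use_id` links the result back to the assistant's request; from the LLM's point of view, the tool output is just more user-provided context.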
┌──────────┐ ┌──────────┐ ┌──────────┐
│ User │ ──→ │ Agent │ ──→ │ LLM │
│ (human) │ ←── │ (program)│ ←── │(only AI) │
└──────────┘ └──────────┘ └──────────┘
- User — a human giving instructions
- Agent — a program (deterministic code) that assembles context and executes tools
- LLM — the only AI in the system; it reads context and outputs the next action
- Step-by-step or auto-play — walk through each turn or watch the full sequence
- Real JSON payloads — see actual API message structure with syntax highlighting
- 5 languages — 한국어 · English · 中文 · 日本語 · Español
- Key insights — each step explains what's really happening and why
"Agent engineering" sounds complex, but once you see the actual messages flowing between components, it clicks: the Agent is just a loop, and the LLM is just reading context.
I couldn't find a visual that showed the real API payloads at each step, so I made one.
```js
while (true) {
  // 1. Assemble context
  const request = {
    system: "You are Claude Code...",       // persona + rules + skill list
    tools: [...builtinTools, ...mcpTools],  // all tools mixed together
    messages: conversationHistory           // grows every turn
  };

  // 2. Call LLM
  const response = await llm.call(request);

  // 3. Check response
  if (response.hasToolUse()) {
    const result = await executeTools(response.toolUse);
    conversationHistory.push(response);        // assistant message
    conversationHistory.push(asUser(result));  // tool_result as "user" role
  } else {
    showToUser(response.text);                 // no tool_use = done
    break;
  }
}
```
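The `asUser` helper in the loop is left abstract. A minimal sketch of what it might do (the name and result shape are assumptions, modeled on the `tool_result` format shown in the diagram):

```javascript
// Hypothetical helper: wrap executed tool results in a "user"-role message,
// since the API has no dedicated "tool" role.
function asUser(results) {
  return {
    role: "user",
    content: results.map(r => ({
      type: "tool_result",
      tool_use_id: r.id,
      content: r.output
    }))
  };
}

const msg = asUser([{ id: "toolu_01", output: "2 passed, 0 failed" }]);
console.log(msg.role); // "user"
```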
Single HTML file. No build step. No dependencies.
- Vanilla JS + CSS
- Token-based JSON syntax highlighter
- CSS animations for sequence diagram flow
- SVG favicon
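For illustration, a token-based JSON highlighter can be sketched in a few lines (this is not the project's actual highlighter; the regex and class names are assumptions, and HTML escaping is omitted for brevity):

```javascript
// Tokenize JSON source with one regex, then wrap each token in a classed <span>.
// A string followed by ":" is an object key; bare words are true/false/null.
const TOKEN = /("(?:\\.|[^"\\])*")(\s*:)?|\b(true|false|null)\b|-?\d+(?:\.\d+)?(?:[eE][+-]?\d+)?|[{}\[\],:]/g;

function highlightJSON(src) {
  return src.replace(TOKEN, (match, str, colon, keyword) => {
    let cls;
    if (str) cls = colon ? "key" : "string";
    else if (keyword) cls = "keyword";
    else if (/^[{}\[\],:]$/.test(match)) cls = "punct";
    else cls = "number";
    return `<span class="${cls}">${match}</span>`;
  });
}
```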
MIT
Agent Engineering = Context Engineering
Built by @JeGwan