From 3b4b27de02b5e61e8628a60661bb2194868f5090 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E3=83=84=E3=83=B3=E3=83=87=E3=83=AC?=
Date: Sat, 21 Mar 2026 14:54:22 +0200
Subject: [PATCH] docs: restore custom model integrations section and fix architecture diagram

---
 docs/ARCHITECTURE.md | 47 ++++++++++++++++++++++++++------------------
 1 file changed, 28 insertions(+), 19 deletions(-)

diff --git a/docs/ARCHITECTURE.md b/docs/ARCHITECTURE.md
index f9f60ca..78ae0ce 100644
--- a/docs/ARCHITECTURE.md
+++ b/docs/ARCHITECTURE.md
@@ -49,30 +49,39 @@ Agents can respond with just `PASS` if they genuinely have nothing meaningful to add.
 Beyond the configured turn interval, the `needs_human_input()` method performs an additional check: if the **last agent message explicitly addresses the user by name**, the HITL prompt fires immediately. This ensures the conversation never inadvertently "speaks for" the human participant.
 
 **Decision Flow:**
-```
-generate_next_turn()
-    ↓
-Orchestrator due? → speak (or PASS → skip)
-    ↓
-_forced_next_agent set? → use it, clear it
-    ↓
-session_type = DYNAMIC?
-  → @mention in last message? → forced agent
-  → Score all agents by expertise → pick best
-  → fallback: round robin
-    ↓
-agent.generate_response()
-    ↓
-response == "PASS"? → skip, return {skipped: True}
-    ↓
-append to history with timestamp, return turn_data
+```mermaid
+flowchart TD
+
+A["generate_next_turn()"] --> B{"Orchestrator due?"}
+B -->|Yes| C["Speak or PASS"]
+C -->|Speak| EndOrch["Append to history & return turn"]
+C -->|PASS| D
+
+B -->|No| D{"_forced_next_agent set?"}
+D -->|Yes| E["Use forced agent & clear flag"] --> L
+D -->|No| F{"session_type = DYNAMIC?"}
+
+F -->|No| K["Fallback: round robin / argumentative"] --> L
+F -->|Yes| G{"@mention in last message?"}
+
+G -->|Yes| H["Use mentioned agent"] --> L
+G -->|No| I["Score agents by expertise"]
+
+I --> J{"Top score > 0?"}
+J -->|Yes| Best["Pick best matching agent"] --> L
+J -->|No| K
+
+L["agent.generate_response()"] --> M{"response == PASS?"}
+
+M -->|Yes| N["Skip turn, return skipped: True"]
+M -->|No| O["Append to history with timestamp"] --> P["Return turn_data"]
 ```
 
 ## Custom Model Integrations (Bring Your Own Code)
 
-If you do not want to use LiteLLM at all, the framework allows you to inject arbitrary Python scripts as the "brain" for an agent.
+If you do not want to use LiteLLM at all, the framework allows you to inject an arbitrary Python script as the "brain" for an agent.
 
-1. Create a python file (e.g. `my_model.py`).
+1. Create a Python file (e.g. `my_model.py`).
 2. Write a function that accepts a `List[Dict[str, str]]` (the conversation history) and returns a `str` (the agent's reply).
 3. In the CLI wizard, select `custom_function` as the Model Type.
 4. Provide the path to `my_model.py` and the exact name of the function you wrote.
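
Following the steps in the patched section, a minimal `my_model.py` could look like the sketch below. Only the signature is prescribed by the docs (a `List[Dict[str, str]]` conversation history in, a `str` reply out); the function name `generate_reply`, the message keys `role`/`content`, and the echo logic are illustrative assumptions:

```python
# my_model.py -- a minimal custom "brain" for an agent.
# The framework only requires the signature: history in, reply string out.
# Everything this function does internally is up to you.
from typing import Dict, List


def generate_reply(history: List[Dict[str, str]]) -> str:
    """Return the agent's next message given the conversation so far."""
    if not history:
        return "Hello! What shall we discuss?"
    last = history[-1]
    # Trivial rule-based reply for illustration; a real integration
    # would call your own model or inference code here instead.
    if "PASS" in last.get("content", ""):
        return "PASS"
    return f"You said: {last['content']}"
```

In the CLI wizard you would then select `custom_function`, give the path to `my_model.py`, and enter `generate_reply` as the function name.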