17 changes: 17 additions & 0 deletions docs/ARCHITECTURE.md
Expand Up @@ -75,4 +75,21 @@ L["agent.generate_response()"] --> M{"response == PASS?"}

M -->|Yes| N["Skip turn, return skipped: True"]
M -->|No| O["Append to history with timestamp"] --> P["Return turn_data"]
```

## Custom Model Integrations (Bring Your Own Code)

If you do not want to use LiteLLM at all, the framework allows you to inject an arbitrary Python script as the "brain" for an agent.

1. Create a Python file (e.g. `my_model.py`).
2. Write a function that accepts a `List[Dict[str, str]]` (the conversation history) and returns a `str` (the agent's reply).
3. In the CLI wizard, select `custom_function` as the Model Type.
4. Provide the path to `my_model.py` and the exact name of the function you wrote.
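A minimal sketch of what such a file could look like, assuming the signature described above (the function name `generate_reply` and the reply logic are illustrative, not required by the framework):

```python
from typing import Dict, List


def generate_reply(history: List[Dict[str, str]]) -> str:
    """Return the agent's next reply given the conversation history.

    Each history entry is a dict with (at least) "role" and "content" keys.
    """
    if not history:
        return "Hello! How can I help?"
    # A trivial stand-in brain: echo the last message back.
    last = history[-1]
    return f"Echoing {last.get('role', 'user')}: {last.get('content', '')}"
```

Any logic can live behind this signature: a local model, a rules engine, or a call to another service.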

The framework will dynamically import your file at runtime and use it exclusively for that agent's turns.
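For intuition, here is a self-contained sketch of how a runtime import by file path can work in Python (it writes a throwaway module to a temp directory so it runs as-is; the names here are illustrative, not the framework's actual internals):

```python
import importlib.util
import sys
import tempfile
from pathlib import Path

# Stand-in for a user-supplied my_model.py, written to disk for this demo.
source = (
    "def generate_reply(history):\n"
    "    return history[-1]['content'].upper() if history else ''\n"
)
path = Path(tempfile.mkdtemp()) / "my_model.py"
path.write_text(source)

# Load the module from its file path, then look up the function by name.
spec = importlib.util.spec_from_file_location("my_model", path)
module = importlib.util.module_from_spec(spec)
sys.modules["my_model"] = module
spec.loader.exec_module(module)

brain = getattr(module, "generate_reply")
print(brain([{"role": "user", "content": "hello"}]))  # HELLO
```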

## CI/CD and Robustness
To ensure the framework remains stable as it grows, we maintain a comprehensive CI/CD pipeline using **GitHub Actions**. Every contribution is automatically tested against Python 3.13 for:
- **Linting**: High-standard code hygiene via `flake8`.
- **Logic Robustness**: Detailed edge-case testing including word-boundary expertise matching and non-repeating HITL triggers.
- **Regression Testing**: Ensuring core orchestration types (Round Robin, Dynamic, Argumentative) remain deterministic.