A legendary, production-ready CLI for GPT-5, featuring streaming responses, multi-agent support, rich exports, and an enhanced terminal experience.

Features • Installation • Quick Start • Commands • Models • Architecture • Security • Authors • License
- Overview
- Features
- Requirements
- Installation
- Quick Start
- Interactive Commands
- File Inclusion
- Export Formats
- Model Comparison
- Configuration
- Project Structure
- Performance Metrics
- Security
- Troubleshooting
- Authors
- Contributing
- License
GPT-5 CLI Agent is a professional, unified command-line interface engineered to harness the full power of OpenAI's GPT-5 model family. Built with developer experience in mind, it provides a seamless terminal-native workflow for AI-assisted development, analysis, and automation.
Whether you need the raw intelligence of GPT-5, the efficiency of GPT-5 Mini, or the blazing speed of GPT-5 Nano, this single CLI handles all three with persistent history, advanced configuration, and beautiful formatted output.
```
╔════════════════════════════════════════════════════╗
║  🤖 GPT-5 Unified CLI Agent v1.0.0                 ║
║  Powered by OpenAI GPT-5 | Built for Builders      ║
╚════════════════════════════════════════════════════╝
```
| Dependency | Version | Purpose |
|---|---|---|
| `python` | ≥ 3.10 | Runtime |
| `requests` | ≥ 2.31.0 | HTTP API calls |
| `pyyaml` | ≥ 6.0.1 | Config file parsing |
| `colorama` | ≥ 0.4.6 | Cross-platform terminal colors |
```bash
# Clone the repository
git clone https://github.com/simonpierreboucher02/gpt5-cli-agent.git
cd gpt5-cli-agent

# Create environment
python -m venv venv

# Activate (macOS / Linux)
source venv/bin/activate

# Activate (Windows)
venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

Set your API key:

```bash
# macOS / Linux (add to ~/.zshrc or ~/.bashrc for persistence)
export OPENAI_API_KEY=sk-...your-key-here...

# Windows (Command Prompt)
set OPENAI_API_KEY=sk-...your-key-here...

# Windows (PowerShell)
$env:OPENAI_API_KEY="sk-...your-key-here..."
```

> **Tip:** Add the export line to your shell profile to avoid setting it every session.
```bash
# Chat with GPT-5
python main.py --agent-id my-agent --model gpt-5

# Chat with GPT-5 Mini
python main.py --agent-id my-agent --model gpt-5-mini

# Chat with GPT-5 Nano
python main.py --agent-id my-agent --model gpt-5-nano

# List all agents
python main.py --list

# Show agent configuration
python main.py --agent-id my-agent --config

# Export as HTML
python main.py --agent-id my-agent --export html

# Export as Markdown
python main.py --agent-id my-agent --export md

# Export as JSON
python main.py --agent-id my-agent --export json

# Show help
python main.py --help
```

Once inside a chat session, use these slash commands:
| Command | Description | Example |
|---|---|---|
| `/help` | Display all available commands | `/help` |
| `/history [n]` | Show last `n` messages (default: 10) | `/history 20` |
| `/search <term>` | Full-text search across conversation | `/search authentication` |
| `/stats` | Show token usage, message count, session duration | `/stats` |
| `/config` | Display current agent configuration | `/config` |
| `/export <format>` | Export conversation (json/txt/md/html) | `/export md` |
| `/clear` | Clear conversation history | `/clear` |
| `/files` | List files available for inclusion | `/files` |
| `/info` | Show agent ID, model, and metadata | `/info` |
| `/quit` | Exit the chat session | `/quit` |
Embed file contents directly into your prompts using the `{filename}` syntax:
> Please review this code: {main.py}
> Compare these two configs: {config.yaml}, {settings.json}
> Analyze the project structure based on: {README.md}
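The expansion step can be sketched roughly as follows. This is an illustrative helper, not the actual code in `utils.py`; the real implementation may differ in placeholder handling and formatting:

```python
import re
from pathlib import Path

def expand_includes(prompt: str) -> str:
    """Replace each {filename} placeholder with the file's contents.

    Placeholders whose file does not exist are left untouched.
    (Sketch only; the real utils.py logic may differ.)
    """
    def replace(match: re.Match) -> str:
        path = Path(match.group(1))
        if path.is_file():
            # Wrap the file body so the model can see where it begins and ends
            return f"\n--- {path} ---\n{path.read_text(encoding='utf-8')}\n"
        return match.group(0)  # keep unknown placeholders as-is

    return re.sub(r"\{([^{}]+)\}", replace, prompt)
```

For example, `expand_includes("Review this: {main.py}")` would inline the contents of `main.py` into the prompt before it is sent to the API.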
| Category | Extensions |
|---|---|
| Python | .py, .pyw |
| JavaScript / TypeScript | .js, .ts, .jsx, .tsx |
| Systems | .c, .cpp, .h, .go, .rs |
| Web | .html, .css, .scss |
| Data / Config | .json, .yaml, .yml, .toml, .ini |
| Documentation | .md, .rst, .txt |
| Shell | .sh, .bash, .zsh, .ps1 |
| Format | Extension | Best For |
|---|---|---|
| JSON | `.json` | Programmatic processing, data pipelines |
| Plain Text | `.txt` | Simple archival, sharing |
| Markdown | `.md` | Documentation, GitHub wikis |
| HTML | `.html` | Styled reports, client presentations |

Exports are saved to: `agents/<agent-id>/exports/`
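The Markdown export step can be sketched like this. The record shape (`{"role": ..., "content": ...}`) and function name are assumptions for illustration; the real `export.py` may structure both differently:

```python
from pathlib import Path

def export_markdown(history: list[dict], out_dir: Path, agent_id: str) -> Path:
    """Render a conversation as a Markdown file in the exports directory.

    Assumes each history entry is {"role": ..., "content": ...};
    sketch only, not the actual export.py implementation.
    """
    lines = [f"# Conversation export: {agent_id}", ""]
    for msg in history:
        lines.append(f"## {msg['role'].capitalize()}")  # one section per message
        lines.append("")
        lines.append(msg["content"])
        lines.append("")
    out_dir.mkdir(parents=True, exist_ok=True)
    out_path = out_dir / "conversation.md"
    out_path.write_text("\n".join(lines), encoding="utf-8")
    return out_path
```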
```
Is the task complex, or does it require deep reasoning?
├── YES → use GPT-5
└── NO  → Do you need a good balance of speed and quality?
          ├── YES → use GPT-5 Mini
          └── NO  → use GPT-5 Nano (fastest, cheapest)
```
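The decision flow above reduces to a tiny helper (the function name and flags are hypothetical, not part of the CLI):

```python
def pick_model(complex_task: bool, balanced: bool = False) -> str:
    """Map the model-selection flow above to a model name (illustrative)."""
    if complex_task:
        return "gpt-5"        # deep reasoning
    if balanced:
        return "gpt-5-mini"   # speed/quality balance
    return "gpt-5-nano"       # fastest, cheapest
```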
Each agent stores its configuration in `agents/<agent-id>/config.yaml`:

```yaml
# agents/my-agent/config.yaml
model: gpt-5               # gpt-5 | gpt-5-mini | gpt-5-nano
temperature: 0.7           # 0.0 (deterministic) to 2.0 (creative)
max_tokens: 4096           # Maximum response tokens
reasoning_effort: medium   # low | medium | high (GPT-5 only)
stream: true               # Enable token streaming
system_prompt: |
  You are a senior software engineer.
  Provide concise, production-ready code.
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `model` | string | `gpt-5` | Model variant to use |
| `temperature` | float | `0.7` | Response randomness (0–2) |
| `max_tokens` | int | `4096` | Max tokens per response |
| `reasoning_effort` | string | `medium` | Reasoning depth (GPT-5 only) |
| `stream` | bool | `true` | Stream tokens in real time |
| `system_prompt` | string | – | Custom system instructions |
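Loading this file and merging it over the defaults can be sketched as follows. The function name and defaults dict are illustrative; the real `config.py` may validate or structure this differently:

```python
from pathlib import Path

# Default values matching the parameter table above
DEFAULTS = {
    "model": "gpt-5",
    "temperature": 0.7,
    "max_tokens": 4096,
    "reasoning_effort": "medium",
    "stream": True,
    "system_prompt": "",
}

def load_agent_config(agent_id: str, root: Path = Path("agents")) -> dict:
    """Merge agents/<agent-id>/config.yaml over the defaults above.

    Sketch only; config.py may implement this differently.
    """
    cfg = dict(DEFAULTS)
    path = root / agent_id / "config.yaml"
    if path.is_file():
        import yaml  # PyYAML, listed in requirements.txt
        cfg.update(yaml.safe_load(path.read_text(encoding="utf-8")) or {})
    return cfg
```

Missing keys fall back to the defaults, so a config file only needs to list the values it overrides.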
```
gpt5-cli-agent/
│
├── main.py              # CLI entrypoint, argument parsing, session loop
├── agent.py             # Agent class, API calls, streaming logic (462 lines)
├── config.py            # Configuration management, YAML read/write (89 lines)
├── export.py            # Multi-format export engine (424 lines)
├── utils.py             # File inclusion, formatting utilities (285 lines)
├── requirements.txt     # Python dependencies
├── README.md            # This file
│
└── agents/              # Per-agent data directory (auto-created)
    └── {agent-id}/
        ├── config.yaml      # Agent-specific configuration
        ├── history.json     # Persistent conversation history
        ├── secrets.json     # API keys (git-ignored)
        ├── backups/         # Automatic history backups
        ├── logs/            # Session logs
        └── exports/         # Exported conversations
```
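The per-agent directories are auto-created on first run; that scaffolding step could look roughly like this (hypothetical helper, the CLI may do this differently):

```python
from pathlib import Path

def ensure_agent_dirs(agent_id: str, root: Path = Path("agents")) -> Path:
    """Create the per-agent layout shown above if it does not exist yet.

    Illustrative sketch, not the actual setup code in main.py.
    """
    agent_dir = root / agent_id
    for sub in ("backups", "logs", "exports"):
        # parents=True also creates agents/ and agents/<agent-id>/
        (agent_dir / sub).mkdir(parents=True, exist_ok=True)
    return agent_dir
```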
| Module | Lines | Responsibility |
|---|---|---|
| `main.py` | 616 | CLI entry, argument parsing, REPL loop |
| `agent.py` | 462 | GPT-5 API integration, streaming |
| `export.py` | 424 | JSON / TXT / Markdown / HTML export |
| `utils.py` | 285 | File inclusion, text formatting |
| `config.py` | 89 | YAML config read/write |
| **Total** | **1,876** | Full project |
| Metric | GPT-5 | GPT-5 Mini | GPT-5 Nano |
|---|---|---|---|
| Avg. First Token | ~3–8s | ~1–3s | ~0.5–1.5s |
| Avg. Full Response | ~30–120s | ~15–60s | ~5–20s |
| Token Throughput | ~30 tok/s | ~60 tok/s | ~100 tok/s |
| Context Window | 128K tokens | 128K tokens | 128K tokens |
| Streaming Support | ✅ | ✅ | ✅ |
Performance varies based on OpenAI infrastructure load and response complexity.
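These figures can be spot-checked locally by timing any token stream. The wrapper below is a hypothetical measurement aid, not part of the CLI:

```python
import time
from typing import Iterable, Iterator

def timed_stream(tokens: Iterable[str]) -> Iterator[str]:
    """Yield tokens unchanged while measuring first-token latency and throughput."""
    start = time.perf_counter()
    first = None
    count = 0
    for tok in tokens:
        if first is None:
            first = time.perf_counter() - start  # time to first token
        count += 1
        yield tok
    elapsed = time.perf_counter() - start
    print(f"first token: {first:.2f}s, {count / elapsed:.1f} tok/s")
```

Wrapping the CLI's streaming response in `timed_stream(...)` leaves the output untouched and prints the two metrics when the stream finishes.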
- API keys are stored in `secrets.json`, which is automatically excluded via `.gitignore`
- No sensitive data is ever included in logs or exports
- Environment variable support: use `OPENAI_API_KEY` instead of file storage
- Multi-model key support: separate keys per model if needed
- Agent isolation: each agent's data is fully sandboxed in its own directory
- No telemetry: no usage data is sent anywhere except OpenAI's API
```bash
# Always use environment variables in production
export OPENAI_API_KEY=sk-...

# Never commit secrets; verify .gitignore is active
grep secrets .gitignore

# Restrict permissions on the agents directory
chmod 700 agents/
```

| Error | Cause | Fix |
|---|---|---|
| `ModuleNotFoundError` | Missing dependency | `pip install -r requirements.txt` |
| `AuthenticationError` | Invalid API key | Verify `OPENAI_API_KEY` is set and valid |
| `TimeoutError` | Request too long | Reduce `max_tokens` or lower `reasoning_effort` |
| `FileNotFoundError` | File inclusion failed | Run `/files` to list available files |
| `JSONDecodeError` | Corrupted history | Delete `agents/<id>/history.json` to reset |
| `RateLimitError` | API quota exceeded | Wait and retry, or upgrade OpenAI tier |
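For transient rate-limit errors, a retry loop with exponential backoff is the usual fix. The sketch below is generic; match the exception check to whatever your API client actually raises:

```python
import random
import time

def with_backoff(call, max_retries: int = 5, base_delay: float = 1.0):
    """Retry a rate-limited API call with exponential backoff plus jitter.

    `call` is a zero-argument callable; only exceptions whose type name
    contains "RateLimit" are retried (assumption; adjust to your client).
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception as exc:
            if "RateLimit" not in type(exc).__name__ or attempt == max_retries - 1:
                raise  # non-retryable error, or retries exhausted
            # 1s, 2s, 4s, ... plus random jitter to avoid thundering herds
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```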
```bash
# Enable verbose logging
python main.py --agent-id my-agent --model gpt-5 --debug

# Delete agent history (keeps config)
rm agents/my-agent/history.json

# Full reset (removes all agent data)
rm -rf agents/my-agent/
```
**Simon-Pierre Boucher** — wait, no: **Simon-Pierre Boucher**, Creator & Lead Developer
AI/ML engineer and researcher building production-grade AI tooling. Passionate about making large language models accessible through elegant command-line interfaces.

**Claude**, AI Co-Author & Documentation Assistant
This README was co-authored with Claude, Anthropic's AI assistant, to ensure comprehensive documentation, accurate technical details, and a polished developer experience.
Contributions are welcome! Here's how to get started:
```bash
# 1. Fork the repository on GitHub

# 2. Clone your fork
git clone https://github.com/YOUR_USERNAME/gpt5-cli-agent.git
cd gpt5-cli-agent

# 3. Create a feature branch
git checkout -b feature/my-new-feature

# 4. Make your changes and test
python main.py --agent-id test --model gpt-5-nano

# 5. Commit your changes
git add .
git commit -m "feat: add my new feature"

# 6. Push and open a Pull Request
git push origin feature/my-new-feature
```

- Follow PEP 8 style for Python code
- Add docstrings to new functions and classes
- Test with all three models before submitting
- Update this README if you add user-facing features
- Keep PRs focused: one feature or fix per PR
MIT License
Copyright (c) 2025 Simon-Pierre Boucher
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Built with ❤️ by Simon-Pierre Boucher & Claude

If this project helped you, consider giving it a ⭐ on GitHub!