feat: Coolify deployment support and Telegram UI/UX enhancements #239

mrbeandev wants to merge 12 commits into sipeed:main
Conversation
This PR introduces comprehensive support for Coolify deployment and several improvements to the Telegram channel:

Coolify Deployment:
- Added a COOLIFY.md guide with 3 configuration methods.
- Added entrypoint-coolify.sh to generate config.json from environment variables.
- Added Dockerfile.coolify and docker-compose-coolify.yml optimized for Coolify.
- Support for full JSON configuration via the PICOCLAW_CONFIG_JSON env var.

Telegram Enhancements:
- Persistent 'typing' indicator that repeats every 4s while the AI is thinking.
- Automatic registration of bot commands (/model, /models) on startup.
- Consolidated /model command that supports 'provider/model' syntax for atomic switching.
- Dynamic /models command that shows the actually configured providers and the active model.

Configuration:
- Improved AgentLoop to support hot-switching models and providers without a restart.

These changes improve cloud deployability and user experience in chat channels.
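As a rough sketch of how such an entrypoint could render config.json from individual environment variables: only `PICOCLAW_AGENTS_DEFAULTS_PROVIDER` is named in this PR, so the model variable, the fallback defaults, and the output path below are illustrative assumptions, not the shipped script.

```shell
# Hypothetical sketch: render config.json from individual env vars.
# PICOCLAW_AGENTS_DEFAULTS_PROVIDER appears in the PR; the model variable,
# the defaults, and the output path are illustrative assumptions.
: "${PICOCLAW_AGENTS_DEFAULTS_PROVIDER:=openai}"
: "${PICOCLAW_AGENTS_DEFAULTS_MODEL:=gpt-4o-mini}"

cat > /tmp/config.json <<EOF
{
  "agents": {
    "defaults": {
      "provider": "${PICOCLAW_AGENTS_DEFAULTS_PROVIDER}",
      "model": "${PICOCLAW_AGENTS_DEFAULTS_MODEL}"
    }
  }
}
EOF
```

This keeps the Go binary oblivious to the deployment platform: the container boundary translates flat env vars into the nested JSON the app already understands.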
The root cause: Go's env.Parse() in LoadConfig reads PICOCLAW_* environment variables AFTER loading config.json, silently overwriting user-provided values. Dockerfile.coolify had hardcoded Gemini defaults (PICOCLAW_AGENTS_DEFAULTS_PROVIDER=gemini) baked into the image layer, so even when config.json correctly said 'vllm', the env vars won.

Fix:
- entrypoint-coolify.sh: when using PICOCLAW_CONFIG_JSON (Method 1) or a mounted config (Method 2), unset all PICOCLAW_* env vars before calling picoclaw, so the JSON file is the single source of truth.
- docker-compose-coolify.yml: remove the hardcoded Gemini defaults from the agent and doctor services.
- Docs: updated the header to recommend PICOCLAW_CONFIG_JSON as the primary method.
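A minimal sketch of that precedence fix, assuming illustrative paths and demo values (the shipped entrypoint-coolify.sh may differ in details):

```shell
# Demo values standing in for what a Coolify deployment would inject;
# the stale provider mimics the default baked into the old image layer.
export PICOCLAW_CONFIG_JSON='{"agents":{"defaults":{"provider":"vllm"}}}'
export PICOCLAW_AGENTS_DEFAULTS_PROVIDER='gemini'

CONFIG_PATH="/tmp/coolify-config.json"   # illustrative path

# Method 1: a full JSON config wins; write it to disk verbatim.
if [ -n "${PICOCLAW_CONFIG_JSON:-}" ]; then
    printf '%s' "$PICOCLAW_CONFIG_JSON" > "$CONFIG_PATH"
fi

# Strip every PICOCLAW_* variable so env.Parse() in LoadConfig cannot
# silently override values read from config.json.
for var in $(env | sed -n 's/^\(PICOCLAW_[A-Za-z0-9_]*\)=.*/\1/p'); do
    unset "$var"
done

# exec picoclaw   # hand off with a clean environment (invocation assumed)
```

The key point is that the unset loop runs in the entrypoint, outside the Go process, so no change to LoadConfig's precedence order is needed.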
…t agent blocking (fixes #197)

Force-pushed from 9ee5328 to 179527d.
@Zepan Coolify deployment support plus Telegram UI/UX enhancements at +1032 lines. This bundles deployment infrastructure with channel improvements — two distinct concerns in one PR.

Recommendation: Consider splitting. The Telegram UX improvements are useful independently. Coolify deployment support is a niche deployment target that adds maintenance burden. Splitting would make review easier.
Addressed the latest review feedback to split concerns in this PR. I removed the Coolify-specific deployment bundle from this branch (docs, compose, entrypoint, Coolify Dockerfile, and related release/workflow wiring) so PR #239 focuses on the channel/runtime improvements instead of mixing deployment infrastructure with Telegram UX. Latest commit on this PR branch:
Update: Coolify-only PR: #345
This PR introduces comprehensive support for Coolify deployment and several improvements to the Telegram channel to enhance the self-hosting experience.
☁️ Coolify Deployment
- Added a `COOLIFY.md` guide with 3 configuration methods.
- Added `entrypoint-coolify.sh` to generate `config.json` from environment variables.
- `Dockerfile.coolify` and `docker-compose-coolify.yml` optimized for Coolify's environment.
- Support for full JSON configuration via the `PICOCLAW_CONFIG_JSON` environment variable.

🤖 Telegram Enhancements
- Persistent "typing" indicator that repeats every 4s while the AI is thinking.
- Automatic registration of bot commands (`/model`, `/models`) with Telegram on startup using `setMyCommands`.
- Consolidated `/model` command supporting `provider/model` syntax (e.g., `/model vllm/qwen3-coder-next:cloud`) to switch both the backend API and the model name in a single message.
- The `/models` command now dynamically reads the loaded configuration to show the actually available providers and the currently active model.

⚙️ Core Improvements
- Improved `AgentLoop` to support hot-switching models and providers without a restart.
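The atomic `provider/model` split behind the `/model` command can be illustrated with plain POSIX parameter expansion (a sketch only; the actual implementation is in Go):

```shell
# Illustrative sketch of splitting a "/model vllm/qwen3-coder-next:cloud"
# argument into provider and model in one step, so both switch atomically.
arg='vllm/qwen3-coder-next:cloud'

provider="${arg%%/*}"   # text before the first slash -> "vllm"
model="${arg#*/}"       # text after the first slash  -> "qwen3-coder-next:cloud"

echo "provider=$provider model=$model"
```

Splitting on the first slash only is what lets model names keep their own suffixes (like `:cloud`) intact.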
These changes make PicoClaw significantly easier to deploy on cloud platforms and improve the interactivity of the Telegram bot.
🔗 Related Issues
- Addressed by the consolidated `/model provider/model` command.
- Addressed by `PICOCLAW_CONFIG_JSON`.
- #206 (`auth_method` controls API type; Anthropic `api_base` is ignored on Messages path): makes API routing explicit and user-controlled.

cc @sipeed