Reusable AI chat assistant that monitors messaging platforms, classifies messages semantically, and responds using the Anthropic API.
- Multi-platform — Signal, Teams, Webhook, CLI (extensible adapter system)
- Semantic classification — LLM-powered REPLY/RELAY/IGNORE routing
- Zero dependencies — Python stdlib only, runs anywhere with Python 3.10+
- Docker ready — single image, built-in health checks
- Observable — per-adapter metrics, token cost tracking, structured JSON logs
- Configurable — everything via `COCONUT_*` environment variables
```bash
# CLI mode (local testing)
COCONUT_ADAPTER_CLI_ENABLED=true \
ANTHROPIC_API_KEY=your-api-key \
python coconut.py
```
```bash
# Docker with Signal
cp config/coconut.env.example coconut.env
# Edit coconut.env with your values
docker compose up -d
bash scripts/signal-register.sh +1234567890
```

```
coconut.py               Poll loop, orchestration, signal handling
core/
  config.py              COCONUT_* env var loader
  llm.py                 Anthropic API client (retry + backoff)
  classifier.py          Semantic message classifier
  cache.py               Rolling message cache with archiving
  health.py              Health writer, metrics, cost estimation
  quotes.py              Teams quote chain resolution
  ratelimit.py           Per-adapter sliding window rate limiter
  logrotate.py           Size-based log rotation (5 MB default)
adapters/
  base.py                Abstract adapter interface (poll/send)
  signal_adapter.py      Signal via signal-cli REST API
  teams_adapter.py       Teams via MS Graph API
  cli_adapter.py         stdin/stdout for testing
  webhook_adapter.py     HTTP inbound/outbound with HMAC auth
```
```
Adapter.poll() → Cache → Classifier → REPLY:  LLM response → Adapter.send()
                                      RELAY:  Forward to external system
                                      IGNORE: Skip
```
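The routing step above can be sketched as a small dispatch function. The `Message` shape and the callback names are assumptions for illustration; the real classifier and LLM client live in `core/classifier.py` and `core/llm.py`:

```python
from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    text: str


def dispatch(msg: Message, classify, reply, relay):
    """Route one cached message according to its classification."""
    verdict = classify(msg)  # expected: "REPLY", "RELAY", or "IGNORE"
    if verdict == "REPLY":
        return reply(msg)    # LLM response, sent back via Adapter.send()
    if verdict == "RELAY":
        return relay(msg)    # forward to an external system
    return None              # IGNORE: drop silently
```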
All settings via environment variables. See config/coconut.env.example for the full list.
| Variable | Default | Description |
|---|---|---|
| `ANTHROPIC_API_KEY` | (required) | Anthropic API key |
| `COCONUT_NAME` | `Coconut` | Bot display name |
| `COCONUT_MODEL` | `claude-haiku-4-5-20251001` | LLM model |
| `COCONUT_POLL_INTERVAL` | `3` | Seconds between polls |
| `COCONUT_ADAPTER_SIGNAL_ENABLED` | `false` | Enable Signal adapter |
| `COCONUT_ADAPTER_TEAMS_ENABLED` | `false` | Enable Teams adapter |
| `COCONUT_ADAPTER_CLI_ENABLED` | `false` | Enable CLI adapter |
| `COCONUT_ADAPTER_WEBHOOK_ENABLED` | `false` | Enable webhook adapter |
| `COCONUT_WEBHOOK_PORT` | `8000` | Webhook listen port |
| `COCONUT_WEBHOOK_SECRET` | (none) | HMAC-SHA256 shared secret |
| `COCONUT_RATE_LIMIT_MAX` | `10` | Max replies per window per adapter |
| `COCONUT_RATE_LIMIT_WINDOW` | `60` | Rate limit window in seconds |
| `COCONUT_LOG_MAX_BYTES` | `5242880` | Log rotation size threshold (5 MB) |
| `COCONUT_LOG_BACKUPS` | `3` | Number of rotated log backups |
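The webhook adapter authenticates with an HMAC-SHA256 shared secret (`COCONUT_WEBHOOK_SECRET`). A stdlib sketch of how a client might sign a request body and how the server side could verify it — the exact header name and encoding used by `webhook_adapter.py` are not specified here, so treat these helpers as illustrative:

```python
import hashlib
import hmac


def sign(secret: str, body: bytes) -> str:
    """Hex-encoded HMAC-SHA256 over the raw request body."""
    return hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()


def verify(secret: str, body: bytes, signature: str) -> bool:
    """Constant-time comparison against the client-supplied signature."""
    return hmac.compare_digest(sign(secret, body), signature)
```

`hmac.compare_digest` avoids leaking the match position through timing differences.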
```bash
# K8s liveness probe / monitoring
python coconut.py --health
# Exit 0 = healthy, 1 = stale (no heartbeat in 5 min)
# Outputs JSON with uptime, processed count, adapter stats, token cost
```

- Create `adapters/your_adapter.py` extending `BaseAdapter`
- Implement `poll()` → returns `list[Message]`
- Implement `send(text)` → delivers formatted response
- Add config loading in `core/config.py` and `coconut.py`
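The steps above might look like the following. `BaseAdapter` and `Message` here stand in for the real interfaces in `adapters/base.py`, whose exact signatures are assumptions:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    text: str


class BaseAdapter(ABC):
    """Assumed shape of the abstract adapter interface (poll/send)."""

    @abstractmethod
    def poll(self) -> list[Message]: ...

    @abstractmethod
    def send(self, text: str) -> None: ...


class EchoAdapter(BaseAdapter):
    """Toy adapter: queues incoming strings, records outgoing replies."""

    def __init__(self):
        self.inbox: list[str] = []
        self.outbox: list[str] = []

    def poll(self) -> list[Message]:
        msgs = [Message(sender="echo", text=t) for t in self.inbox]
        self.inbox.clear()
        return msgs

    def send(self, text: str) -> None:
        self.outbox.append(text)
```

After implementing the class, wire its enable flag and settings into `core/config.py` and register it in `coconut.py` alongside the built-in adapters.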
```bash
bash scripts/test/test-coconut.sh        # Core E2E (7 tests)
bash scripts/test/test-multi-adapter.sh  # Multi-adapter (6 tests)
bash scripts/test/test-hardening.sh      # Retry, metrics, health (8 tests)
bash scripts/test/test-webhook.sh        # Webhook adapter (8 tests)
bash scripts/test/test-ratelimit.sh      # Rate limiter (8 tests)
bash scripts/test/test-logrotate.sh      # Log rotation (5 tests)
bash scripts/test/test-docker.sh         # Dockerfile validation
```

MIT