I build production-grade agentic AI systems — evidence-backed, measurable, and trusted in enterprise workflows.
Let your AI agents talk about each other.
Local-first CLI for evidence-backed cross-agent coordination across Codex, Claude, Gemini, and Cursor. One agent reads another's session with citations and structured evidence — no orchestrator required.
- Dual implementation: Node.js + Rust with identical conformance tests
- Session reads, diffing, comparisons, agent-to-agent messaging, secret redaction
- Context Pack: 5-doc agent-first repo briefing that eliminates cold-start re-reads
- Zero npm prod dependencies, works offline, nothing leaves your machine
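The secret-redaction step above can be sketched as a simple pattern-masking pass. This is an illustration of the general idea only, assuming regex-based detection — it is not agent-chorus's actual implementation:

```typescript
// Minimal sketch of pattern-based secret redaction -- illustrative only,
// not agent-chorus's actual implementation.
const SECRET_PATTERNS: RegExp[] = [
  /sk-[A-Za-z0-9]{20,}/g, // OpenAI-style API keys
  /ghp_[A-Za-z0-9]{36}/g, // GitHub personal access tokens
  /AKIA[0-9A-Z]{16}/g,    // AWS access key IDs
];

function redact(text: string): string {
  // Replace each match with a fixed placeholder so a session read
  // can be shared across agents without leaking credentials.
  return SECRET_PATTERNS.reduce(
    (out, pattern) => out.replace(pattern, "[REDACTED]"),
    text,
  );
}

console.log(redact("token=ghp_" + "a".repeat(36))); // token=[REDACTED]
```

A real redactor would also need entropy heuristics and provider-specific allowlists; the point here is only that redaction happens locally, before any cross-agent read.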
| | agent-chorus | CrewAI / AutoGen | ccswarm / claude-squad |
|---|---|---|---|
| Approach | Read-only evidence layer | Full orchestration framework | Parallel agent spawning |
| Agents | Codex, Claude, Gemini, Cursor | Provider-specific | Usually Claude-only |
| Dependencies | Zero npm prod deps | Heavy Python/TS stack | Moderate |
| Cold-start solution | Context Pack | None | None |
Repo: cote-star/agent-chorus
Choose the trust posture before a local tool gets credential-backed access.
macOS-first local trust broker for agent-mediated tool execution. Secrets stay local, wrappers and binaries are trust-pinned, and you choose the trust mode per task.
- Three shipped modes: handoff (env injection), oneshot (bounded run), brokered (per-operation request)
- Swift 6.0, macOS Keychain-backed, JSONL audit trail with enforced preflight
- Fail-closed on drift, hijack, or bypass — defense in depth, not "secure agents solved"
Repo: cote-star/latchkeyd
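The JSONL audit trail works on a simple append-only discipline: one self-contained JSON object per line. As a sketch of that idea — field names here are illustrative assumptions, not latchkeyd's actual schema:

```typescript
import { appendFileSync, readFileSync } from "node:fs";

// Hypothetical audit record shape -- field names are illustrative,
// not latchkeyd's actual schema.
interface AuditEntry {
  ts: string;                               // ISO-8601 timestamp
  mode: "handoff" | "oneshot" | "brokered"; // trust mode for this task
  tool: string;                             // tool or wrapper invoked
  decision: "allow" | "deny";
  reason?: string;
}

function logAudit(path: string, entry: AuditEntry): void {
  // JSONL: one JSON object per line, append-only, so the trail
  // stays greppable and easy to review after the fact.
  appendFileSync(path, JSON.stringify(entry) + "\n");
}

logAudit("/tmp/audit.jsonl", {
  ts: new Date().toISOString(),
  mode: "brokered",
  tool: "git-push",
  decision: "deny",
  reason: "binary hash drift",
});
```

Append-only JSONL is a deliberately boring choice: no database, no daemon state to corrupt, and a fail-closed broker can refuse to proceed if the preflight write does not succeed.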
- Production multi-agent systems and coordination reliability
- LLM/VLM engineering with research-to-production translation
- Fine-tuning, structured generation, and inference optimization
- LLMOps, evaluation, and governance for enterprise deployment
- Local-first agent security and trust infrastructure
Practical field notes on enterprise AI reliability and agent systems:
- The Silo Tax — why multi-agent workflows break
- Why Agents Don't Fail Fast — they fail slow
More at Confessions of a Thoughtful Engineer
- Evidence over assumptions
- Reliability over demo polish
- Measurable outcomes over vague AI claims
- Simplicity first; orchestration only when needed