When the claws 🦞 are asleep, the gloamies 🦉 come out to play!
Hi, I'm Gloamy. I execute tasks on your behalf without deleting your stuff or leaking your private business, and I'm lightweight, unlike that damn lobster (OpenClaw).
Gloamy is built around explicit subsystem contracts:
- `Provider` for model backends
- `Channel` for messaging platforms
- `Tool` for execution surfaces
- `Memory` for persistence and recall
- `Observer` for observability
- `RuntimeAdapter` for runtime isolation
- `Peripheral` for boards and device integrations
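As a rough sketch of what a contract in this style looks like (the trait, method, and type names below are hypothetical illustrations, not Gloamy's actual API):

```rust
// Sketch of a trait-driven subsystem contract. The trait, method,
// and type names here are hypothetical, not Gloamy's actual API.

/// A model backend: takes a prompt, returns a completion.
trait Provider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

/// A stand-in backend showing how implementations swap without
/// touching the agent loop.
struct EchoProvider;

impl Provider for EchoProvider {
    fn name(&self) -> &str {
        "echo"
    }

    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("echo: {prompt}"))
    }
}

/// The core loop depends only on the trait object, so any
/// `Provider` implementation can be plugged in.
fn run_once(provider: &dyn Provider, prompt: &str) -> Result<String, String> {
    provider.complete(prompt)
}

fn main() {
    let provider = EchoProvider;
    let reply = run_once(&provider, "hello").expect("completion failed");
    println!("{}: {}", provider.name(), reply);
}
```

Because the core only sees the trait, swapping in a different backend is a matter of providing another implementation, which is the point of the contracts above.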
The project goal is simple: one runtime, one configuration model, and swappable integrations without rewriting the core agent loop.
Gloamy can:
- chat with you in your terminal
- stay running in the background to handle messages
- control apps on your PC, with permissions
- let desktop and external API clients connect through the gateway
- use different AI models to answer questions
- run tools and scripts on your computer or online, safely
- remember things between sessions
- connect to devices and smart hardware
Out of the box, the runtime supports:
- local CLI usage
- channel-based operation through providers such as Telegram, Discord, Slack, Matrix, WhatsApp, iMessage, Email, IRC, Nostr, and others
- OpenAI-compatible and non-OpenAI model providers
- SQLite, Markdown, Lucid, PostgreSQL, and no-op memory modes
High-level repository map:
- `src/main.rs`: CLI entrypoint and command routing
- `src/lib.rs`: shared exports and command enums
- `src/agent/`: orchestration loop
- `src/config/`: config schema, loading, merging, env overrides
- `src/providers/`: model provider implementations and factory wiring
- `src/channels/`: channel integrations
- `src/tools/`: tool execution surfaces
- `src/memory/`: memory backends
- `src/security/`: policy, pairing, secret handling
- `src/gateway/`: HTTP and websocket gateway
- `src/runtime/`: runtime adapters
- `src/peripherals/`: hardware integrations
- `desktop/`: Tauri + Vue desktop application
- `docs/`: operator, reference, and contribution docs
- Trait-driven architecture
- Secure-by-default runtime behavior
- Small binary and low runtime overhead
- Explicit config and CLI contracts
- Swappable providers, channels, memory backends, and tools
- Deterministic, Rust-first deployment model
You need:
- a working Rust toolchain
- standard platform build tools
- an API key or local model endpoint, depending on your provider
On macOS:
```sh
xcode-select --install
```

On Debian or Ubuntu:

```sh
sudo apt install build-essential pkg-config
```

Install Rust:

```sh
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup default stable
```

Verify:

```sh
rustc --version
cargo --version
```

Install from crates.io:

```sh
cargo install gloamy --locked
```

Then run full onboarding:

```sh
gloamy onboard --interactive
```

If you prefer to run from source, clone the repository and start onboarding:

```sh
git clone https://github.com/iBz-04/gloamy.git
cd gloamy
cargo run -- onboard --interactive
```

The crates.io install is the recommended default for end users. The source path is better when you want to develop, patch, or test unreleased changes.
The full onboarding flow lets you:
- choose your provider and default model
- configure channels
- set memory behavior
- create workspace identity files
- avoid partial setup drift
After onboarding, start a direct CLI session:
```sh
cargo run -- agent
```

If you want persistent channel operation after setup:

```sh
cargo run -- daemon
```

If you prefer the bootstrap path:

```sh
./bootstrap.sh --interactive-onboard
```

Useful bootstrap variants:

```sh
./bootstrap.sh
./bootstrap.sh --install-system-deps --install-rust
./bootstrap.sh --prefer-prebuilt
./bootstrap.sh --prebuilt-only
./bootstrap.sh --docker
```

Reference: docs/one-click-bootstrap.md
If you already know exactly what you want, you can skip the full wizard:
```sh
cargo run -- onboard --api-key YOUR_OPENAI_KEY --provider openai --model gpt-5-mini
```

This path is faster, but the interactive onboarding flow is the better default for first setup.
Gloamy can import memory from an existing OpenClaw workspace into your current Gloamy workspace.
Preview the migration first:
```sh
cargo run -- migrate openclaw --dry-run
```

Run the import:

```sh
cargo run -- migrate openclaw
```

Use a custom OpenClaw workspace path if needed:

```sh
cargo run -- migrate openclaw --source /path/to/openclaw/workspace
```

What the migration does:
- reads importable memory from `~/.openclaw/workspace` by default
- imports entries from `memory/brain.db`, `MEMORY.md`, and `memory/*.md`
- skips unchanged entries on re-run and renames conflicting keys deterministically
- creates a backup of the target Gloamy memory before writing
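The deterministic-rename behavior can be pictured with a small sketch. The suffix scheme below is invented for illustration; the actual rule used by `gloamy migrate openclaw` may differ:

```rust
// Illustrative only: deterministically rename a conflicting key by
// appending a numeric suffix until it no longer collides. The real
// migration may use a different suffix scheme.
use std::collections::HashSet;

fn deterministic_rename(key: &str, existing: &HashSet<String>) -> String {
    if !existing.contains(key) {
        return key.to_string();
    }
    let mut n = 1;
    loop {
        let candidate = format!("{key}-openclaw-{n}");
        if !existing.contains(&candidate) {
            return candidate;
        }
        n += 1;
    }
}

fn main() {
    let existing: HashSet<String> =
        ["notes".to_string(), "notes-openclaw-1".to_string()].into();
    // Same inputs always produce the same output, so re-runs are stable.
    assert_eq!(deterministic_rename("notes", &existing), "notes-openclaw-2");
    assert_eq!(deterministic_rename("todo", &existing), "todo");
    println!("ok");
}
```

Because the rename depends only on the key and the set of existing keys, re-running the migration produces the same names every time, which is what makes re-runs safe.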
What it does not do:
- it does not migrate arbitrary workspace files
- it does not convert OpenClaw config into Gloamy `config.toml`
If you want a safe preview of the candidate entries without writing data, use `--dry-run`.
This repository also includes a desktop application in desktop/, built with Tauri (Rust backend) and Vue 3 (frontend).
The legacy browser dashboard has been removed. Use the desktop app for the primary UI, and use the gateway for webhook/API access.
If you want to run the desktop UI locally:
```sh
cd desktop
pnpm install
pnpm tauri dev
```

For desktop-specific setup, development, and packaging details, see desktop/README.md.
Start a direct chat session:
```sh
cargo run -- agent
```

Run a one-shot prompt:

```sh
cargo run -- agent -m "Summarize today's logs"
```

Override provider and model for one run:

```sh
cargo run -- agent --provider openai --model gpt-5-mini -m "hello"
```

Run the long-lived runtime:

```sh
cargo run -- daemon
```

The daemon starts:
- configured channels
- gateway server
- heartbeat
- scheduler
Use this mode when you want Telegram or other channels to stay online.
Run only the local gateway:
```sh
cargo run -- gateway
```

The gateway exposes HTTP, webhook, and websocket endpoints for external integrations. It no longer serves a browser dashboard.
Run the configured channels without the full daemon stack:
```sh
cargo run -- channel start
```

The primary setup command is:

```sh
gloamy onboard
```

Common variants:

```sh
gloamy onboard --interactive
gloamy onboard --channels-only
gloamy onboard --force
gloamy onboard --api-key YOUR_KEY --provider openai --model gpt-5-mini
```

If you already have a config and only want to wire channels, use:

```sh
gloamy onboard --channels-only
```

Gloamy supports multiple inbound and outbound channels. Channel setup is stored in config, and the daemon uses that config at runtime.
Typical flow:
- Run `gloamy onboard --channels-only`
- Configure Telegram, Discord, Slack, WhatsApp, or another channel
- Start `gloamy daemon`
- Verify with `gloamy status` and `gloamy channel doctor`
Important operational note:
- `gloamy agent` is for direct CLI use
- `gloamy daemon` is what you run for persistent channel operation
Canonical reference: docs/channels-reference.md
Default config location: `~/.gloamy/config.toml`

Default workspace: `~/.gloamy/workspace`
Minimal example:
```toml
api_key = "sk-..."
default_provider = "openai"
default_model = "gpt-5-mini"
default_temperature = 0.7

[memory]
backend = "sqlite"
auto_save = true
embedding_provider = "none"
vector_weight = 0.7
keyword_weight = 0.3
```

Notes:
- `default_provider` controls the main runtime provider
- `default_model` controls the default model for CLI, channel, and daemon flows
- `default_temperature` may be ignored by some providers or models
- several settings can also be overridden by environment variables
Canonical reference: docs/config-reference.md
Gloamy supports:
- direct providers such as OpenAI, Anthropic, Gemini, Groq, Mistral, DeepSeek, Venice, GLM, Qwen, and others
- OpenAI-compatible custom endpoints
- provider aliases and routed model configuration
- subscription-native auth flows for supported providers
Examples:
```sh
gloamy agent --provider openai --model gpt-5-mini -m "hello"
gloamy agent --provider anthropic -m "hello"
gloamy agent --provider openai-codex -m "hello"
```

References:
Gloamy is designed to fail closed where practical.
Important defaults:
- localhost-first binding
- explicit pairing for gateway auth flows
- workspace-scoped file access
- deny-by-default channel allowlists
- encrypted secret support
- explicit security policy for tool and command execution
Operational guidance:
- do not expose the gateway directly without understanding the bind and tunnel settings
- use allowlists for channels instead of wide-open routing
- keep secrets in config or environment variables, not in workspace files
- review tool and command permissions before enabling broad autonomy
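To give the allowlist guidance a concrete shape, here is a hypothetical config fragment. Every key name below is an assumption for illustration; check docs/config-reference.md for the real schema:

```toml
# Hypothetical shape only -- key names are illustrative assumptions,
# not Gloamy's actual schema. See docs/config-reference.md.
[channels.telegram]
enabled = true
# deny-by-default: only explicitly listed user IDs reach the agent
allowed_users = ["123456789"]

[gateway]
# localhost-first: do not widen this bind without understanding
# the tunnel and pairing settings
bind = "127.0.0.1:8080"
```

The point is the posture, not the exact keys: unlisted users are denied, and the gateway stays on localhost until you deliberately change it.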
References:
Supported memory backends include:
- SQLite
- Markdown
- Lucid
- PostgreSQL
- `none` (no-op)
SQLite is the usual default because it gives:
- local persistence
- keyword and vector search support
- practical low-friction setup
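Switching backends is a config change under `[memory]`. This fragment extends the minimal example above; the connection-string key is an assumption for illustration, so verify the actual key names in docs/config-reference.md:

```toml
# Assumed shape based on the minimal example earlier in this README.
# The "url" key is an illustrative guess; verify against
# docs/config-reference.md before using.
[memory]
backend = "postgres"
url = "postgres://gloamy:secret@localhost/gloamy"
auto_save = true
```

Setting `backend = "sqlite"` (the usual default) needs no connection string, which is part of why it is the low-friction choice.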
Reference: docs/config-reference.md
```sh
gloamy status
gloamy doctor
gloamy channel doctor
gloamy agent
gloamy daemon
gloamy gateway
gloamy onboard --interactive
gloamy onboard --channels-only
gloamy auth status
gloamy service install
gloamy service status
gloamy service restart
```

Canonical command reference: docs/commands-reference.md
Recommended local validation:
```sh
cargo fmt --all -- --check
cargo clippy --all-targets -- -D warnings
cargo test
```

Preferred full local validation path when available:

```sh
./dev/ci.sh all
```

Useful development commands:

```sh
cargo run -- status
cargo run -- doctor
cargo run -- agent -m "hello"
cargo run -- daemon
```

If you are working on docs, start from the docs hub:
- Docs hub: `docs/README.md`
- Unified TOC: `docs/SUMMARY.md`
- Getting started: `docs/getting-started/README.md`
- Reference: `docs/reference/README.md`
- Operations: `docs/operations/README.md`
- Security: `docs/security/README.md`
- Hardware: `docs/hardware/README.md`
- Contributing workflow: `docs/pr-workflow.md`
High-signal runtime references:
- `docs/commands-reference.md`
- `docs/providers-reference.md`
- `docs/channels-reference.md`
- `docs/config-reference.md`
- `docs/operations-runbook.md`
- `docs/troubleshooting.md`
Official source of truth: https://github.com/iBz-04/gloamy
If you encounter impersonation or a misleading fork, open an issue in the official repository.
If you want to contribute:
- review `AGENTS.md` for repository engineering expectations
- read `docs/pr-workflow.md`
- use `docs/reviewer-playbook.md` for review standards
Good entry points:
- new provider in `src/providers/`
- new channel in `src/channels/`
- new tool in `src/tools/`
- new memory backend in `src/memory/`
- new observer in `src/observability/`
Gloamy is licensed under MIT.

