
banbury-cheese/workloom


Workloom

Workloom is a local-first desktop activity tracker for knowledge work. It watches the active window, captures screenshots only after filter checks, summarizes them with a local Ollama model, merges them into sessions, and generates searchable history plus daily digests.

Built by Kay @ itskay.co.

What It Does

  • Captures activity privately on your machine
  • Uses local Ollama models for screenshot understanding and session merging
  • Builds daily digests with OpenAI, Anthropic, or fully local Ollama
  • Lets you search past work and ask questions about recent activity
  • Keeps behavior customizable through config.yaml and user_prompts/

Quick Start

  1. Create and activate a virtualenv:

    python3 -m venv .venv
    source .venv/bin/activate
    python -m pip install --upgrade pip
    pip install -r requirements.txt
  2. Create a local config:

    cp config.example.yaml config.yaml
  3. Pull the recommended local model:

    ollama pull granite3.2-vision
  4. Start Ollama:

    ollama serve
  5. Fill in config.yaml. For a fully local setup, use digest_provider: "ollama" and point digest_model at a local model or reuse text_model.

  6. Start Workloom:

    python run.py
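Putting step 5 together with the recommended model settings below, a fully local config.yaml might look like this sketch (keys are the ones named in this README; check config.example.yaml for the authoritative list and any keys not shown here):

```yaml
# Fully local setup: no cloud provider required.
schedule_interval_seconds: 60
vision_model: "granite3.2-vision"   # screenshot understanding
text_model: "granite3.2-vision"     # session merging
digest_provider: "ollama"           # keep digests local too
digest_model: "granite3.2-vision"   # reuses text_model, per step 5
```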

Common Commands

python run.py                 # open the interactive UI
python run.py ui              # re-open / attach the UI
python run.py start           # run headless
python run.py status          # runtime status
python run.py inspect --today # inspect today's captures and sessions
python run.py digest-preview --today   # preview today's digest
python run.py trends --range week      # activity trends over the past week

Inside the UI:

  • /inspect
  • /search <query>
  • /digest
  • /trends [week|month|30d]
  • /pause
  • /resume
  • /shutdown

Configuration

There are two main customization surfaces:

  • config.yaml: machine-level settings such as models, providers, redaction, retention, and chat context.
  • user_prompts/: plain Markdown files that control behavior and tone.

Important prompt files:

  • digest.md: controls digest structure and section instructions with a simple heading-based format.
  • chat.md: controls how the local assistant answers.
  • filters.md: controls which apps are skipped or marked low-signal.
  • persona.md: controls digest tone and framing.
  • summariser.md: controls screenshot interpretation.
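To illustrate the heading-based format, a digest.md might read like the sketch below. The section names and instructions here are invented for illustration only; the shipped user_prompts/digest.md defines the real format:

```markdown
# Highlights
Summarize the two or three most significant work threads of the day.

# Timeline
List sessions chronologically, each with a start time and a one-line summary.
```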

If user_prompts/insight_style.md or user_prompts/digest_format.md remain from an older version, they are obsolete and ignored.

Privacy

  • Screenshots are only taken after filter checks pass.
  • Screenshots are not persisted to disk.
  • Capture text is redacted before persistence based on config.yaml.redaction.
  • Cloud digest providers only receive redacted session text by default.
  • retention_days controls automatic pruning of older captures.
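Redaction can be pictured as pattern-based scrubbing applied to capture text before anything reaches the database. A minimal sketch of the idea; the pattern list and function name are hypothetical, not Workloom's actual implementation:

```python
import re

# Hypothetical patterns of the kind a config.yaml redaction section might hold.
REDACTION_PATTERNS = [
    r"[\w.+-]+@[\w-]+\.[\w.]+",   # email addresses
    r"\b(?:\d[ -]*?){13,16}\b",   # card-number-like digit runs
]

def redact(text: str, patterns=REDACTION_PATTERNS) -> str:
    """Replace every match of each pattern with a placeholder before persisting."""
    for pattern in patterns:
        text = re.sub(pattern, "[REDACTED]", text)
    return text

print(redact("Mail me at kay@example.com about invoice 4111111111111111"))
# → Mail me at [REDACTED] about invoice [REDACTED]
```

With this shape, cloud digest providers only ever see the already-scrubbed text, which matches the default behavior described above.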

Recommended Defaults

Known-good baseline:

schedule_interval_seconds: 60
vision_model: "granite3.2-vision"
text_model: "granite3.2-vision"
digest_provider: "openai"
digest_model: "gpt-4o-mini"

Alternative local setup to try:

vision_model: "qwen3-vl:4b"
text_model: "qwen3-vl:4b"


Outputs

  • SQLite database: outputs/tracker.db
  • Chroma storage: outputs/chroma/
  • Log file: outputs/tracker.log
  • Runtime state: outputs/runtime_state.json
  • Daily digests: outputs/daily_digests/YYYY-MM-DD.md
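Because outputs/tracker.db is a plain SQLite file, it can be inspected with nothing but the Python standard library. A small sketch (the path comes from this README; the table names are whatever the tracker's schema defines, so this just lists them):

```python
import sqlite3

def list_tables(db_path: str) -> list[str]:
    """Return the names of all user tables in a SQLite database."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
    return [name for (name,) in rows]
```

For example, `list_tables("outputs/tracker.db")` from the repo root prints the tracker's tables; from there, ordinary SQL works for ad-hoc queries alongside `python run.py inspect`.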

Notes

  • python run.py is the main operator experience.
  • inspect.py still exists, but run.py inspect ... is the primary interface.
  • The scheduler is designed to keep running even if model or provider calls fail.
