Your private sanctuary for writing with AI — 100% Local, 100% Yours
A local-first markdown editor that uses your own LLMs (Ollama/LM Studio) as a collaborative writing companion. Like Cursor for code, but for prose.
| Feature | SanctumWriter | Other AI Writers |
|---|---|---|
| Privacy | ✅ 100% local - nothing leaves your machine | ❌ Data sent to cloud servers |
| Cost | ✅ Free forever (uses your local LLMs) | ❌ Monthly subscriptions |
| Internet | ✅ Works offline | ❌ Requires connection |
| Your Data | ✅ Stored locally, you control it | ❌ Stored on company servers |
| Open Source | ✅ MIT License | ❌ Usually closed source |
| Feature | SanctumWriter | SanctumWriter Pro |
|---|---|---|
| LLM Providers | Ollama, LM Studio (local only) | Local + OpenRouter, OpenAI, Anthropic, Google, xAI |
| Privacy | 100% Local | Choice of local or cloud |
| Cost | Free | Free + API costs |
| Best For | Privacy-focused writers | Writers wanting access to frontier models |
| Port | localhost:3125 | localhost:3130 |
💡 Choose SanctumWriter if privacy is paramount and you're happy with local models.
💡 Choose Pro if you need GPT-4, Claude, or other cloud models.
- 📝 Rich Markdown Editor — Full-featured editor with syntax highlighting (CodeMirror 6)
- 👁️ Live Preview — See rendered markdown as you type
- 📁 Workspace Browser — Navigate and manage your documents (Obsidian-compatible!)
- 💾 Auto-Save — Never lose your work
- 🤖 Agentic Editing — AI directly modifies your document (no copy/paste!)
- 🎯 Selection-Aware — Highlight text and ask the AI to rewrite just that section
- 💬 Contextual Chat — AI sees your full document and selection
- 🔧 Hardware Optimization — Auto-detects your GPU and optimizes settings
- 👥 Council of Writers — Multiple AI reviewers analyze your work
- 🔍 Research Integration — Search with SearXNG for fact-checking
- 📊 Quality Assurance — Hallucination detection, fact verification, AI artifact removal
- 📋 Writing Workflow — Guided checklist from outline to polish
- 📈 Readability Metrics — Flesch-Kincaid and other scores
- 🎯 Focus Mode — Distraction-free writing
- 📚 Citations & Bibliography — Key-based citation management
- 📤 Export — PDF, DOCX, HTML, TXT formats
- 🧠 RAG Knowledge Base — Use your documents as AI context
- 💭 Session Memory — AI remembers your writing preferences
- 🎨 Image Studio — Generate images via local ComfyUI (Stable Diffusion)
SanctumWriter integrates with several local services. Here's what you need:
| Requirement | Version | Install Link |
|---|---|---|
| Node.js | 18+ | nodejs.org |
| npm | Included with Node.js | — |
| # | Service | Purpose | Install Link |
|---|---|---|---|
| 1 | Ollama ⭐ | Local LLM inference (recommended) | ollama.ai |
| — | OR LM Studio | Alternative local LLM with GUI | lmstudio.ai |
| # | Service | Purpose | Install Link | Requires |
|---|---|---|---|---|
| 2 | Docker Desktop | Container runtime for search services | docker.com | — |
| 3 | SearXNG | Privacy-focused web search | GitHub | Docker |
| 4 | Perplexica | AI-powered search with summaries | GitHub | Docker + Ollama |
| 5 | ComfyUI | Local image generation (Stable Diffusion) | GitHub | GPU recommended |
REQUIRED (Native Install):
─────────────────────────
1. Node.js (for SanctumWriter)
2. Ollama OR LM Studio (for AI) ← Install directly, NOT in Docker
OPTIONAL (Docker-based):
────────────────────────
3. Docker Desktop (only if using search features below)
4. SearXNG (privacy search - runs in Docker)
5. Perplexica (AI search - runs in Docker)
FINALLY:
────────
6. SanctumWriter
⚠️ Note: Install Ollama/LM Studio natively on your machine (not in Docker). While Docker versions exist, native installation gives better performance and GPU access.
# 1. Install Ollama from https://ollama.ai
# 2. Pull a writing-focused model
ollama pull qwen3:latest
# 3. Start the server (runs on port 11434)
ollama serve

LM Studio (alternative):
- Download from lmstudio.ai
- Load a model (e.g., Llama 3, Mistral, Qwen)
- Go to Local Server tab → Start Server (runs on port 1234)
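Whichever provider you pick, it's worth confirming the endpoint answers before launching SanctumWriter. A minimal sketch, assuming the default ports above (the `probe` helper is just for illustration):

```shell
# Probe each LLM endpoint without aborting on failure
# (ports are the defaults listed above; adjust if you changed them).
probe() {
  name="$1"; url="$2"
  if curl -sf -o /dev/null --max-time 2 "$url"; then
    echo "$name: up"
  else
    echo "$name: down"
  fi
}

probe "Ollama"    "http://localhost:11434/api/tags"
probe "LM Studio" "http://localhost:1234/v1/models"
```

Either line printing `down` means SanctumWriter won't be able to reach that provider.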
SearXNG (Privacy-focused search):
# Using Docker
docker run -d --name searxng -p 4000:8080 searxng/searxng

Perplexica (AI-powered search):
# Clone and follow setup instructions
git clone https://github.com/ItzCrazyKns/Perplexica.git
cd Perplexica
# See their README for Docker setup (runs on port 3000)

Run natively (Node.js):
# Clone the repo
git clone https://github.com/lafintiger/SanctumWriter.git
cd SanctumWriter
# Install dependencies
npm install
# Start the app
npm run dev

Open http://localhost:3125 in your browser.
Quick Start (uses Ollama on your host machine):
# Clone the repo
git clone https://github.com/lafintiger/SanctumWriter.git
cd SanctumWriter
# Build and run
docker-compose up -d
# View logs
docker-compose logs -f

With Ollama in Docker (no local Ollama needed):
# Start app + Ollama container
docker-compose --profile ollama up -d
# Pull a model into the container
docker exec sanctum-ollama ollama pull qwen3:latest

Development with hot-reloading:
docker-compose -f docker-compose.dev.yml up

| Docker Command | Description |
|---|---|
| docker-compose up -d | Start app (connects to host Ollama) |
| docker-compose --profile ollama up -d | Start app + Ollama container |
| docker-compose down | Stop all services |
| docker-compose logs -f | View live logs |
| docker-compose build --no-cache | Rebuild after code changes |
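Once the stack is up, a quick HTTP probe confirms the app is answering (3125 is the default published port; adjust if you mapped a different one):

```shell
# Prints a status line either way, so it's safe to drop into setup scripts.
curl -sf --max-time 2 http://localhost:3125 >/dev/null \
  && echo "SanctumWriter is answering on port 3125" \
  || echo "not reachable yet; try: docker-compose logs -f"
```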
Environment Variables:
# Create .env file for custom settings
OLLAMA_URL=http://host.docker.internal:11434
LMSTUDIO_URL=http://host.docker.internal:1234
DEFAULT_PROVIDER=ollama
DEFAULT_MODEL=llama3

💡 Tip: Your documents are persisted in the ./documents folder and LanceDB data in a Docker volume.
- Click + to create a new document
- Write markdown in the editor
- Documents auto-save as you type
- Type a message in the chat panel
- The AI sees your document and any selected text
- Ask for help: "Make this more engaging" or "Expand this section"
- Highlight text in the editor
- Chat shows "Selection active"
- Ask: "Rewrite this" or "Make it more concise"
- AI directly modifies just the selected text
- Open Settings → Council Configuration
- Enable reviewers (Style, Clarity, Fact-checker, etc.)
- Click Start Council Review
- Review suggestions in the Review Document
Configure your local services in the Settings modal. All URLs are customizable if you use non-default ports:
| Service | Default URL | Purpose | Project |
|---|---|---|---|
| Ollama | http://localhost:11434 | Local LLM inference | ollama.ai |
| LM Studio | http://localhost:1234 | Alternative local LLM | lmstudio.ai |
| SearXNG | http://localhost:4000 | Privacy-focused search | GitHub |
| Perplexica | http://localhost:3000 | AI-powered search | GitHub |
| ComfyUI | http://localhost:8188 | Image generation | GitHub |
💡 Tip: If running services in Docker, use localhost:PORT. SanctumWriter runs on your host machine and can reach Docker containers via localhost.
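To verify that a Dockerized service is actually published to the host, probe its localhost port. SearXNG's port 4000 is used as the example here:

```shell
# -w prints the HTTP status code; "000" means the port isn't reachable.
curl -s -o /dev/null --max-time 2 -w "%{http_code}\n" http://localhost:4000 || true
```

Any 2xx/3xx code means the container's port mapping is working.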
Set your working directory in Settings → Workspace. Works great with Obsidian vaults!
| Shortcut | Action |
|---|---|
| Ctrl/Cmd + S | Save document |
| Ctrl/Cmd + Z | Undo |
| Ctrl/Cmd + Shift + Z | Redo |
| Ctrl/Cmd + F | Find in document |
| Escape | Toggle Focus Mode |
- Framework: Next.js 14 (App Router)
- Editor: CodeMirror 6
- Styling: Tailwind CSS
- State: Zustand
- Vector DB: LanceDB (for RAG)
- LLM: Ollama / LM Studio
# Make sure Ollama is running
ollama serve
# Check it's accessible
curl http://localhost:11434/api/tags

# Pull a model first
ollama pull qwen3:latest
# Or for a smaller model
ollama pull gemma3:4b

The app runs on port 3125 by default. If you need a different port, modify package.json.
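As a sketch of that edit, assuming the dev script in package.json reads `next dev -p 3125` (an assumption; check your copy first), switching to port 3130 is a one-line substitution. Shown here against a scratch copy so nothing real is overwritten:

```shell
# Sketch only: assumes the dev script is "next dev -p 3125".
# Demonstrated on a throwaway file; edit your real package.json the same way.
printf '{ "scripts": { "dev": "next dev -p 3125" } }\n' > /tmp/package.json
sed 's/-p 3125/-p 3130/' /tmp/package.json
```

After the change, `npm run dev` would serve on http://localhost:3130 instead.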
MIT - See LICENSE for details.
Built with ❤️ for writers who value their privacy.
SanctumWriter — Your words. Your sanctuary. Your privacy.