A real-time collaborative sketching app built with event sourcing and WebSockets.
Try it live: skedoodle.top
Multiple people draw on the same canvas at once. Every stroke, shape, and edit is captured as an immutable command in an append-only log. The log is the single source of truth — the canvas is just a projection of it.
- 0% CPU at idle — fully event-driven rendering, no polling or animation loops when nothing is changing
- ~40% CPU during heavy brush usage on a single core (measured against Figma's 150–200% under comparable conditions)
- ~20% CPU max with throttling enabled
- Configurable frame rate: 120fps, 60fps, or 15fps battery-saver mode
- Freehand brush with configurable stabilization (1–10 pixel smoothing) and real-time path simplification during drawing:
- Douglas-Peucker algorithm (1–100 tolerance)
- Visvalingam-Whyatt with three variants: angle-based, distance-based, and triangle-area (1–100 sensitivity each)
- Simplification runs on the stroke as it's drawn, not as a post-processing step
- Lines with optional arrowheads
- Rectangles with configurable stroke, fill, and corner radius
- Bezier curves with control points
- Text with inline editing
- Pointer for selecting and dragging shapes
- Eraser, hand/pan, zoom
- Infinite canvas with pan and zoom up to 10,000% (practical precision limit, expandable)
- Incremental rendering — only dirty regions are redrawn, not the full canvas
- Configurable grid system: line or dot rendering with customizable color
- Grid auto-hides at configured zoom thresholds to reduce visual clutter
- Viewport culling: off-screen shapes are excluded from the render pass
- Real-time sync via WebSocket rooms — commands broadcast to all participants
- Remote cursor tracking with color-coded labels
- User presence indicators
- Offline-first: works locally with localStorage, reconciles on reconnect
- Scrub through the full command history at any point
- Branch from any position to create a new sketch from that state
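The during-drawing simplification described above can be sketched with a minimal recursive Douglas-Peucker pass. The `Pt` type and `simplify` name are illustrative only, not the app's actual API:

```typescript
type Pt = { x: number; y: number };

// Perpendicular distance from p to the infinite line through a and b
function perpDist(p: Pt, a: Pt, b: Pt): number {
  const dx = b.x - a.x, dy = b.y - a.y;
  const len = Math.hypot(dx, dy);
  if (len === 0) return Math.hypot(p.x - a.x, p.y - a.y);
  return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

// Recursive Douglas-Peucker: keep the farthest point if it exceeds tolerance,
// otherwise collapse the run to its endpoints.
function simplify(points: Pt[], tolerance: number): Pt[] {
  if (points.length < 3) return points;
  const first = points[0], last = points[points.length - 1];
  let maxDist = 0, index = 0;
  for (let i = 1; i < points.length - 1; i++) {
    const d = perpDist(points[i], first, last);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  if (maxDist <= tolerance) return [first, last];
  const left = simplify(points.slice(0, index + 1), tolerance);
  const right = simplify(points.slice(index), tolerance);
  return [...left.slice(0, -1), ...right]; // drop duplicated pivot point
}
```

Running this on each new segment of an in-progress stroke (rather than on the finished path) is what keeps vertex counts low while drawing.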
All mutations go through a command pipeline:
```
User action → Command { id, ts, uid, type, sid, data } → append to log → broadcast → render
```

Commands are typed as create, update, remove, undo, or redo. Each has a ULID for global ordering and deduplication. Undo generates inverse commands (create/remove swap, updates store pre-mutation snapshots for field-level rollback) rather than popping from a stack — this keeps the log append-only even across undo/redo.
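A minimal sketch of that inversion scheme, with command shapes simplified and field names illustrative rather than taken from the codebase:

```typescript
type Command =
  | { type: "create"; sid: string; data: Record<string, unknown> }
  | { type: "remove"; sid: string; data: Record<string, unknown> }
  | { type: "update"; sid: string; data: Record<string, unknown>; prev: Record<string, unknown> };

// Undo appends a new forward command instead of popping the log:
// create/remove swap, and updates roll back to the stored
// pre-mutation snapshot (field-level rollback via `prev`).
function invert(cmd: Command): Command {
  switch (cmd.type) {
    case "create": return { type: "remove", sid: cmd.sid, data: cmd.data };
    case "remove": return { type: "create", sid: cmd.sid, data: cmd.data };
    case "update": return { type: "update", sid: cmd.sid, data: cmd.prev, prev: cmd.data };
  }
}
```

Because `invert` is its own inverse, redo falls out for free: redoing an undone command is just inverting the inverse.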
The server is stateless: rooms load their command log from the database when the first client joins, then relay commands between participants. On reconnect, the client compares log lengths and replays the delta in either direction.
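The length comparison might look like the following sketch, assuming both logs share a common prefix (globally ordered by ULID); the `reconcile` name and return shape are hypothetical:

```typescript
// Whichever side is behind replays the missing suffix:
// `apply` is what the client replays locally, `send` is what it
// pushes up to the server (e.g. commands drawn while offline).
function reconcile<T>(local: T[], remote: T[]): { apply: T[]; send: T[] } {
  if (remote.length > local.length) {
    return { apply: remote.slice(local.length), send: [] }; // server is ahead
  }
  return { apply: [], send: local.slice(remote.length) };   // client is ahead
}
```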
Authentication is delegated to PocketID, a self-hosted OIDC Identity Provider. The client uses Authorization Code + PKCE flow via oidc-client-ts. PocketID issues access tokens; the server validates them directly against PocketID's JWKS endpoint (using jose) — no internal JWT issuance.
On first login the server automatically creates a local user record from the OIDC claims (sub, preferred_username). Subsequent logins update the username if it changed in PocketID.
Future: integrating OpenTDF for attribute-based access control (ABAC) — per-sketch permissions based on user attributes, roles, or organizational policies.
The database stores users, sketches (metadata), and the command log. The schema is designed around the append-only pattern: commands are inserted but never updated or deleted. SQLite is a good fit here — single-writer, no connection pool overhead, and the write pattern is sequential appends.
WebSocket rooms handle the real-time layer:
- Client sends `join` with auth token → server sends full command log + active users
- Commands relay to all room members (excluding sender)
- Cursors broadcast at ~10fps with throttling
- Empty rooms clean up after a 30-second grace period
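The ~10fps cursor throttle can be sketched as a simple leading-edge rate limiter; the factory name and the injectable timestamp parameter are illustrative:

```typescript
// Forwards at most one cursor update per interval (~100 ms ≈ 10 fps).
// Intermediate positions are dropped rather than queued, which is fine
// for cursors: only the latest position matters.
function makeCursorThrottle(send: (x: number, y: number) => void, intervalMs = 100) {
  let last = -Infinity;
  return (x: number, y: number, now: number = Date.now()) => {
    if (now - last >= intervalMs) {
      last = now;
      send(x, y);
    }
  };
}
```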
Conflict resolution is last-write-wins by command order. If user A deletes a shape that user B is editing, B's update silently no-ops. All clients converge to the same state by replaying the same ordered command log.
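Convergence by replay can be sketched as a pure fold over the log (command shapes simplified; the delete-vs-edit no-op works exactly as described):

```typescript
type Shape = Record<string, unknown>;
type Cmd =
  | { type: "create"; sid: string; data: Shape }
  | { type: "update"; sid: string; data: Shape }
  | { type: "remove"; sid: string };

// Replaying the same ordered log always yields the same projection,
// so every client that has the full log converges to the same canvas.
function project(log: Cmd[]): Map<string, Shape> {
  const shapes = new Map<string, Shape>();
  for (const cmd of log) {
    switch (cmd.type) {
      case "create": shapes.set(cmd.sid, cmd.data); break;
      case "update": {
        const existing = shapes.get(cmd.sid);
        if (existing) shapes.set(cmd.sid, { ...existing, ...cmd.data }); // no-op if removed
        break;
      }
      case "remove": shapes.delete(cmd.sid); break;
    }
  }
  return shapes;
}
```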
- Event-driven rendering: the render loop only runs in response to user input or incoming sync events — zero idle CPU
- Incremental dirty-region updates: only shapes that changed are re-rendered
- Viewport culling skips shapes outside the visible area
- Real-time path simplification (Douglas-Peucker / Visvalingam-Whyatt) reduces vertex count during drawing, not after
- Configurable frame throttling (120/60/15fps) trades smoothness for CPU headroom
| Layer | Stack |
|---|---|
| Frontend | React, Vite, TypeScript, Two.js (vector rendering), Zustand, Tailwind CSS |
| Backend | Express 5, TypeScript, WebSocket (ws), JWT |
| Database | SQLite via Prisma ORM |
| Auth | PocketID OIDC (oidc-client-ts + jose) |
| Infra | Docker, Caddy, GitHub Actions → GHCR → VPS |
- Node.js 22+
- pnpm
```sh
git clone https://github.com/eugenioenko/skedoodle.git
cd skedoodle

# Install dependencies
cd client && pnpm install && cd ..
cd server && pnpm install && cd ..

# Configure environment
cp client/.env.example client/.env
cp server/.env.example server/.env

# Start PocketID — runs at http://localhost:1411
docker compose -f docker-compose.dev.yml up -d
```

- Open http://localhost:1411 and complete the first-run admin setup.
- In the PocketID admin UI, go to OIDC Clients → Create.
  - Name: `Skedoodle`
  - Client ID: `skedoodle`
  - Redirect URIs: `http://localhost:5173/auth/callback`
  - Post-logout redirect URIs: `http://localhost:5173/auth/logout`
  - Grant type: Authorization Code (PKCE — no client secret required)
- Copy the Client ID (`skedoodle`) into both `.env` files:
  - `client/.env`: set `VITE_OIDC_CLIENT_ID=skedoodle`
  - `server/.env`: set `OIDC_CLIENT_ID=skedoodle`
```sh
# Apply database migrations
cd server && npx prisma migrate deploy && cd ..

# Terminal 1: server
cd server && pnpm run dev:http

# Terminal 2: client
cd client && pnpm dev
```

Open http://localhost:5173. You'll be redirected to your local PocketID instance to sign in.
```
skedoodle/
├── client/                # React frontend (Vite)
│   └── src/
│       ├── canvas/        # Drawing tools, rendering, history/command system
│       ├── components/    # UI components
│       ├── services/      # API and storage clients
│       ├── stores/        # Zustand state stores
│       └── sync/          # WebSocket sync client and models
├── server/                # Express + WebSocket backend
│   ├── src/
│   │   ├── routes/        # REST API (auth, sketches)
│   │   └── utils/         # OIDC token validation, auth middleware
│   └── prisma/            # Schema and migrations
├── scripts/               # Docker build/run helpers
├── Dockerfile             # Multi-stage build (client + server)
├── docker-compose.yml     # Production deployment
└── Caddyfile              # Reverse proxy config
```
The app runs as a single Docker container behind a Caddy reverse proxy on a VPS.
```
GitHub push to main
  → CI builds Docker image (multi-stage: Vite client + Node server)
  → Push to GitHub Container Registry (ghcr.io)
  → SSH into VPS, pull image, docker compose up -d
```
The Dockerfile produces one image that serves both the client (static files via Express) and the server (REST API + WebSocket). Prisma migrations run automatically on container startup.
- VPS: Single node running Docker
- Reverse proxy: Caddy (auto HTTPS, routes HTTP to port 3013 and WebSocket `/ws` to port 3014)
- Networking: Shared Docker network (`web`) connecting Caddy to app containers
- Database: SQLite file mounted as a Docker volume for persistence across deploys
- CI/CD: GitHub Actions — `ci.yml` validates builds on PRs, `deploy.yml` ships to production on merge to main
```sh
./scripts/docker-build.sh   # Build image
./scripts/docker-run.sh     # Run at http://localhost:3013
```

See deploying.md for the full step-by-step VPS setup guide.
MIT — see LICENSE.