The backend API for the Typelets Application - a secure, encrypted notes management system built with TypeScript, Hono, and PostgreSQL. Features end-to-end encryption support, file attachments, and folder organization.
- Features
- Tech Stack
- Prerequisites
- Local Development Setup
- Alternative Installation Methods
- Available Scripts
- API Endpoints
- Database Schema
- Security Features
- Environment Variables
- Monitoring with Grafana Cloud
- Development
- Docker Support
- Production Deployment
- Contributing
- License
- Acknowledgments
- 🔐 Secure Authentication via Clerk
- 📝 Encrypted Notes with client-side encryption support
- 📁 Folder Organization with nested folder support
- 📎 File Attachments with encrypted storage
- 🏷️ Tags & Search for easy note discovery
- 🗑️ Trash & Archive functionality
- ⚡ Fast & Type-Safe with TypeScript and Hono
- 🐘 PostgreSQL with Drizzle ORM
- 🚀 Valkey/Redis Caching for high-performance data access with cluster support
- 📊 Observability with Grafana Cloud and OpenTelemetry for distributed tracing, metrics, and logging
- 💻 Code Execution via self-hosted Piston engine
- 🛡️ Comprehensive Rate Limiting for HTTP, file uploads, and code execution
- 🏥 Health Checks with detailed system status and readiness probes
- 📈 Structured Logging with automatic event tracking and error capture
- Runtime: Node.js 22+ (LTS recommended)
- Framework: Hono - Fast, lightweight web framework
- Database: PostgreSQL with Drizzle ORM
- Cache: Valkey/Redis Cluster for high-performance caching
- Authentication: Clerk
- Validation: Zod
- Observability: Grafana Cloud with OpenTelemetry for tracing, metrics, and logging
- Logging: Structured JSON logging with automatic error capture
- TypeScript: Strict mode enabled for type safety
- Node.js 22+ (LTS recommended)
- pnpm 9.15.0+
- PostgreSQL database (local installation or Docker)
- Clerk account for authentication (sign up here)
- Valkey/Redis cluster for caching (optional - improves performance)
- Grafana Cloud account for monitoring (optional - sign up here)
- Piston code execution engine (optional - self-hosted)
Recommended approach for development: PostgreSQL in Docker + API run with pnpm for hot reload and easy debugging.
- Clone and install dependencies:

  ```bash
  git clone https://github.com/typelets/typelets-api.git
  cd typelets-api
  pnpm install
  ```

- Start PostgreSQL with Docker:

  ```bash
  # Start PostgreSQL database for local development
  docker run --name typelets-postgres \
    -e POSTGRES_PASSWORD=devpassword \
    -e POSTGRES_DB=typelets_local \
    -p 5432:5432 -d postgres:15
  ```

- Set up environment variables:

  ```bash
  cp .env.example .env
  ```

- Configure environment variables:
  - Create a free account at Clerk Dashboard
  - Create a new application
  - (Optional) Set up self-hosted Piston for code execution
  - Update `.env` with your settings:

    ```bash
    CLERK_SECRET_KEY=sk_test_your_actual_clerk_secret_key_from_dashboard
    CORS_ORIGINS=http://localhost:5173,http://localhost:3000
    # Optional: For code execution features (self-hosted Piston)
    PISTON_API_URL=http://localhost:2000
    ```

- Set up database schema:

  ```bash
  pnpm run db:push
  ```

- Start development server:

  ```bash
  pnpm run dev
  ```

  🎉 Your API is now running at http://localhost:3000
The development server will automatically restart when you make changes to any TypeScript files.
```bash
# Start/stop database
docker start typelets-postgres   # Start existing container
docker stop typelets-postgres    # Stop when done

# API development
pnpm run dev     # Auto-restart development server
pnpm run build   # Test production build
pnpm run lint    # Check code quality
```

Development Features:
- ⚡ Auto-restart: Server automatically restarts when you save TypeScript files
- 📝 Terminal history preserved: See all your logs and errors
- 🚀 Fast compilation: Uses tsx with esbuild for quick rebuilds
```bash
# 1. Start PostgreSQL
docker run --name typelets-postgres -e POSTGRES_PASSWORD=devpassword -e POSTGRES_DB=typelets_local -p 5432:5432 -d postgres:15

# 2. Build and run API in Docker
docker build -t typelets-api .
docker run -p 3000:3000 --env-file .env typelets-api
```

If you prefer to install PostgreSQL locally instead of Docker:

- Install PostgreSQL on your machine
- Create the database: `createdb typelets_local`
- Update `.env`: `DATABASE_URL=postgresql://postgres:your_password@localhost:5432/typelets_local`
- `pnpm run dev` - Start development server with auto-restart
- `pnpm run build` - Build for production
- `pnpm start` - Start production server
- `pnpm run lint` - Run ESLint
- `pnpm run format` - Format code with Prettier
- `pnpm run db:generate` - Generate database migrations
- `pnpm run db:push` - Apply database schema changes
- `pnpm run db:studio` - Open Drizzle Studio for database management
📚 Complete API documentation with interactive examples: https://api.typelets.com/docs (Swagger/OpenAPI)
The API provides comprehensive REST endpoints for:
- Users - Profile management and account deletion
- Folders - Hierarchical folder organization with nested support
- Notes - Full CRUD with encryption support, pagination, filtering, and search
- File Attachments - Encrypted file uploads and downloads
- Code Execution - Self-hosted Piston engine for running code in multiple languages
- Health Checks - System health checks and status monitoring
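The hierarchical folder organization above can be sketched as a pure function that assembles a tree from flat parent-pointer rows. This is an illustrative sketch only; the field names (`id`, `parentId`, `name`) are assumptions, not the actual schema:

```typescript
// Illustrative sketch: build a nested folder tree from flat rows
// with parent pointers. Field names are hypothetical.
interface FolderRow {
  id: string;
  parentId: string | null;
  name: string;
}

interface FolderNode extends FolderRow {
  children: FolderNode[];
}

function buildFolderTree(rows: FolderRow[]): FolderNode[] {
  const byId = new Map<string, FolderNode>();
  for (const row of rows) byId.set(row.id, { ...row, children: [] });

  const roots: FolderNode[] = [];
  for (const node of byId.values()) {
    const parent = node.parentId ? byId.get(node.parentId) : undefined;
    if (parent) parent.children.push(node);
    else roots.push(node); // top-level folder (or orphaned row)
  }
  return roots;
}
```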
| Endpoint | Description |
|---|---|
| `GET /` | API information and version |
| `GET /health` | Enhanced health check with system status |
All `/api/*` endpoints require authentication via a Bearer token:

```
Authorization: Bearer <clerk_jwt_token>
```
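A minimal client-side sketch of attaching this token, assuming a Node 18+ `fetch`. The helper names (`authHeaders`, `getNotes`) are illustrative, not part of this repository:

```typescript
// Illustrative client sketch: attach the Clerk JWT as a Bearer token.
// authHeaders and getNotes are hypothetical helpers.
function authHeaders(token: string): Record<string, string> {
  return {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  };
}

async function getNotes(baseUrl: string, token: string): Promise<unknown> {
  const res = await fetch(`${baseUrl}/api/notes`, {
    headers: authHeaders(token),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
```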
Visit the Swagger UI at /docs for:
- Complete endpoint reference with request/response schemas
- Interactive "Try it out" functionality
- Example requests and responses
- Schema definitions and validation rules
The application uses the following main tables:
- `users` - User profiles synced from Clerk
- `folders` - Hierarchical folder organization
- `notes` - Encrypted notes with metadata
- `file_attachments` - Encrypted file attachments
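As an orientation aid, the tables map roughly to row shapes like the following. These TypeScript interfaces are illustrative assumptions; the authoritative Drizzle definitions live in `src/db/schema.ts`:

```typescript
// Illustrative row shapes only; field names are assumptions,
// not the real schema in src/db/schema.ts.
interface User {
  id: string; // Clerk user ID
  email: string;
}

interface Folder {
  id: string;
  userId: string;
  parentId: string | null; // enables nesting
  name: string;
}

interface Note {
  id: string;
  userId: string;
  folderId: string | null;
  encryptedContent: string; // ciphertext produced client-side
}

interface FileAttachment {
  id: string;
  noteId: string;
  encryptedData: string;
  sizeBytes: number;
}
```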
- Authentication: All endpoints protected with Clerk JWT verification
- Encryption Ready: Schema supports client-side encryption for notes and files
- Input Validation: Comprehensive Zod schemas for all inputs
- SQL Injection Protection: Parameterized queries via Drizzle ORM
- CORS Configuration: Configurable allowed origins
- File Size Limits: Configurable limits (default: 50MB per file, 1GB total per note)
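A sketch of how the documented size limits compose (the constants mirror the stated defaults of 50 MB per file and 1 GB per note; the function is illustrative, not the actual middleware):

```typescript
// Illustrative sketch of the documented defaults: 50 MB per file,
// 1024 MB (1 GB) of total attachments per note. Not the real middleware.
const MAX_FILE_SIZE_MB = 50;
const MAX_NOTE_SIZE_MB = 1024;

function canAttach(existingTotalMb: number, newFileMb: number): boolean {
  if (newFileMb > MAX_FILE_SIZE_MB) return false; // single file too large
  return existingTotalMb + newFileMb <= MAX_NOTE_SIZE_MB; // note-level cap
}
```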
| Variable | Description | Required | Default |
|---|---|---|---|
| `DATABASE_URL` | PostgreSQL connection string | Yes | - |
| `CLERK_SECRET_KEY` | Clerk secret key for JWT verification | Yes | - |
| `CORS_ORIGINS` | Comma-separated list of allowed CORS origins | Yes | - |
| `PORT` | Server port | No | `3000` |
| `NODE_ENV` | Environment (development/production) | No | `development` |
| **Caching (Optional)** | | | |
| `VALKEY_HOST` | Valkey/Redis cluster hostname | No | - |
| `VALKEY_PORT` | Valkey/Redis cluster port | No | `6379` |
| **Monitoring (Optional)** | | | |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | Grafana Cloud OTLP endpoint (prod only) | No | - |
| `GRAFANA_CLOUD_API_KEY` | Base64-encoded Grafana Cloud credentials | No | - |
| `OTEL_SERVICE_NAME` | Service name for OpenTelemetry | No | `typelets-api` |
| `OTEL_ENABLED` | Force-enable OTEL in dev (not recommended) | No | `false` |
| **Rate Limiting** | | | |
| `HTTP_RATE_LIMIT_WINDOW_MS` | HTTP rate limit window in milliseconds | No | `900000` (15 min) |
| `HTTP_RATE_LIMIT_MAX_REQUESTS` | Max HTTP requests per window | No | `1000` |
| `HTTP_FILE_RATE_LIMIT_MAX` | Max file operations per window | No | `100` |
| `CODE_EXEC_RATE_LIMIT_MAX` | Max code executions per window | No | `100` (dev), `50` (prod) |
| `CODE_EXEC_RATE_WINDOW_MS` | Code execution rate limit window in milliseconds | No | `900000` (15 min) |
| **File & Storage** | | | |
| `MAX_FILE_SIZE_MB` | Maximum size per file in MB | No | `50` |
| `MAX_NOTE_SIZE_MB` | Maximum total attachments per note in MB | No | `1024` (1 GB) |
| `FREE_TIER_STORAGE_GB` | Free tier storage limit in GB | No | `1` |
| `FREE_TIER_NOTE_LIMIT` | Free tier note count limit | No | `100` |
| **Code Execution (Optional)** | | | |
| `PISTON_API_URL` | Self-hosted Piston API URL | No* | `http://localhost:2000` |

*Required only for code execution features (self-hosted Piston)
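Reading these variables with their documented defaults can be sketched as below. The `envNumber` helper and the `config` shape are illustrative assumptions; the repository's actual config loading may differ:

```typescript
// Illustrative sketch: read numeric env vars, falling back to the
// documented defaults. Not the repository's actual config loader.
function envNumber(
  name: string,
  fallback: number,
  env: Record<string, string | undefined> = process.env,
): number {
  const raw = env[name];
  const parsed = raw === undefined ? NaN : Number(raw);
  return Number.isFinite(parsed) ? parsed : fallback;
}

const config = {
  port: envNumber("PORT", 3000),
  httpRateLimitWindowMs: envNumber("HTTP_RATE_LIMIT_WINDOW_MS", 900_000),
  maxFileSizeMb: envNumber("MAX_FILE_SIZE_MB", 50),
};
```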
The API integrates with Grafana Cloud using OpenTelemetry for comprehensive observability with distributed tracing, metrics collection, and log aggregation.
- Distributed Tracing: Automatic instrumentation for HTTP requests, database queries, and cache operations
- Metrics Collection: Real-time metrics exported every 60 seconds
- Log Aggregation: Structured JSON logs sent to Grafana Loki
- Automatic Instrumentation: Zero-code instrumentation for:
- HTTP/HTTPS requests (Hono framework)
- PostgreSQL database queries
- Redis/Upstash cache operations
- Performance Monitoring: Request duration, latency, and throughput tracking
- Error Tracking: Automatic error capture with full context and stack traces
- User Context: Requests are automatically tagged with user IDs
- Environment Tracking: Separate monitoring for development and production
Local Development Setup:

- Sign up for Grafana Cloud (free tier available)
- Get your credentials:
  - Go to Connections → Add new connection → OpenTelemetry
  - Copy the OTLP endpoint URL (e.g., `https://otlp-gateway-prod-<region>.grafana.net/otlp`)
  - Generate a token for authentication
- Start Grafana Alloy locally:

  ```bash
  # Set your Grafana Cloud token in .env.local
  echo "GRAFANA_CLOUD_TOKEN=glc_your_token_here" >> .env.local

  # Start Alloy with Docker Compose
  docker compose -f docker-compose.alloy.yml up -d
  ```

- Configure your application by adding to `.env.local`:

  ```bash
  # Point to local Alloy instance
  OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
  OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
  OTEL_SERVICE_NAME=typelets-api
  OTEL_RESOURCE_ATTRIBUTES=deployment.environment=development,service.name=typelets-api
  ```

- Start the development server:

  ```bash
  pnpm run dev
  ```
You should see logs appearing in Grafana Cloud Loki with `service_name="typelets-api"`.
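The `OTEL_RESOURCE_ATTRIBUTES` value above is a comma-separated list of `key=value` pairs. The OpenTelemetry SDK parses it for you; this sketch only illustrates the format:

```typescript
// Illustrative: parse an OTEL_RESOURCE_ATTRIBUTES-style string into a
// record. The OpenTelemetry SDK does this itself; shown for clarity only.
function parseResourceAttributes(value: string): Record<string, string> {
  const attributes: Record<string, string> = {};
  for (const pair of value.split(",")) {
    const eq = pair.indexOf("=");
    if (eq === -1) continue; // skip malformed entries
    attributes[pair.slice(0, eq).trim()] = pair.slice(eq + 1).trim();
  }
  return attributes;
}
```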
Production Setup:
In production (ECS), the Alloy sidecar runs in the same task as the API container. See the Production Deployment section for details.
Important Notes:

- Local dev sends telemetry to Alloy at `localhost:4318`
- Alloy forwards to Grafana Cloud with authentication
- All telemetry (logs, traces, metrics) flows through Alloy
- If Alloy is not running, the app continues working normally (telemetry is optional)
Automatic Instrumentation:
- HTTP requests (method, path, status code, duration)
- PostgreSQL queries (operation, table, duration)
- Redis/Upstash operations (get, set, delete with cache hit/miss tracking)
Structured Logging:
- Authentication events (login, logout, token refresh)
- Rate limiting violations
- Security events (failed auth, suspicious activity)
- Billing limit violations
- File upload events and storage operations
- HTTP request/response logs
- Database query performance
- Cache operations and hit rates
- Business events (note creation, folder operations, etc.)
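A structured JSON log line of the kind listed above might be built as follows. This is a sketch only; the field names are assumptions, not the actual logger's output schema:

```typescript
// Illustrative sketch of a one-object-per-line structured JSON log
// record (Loki-friendly). Field names are hypothetical.
function logRecord(
  level: "info" | "warn" | "error",
  event: string,
  fields: Record<string, unknown> = {},
): string {
  return JSON.stringify({
    level,
    event,
    timestamp: new Date().toISOString(),
    ...fields,
  });
}
```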
All logs, traces, and metrics are automatically sent to Grafana Cloud where you can:
- Visualize request traces with flame graphs
- Create custom dashboards for metrics
- Set up alerts for errors and performance issues
- Search and analyze logs with LogQL
- Correlate logs, metrics, and traces in one place
```
src/
├── db/
│   ├── index.ts         # Database connection
│   └── schema.ts        # Database schema definitions
├── lib/
│   ├── cache.ts         # Valkey/Redis cluster caching layer
│   ├── cache-keys.ts    # Centralized cache key patterns and TTL values
│   ├── logger.ts        # Structured logging with automatic error capture
│   └── validation.ts    # Zod validation schemas
├── middleware/
│   ├── auth.ts          # Authentication middleware
│   ├── rate-limit.ts    # Rate limiting middleware
│   ├── security.ts      # Security headers middleware
│   └── usage.ts         # Storage and usage limit enforcement
├── routes/
│   ├── code.ts          # Code execution routes (Piston engine)
│   ├── files.ts         # File attachment routes
│   ├── folders.ts       # Folder management routes with caching
│   ├── notes.ts         # Note management routes
│   └── users.ts         # User profile routes
├── types/
│   └── index.ts         # TypeScript type definitions
└── server.ts            # Application entry point
```
This project uses TypeScript in strict mode with comprehensive type definitions. All database operations, API inputs, and responses are fully typed.
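The centralized cache-key pattern in `src/lib/cache-keys.ts` can be sketched like this. The key shapes and TTL values here are assumptions for illustration, not the real implementation:

```typescript
// Illustrative sketch of centralized cache keys with TTLs.
// Key shapes and TTL values are assumptions, not the real cache-keys.ts.
const TTL_SECONDS = {
  folders: 300, // folder listings change rarely
  note: 60,     // notes change more often
} as const;

const cacheKeys = {
  userFolders: (userId: string) => `user:${userId}:folders`,
  note: (noteId: string) => `note:${noteId}`,
} as const;
```

Keeping every key builder and TTL in one module makes invalidation predictable: a route that mutates folders knows exactly which key pattern to delete.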
The API can be run in Docker containers for local testing. The architecture separates the API from the database:
```bash
# 1. Start PostgreSQL container for local testing
docker run --name typelets-postgres -e POSTGRES_PASSWORD=devpassword -e POSTGRES_DB=typelets_local -p 5432:5432 -d postgres:15

# 2. Build your API container
docker build -t typelets-api .

# 3. Run API container for local testing
docker run -p 3000:3000 --env-file .env typelets-api
```

```bash
# Run with environment file
docker run -p 3000:3000 \
  -e NODE_ENV=development \
  --env-file .env \
  typelets-api
```

This Docker setup is for local development and testing only.
| Environment | API | Database | Configuration |
|---|---|---|---|
| Local Testing | Docker container OR npm dev | Docker PostgreSQL container | .env file |
| Production | ECS container | AWS RDS PostgreSQL | ECS task definition |
This application is designed for production deployment using AWS ECS (Elastic Container Service) with a Grafana Alloy sidecar for observability.
```
┌─────────────────────────────────────────┐
│           ECS Task (Fargate)            │
│                                         │
│  ┌──────────────┐    ┌──────────────┐   │
│  │ typelets-api │──▶│ grafana-alloy │   │
│  │ (Port 3000)  │    │ (Port 4318)  │   │
│  └──────────────┘    └──────┬───────┘   │
│                             │           │
└─────────────────────────────┼───────────┘
                              │
                              ▼
                     Grafana Cloud OTLP
```
The API sends telemetry (logs, traces, metrics) to a local Alloy sidecar at `http://localhost:4318`, which then forwards it to Grafana Cloud.
- API: ECS containers running in AWS
- Database: AWS RDS PostgreSQL (not Docker containers)
- Monitoring: Grafana Alloy sidecar + Grafana Cloud
- Environment Variables: ECS task definitions (not `.env` files)
- Secrets: AWS Parameter Store or Secrets Manager
- Container Registry: Amazon ECR
```bash
# Build and push the API image
pnpm run build
docker build -t typelets-api:latest .
docker tag typelets-api:latest YOUR_ECR_REPO/typelets-api:latest
docker push YOUR_ECR_REPO/typelets-api:latest

# Build and push the Alloy sidecar image
docker build -f Dockerfile.alloy -t grafana-alloy:latest .
docker tag grafana-alloy:latest YOUR_ECR_REPO/grafana-alloy:latest
docker push YOUR_ECR_REPO/grafana-alloy:latest
```

Your ECS task definition should include two containers:
Container 1: typelets-api

- Image: `YOUR_ECR_REPO/typelets-api:latest`
- Port: 3000
- Essential: `true`
- Environment variables:
  - `OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318`
  - `OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf`
  - `OTEL_SERVICE_NAME=typelets-api`
  - All other app environment variables

Container 2: grafana-alloy

- Image: `YOUR_ECR_REPO/grafana-alloy:latest`
- Ports: 4318, 4317
- Essential: `false`
- Environment variables:
  - `GRAFANA_CLOUD_TOKEN=your_grafana_cloud_token`
  - `GRAFANA_CLOUD_ENDPOINT=your_otlp_gateway_endpoint_here`
  - `GRAFANA_CLOUD_INSTANCE_ID=your_instance_id`
Task Resources:

- CPU: 1024 (1 vCPU)
- Memory: 2048 (2 GB)
- Network Mode: `awsvpc` (required for localhost communication between containers)
```bash
# Register the task definition
aws ecs register-task-definition \
  --cli-input-json file://ecs-task-definition.json \
  --region us-east-1

# Update the service
aws ecs update-service \
  --cluster YOUR_CLUSTER_NAME \
  --service YOUR_SERVICE_NAME \
  --task-definition typelets-api-td \
  --force-new-deployment \
  --region us-east-1
```

Local Development with Grafana:

- Run Alloy locally: `docker compose -f docker-compose.alloy.yml up -d`
- Set `GRAFANA_CLOUD_TOKEN` in `.env.local`
- Logs will appear in Grafana Cloud Loki
Production Secrets:

- Never commit ECS task definitions (they contain secrets)
- Task definition files are in `.gitignore`
- Store sensitive values in AWS Secrets Manager or Parameter Store
Monitoring:

- CloudWatch Logs: `/ecs/typelets-backend-td` (app logs)
- CloudWatch Logs: `/ecs/grafana-alloy` (Alloy logs)
- Grafana Cloud Loki: Structured app logs with trace correlation

Health Checks:

- App container: Uses the `/health` endpoint
- Alloy container: No health check needed (`essential: false`)
Container fails with "Cannot find module './instrumentation.js'":

- This is fixed in the Dockerfile by copying `instrumentation.js` into the production image
- Rebuild and push the image

Logs not appearing in Grafana:

- Check the Alloy container logs in CloudWatch
- Verify `GRAFANA_CLOUD_TOKEN` is set correctly
- Ensure the app is sending to `http://localhost:4318`

Production vs Local:

- Local: Uses `.env` files and Docker containers for testing
- Production: Uses ECS task definitions and AWS RDS for real deployment
- Never use the local testing setup in production
We welcome contributions from the community!
- Fork the repository on GitHub
- Clone your fork locally:

  ```bash
  git clone https://github.com/your-username/typelets-api.git
  cd typelets-api
  ```

- Install dependencies: `pnpm install`
- Set up environment: `cp .env.example .env`
- Start PostgreSQL:

  ```bash
  docker run --name typelets-postgres \
    -e POSTGRES_PASSWORD=devpassword \
    -e POSTGRES_DB=typelets_local \
    -p 5432:5432 -d postgres:15
  ```

- Apply database schema: `pnpm run db:push`
- Start development: `pnpm run dev`
We use Conventional Commits for automatic versioning and changelog generation:
- `feat:` New feature (minor version bump)
- `fix:` Bug fix (patch version bump)
- `docs:` Documentation changes
- `style:` Code style changes (formatting, etc.)
- `refactor:` Code refactoring
- `perf:` Performance improvements
- `test:` Adding or updating tests
- `chore:` Maintenance tasks
- `ci:` CI/CD changes
Examples:

```
feat(auth): add refresh token rotation
fix(files): resolve file upload size validation
feat(api)!: change authentication header format
```

- Create a feature branch: `git checkout -b feature/your-feature-name`
- Make your changes and commit using conventional commits
- Run linting and tests: `pnpm run lint && pnpm run build`
- Push to your fork and create a Pull Request
- Ensure all CI checks pass
- Wait for review and address any feedback
When reporting bugs, please include:
- Clear description of the issue
- Steps to reproduce
- Expected vs actual behavior
- Environment details (OS, Node version)
- Error messages or logs if applicable
DO NOT report security vulnerabilities through public GitHub issues. Please use GitHub's private vulnerability reporting feature or contact the maintainers directly.
This project is licensed under the MIT License - see the LICENSE file for details.
- Hono for the excellent web framework
- Drizzle ORM for type-safe database operations
- Clerk for authentication services