# High-Beta

A high-throughput, Redis-backed rate limiter for Node.js with support for Token Bucket, Sliding Window, and Fixed Window algorithms. Ships with a REST API, Express middleware, Docker Compose, and Kubernetes manifests.
- Features
- Architecture
- Prerequisites
- Quick Start
- Configuration
- API Reference
- Using as Express Middleware
- Docker
- Kubernetes
- Testing & Benchmarks
- Project Structure
- License
## Features

- Three battle-tested algorithms — Token Bucket (burst-friendly), Sliding Window (smooth), Fixed Window (simple & fast)
- Atomic Redis operations via Lua scripts — no race conditions under high concurrency
- Standard rate-limit headers — `X-RateLimit-Limit`, `X-RateLimit-Remaining`, `X-RateLimit-Reset`, `Retry-After`
- Built-in metrics — request counts, allow/deny rates, and latency percentiles
- Pluggable Express middleware — drop into any existing Express app in one line
- Graceful shutdown — drains in-flight requests before closing Redis connections
- Docker & Kubernetes ready — multi-stage Dockerfile, Compose stack, HPA manifests included
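The "atomic Lua scripts" point can be illustrated with a minimal fixed-window script. This is a sketch only — the project's actual scripts live in `src/scripts/lua/` and are more involved — but it shows why atomicity matters: the `INCR` and `PEXPIRE` calls execute as one uninterruptible unit, so concurrent clients can never race between the read and the write.

```lua
-- Minimal fixed-window sketch (illustrative, not the project's actual script).
-- KEYS[1] = counter key, ARGV[1] = limit, ARGV[2] = window length in ms.
local count = redis.call('INCR', KEYS[1])
if count == 1 then
  -- First request in this window: start the window timer.
  redis.call('PEXPIRE', KEYS[1], ARGV[2])
end
if count > tonumber(ARGV[1]) then
  return 0  -- deny
end
return 1    -- allow
```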
## Architecture

```
┌─────────────────────────────────────────────────────┐
│                 Express Application                 │
│                                                     │
│  POST   /api/check   ──►  Algorithm (Token Bucket / │
│  GET    /api/status       Sliding Window /          │
│  DELETE /api/reset        Fixed Window)             │
│  GET    /metrics     ──►  Metrics Service           │
│  GET    /health      ──►  Redis Ping                │
└─────────────────────┬───────────────────────────────┘
                      │  ioredis + Lua scripts
                      ▼
              ┌───────────────┐
              │     Redis     │
              └───────────────┘
```
| Algorithm | Burst support | Memory per key | Use case |
|---|---|---|---|
| Token Bucket | Yes | O(1) | APIs that allow short bursts |
| Sliding Window | No | O(N requests) | Strict, smooth request throttling |
| Fixed Window | Boundary | O(1) | Simple quota enforcement |
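As a sketch of how the Token Bucket row above behaves — burst-friendly, O(1) memory — here is an in-memory illustration (parameter names are assumptions; the real implementation runs equivalent logic atomically in Redis via Lua):

```javascript
// In-memory token bucket sketch. Tokens refill continuously at
// limit/windowMs per millisecond, capped at a burst capacity.
function makeBucket(limit, windowMs, burstMultiplier = 1.5, now = Date.now()) {
  const capacity = Math.floor(limit * burstMultiplier); // burst headroom
  const refillRate = limit / windowMs;                  // tokens per ms
  let tokens = capacity;
  let last = now;

  return function tryConsume(ts = Date.now()) {
    // Refill proportionally to elapsed time, capped at capacity.
    tokens = Math.min(capacity, tokens + Math.max(0, ts - last) * refillRate);
    last = ts;
    if (tokens >= 1) {
      tokens -= 1;
      return { allowed: true, remaining: Math.floor(tokens) };
    }
    return { allowed: false, remaining: 0 };
  };
}
```

Because unused capacity accumulates as tokens, short bursts above the steady rate are allowed — which is exactly what distinguishes it from the Sliding Window row.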
## Prerequisites

| Dependency | Minimum version |
|---|---|
| Node.js | 18.x |
| Redis | 6.x |
| npm | 8.x |
## Quick Start

```bash
# 1. Clone the repository
git clone https://github.com/ayomidelog/High-Beta.git
cd High-Beta

# 2. Install dependencies
npm install

# 3. Copy and edit environment variables
cp .env.example .env

# 4. Start Redis (requires Docker)
docker run -d -p 6379:6379 redis:7-alpine

# 5. Start the development server
npm run dev
```

The server starts on http://localhost:3000 by default.
## Configuration

All configuration is driven by environment variables. Copy .env.example to .env and adjust the values.
| Variable | Default | Description |
|---|---|---|
| `PORT` | `3000` | HTTP server port |
| `HOST` | `0.0.0.0` | HTTP server bind address |
| `NODE_ENV` | `development` | Runtime environment |
| `REDIS_HOST` | `localhost` | Redis host |
| `REDIS_PORT` | `6379` | Redis port |
| `REDIS_PASSWORD` | (empty) | Redis password (optional) |
| `REDIS_DB` | `0` | Redis database index |
| `REDIS_CONNECT_TIMEOUT` | `10000` | Connection timeout in ms |
| `RATE_LIMITER_DEFAULT_ALGORITHM` | `tokenBucket` | Default algorithm (`tokenBucket` / `slidingWindow` / `fixedWindow`) |
| `RATE_LIMITER_DEFAULT_LIMIT` | `100` | Default max requests per window |
| `RATE_LIMITER_DEFAULT_WINDOW` | `60000` | Default window size in ms |
| `RATE_LIMITER_BURST_MULTIPLIER` | `1.5` | Token bucket burst capacity multiplier |
| `RATE_LIMITER_KEY_PREFIX` | `rl:` | Redis key namespace prefix |
| `METRICS_ENABLED` | `true` | Enable in-memory metrics collection |
| `METRICS_FLUSH_INTERVAL` | `60000` | Metrics flush interval in ms |
| `LOG_LEVEL` | `info` | Pino log level (`trace` / `debug` / `info` / `warn` / `error`) |
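For local development, a minimal `.env` built from the defaults above might look like this (every variable not listed falls back to its default):

```
PORT=3000
NODE_ENV=development
REDIS_HOST=localhost
REDIS_PORT=6379
RATE_LIMITER_DEFAULT_ALGORITHM=tokenBucket
RATE_LIMITER_DEFAULT_LIMIT=100
RATE_LIMITER_DEFAULT_WINDOW=60000
LOG_LEVEL=debug
```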
## API Reference

```
GET /health
GET /api/health
```

Response 200

```json
{
  "status": "healthy",
  "redis": "connected",
  "uptime": 42.3,
  "timestamp": "2024-01-01T00:00:00.000Z"
}
```

```
POST /api/check
Content-Type: application/json
```

Request body
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `key` | string | Yes | — | Unique identifier (e.g. user ID, IP) |
| `algorithm` | string | No | `RATE_LIMITER_DEFAULT_ALGORITHM` | `tokenBucket` / `slidingWindow` / `fixedWindow` |
| `limit` | number | No | `RATE_LIMITER_DEFAULT_LIMIT` | Maximum requests per window |
| `windowMs` | number | No | `RATE_LIMITER_DEFAULT_WINDOW` | Window duration in milliseconds |
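As a usage sketch, a check request can be issued with curl (this assumes the server from Quick Start is running on localhost:3000; `user:42` is an example key):

```shell
curl -s -X POST http://localhost:3000/api/check \
  -H 'Content-Type: application/json' \
  -d '{"key":"user:42","limit":100,"windowMs":60000}' || echo '(server not running)'
```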
Response 200 — allowed

```json
{
  "allowed": true,
  "remaining": 99,
  "resetTime": "2024-01-01T00:01:00.000Z",
  "retryAfter": 0
}
```

Response 429 — rate limit exceeded

```json
{
  "allowed": false,
  "remaining": 0,
  "resetTime": "2024-01-01T00:01:00.000Z",
  "retryAfter": 45
}
```

Response headers set on every request:

```
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 99
X-RateLimit-Reset: 2024-01-01T00:01:00.000Z
X-RateLimit-Algorithm: tokenBucket
Retry-After: 45            # only present on 429
```
```
GET /api/status/:key?algorithm=tokenBucket&limit=100&windowMs=60000
```

Returns the current bucket/window state without consuming a token.

```
DELETE /api/reset/:key?algorithm=tokenBucket
```

Response 200

```json
{ "success": true, "key": "user:42", "algorithm": "tokenBucket" }
```

```
GET /metrics                  # full snapshot
GET /metrics?detailed=true    # includes per-key counters
GET /metrics/summary          # lightweight totals + latency
DELETE /metrics/reset         # clear in-memory counters
```
## Using as Express Middleware

```javascript
const { RateLimiterMiddleware } = require('./src/middleware/rateLimiter');

// Token Bucket — burst-friendly
app.use('/api', RateLimiterMiddleware.forTokenBucket({ limit: 60, windowMs: 60_000 }));

// Sliding Window — strict
app.use('/api', RateLimiterMiddleware.forSlidingWindow({ limit: 60, windowMs: 60_000 }));

// Fixed Window — simple quota
app.use('/api', RateLimiterMiddleware.forFixedWindow({ limit: 60, windowMs: 60_000 }));

// Advanced — custom key + skip logic
const limiter = new RateLimiterMiddleware({
  algorithm: 'tokenBucket',
  limit: 100,
  windowMs: 60_000,
  keyGenerator: (req) => req.headers['x-api-key'] || req.ip,
  skipIf: (req) => req.user?.role === 'admin',
  onLimitReached: (req, res, info) => {
    console.warn(`Rate limit hit for ${req.ip}`, info);
  },
});
app.use('/api', limiter.createMiddleware());
```

## Docker

```bash
# Build and start the full stack (app + Redis + Redis Commander UI)
docker compose -f docker/docker-compose.yml up --build

# App:             http://localhost:3000
# Redis Commander: http://localhost:8081
```

```bash
docker build -f docker/Dockerfile -t rate-limiter:latest .
docker run -p 3000:3000 -e REDIS_HOST=host.docker.internal rate-limiter:latest
```

## Kubernetes

Manifests are located in the k8s/ directory.
```bash
# Create the ConfigMap
kubectl apply -f k8s/configmap.yaml

# Deploy Redis
kubectl apply -f k8s/redis.yaml

# Deploy the application (3 replicas by default)
kubectl apply -f k8s/deployment.yaml

# Expose via Service
kubectl apply -f k8s/service.yaml

# Enable Horizontal Pod Autoscaler
kubectl apply -f k8s/hpa.yaml
```

The deployment is pre-configured with liveness/readiness probes pointing at /health and a preStop hook for graceful draining.
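That probe and preStop wiring typically looks like the following container-spec fragment (illustrative values only — the authoritative configuration is k8s/deployment.yaml):

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health
    port: 3000
  periodSeconds: 5
lifecycle:
  preStop:
    exec:
      # Give the load balancer time to stop routing before SIGTERM
      command: ["sh", "-c", "sleep 5"]
```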
## Testing & Benchmarks

```bash
# Run all tests
npm test

# Unit tests only
npm run test:unit

# Integration tests (requires a running Redis instance)
npm run test:integration

# Throughput benchmark
npm run benchmark
```

Tests are organised as:

```
tests/
├── unit/
│   ├── tokenBucket.test.js
│   ├── rateLimiter.test.js
│   └── redisService.test.js
├── integration/
│   └── rateLimiter.integration.test.js
└── benchmark/
    └── throughput.benchmark.js
```
## Project Structure

```
High-Beta/
├── src/
│   ├── algorithms/
│   │   ├── tokenBucket.js        # Token Bucket implementation
│   │   ├── slidingWindow.js      # Sliding Window implementation
│   │   └── fixedWindow.js        # Fixed Window implementation
│   ├── config/
│   │   ├── index.js              # Centralised config loader
│   │   └── redis.js              # Redis connection config
│   ├── middleware/
│   │   ├── rateLimiter.js        # Express middleware wrapper
│   │   └── index.js
│   ├── routes/
│   │   ├── api.js                # /api/* endpoints
│   │   └── metrics.js            # /metrics endpoints
│   ├── scripts/lua/              # Atomic Lua scripts for Redis
│   ├── services/
│   │   ├── redisService.js       # Redis client + Lua script executor
│   │   └── metricsService.js     # In-memory metrics aggregator
│   ├── utils/
│   │   ├── helpers.js            # Key generation, response formatting
│   │   └── logger.js             # Pino logger
│   └── app.js                    # Express app entry point
├── tests/                        # Unit, integration & benchmark tests
├── docker/
│   ├── Dockerfile                # Multi-stage production image
│   └── docker-compose.yml        # Full local stack
├── k8s/                          # Kubernetes manifests
├── .env.example                  # Environment variable template
└── package.json
```
## License

This project is licensed under the ISC License.