feat: Cache middleware with Deno KV backend #7

@superWorldSavior

Description

Motivation

Most MCP tool calls are reads — list documents, get schema, fetch records. When an agent calls list_invoices twice in 30 seconds with the same parameters, there's no reason to hit the backend again.

A cache middleware would memoize results by tool name + input hash, with configurable TTL per tool.

Proposal

As middleware

import { createCacheMiddleware } from "@casys/mcp-server";

server.use(createCacheMiddleware({
  defaultTtlMs: 30_000,
  perTool: {
    "list_invoices": { ttlMs: 60_000 },
    "create_invoice": { enabled: false }, // never cache writes
  },
}));

Storage backends — pluggable interface

interface CacheStore {
  get(key: string[]): Promise<unknown | null>;
  set(key: string[], value: unknown, opts?: { expireIn?: number }): Promise<void>;
  delete(key: string[]): Promise<void>;
}
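A minimal in-memory implementation of this interface — a sketch of the zero-dependency Node fallback described below, assuming lazy expiry on read (the interface is repeated so the snippet is self-contained):

```typescript
interface CacheStore {
  get(key: string[]): Promise<unknown | null>;
  set(key: string[], value: unknown, opts?: { expireIn?: number }): Promise<void>;
  delete(key: string[]): Promise<void>;
}

class MemoryCacheStore implements CacheStore {
  private entries = new Map<string, { value: unknown; expiresAt?: number }>();

  // Join key parts with a separator unlikely to appear inside them.
  private encode(key: string[]): string {
    return key.join("\u0000");
  }

  async get(key: string[]): Promise<unknown | null> {
    const k = this.encode(key);
    const entry = this.entries.get(k);
    if (!entry) return null;
    // Lazy expiry: drop the entry if its TTL has elapsed.
    if (entry.expiresAt !== undefined && Date.now() > entry.expiresAt) {
      this.entries.delete(k);
      return null;
    }
    return entry.value;
  }

  async set(key: string[], value: unknown, opts?: { expireIn?: number }): Promise<void> {
    this.entries.set(this.encode(key), {
      value,
      // expireIn mirrors Deno KV's option: a TTL in milliseconds.
      expiresAt: opts?.expireIn !== undefined ? Date.now() + opts.expireIn : undefined,
    });
  }

  async delete(key: string[]): Promise<void> {
    this.entries.delete(this.encode(key));
  }
}
```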

Deno (default): native Deno.openKv()

Zero config, persistent (SQLite-backed), built-in expiry:

const kv = await Deno.openKv();
await kv.set(["cache", toolName, argsHash], result, { expireIn: ttlMs });
const entry = await kv.get(["cache", toolName, argsHash]);
const cached = entry.value; // null on miss or after expiry

Node: @deno/kv (optional) or in-memory Map

  • @deno/kv — same API as Deno KV, backed by SQLite on Node. Optional dependency. Gives persistent cache with zero behavior difference between runtimes.
  • In-memory Map — zero-dependency fallback, good enough for single-process. TTL via setTimeout or lazy expiry on read.
  • Custom — user provides their own CacheStore implementation (Redis, etc.)

Auto-detection: if Deno.openKv exists → use it. Else if @deno/kv is installed → use it. Else → in-memory Map.
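The detection chain above might be sketched as follows (returning a tag rather than a store instance, so the snippet stays self-contained; the real factory would wrap the detected backend in a CacheStore adapter):

```typescript
type BackendKind = "deno-kv" | "deno-kv-polyfill" | "memory";

async function detectBackend(): Promise<BackendKind> {
  // 1. Native Deno KV, when running under Deno with the KV API available.
  if (typeof (globalThis as any).Deno?.openKv === "function") {
    return "deno-kv";
  }
  try {
    // 2. The @deno/kv polyfill on Node, when installed as an optional
    // dependency. The specifier is held in a variable so the import is
    // resolved at runtime only.
    const spec = "@deno/kv";
    await import(spec);
    return "deno-kv-polyfill";
  } catch {
    // 3. Zero-dependency in-memory fallback.
    return "memory";
  }
}
```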

Cache key

Hash of toolName + JSON.stringify(sortedArgs) — deterministic: argument keys are sorted before serialization, so identical inputs always produce the same key regardless of property order. (With a cryptographic hash the collision risk is negligible.)

Behavior

  • Cache sits early in the pipeline (after auth, before circuit breaker) — cached responses skip the heavy path
  • Write tools (create_*, update_*, delete_*) disabled by default (opt-in only)
  • Cache invalidation: manual cache.clear(toolName) or automatic on related write tools
  • Metrics: cache hit/miss counters exposed via Prometheus
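Putting the behavior together, the hit/miss flow might look like this sketch — the handler signature, option names, and `Next` type are assumptions, not the actual @casys/mcp-server API:

```typescript
interface CacheStore {
  get(key: string[]): Promise<unknown | null>;
  set(key: string[], value: unknown, opts?: { expireIn?: number }): Promise<void>;
  delete(key: string[]): Promise<void>;
}

type Next = (toolName: string, args: unknown) => Promise<unknown>;

interface CacheMiddlewareOpts {
  defaultTtlMs: number;
  perTool?: Record<string, { ttlMs?: number; enabled?: boolean }>;
}

function createCacheMiddleware(store: CacheStore, opts: CacheMiddlewareOpts) {
  let hits = 0;
  let misses = 0;
  return {
    // Counters backing the Prometheus hit/miss metrics.
    stats: () => ({ hits, misses }),
    async handle(toolName: string, args: unknown, next: Next): Promise<unknown> {
      const cfg = opts.perTool?.[toolName];
      const isWrite = /^(create|update|delete)_/.test(toolName);
      // Write tools are opt-in only; explicitly disabled tools are skipped too.
      if (cfg?.enabled === false || (isWrite && cfg?.enabled !== true)) {
        return next(toolName, args);
      }
      const key = ["cache", toolName, JSON.stringify(args)]; // real impl hashes args
      const cached = await store.get(key);
      if (cached !== null) {
        hits++;
        return cached; // cache hit: skip the rest of the pipeline
      }
      misses++;
      const result = await next(toolName, args);
      await store.set(key, result, { expireIn: cfg?.ttlMs ?? opts.defaultTtlMs });
      return result;
    },
  };
}

// Tiny in-memory store (no TTL) just to exercise the sketch.
function demoStore(): CacheStore {
  const m = new Map<string, unknown>();
  return {
    async get(key) { const k = key.join("\u0000"); return m.has(k) ? m.get(k)! : null; },
    async set(key, value) { m.set(key.join("\u0000"), value); },
    async delete(key) { m.delete(key.join("\u0000")); },
  };
}
```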

Pipeline position

auth → cache → circuit-breaker → retry → backpressure → handler

Scope

  • createCacheMiddleware() with TTL config
  • CacheStore pluggable interface
  • Deno KV backend (default on Deno)
  • @deno/kv backend (optional on Node)
  • In-memory Map backend (Node fallback)
  • Per-tool TTL + enable/disable
  • Auto-detection of best available backend
  • Cache hit/miss metrics
  • Documentation + examples
  • Tests

Metadata

Labels: enhancement (New feature or request)