The Problem
Every POST to `/verify`, `/mcp/`, and `/a2a/` does Ed25519 signature verification for every receipt in the chain, a DID resolution (with potential HTTP I/O for `did:web`), and a policy evaluation pass. There is no rate limiting on any of these endpoints.
An attacker sends a tight loop of requests with maximally deep chains. Each request triggers the full verification stack. CPU climbs to 100%. Legitimate requests queue up and time out.
This is not speculative. It is the default outcome when you expose cryptographic verification over HTTP without any admission control.
The Numbers
Ed25519 verify runs at approximately 60,000 operations per second on a single core. A 10-receipt chain costs 11 verifications (10 DRs + 1 invocation), so roughly 5,450 chains/second saturates a single core. Spread across 100 parallel clients sending back-to-back requests, that is only about 55 requests/second per client, which is not a lot of traffic for a service that expects to handle production MCP load.
`did:web` cache misses make it worse. The resolver holds a global lock (issue #10) during HTTP I/O. Under rate-unlimited load, every cache miss stalls the entire service.
What Must Change
- Add a token bucket rate limiter per IP address at the HTTP server level. `golang.org/x/time/rate` is maintained by the Go team as part of the golang.org/x sub-repositories, so it adds minimal dependency weight.
- Add a global requests-per-second ceiling so the per-IP limit cannot be trivially bypassed by spreading load across many IPs.
- Expose the limits as config: `RATE_LIMIT_PER_IP` (requests/sec) and `RATE_LIMIT_GLOBAL` (requests/sec). Sensible defaults: 100/s per IP, 1000/s global.
- Return `429 Too Many Requests` with a `Retry-After` header.
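The list above can be sketched with a stdlib-only token bucket. This is an illustrative middleware, not the actual implementation: the `limiter` type, its method names, and the hard-coded defaults (100/s per IP, 1000/s global) are assumptions; in the real service you would likely wire up `golang.org/x/time/rate` and read the limits from `RATE_LIMIT_PER_IP` / `RATE_LIMIT_GLOBAL`.

```go
package main

import (
	"fmt"
	"net"
	"net/http"
	"net/http/httptest"
	"sync"
	"time"
)

// bucket is a minimal token bucket: refills at rate tokens/sec, capped at burst.
type bucket struct {
	tokens, rate, burst float64
	last                time.Time
}

func (b *bucket) allow(now time.Time) bool {
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

// limiter keeps one bucket per client IP plus a single global bucket.
type limiter struct {
	mu              sync.Mutex
	perIP           map[string]*bucket
	global          *bucket
	ipRate, ipBurst float64
}

func newLimiter(ipRate, globalRate float64) *limiter {
	now := time.Now()
	return &limiter{
		perIP:   make(map[string]*bucket),
		global:  &bucket{tokens: globalRate, rate: globalRate, burst: globalRate, last: now},
		ipRate:  ipRate,
		ipBurst: ipRate,
	}
}

func (l *limiter) allow(ip string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	now := time.Now()
	b, ok := l.perIP[ip]
	if !ok {
		b = &bucket{tokens: l.ipBurst, rate: l.ipRate, burst: l.ipBurst, last: now}
		l.perIP[ip] = b
	}
	// Per-IP check first; the global bucket is only charged if the IP passes.
	return b.allow(now) && l.global.allow(now)
}

// middleware rejects over-limit requests with 429 and a Retry-After header.
func (l *limiter) middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ip, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil {
			ip = r.RemoteAddr
		}
		if !l.allow(ip) {
			w.Header().Set("Retry-After", "1")
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	lim := newLimiter(100, 1000) // assumed defaults for RATE_LIMIT_PER_IP / RATE_LIMIT_GLOBAL
	handler := lim.middleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	}))
	srv := httptest.NewServer(handler)
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/verify")
	if err != nil {
		panic(err)
	}
	fmt.Println("status:", resp.StatusCode) // within limits: 200
}
```

One sketch-level caveat: a request that passes the per-IP bucket but is refused by the global bucket still consumes a per-IP token; a production limiter would refund it or check both before charging.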
Rate limiting does not solve all DoS vectors, but it is the first line of defense and currently there is no line at all.
Severity
HIGH. No rate limiting + expensive crypto = trivial DoS. This is table stakes for any service exposed to a network.