Per-agent scoped MCP tool proxy. One server process per agent — loads only the tools that agent is allowed to use, enforces resource boundaries between agents, holds credentials so agents never see them, and logs every tool call to a structured audit trail.
Multi-agent setups (Claude Code subagents, parallel workers, role-based agents) share the same MCP servers. Every agent sees every tool. Every agent holds credentials. Agent A can read Agent B's data. Audit logging is fragmented across a dozen server processes.
Existing solutions solve pieces:
- Aggregation gateways — combine servers, no scoping
- Access control proxies — filter tools per agent, no resource scoping
- Credential proxies — isolate credentials, no tool management
- Enterprise gateways — governance and auth, but cloud and team-oriented
None combine all four: tool filtering + resource scoping + credential isolation + audit logging.
scoped-mcp was built using the same multi-agent pattern it's designed to secure — a research agent evaluated the problem space, a dev agent implemented the code, each with scoped access to only the resources it needed. It runs in production as part of homelab-agent, a self-hosted Claude Code platform with purpose-built agents for different infrastructure domains.
```
Agent process (AGENT_ID=research-01, AGENT_TYPE=research)
                    │
                    ▼
┌─────────────────────────────────────────┐
│  scoped-mcp (one process per agent)     │
│                                         │
│  ① Load manifest for AGENT_TYPE         │
│  ② Register allowed tool modules        │
│  ③ Inject credentials into modules      │
│  ④ Every tool call:                     │
│      → enforce resource scope           │
│      → execute tool logic               │
│      → write audit log entry            │
└─────────────────────────────────────────┘
        │            │            │
        ▼            ▼            ▼
   Backend A     Backend B     Backend C
   (scoped)      (scoped)      (scoped)
```
```mermaid
flowchart LR
    subgraph agent["Agent Process"]
        A["AGENT_ID=research-01<br/>AGENT_TYPE=research"]
    end
    subgraph proxy["scoped-mcp (single process)"]
        direction TB
        M["Manifest Loader<br/><i>research-agent.yml</i>"]
        R["Module Registry"]
        C["Credential Injector"]
        AU["Audit Logger"]
        M --> R
        R --> C
        C --> AU
    end
    subgraph backends["Backends (scoped)"]
        FS["Filesystem<br/><code>agents/research-01/</code>"]
        DB["SQLite<br/><code>schema: research_01</code>"]
        NT["ntfy<br/><code>topic: research-research-01</code>"]
    end
    A -- "MCP (stdio)" --> proxy
    AU --> FS
    AU --> DB
    AU --> NT
```
```bash
pip install scoped-mcp

# Set agent identity
export AGENT_ID="research-01"
export AGENT_TYPE="research"

# Run with a manifest
scoped-mcp --manifest manifests/research-agent.yml
```

Claude Code `settings.json`:
```json
{
  "mcpServers": {
    "tools": {
      "command": "scoped-mcp",
      "args": ["--manifest", "manifests/research-agent.yml"],
      "env": {
        "AGENT_ID": "research-01",
        "AGENT_TYPE": "research"
      }
    }
  }
}
```

See `examples/claude-code/` for a complete multi-agent setup.
**Agent Identity** — `AGENT_ID` (unique instance) and `AGENT_TYPE` (role) are set via environment variables at spawn time. The manifest maps agent types to allowed modules.

**Tool Modules** — one Python file per backend domain. Each module declares its tools, required credentials, and scoping strategy. The framework handles registration, credential injection, and audit wrapping.
**Scoping Strategies** — reusable patterns for resource isolation:

- `PrefixScope` — file paths, object store keys, cache keys scoped to `agents/{agent_id}/`
- `NamespaceScope` — key-value operations prefixed with the agent's namespace
- Per-agent file — e.g. SQLite gives each agent its own database file at `{db_dir}/agent_{agent_id}.db`
- Custom — implement `ScopeStrategy` for your backend's isolation model
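The prefix strategy can be sketched as follows. This is a simplified stand-in, not scoped-mcp's actual `PrefixScope` class: the method name and the traversal check are assumptions about how such a scope would guard against escapes.

```python
from pathlib import PurePosixPath

# Illustrative sketch only — the real PrefixScope lives in scoped_mcp.scoping.
class PrefixScopeSketch:
    def __init__(self, root: str = "agents"):
        self.root = PurePosixPath(root)

    def apply(self, path: str, agent_id: str) -> str:
        # Prepend the per-agent prefix to whatever path the agent supplied
        scoped = self.root / agent_id / path.lstrip("/")
        # Reject traversal attempts like "../ops-01/secrets"
        if ".." in scoped.parts:
            raise PermissionError(f"path escapes agent scope: {path}")
        return str(scoped)

scope = PrefixScopeSketch()
print(scope.apply("notes/todo.md", "research-01"))
# → agents/research-01/notes/todo.md
```

The point of centralising this in a strategy object is that every module reuses the same escape-proofing instead of reimplementing it per backend.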
**Credential Injection** — backend credentials (API keys, DSNs, tokens) are loaded once by the proxy process from environment variables or a secrets file. Modules receive credentials through their context; the agent process never sees them.

**Logging** — two structured JSONL streams:

- **Audit log** — what agents did: every tool call, every scope check.
- **Operational log** — what the server did: startup, shutdown, config errors.
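Each audit entry is one JSON object per line. A minimal sketch of writing such an entry, where the field names are illustrative and not scoped-mcp's actual schema:

```python
import json
import time

def write_audit_entry(log_path, agent_id, tool, args, allowed):
    # One JSON object per line (JSONL); field names here are illustrative.
    entry = {
        "ts": time.time(),
        "agent_id": agent_id,
        "tool": tool,
        "args": args,        # arguments as the agent sent them
        "allowed": allowed,  # result of the scope check
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

write_audit_entry("/tmp/audit.jsonl", "research-01",
                  "filesystem_read_file", {"path": "notes/todo.md"}, True)
```

Append-only JSONL keeps the log greppable and trivially parseable line by line, which matters when a single unified stream replaces per-server fragments.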
```yaml
# manifests/research-agent.yml
agent_type: research
description: "Read-only research agent"

modules:
  filesystem:
    mode: read              # read-only: read_file + list_dir only
    config:
      base_path: /data/agents   # PrefixScope adds /{agent_id}/ automatically
  sqlite:
    mode: read
    config:
      db_dir: /data/sqlite      # each agent gets /data/sqlite/agent_{agent_id}.db
  ntfy:                     # write-only — no mode field needed
    config:
      topic: "research-{agent_id}"
      max_priority: high

credentials:
  source: env               # or "file" with path: /run/secrets/agent.yml
                            # or: source: vault — see Vault Credentials section

# Optional: pluggable state backend (required for rate limiting and HITL)
state_backend:
  type: in_process          # default — no external deps
  # type: dragonfly
  # url: redis://127.0.0.1:6379/0

# Optional: sliding-window rate limits
rate_limits:
  global: 60/minute         # all tools combined
  per_tool:
    filesystem_write_file: 10/minute
    "mcp_proxy.*": 30/minute    # glob — all matched tools share one counter

# Optional: argument-value filtering
argument_filters:
  - name: no-credentials
    pattern: '(?i)(password|secret|token)\s*[:=]\s*\S+'
    fields: [path, query, body]
    action: block           # or: warn
    decode: [base64, urlsafe_base64, url]

# Optional: human-in-the-loop approval (requires state_backend.type: dragonfly)
hitl:
  approval_required: ["filesystem_delete_*", "sqlite_execute"]
  shadow: ["mcp_proxy.*"]   # log-only, return synthetic empty success
  timeout_seconds: 300
  notify:
    type: ntfy              # or: log (default), webhook, matrix
    topic: homelab-hitl
```

```mermaid
flowchart LR
    subgraph manifest_r["research-agent.yml"]
        MR1["filesystem: read"]
        MR2["sqlite: read"]
        MR3["ntfy: write-only"]
    end
    subgraph tools_r["Registered Tools (4)"]
        TR1["filesystem_read_file"]
        TR2["filesystem_list_dir"]
        TR3["sqlite_query"]
        TR4["ntfy_send"]
    end
    MR1 --> TR1 & TR2
    MR2 --> TR3
    MR3 --> TR4

    subgraph manifest_b["build-agent.yml"]
        MB1["filesystem: write"]
        MB2["sqlite: write"]
        MB3["ntfy: write-only"]
        MB4["slack_webhook: write-only"]
    end
    subgraph tools_b["Registered Tools (8)"]
        TB1["filesystem_read_file"]
        TB2["filesystem_list_dir"]
        TB3["filesystem_write_file"]
        TB4["filesystem_delete_file"]
        TB5["sqlite_query"]
        TB6["sqlite_execute"]
        TB7["ntfy_send"]
        TB8["slack_send"]
    end
    MB1 --> TB1 & TB2 & TB3 & TB4
    MB2 --> TB5 & TB6
    MB3 --> TB7
    MB4 --> TB8
```
| Module | Scope | Read tools | Write tools |
|---|---|---|---|
| `filesystem` | `PrefixScope` — `agents/{agent_id}/` | `read_file`, `list_dir` | `write_file`, `delete_file` |
| `sqlite` | Per-agent DB file — `{db_dir}/agent_{agent_id}.db` | `query`, `list_tables` | `execute`, `create_table` |
Notification modules are write-only by design — every agent needs to send alerts, but no agent should see webhook URLs, SMTP passwords, or API tokens.
| Module | Backend | Credential | Scope |
|---|---|---|---|
| `ntfy` | ntfy.sh (self-hosted or cloud) | Server URL + optional token | Topic per agent (`{agent_id}` template) |
| `smtp` | Any SMTP server | Host, port, user, password | Configured sender + allowed recipients |
| `matrix` | Matrix homeserver | Access token | Room allowlist |
| `slack_webhook` | Slack incoming webhook | Webhook URL | One webhook = one channel |
| `discord_webhook` | Discord webhook | Webhook URL | One webhook = one channel |
| Module | Scope | Read tools | Write tools |
|---|---|---|---|
| `http_proxy` | Service allowlist + SSRF prevention | `get` | `post`, `put`, `delete` |
| `grafana` | Folder-based (`agent-{agent_id}/`) | `list_dashboards`, `get_dashboard`, `query_datasource`, `list_datasources` | `create_dashboard`, `update_dashboard`, `create_alert_rule`, `delete_dashboard` |
| `influxdb` | Bucket allowlist + `NamespaceScope` | `query`, `list_measurements`, `get_schema` | `write_points`, `create_bucket`, `delete_points` |
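An allowlist check with basic SSRF hardening for `http_proxy` could look roughly like the sketch below. The allowlist contents, function name, and specific checks are illustrative assumptions, not scoped-mcp's actual implementation:

```python
from urllib.parse import urlparse

# Illustrative allowlist — in scoped-mcp this would come from module config
ALLOWED = {"grafana.lan": {"https"}, "influxdb.lan": {"http", "https"}}

def check_url(url: str) -> None:
    """Raise unless the URL targets an allowlisted service over an allowed scheme."""
    parts = urlparse(url)
    host = parts.hostname or ""
    if host not in ALLOWED:
        # Blocks unknown hosts, raw IPs, and cloud metadata endpoints alike
        raise PermissionError(f"host not on allowlist: {host!r}")
    if parts.scheme not in ALLOWED[host]:
        raise PermissionError(f"scheme {parts.scheme!r} not allowed for {host}")
    if parts.username or parts.password:
        raise PermissionError("userinfo in URL is rejected (credential smuggling)")

check_url("https://grafana.lan/api/dashboards")          # passes
# check_url("http://169.254.169.254/latest/meta-data")   # would raise
```

A default-deny allowlist is the core of the control: anything not explicitly configured, including redirect targets and IP literals, is refused before a request leaves the proxy.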
Every module declares its required and optional environment variables. scoped-mcp fails at startup with a clear error listing any missing required keys — it will not start partially configured.
| Module | Required env vars | Optional env vars |
|---|---|---|
| `filesystem` | — | — |
| `sqlite` | — | — |
| `ntfy` | `NTFY_URL` | `NTFY_TOKEN` |
| `smtp` | `SMTP_HOST`, `SMTP_PORT`, `SMTP_USER`, `SMTP_PASSWORD` | — |
| `matrix` | `MATRIX_HOMESERVER`, `MATRIX_ACCESS_TOKEN` | — |
| `slack_webhook` | `SLACK_WEBHOOK_URL` | — |
| `discord_webhook` | `DISCORD_WEBHOOK_URL` | — |
| `http_proxy` | — (dynamic; see module config) | — |
| `grafana` | `GRAFANA_URL`, `GRAFANA_SERVICE_ACCOUNT_TOKEN` | — |
| `influxdb` | `INFLUXDB_URL`, `INFLUXDB_TOKEN` | `INFLUXDB_ORG` (overrides `config.org`) |
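The fail-fast startup check amounts to something like this sketch. The function name and the mapping shape are illustrative, not scoped-mcp's real API:

```python
import os

def validate_credentials(modules: dict[str, list[str]]) -> None:
    """Abort startup if any enabled module is missing a required env var.

    `modules` maps module name -> required env var names (illustrative shape).
    """
    missing = {
        name: [v for v in required if not os.environ.get(v)]
        for name, required in modules.items()
    }
    missing = {name: vars_ for name, vars_ in missing.items() if vars_}
    if missing:
        detail = "; ".join(f"{m}: {', '.join(vs)}" for m, vs in missing.items())
        # Refuse to start rather than run partially configured
        raise SystemExit(f"missing required credentials: {detail}")

os.environ["NTFY_URL"] = "https://ntfy.lan"   # illustrative value
validate_credentials({"ntfy": ["NTFY_URL"], "filesystem": []})  # ok
```

Failing before the server registers any tools guarantees an agent never discovers a tool whose backend cannot actually be reached.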
Credentials are passed in `settings.json` under `env` (for Claude Code) or exported in the shell before running scoped-mcp. They are loaded once at startup, injected into module contexts, and never returned in tool responses or logged.

For HashiCorp Vault, set `credentials.source: vault` in the manifest with an `approle` block; credentials are fetched once at startup and the client token is renewed in the background. Requires `pip install "scoped-mcp[vault]"`. See `examples/vault/` for a working manifest, AppRole setup script, and Vault policy.

For integration with a secrets manager such as Vaultwarden, see `examples/vaultwarden/`.
```
┌─ ops-agent (AGENT_ID=ops-01) ─────────────────────────────────────┐
│                                                                   │
│  1. influxdb_query(bucket="metrics",                              │
│         filters=[{"field": "_measurement",                        │
│                   "op": "==", "value": "docker_cpu"}])            │
│     → discovers container X averaging 94% CPU                     │
│                                                                   │
│  2. grafana_create_dashboard(                                     │
│         title="Container Health",                                 │
│         panels=[{"title": "CPU by Container", ...}])              │
│     → dashboard created in folder agent-ops-01/                   │
│                                                                   │
│  3. ntfy_send(title="High CPU: container X",                      │
│               message="Averaging 94% over last hour.")            │
│     → operator gets push notification                             │
│                                                                   │
└───────────────────────────────────────────────────────────────────┘
```
The agent queried metrics it can see, built a dashboard it owns, and alerted through a channel it's allowed to use. At no point did it see API tokens, access another agent's data, or modify operator dashboards.
```python
# src/scoped_mcp/modules/redis.py
from scoped_mcp.modules._base import ToolModule, tool
from scoped_mcp.scoping import NamespaceScope


class RedisModule(ToolModule):
    name = "redis"
    scoping = NamespaceScope()
    required_credentials = ["REDIS_URL"]

    def __init__(self, agent_ctx, credentials, config):
        super().__init__(agent_ctx, credentials, config)
        import redis.asyncio as aioredis
        self._redis = aioredis.from_url(credentials["REDIS_URL"])

    @tool(mode="read")
    async def get_key(self, key: str) -> str | None:
        """Get a value (scoped to agent namespace)."""
        scoped_key = self.scoping.apply(key, self.agent_ctx)
        return await self._redis.get(scoped_key)

    @tool(mode="write")
    async def set_key(self, key: str, value: str, ttl: int = 0) -> bool:
        """Set a key-value pair (scoped to agent namespace)."""
        scoped_key = self.scoping.apply(key, self.agent_ctx)
        return await self._redis.set(scoped_key, value, ex=ttl or None)
```

Add it to your manifest:

```yaml
modules:
  redis:
    mode: read   # only get_key registered
    config: {}
```

See `examples/custom-module/` for a full walkthrough and `docs/module-authoring.md` for the complete contract.
| Capability | scoped-mcp | agent-mcp-gateway | local-mcp-gateway | Kong MCP |
|---|---|---|---|---|
| Tool aggregation | yes | yes | yes | yes |
| Per-agent tool filtering | manifest | rules file | profiles | RBAC |
| Resource scoping | yes | no | no | no |
| Credential isolation | yes | no | no | partial |
| Unified audit log | yes | no | no | yes |
| Read/write modes | yes | per-tool | per-profile | per-role |
| Self-hosted, single process | yes | yes | yes | no |
| Built-in modules | 10 | 0 | 0 | 0 |
scoped-mcp's core value is security — tool scoping, credential isolation, and audit logging. To back that up:

- **Threat model** — `docs/threat-model.md` documents the attack surface, trust boundaries, and what scoped-mcp does and does not protect against.
- **Audit history** — `docs/security-audit.md` tracks every internal audit, including the v0.1.0 audit that produced 18 findings (1 critical, 3 high, 8 medium, 6 low) and their remediation in v0.2.0. The v0.2.1 and v0.3.0 audits returned clean.
- **Verifiable isolation** — `examples/claude-code/multi-agent-setup.md` includes a step-by-step verification walkthrough: you can confirm filesystem isolation and credential non-exposure yourself in under five minutes.
The v0.7 → v1.0 hardening roadmap added four opt-in middleware layers that sit on top of the core tool/scope/credential/audit guarantees. All are off by default; enable them per agent in the manifest:

- **Rate limiting** (`rate_limits:`, v0.7) — sliding-window per-agent and per-tool limits with glob patterns. Backed by `InProcessBackend` (default) or `DragonflyBackend` (`[dragonfly]` extra) for cross-process state.
- **Vault-backed credentials** (`credentials.source: vault`, v0.8) — fetch credentials from HashiCorp Vault via AppRole; the client token is auto-renewed in the background. See `examples/vault/`.
- **mcp_proxy schema validation + argument filtering** (`argument_filters:`, v0.9) — proxied calls are validated against the upstream tool's `inputSchema` before forwarding; pattern-based argument filters can block or alert on values, with optional base64/URL decoding. See `docs/threat-model.md` for the documented limits.
- **Human-in-the-loop approval** (`hitl:`, v1.0) — operator-gated tool calls. Glob patterns select tools that require explicit approval, or shadow-mode tools that log a sanitised summary and return a synthetic empty success without forwarding upstream. Approve or reject via `scoped-mcp hitl approve|reject <id>`. Requires `state_backend.type: dragonfly` (cross-process pub/sub).
- **Not an enterprise gateway** — no OAuth, no multi-tenant SaaS, no Kubernetes. For self-hosters running multi-agent setups.
- **Not a policy engine** — no prompt injection detection, no tool call classification.
- **Not a process manager** — one MCP server that an agent connects to. Spawning agents is your orchestrator's job.
- **Not E2EE** — the Matrix module supports unencrypted rooms only in v0.1 (no libolm dependency).
```bash
# Core only (filesystem + sqlite + notifications require no extras)
pip install scoped-mcp

# With HTTP client modules (http_proxy, grafana, influxdb, ntfy, matrix, slack, discord)
pip install "scoped-mcp[http]"

# With SMTP support
pip install "scoped-mcp[smtp]"

# With SQLite async support
pip install "scoped-mcp[sqlite]"

# Everything
pip install "scoped-mcp[all]"
```

If something isn't working, see Troubleshooting.
MIT