feat: add native MCP integration with context7 compatibility #376
Danieldd28 wants to merge 2 commits into main
Conversation
Pull request overview
Adds a native MCP (Model Context Protocol) integration to PicoClaw, including Context7 compatibility, with a new pkg/mcp module and agent/tooling/config wiring to share a single MCP manager across agent + subagents.
Changes:
- Introduces `pkg/mcp` with stdio client (framed + JSONL), server manager, tool naming, and response formatting.
- Integrates discovered MCP tools into tool registries via a shared manager (agent + subagents) and adds bootstrap timeout/protocol inference.
- Extends config to support `tools.mcp.*` plus legacy top-level `mcpServers` compatibility mapping; updates example config and adds tests.
Reviewed changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| pkg/tools/mcp.go | MCP tool wrapper + helpers to register discovered tools without re-discovery. |
| pkg/tools/mcp_test.go | Test coverage for registering known MCP tools into the registry. |
| pkg/mcp/types.go | MCP config/types + defaults and constants. |
| pkg/mcp/client.go | Stdio MCP client implementation supporting framed JSON-RPC and JSONL modes. |
| pkg/mcp/client_test.go | Tests for protocol normalization. |
| pkg/mcp/manager.go | MCP server lifecycle + tool discovery/filtering + tool call routing. |
| pkg/mcp/manager_test.go | Tests for discovery filtering, call routing, and schema normalization. |
| pkg/mcp/naming.go | Stable MCP tool naming + sanitization and length bounds. |
| pkg/mcp/naming_test.go | Tests for tool naming sanitization and max length behavior. |
| pkg/mcp/format.go | Normalization/truncation of MCP tool call results. |
| pkg/mcp/format_test.go | Tests for formatting, truncation, and isError handling. |
| pkg/agent/mcp_bootstrap.go | MCP bootstrap helpers: config conversion, protocol inference, discovery timeout. |
| pkg/agent/mcp_bootstrap_test.go | Tests for timeout calculation, disabled server filtering, Context7 protocol default. |
| pkg/agent/loop.go | Uses a shared MCP manager across registries; closes MCP lifecycle on Run/Stop. |
| pkg/config/config.go | Adds tools.mcp config, legacy mcpServers mapping, and defaults initialization. |
| pkg/config/config_test.go | Tests MCP defaults + legacy mcpServers compatibility mapping. |
| config/config.example.json | Adds example tools.mcp configuration block. |
```go
func (m *Manager) Close() error {
	m.mu.Lock()
	servers := make([]*managedServer, 0, len(m.servers))
	for _, server := range m.servers {
		servers = append(servers, server)
	}
	m.mu.Unlock()

	var firstErr error
	for _, server := range servers {
		if server.client == nil {
			continue
		}
		if err := server.client.Close(); err != nil && firstErr == nil {
			firstErr = err
		}
	}
	return firstErr
}
```
Manager.Close() releases the mutex before closing clients, so CallTool() can concurrently grab a client pointer and invoke client.CallTool() while Close() is shutting it down. That creates a race and can lead to calls on a killed process. Consider, under lock, snapshotting clients and setting server.client=nil (and/or a closed flag) before unlocking; then close the snapshot outside the lock so new calls fail fast and in-flight lookups can’t race shutdown.
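The snapshot-and-nil pattern the review suggests can be sketched with simplified stand-in types (`client`, `managedServer`, and the `Manager` shape here are illustrative, not the PR's actual definitions): clients are detached under the lock so concurrent lookups fail fast, then closed outside it.

```go
package main

import (
	"fmt"
	"sync"
)

type client struct{}

func (c *client) Close() error { return nil }

type managedServer struct{ client *client }

type Manager struct {
	mu      sync.Mutex
	servers map[string]*managedServer
}

// Close snapshots and nils out each client while still holding the lock,
// so a concurrent CallTool lookup sees "no client" instead of racing a
// client that is mid-shutdown; the snapshot is closed outside the lock.
func (m *Manager) Close() error {
	m.mu.Lock()
	clients := make([]*client, 0, len(m.servers))
	for _, server := range m.servers {
		if server.client != nil {
			clients = append(clients, server.client)
			server.client = nil // new calls now fail fast
		}
	}
	m.mu.Unlock()

	var firstErr error
	for _, c := range clients {
		if err := c.Close(); err != nil && firstErr == nil {
			firstErr = err
		}
	}
	return firstErr
}

func main() {
	m := &Manager{servers: map[string]*managedServer{"ctx7": {client: &client{}}}}
	fmt.Println(m.Close(), m.servers["ctx7"].client)
}
```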
```go
func truncateString(value string, maxBytes int) string {
	if maxBytes <= 0 || len(value) <= maxBytes {
		return value
	}
	if maxBytes <= 12 {
		return value[:maxBytes]
	}
	return value[:maxBytes-12] + "\n...[truncated]"
}
```
truncateString() slices by byte count, which can split multi-byte UTF-8 characters and produce invalid UTF-8 in tool output. Since this content is surfaced to the LLM/user, consider truncating to the nearest valid UTF-8 boundary while still honoring the max-bytes limit (or truncate by runes after converting to []rune and then re-check byte length).
```go
for name, legacy := range c.MCPServers {
	if strings.TrimSpace(legacy.Command) == "" {
		continue
	}

	enabled := true
	if legacy.Type != "" && legacy.Type != "stdio" {
		enabled = false
	}

	envCopy := make(map[string]string, len(legacy.Env))
	for key, value := range legacy.Env {
		envCopy[key] = value
	}

	c.Tools.MCP.Servers[name] = MCPServerConfig{
		Enabled:  enabled,
		Command:  legacy.Command,
		Args:     append([]string{}, legacy.Args...),
		Env:      envCopy,
		Protocol: legacy.Protocol,
	}
}

if len(c.Tools.MCP.Servers) > 0 {
	c.Tools.MCP.Enabled = true
}
```
applyLegacyMCPServers() sets Tools.MCP.Enabled=true whenever it maps any legacy entries, even if all mapped servers end up Enabled=false (e.g., legacy.Type != "stdio"). That can leave MCP globally “enabled” while no servers can ever start. Consider only setting Tools.MCP.Enabled if at least one mapped server is enabled (or skip inserting disabled legacy servers entirely).
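One way to implement that suggestion is to gate the global flag on a scan of the mapped servers. This is a sketch with a minimal stand-in config type, not the PR's actual `MCPServerConfig`:

```go
package main

import "fmt"

type MCPServerConfig struct {
	Enabled bool
}

// anyServerEnabled reports whether at least one configured server is enabled;
// the review suggests setting Tools.MCP.Enabled from this instead of len > 0.
func anyServerEnabled(servers map[string]MCPServerConfig) bool {
	for _, s := range servers {
		if s.Enabled {
			return true
		}
	}
	return false
}

func main() {
	// A legacy non-stdio server maps to Enabled=false, so MCP stays off globally.
	servers := map[string]MCPServerConfig{"sse-only": {Enabled: false}}
	fmt.Println(anyServerEnabled(servers)) // false
}
```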
```go
func (c *Config) applyLegacyMCPServers() {
	// If canonical MCP config already exists, keep it as source of truth.
	if len(c.Tools.MCP.Servers) > 0 {
		return
	}
	if len(c.MCPServers) == 0 {
		return
	}

	if c.Tools.MCP.Servers == nil {
		c.Tools.MCP.Servers = map[string]MCPServerConfig{}
	}

	for name, legacy := range c.MCPServers {
		if strings.TrimSpace(legacy.Command) == "" {
			continue
		}

		enabled := true
		if legacy.Type != "" && legacy.Type != "stdio" {
			enabled = false
		}

		envCopy := make(map[string]string, len(legacy.Env))
		for key, value := range legacy.Env {
			envCopy[key] = value
		}

		c.Tools.MCP.Servers[name] = MCPServerConfig{
			Enabled:  enabled,
			Command:  legacy.Command,
			Args:     append([]string{}, legacy.Args...),
			Env:      envCopy,
			Protocol: legacy.Protocol,
		}
	}

	if len(c.Tools.MCP.Servers) > 0 {
		c.Tools.MCP.Enabled = true
	}
}
```
After applying legacy mcpServers into tools.mcp.servers, the original Config.MCPServers field remains populated. If the config is later saved/migrated, this can emit both the legacy top-level mcpServers and the canonical tools.mcp.servers, creating ambiguity about the source of truth. Consider clearing c.MCPServers (or omitting it during SaveConfig) after a successful mapping to keep the serialized config canonical.
```go
if _, hasDeadline := parent.Deadline(); hasDeadline {
	return context.WithCancel(parent)
```
withTimeoutIfMissing() disables the per-server InitTimeout/CallTimeout whenever the parent context already has a deadline (it returns WithCancel instead of applying the configured timeout). In bootstrap and in normal tool execution this means requests can run much longer than configured, defeating the timeout knobs and risking hangs. Consider enforcing the configured timeout as an upper bound (use the earlier of the parent deadline and now+timeout) rather than skipping timeouts when a deadline exists.
Suggested change (replacing the two lines above):

```go
// If the parent already has a deadline, enforce the configured timeout as an upper bound
// by using the earlier of the parent deadline and now+timeout.
if parentDeadline, hasDeadline := parent.Deadline(); hasDeadline {
	// If timeout is non-positive, don't attempt to shorten or extend the existing deadline.
	if timeout <= 0 {
		return context.WithCancel(parent)
	}
	timeoutDeadline := time.Now().Add(timeout)
	if timeoutDeadline.Before(parentDeadline) {
		return context.WithDeadline(parent, timeoutDeadline)
	}
	return context.WithDeadline(parent, parentDeadline)
```
```go
func calculateMCPDiscoveryTimeout(serverConfigs map[string]mcp.ServerConfig) time.Duration {
	maxInitTimeout := mcpBootstrapMinTimeout

	for _, serverConfig := range serverConfigs {
		initTimeout := serverConfig.InitTimeout()
		if initTimeout > maxInitTimeout {
			maxInitTimeout = initTimeout
		}
	}

	timeout := maxInitTimeout + mcpBootstrapGraceTimeout
	if timeout < mcpBootstrapMinTimeout {
		return mcpBootstrapMinTimeout
	}
	if timeout > mcpBootstrapMaxTimeout {
		return mcpBootstrapMaxTimeout
	}
	return timeout
```
calculateMCPDiscoveryTimeout() uses the max init timeout across servers, but Manager.DiscoverTools starts/initializes servers sequentially. With multiple slow servers, total discovery time can exceed max+grace and cause later servers to time out consistently. Consider budgeting for the cumulative expected startup cost (or starting servers concurrently) so multi-server setups don’t fail deterministically.
```go
err := cmd.Wait()
if waitCh != nil {
	close(waitCh)
}
if err != nil {
	logger.WarnCF("mcp", "MCP process exited with error",
		map[string]any{
			"server": serverName,
			"error":  err.Error(),
		})
}
```
waitLoop() logs a warning for any non-nil cmd.Wait() error, but Close() always kills the process; this will typically surface as a non-nil Wait error (e.g., "signal: killed") and produce noisy warnings on normal shutdown. Consider suppressing the warning when the client is being closed intentionally (e.g., check c.closed under lock after Wait).
```go
func buildProcessEnv(custom map[string]string) []string {
	base := os.Environ()
	if len(custom) == 0 {
		return base
	}

	keys := make([]string, 0, len(custom))
	for key := range custom {
		keys = append(keys, key)
	}
	sort.Strings(keys)

	env := make([]string, 0, len(base)+len(keys))
	env = append(env, base...)
	for _, key := range keys {
		env = append(env, key+"="+custom[key])
	}
	return env
}
```
buildProcessEnv() appends custom KEY=VALUE pairs to os.Environ() without removing existing keys, which can leave duplicates. On many platforms getenv() returns the first match, so the custom value may not take effect. Consider filtering base env entries whose keys exist in custom (or constructing a deduped map) so custom values reliably override.
```go
func (t *MCPTool) Name() string {
	return t.name
}

func (t *MCPTool) Description() string {
	return t.description
}
```
Why use a getter when you could make the Parameter public?
I kept fields private to keep tool metadata immutable after construction and avoid accidental mutation from callers. Public fields would expose internal state without reducing interface surface here.
```go
type callResponse struct {
	Content           []contentBlock `json:"content"`
	StructuredContent any            `json:"structuredContent,omitempty"`
	IsError           bool           `json:"isError,omitempty"`
}

type contentBlock struct {
	Type string `json:"type"`
	Text string `json:"text,omitempty"`
}
```
Should these be moved to types?
I kept these types local to format.go intentionally because they are parsing-only structs used by one code path (formatCallPayload), and not part of the package contract.
Moving them to types.go would make them look reusable/public API when they are internal implementation details. If we start reusing them across files, I can promote them to shared types.
Description
This PR adds native MCP integration with a lean, protocol-aware implementation and Context7 compatibility, while keeping startup and runtime overhead low.
Key outcomes:
- New module (`pkg/mcp`) for client transport, manager lifecycle, naming, and output formatting.
- Stdio transport supporting framed JSON-RPC (`Content-Length`) and JSONL (matching `context7-mcp` behavior).
- Maps legacy top-level `mcpServers` config into canonical `tools.mcp.servers`.

Type of Change
AI Code Generation
Linked Issue
Technical Context
- `context7-mcp` emits JSON-RPC as JSONL on stdio, so the client must handle both framed and JSONL modes.

Changes
New module
- `pkg/mcp/types.go`
- `pkg/mcp/client.go` — stdio client supporting both protocols (`mcp`, `jsonl`), request/response dispatching, lifecycle handling.
- `pkg/mcp/manager.go`
- `pkg/mcp/naming.go`
- `pkg/mcp/format.go`

Agent integration
- `pkg/agent/mcp_bootstrap.go`
- `pkg/agent/loop.go` — shared MCP manager wired into `AgentLoop.Run()`/`Stop()`.

Tool integration
- `pkg/tools/mcp.go` — adds `RegisterKnownMCPTools` to reuse discovered tools without repeated discovery.

Config changes
- `pkg/config/config.go` — `tools.mcp` config structs; legacy `mcpServers` compatibility mapping.
- `config/config.example.json`

Tests
- `pkg/mcp/client_test.go`
- `pkg/mcp/naming_test.go`
- `pkg/mcp/format_test.go`
- `pkg/mcp/manager_test.go`
- `pkg/agent/mcp_bootstrap_test.go`
- `pkg/tools/mcp_test.go`
- `pkg/config/config_test.go` (MCP defaults + legacy compatibility)

Test Environment and Hardware
- Model: `arcee-ai/trinity-large-preview:free`

Validation Performed
- `go test ./pkg/mcp ./pkg/config ./pkg/agent`
- `go test ./pkg/tools -run 'TestRegisterKnownMCPTools_RegistersAllTools|TestNewMCPTool|TestRegisterMCPTools'`
- `make build`
- `context7-mcp` starts successfully via stdio (`count=2`)

Proof of Work
Startup validation screenshot:
Sample startup log excerpt:
Checklist