pctx scalability issues with large number of mcp servers / tools #71
Replies: 3 comments 2 replies
Running the pctx server with RUST_LOG=error suppresses the console logs, but it didn't resolve the lag at higher concurrency.
Wild guess: it feels like the /execute_typescript calls may be serialized?
pctx_executor/src/lib.rs:24-27

```rust
static V8_MUTEX: std::sync::LazyLock<Mutex<()>> = std::sync::LazyLock::new(|| {
    init_v8_platform();
    Mutex::new(())
});
```

Every call to execute() acquires this mutex at line 134 and holds it for the entire duration: type checking plus code execution. This is a single process-wide queue, so all concurrent /execute_typescript requests run strictly sequentially. The comment explains the constraint: V8 isolates share platform-level state (code pages, thread pool, GC metadata) that isn't safe to access concurrently from multiple OS threads.
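The effect of that lock can be reproduced with a minimal, self-contained sketch. This is not pctx's actual executor; `execute` here is a hypothetical stand-in that just sleeps while holding a process-wide mutex, the same pattern as `V8_MUTEX` above. N concurrent calls then take roughly N times the single-call latency:

```rust
use std::sync::{LazyLock, Mutex};
use std::thread;
use std::time::{Duration, Instant};

// Stand-in for pctx's V8_MUTEX: one lock shared by the whole process.
static EXEC_MUTEX: LazyLock<Mutex<()>> = LazyLock::new(|| Mutex::new(()));

// Stand-in for execute(): holds the lock for the entire "execution".
fn execute(work_ms: u64) {
    let _guard = EXEC_MUTEX.lock().unwrap();
    // Simulates type checking + code execution inside the critical section.
    thread::sleep(Duration::from_millis(work_ms));
}

fn main() {
    let start = Instant::now();
    // Five "concurrent" requests, each needing ~100 ms of work.
    let handles: Vec<_> = (0..5).map(|_| thread::spawn(|| execute(100))).collect();
    for h in handles {
        h.join().unwrap();
    }
    // The critical sections cannot overlap, so wall time is ~5 x 100 ms,
    // not ~100 ms: the requests run strictly one after another.
    println!("elapsed_ms={}", start.elapsed().as_millis());
}
```

This matches the symptom reported here: throughput is fine with one or two callers, but latency grows roughly linearly once concurrency exceeds the single queue's capacity.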
Help!
I was very happy with pctx before running into a scalability issue.
I am doing a POC that connects to the pctx server with 38 MCP servers registered, over 500 tools in total.
The pctx server works fine at low concurrency, but once concurrency goes above 5 it lags a lot.
From the pctx logs, it looks like every tool call is preceded by a huge payload (tool descriptions?), and the console just keeps scrolling. I wonder if that's related to the lag, or can you point me in a direction to look into? Anything I can provide?