We have completed the MCP server configuration in openclaw.json; however, when we delegate a task to OpenSpace from the openclaw control web UI, the call through the MCP server always exceeds the time limit. The log always stops at `2026-04-09 14:09:19.124 [INFO ] utils.py:3995 - LiteLLM completion() model= qwen3.5-plus; provider = dashscope` (as shown in the picture below).
When we execute the same task directly in OpenSpace, it succeeds, as shown in the picture below.
We used openclaw to analyze the problem, and it suggested the following:
Error Type: MCP error -32001: Request timed out
Timeout Setting: 300 seconds (5 minutes)
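One workaround we considered is raising the MCP request timeout for this server. We have not confirmed that openclaw.json supports a per-server timeout; assuming a hypothetical `timeout` field in milliseconds (the field name and units are guesses, not taken from any documentation), the entry might look like:

```json
{
  "mcpServers": {
    "openspace": {
      "command": "openspace-mcp",
      "args": ["--stdio"],
      "timeout": 600000
    }
  }
}
```

Even if such a field exists, it would only paper over the problem if the inner LLM call is what actually hangs.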
Observed Phenomenon: Multiple consecutive calls to execute_task all time out, but the OpenSpace process continues running normally
Possible Root Causes:
- LLM Call Timeout (OpenSpace internal default: 120 seconds)
The LLM API call within OpenSpace times out before the MCP layer timeout
- Skill Selection Stage Blocking
The initial skill discovery/selection phase hangs during the first request
- Subprocess Pipe Handling Issues
Communication channel between processes gets blocked or fails to transmit responses
- Missing Real-time Output Feedback
No progress logs or streaming output during execution, making it impossible to diagnose where the process is stuck
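The first root-cause hypothesis (the inner LLM timeout firing before the MCP deadline) can be illustrated with a scaled-down sketch. This is not OpenSpace or openclaw code; it is a minimal asyncio model, assuming the inner agent silently retries its LLM call on timeout, which would explain why the outer MCP call reports -32001 while the subprocess keeps running:

```python
import asyncio

async def llm_call(duration: float) -> str:
    # Stand-in for an LLM API call that takes `duration` seconds.
    await asyncio.sleep(duration)
    return "response"

async def inner_agent(llm_duration: float, inner_timeout: float, retries: int) -> str:
    # Hypothetical inner loop: retry the LLM call on timeout,
    # producing no output that the MCP client could observe.
    for _ in range(retries):
        try:
            return await asyncio.wait_for(llm_call(llm_duration), inner_timeout)
        except asyncio.TimeoutError:
            continue
    raise RuntimeError("all retries timed out")

async def mcp_execute_task(outer_timeout: float, **kwargs) -> str:
    # MCP layer: a single deadline over the whole task.
    try:
        return await asyncio.wait_for(inner_agent(**kwargs), outer_timeout)
    except asyncio.TimeoutError:
        return "MCP error -32001: Request timed out"

# Timeouts scaled down ~100x from the report (120 s inner, 300 s outer):
# each simulated LLM call takes longer than the inner timeout, so the
# inner loop retries until the outer MCP deadline fires first.
result = asyncio.run(mcp_execute_task(
    outer_timeout=3.0, llm_duration=1.5, inner_timeout=1.2, retries=5))
print(result)
```

If this hypothesis is right, raising the outer 300-second limit alone would not help; the inner LLM timeout (or the retry behavior) is what needs adjusting, and progress logging between retries would make the stall visible.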