## Summary

When using OpenSpace with an OpenRouter model, `execute_task` fails immediately on the first LLM call with:

```
401 Missing Authentication header
```

This happens even though:

- `OPENSPACE_LLM_API_KEY` is resolved successfully
- `OPENSPACE_LLM_API_BASE` is resolved successfully
- the same credentials are visible in OpenSpace logs
- both the MCP path and the standalone CLI path reproduce the same failure

This looks like an auth-propagation / provider-routing compatibility issue in the OpenSpace → LiteLLM → OpenRouter path, not a simple config mistake.
## Environment

- OpenSpace: current `main` as of 2026-04-04
- Python: 3.12.3
- Runtime: local venv
- Model: `openrouter/anthropic/claude-sonnet-4.5`
- API base: `https://openrouter.ai/api/v1`
## Reproduction

### 1. Standalone CLI

Run:

```bash
cd /home/wen/OpenSpace && \
OPENROUTER_API_KEY='***' \
OPENSPACE_MODEL='openrouter/anthropic/claude-sonnet-4.5' \
OPENSPACE_LLM_API_KEY='***' \
OPENSPACE_LLM_API_BASE='https://openrouter.ai/api/v1' \
./.venv/bin/openspace --query 'Print working directory only.' --no-ui --max-iterations 4
```

### 2. MCP path

Using `openspace-mcp` via `mcporter`, calling:

- `search_skills` works
- `execute_task` fails immediately with the same auth error

Example task:

> Print working directory and then list the first 5 entries in the workspace. Return a short summary only.
## Expected behavior

OpenSpace should successfully call the configured OpenRouter model and start executing the task.

## Actual behavior

The task fails before any real iteration begins.

Error:

```
litellm.AuthenticationError: AuthenticationError: OpenrouterException - {"error":{"message":"Missing Authentication header","code":401}}
```
## Important observations

### These parts work

- OpenSpace installation
- virtualenv
- `openspace-mcp`
- MCP tool registration
- local skill discovery (`search_skills`)
- shell/system backend initialization

### These parts fail

- `execute_task`
- standalone CLI query mode
- the first actual LLM completion call
- the embedding path also shows 401 in some runs
## Relevant logs

OpenSpace logs show that credentials are resolved:

```
LLM kwargs resolved (source=OPENSPACE_LLM_* env): {'api_key': 'sk-...', 'api_base': 'https://openrouter.ai/api/v1'}
```

But when the actual LLM call happens, LiteLLM logs:

```
LiteLLM completion() model= anthropic/claude-sonnet-4.5; provider = openrouter
```

And then fails with:

```
OpenrouterException - {"error":{"message":"Missing Authentication header","code":401}}
```

So the model/provider routing appears to switch from `openrouter/anthropic/claude-sonnet-4.5` to `anthropic/claude-sonnet-4.5` with `provider = openrouter`, but the auth header still does not make it through.
## Why this seems like a bug

I traced the code path:

### Resolver

`openspace/host_detection/resolver.py` correctly maps:

- `OPENSPACE_LLM_API_KEY` -> `kwargs["api_key"]`
- `OPENSPACE_LLM_API_BASE` -> `kwargs["api_base"]`
- `OPENSPACE_LLM_EXTRA_HEADERS` -> `kwargs["extra_headers"]`

### LLM client

`openspace/llm/client.py` builds:

```python
completion_kwargs = {
    "model": kwargs.get("model", self.model),
    **self.litellm_kwargs,
}
```

and then calls:

```python
litellm.acompletion(**completion_kwargs)
```

So from OpenSpace's point of view, `api_key` / `api_base` should already be present.
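To sanity-check that nothing is lost on the OpenSpace side, the merge above can be re-run in isolation with the values from the logs (placeholder key; illustrative only):

```python
# Re-running the client's kwargs merge in isolation, with the values seen
# in the logs above ("sk-..." is a placeholder, not a real key).
litellm_kwargs = {
    "api_key": "sk-...",
    "api_base": "https://openrouter.ai/api/v1",
}

completion_kwargs = {
    "model": "openrouter/anthropic/claude-sonnet-4.5",
    **litellm_kwargs,
}

# Both auth fields survive the merge, so they are present at the
# litellm.acompletion(**completion_kwargs) boundary.
assert completion_kwargs["api_key"] == "sk-..."
assert completion_kwargs["api_base"] == "https://openrouter.ai/api/v1"
```

This points at the drop happening inside LiteLLM's routing, not in the kwargs assembly.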
## Extra validation already attempted

I tried two minimal local patches to rule out shallow config issues:

### Patch A

Mirror the resolved credentials into provider-native env vars:

- `OPENROUTER_API_KEY`
- `OPENROUTER_API_BASE`

### Patch B

Force-inject

```python
extra_headers["Authorization"] = f"Bearer {api_key}"
```

into the OpenRouter request path before `litellm.acompletion(...)`.
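For reference, a sketch of what both patches amounted to (an illustrative reconstruction, not the actual diffs; the helper names are mine):

```python
import os

def apply_patch_a(api_key: str, api_base: str) -> None:
    """Patch A: mirror resolved credentials into provider-native env vars."""
    os.environ["OPENROUTER_API_KEY"] = api_key
    os.environ["OPENROUTER_API_BASE"] = api_base

def apply_patch_b(extra_headers: dict, api_key: str) -> dict:
    """Patch B: force-inject the Authorization header before acompletion()."""
    patched = dict(extra_headers)
    patched["Authorization"] = f"Bearer {api_key}"
    return patched
```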
### Result

Neither patch changed the outcome. The error remained:

```
Missing Authentication header
```

So this does not look like a simple missing env var or a missing top-level `extra_headers` issue.
## Hypothesis

This is likely one of:

- an OpenSpace ↔ LiteLLM compatibility problem on the OpenRouter path
- a LiteLLM provider-routing issue where auth is dropped when the model is rewritten from `openrouter/...` to `provider=openrouter`
- a deeper OpenRouter integration bug in the current dependency stack
## Additional note

There is also a separate, non-blocking config issue in my setup: `OPENSPACE_HOST_SKILL_DIRS` was pointing to an older workspace path. But this is not the cause of the auth failure, because:

- local skill search still works
- the standalone CLI without MCP still reproduces the same 401
## Suggested maintainers' checks

You may want to inspect:

- OpenRouter auth propagation through `litellm.acompletion(...)`
- whether `api_key` / `api_base` are still present after model/provider normalization
- whether `extra_headers` are ignored or overwritten on the OpenRouter route
- current LiteLLM version compatibility with `model="openrouter/anthropic/claude-sonnet-4.5"` and `api_base="https://openrouter.ai/api/v1"`