Problem
OpenSpace currently works well when `execute_task` / auto-evolution run on the same machine that will later reuse the evolved skills.
However, this breaks down in a heterogeneous setup:
- I have multiple machines with very different environments
- some are macOS desktops with GUI access
- some are Linux servers / headless workers
- some have different local tools, runtimes, paths, permissions, and MCP availability
I originally hoped to use one centralized OpenSpace deployment to accumulate and share skills for all machines. But after tracing the current architecture, it looks like:
- execution happens on the machine running `openspace-mcp`
- post-run analysis happens there too
- auto-evolution also happens there
- therefore, evolved skills are implicitly adapted to that worker's environment
This means a skill evolved on one worker may be invalid or misleading on another worker, even if the task looks similar.
Why this matters
If I run `openspace-mcp` centrally on one remote server, then the system mostly learns the server's environment, not the environments of my other machines.
That makes centralized execution/evolution a poor fit for users who actually want:
- centralized skill sharing
- but distributed execution/evolution on heterogeneous workers
In other words, a centralized skill registry makes sense, but a single centralized execution/evolution node does not.
Current gap
Right now, skills seem to be treated as broadly reusable unless the user happens to know otherwise.
What appears to be missing is an explicit concept of environment compatibility.
For example, a skill may depend on:
- OS: macOS / Linux / Windows
- GUI availability vs headless
- shell tools: `ffmpeg`, `pandoc`, `docker`, etc.
- runtime/toolchain: `python`, `node`, `conda`, browser automation, MCP servers
- filesystem/path assumptions
- network / credential availability
Without environment metadata and filtering, central skill sharing becomes risky as soon as workers differ.
Suggested direction
It would be very helpful if OpenSpace supported environment-aware skill sharing.
Possible design:
- Keep execution / analysis / evolution local to each worker
- Treat cloud / central storage primarily as registry / CRUD / search / distribution
- Add environment metadata to skills, either explicit frontmatter or system-managed metadata
- Filter search / auto-import / auto-selection by worker compatibility
- Optionally maintain a worker fingerprint or capability profile
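To make the "worker fingerprint" idea concrete, here is a minimal sketch of what a capability-profile collector could look like. This is purely illustrative: none of these field names exist in OpenSpace today, and the detection heuristics (e.g. probing `PATH` with `shutil.which`, using `DISPLAY` as a GUI signal) are assumptions, not a proposed final design.

```python
import os
import platform
import shutil

def collect_worker_profile() -> dict:
    """Fingerprint the local environment so a central registry can
    filter skills by compatibility. Field names are hypothetical."""
    system = platform.system().lower()  # e.g. "darwin", "linux", "windows"
    # Crude GUI heuristic: macOS desktops usually have a GUI even without
    # DISPLAY; on Linux, DISPLAY/WAYLAND_DISPLAY is a reasonable signal.
    has_gui = system == "darwin" or bool(
        os.environ.get("DISPLAY") or os.environ.get("WAYLAND_DISPLAY")
    )
    return {
        "os": system,
        "mode": "gui" if has_gui else "headless",
        # Only advertise tools that are actually on PATH.
        "tools": [t for t in ("ffmpeg", "pandoc", "docker") if shutil.which(t)],
        "runtimes": [r for r in ("python3", "node", "conda") if shutil.which(r)],
    }

profile = collect_worker_profile()
print(profile)
```

Each worker would publish a profile like this to the registry (or evaluate it locally at import time), so skill discovery can be scoped to what the machine can actually run.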
Example metadata ideas
Something like:
```yaml
environment:
  os: [macos]
  mode: [gui]
  runtimes: [python]
  tools: [ffmpeg, pandoc]
  backends: [shell, gui, mcp]
  constraints:
    requires_display: true
    requires_network: false
```
Or a more normalized server-side schema.
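For illustration, the filtering step could be as simple as matching that metadata against a worker profile. The schema below mirrors the example frontmatter above and is hypothetical, not an existing OpenSpace API:

```python
def is_compatible(skill_env: dict, worker: dict) -> bool:
    """Return True if a worker satisfies a skill's (hypothetical)
    environment metadata. Empty/missing fields mean 'no constraint'."""
    if skill_env.get("os") and worker["os"] not in skill_env["os"]:
        return False
    if skill_env.get("mode") and worker["mode"] not in skill_env["mode"]:
        return False
    # Every tool/runtime the skill declares must exist on the worker.
    for key in ("tools", "runtimes"):
        if not set(skill_env.get(key, [])) <= set(worker.get(key, [])):
            return False
    constraints = skill_env.get("constraints", {})
    if constraints.get("requires_display") and worker["mode"] != "gui":
        return False
    return True

skill = {
    "os": ["macos"],
    "mode": ["gui"],
    "tools": ["ffmpeg", "pandoc"],
    "constraints": {"requires_display": True},
}
mac_worker = {"os": "macos", "mode": "gui",
              "tools": ["ffmpeg", "pandoc"], "runtimes": ["python"]}
headless_worker = {"os": "linux", "mode": "headless",
                   "tools": ["ffmpeg"], "runtimes": ["python"]}

print(is_compatible(skill, mac_worker))       # True
print(is_compatible(skill, headless_worker))  # False
```

The registry (or the local worker, at import/selection time) would apply this predicate before a skill ever becomes a candidate for auto-selection.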
Practical outcome
With this, users could run:
- local OpenSpace workers on multiple machines
- a central skill registry shared across them
- environment-compatible discovery/import/selection
That would make cross-machine skill sharing much safer and much more useful.
Question
Is this already on the roadmap in some form?
If not, I think this is an important missing layer for anyone using OpenSpace across heterogeneous machines rather than a single repeated environment.