
Environment-aware skill sharing across heterogeneous workers #71

@yunhaoli24

Description

Problem

OpenSpace currently works well when execute_task / auto-evolution run on the same machine that later reuses the evolved skills.

However, this breaks down in a heterogeneous setup:

I have multiple machines with very different environments:

  • macOS desktops with GUI access
  • Linux servers / headless workers
  • machines with different local tools, runtimes, paths, permissions, and MCP availability

I originally hoped to use one centralized OpenSpace deployment to accumulate and share skills for all machines. But after tracing the current architecture, it looks like:

  • execution happens on the machine running openspace-mcp
  • post-run analysis happens there too
  • auto-evolution also happens there
  • therefore, evolved skills are implicitly adapted to that worker's environment

This means a skill evolved on one worker may be invalid or misleading on another worker, even if the task looks similar.

Why this matters

If I run openspace-mcp centrally on one remote server, then the system mostly learns the server's environment, not the environments of my other machines.

That makes centralized execution/evolution a poor fit for users who actually want:

  • centralized skill sharing
  • but distributed execution/evolution on heterogeneous workers

In other words, a centralized skill registry makes sense, but a single centralized execution/evolution node does not.

Current gap

Right now, skills seem to be treated as broadly reusable unless the user happens to know otherwise.

What appears to be missing is an explicit concept of environment compatibility.

For example, a skill may depend on:

  • OS: macOS / Linux / Windows
  • GUI availability vs headless
  • shell tools: ffmpeg, pandoc, docker, etc.
  • runtime/toolchain: python, node, conda, browser automation, MCP servers
  • filesystem/path assumptions
  • network / credential availability

Without environment metadata and filtering, central skill sharing becomes risky as soon as workers differ.
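To make these dimensions concrete: a worker could probe most of them locally with the standard library. A minimal hypothetical sketch (the function name and key set are illustrative, not an existing OpenSpace API):

```python
import os
import platform
import shutil

def collect_fingerprint(tools=("ffmpeg", "pandoc", "docker")):
    """Probe this worker for the environment dimensions listed above."""
    system = platform.system().lower()  # e.g. "darwin", "linux", "windows"
    # Crude GUI heuristic: macOS desktops have a display; on Linux check $DISPLAY.
    has_gui = system == "darwin" or bool(os.environ.get("DISPLAY"))
    return {
        "os": system,
        "mode": "gui" if has_gui else "headless",
        "tools": sorted(t for t in tools if shutil.which(t)),
        "runtimes": sorted(r for r in ("python3", "node") if shutil.which(r)),
    }

print(collect_fingerprint())
```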

Suggested direction

It would be very helpful if OpenSpace supported environment-aware skill sharing.

Possible design:

  1. Keep execution / analysis / evolution local to each worker
  2. Treat cloud / central storage primarily as registry / CRUD / search / distribution
  3. Add environment metadata to skills, either as explicit frontmatter or as system-managed metadata
  4. Filter search / auto-import / auto-selection by worker compatibility
  5. Optionally maintain a worker fingerprint or capability profile
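Step 4 could be sketched as a simple predicate that matches a skill's declared environment against the worker's profile. Field names below mirror the metadata example in this issue; the function and sample skills are hypothetical:

```python
def is_compatible(skill_env: dict, worker: dict) -> bool:
    """True if the worker satisfies every requirement the skill declares.
    A dimension the skill omits is treated as 'no constraint'."""
    if skill_env.get("os") and worker["os"] not in skill_env["os"]:
        return False
    if skill_env.get("mode") and worker["mode"] not in skill_env["mode"]:
        return False
    missing = set(skill_env.get("tools", [])) - set(worker["tools"])
    return not missing

# Example: a headless Linux worker filtering a shared registry.
worker = {"os": "linux", "mode": "headless", "tools": ["pandoc"]}
skills = [
    {"name": "gui-screencap", "environment": {"os": ["macos"], "mode": ["gui"]}},
    {"name": "md-to-pdf", "environment": {"os": ["macos", "linux"], "tools": ["pandoc"]}},
]
compatible = [s["name"] for s in skills if is_compatible(s["environment"], worker)]
print(compatible)  # → ['md-to-pdf']
```

The same predicate could back search, auto-import, and auto-selection, so incompatible skills never reach a worker in the first place.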

Example metadata ideas

Something like:

```yaml
environment:
  os: [macos]
  mode: [gui]
  runtimes: [python]
  tools: [ffmpeg, pandoc]
  backends: [shell, gui, mcp]
  constraints:
    requires_display: true
    requires_network: false
```

Or a more normalized server-side schema.
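One way such a normalized schema might look, sketched as Python dataclasses (all field and class names are suggestions, not an existing OpenSpace schema):

```python
from dataclasses import dataclass, field

@dataclass
class EnvironmentSpec:
    os: list = field(default_factory=list)        # e.g. ["macos", "linux"]
    mode: list = field(default_factory=list)      # ["gui"] and/or ["headless"]
    runtimes: list = field(default_factory=list)  # ["python", "node", ...]
    tools: list = field(default_factory=list)     # ["ffmpeg", "pandoc", ...]
    backends: list = field(default_factory=list)  # ["shell", "gui", "mcp"]
    requires_display: bool = False
    requires_network: bool = False

@dataclass
class SkillRecord:
    name: str
    environment: EnvironmentSpec = field(default_factory=EnvironmentSpec)

# A skill with no declared environment is treated as unconstrained by default.
skill = SkillRecord("video-thumbnail", EnvironmentSpec(os=["macos"], tools=["ffmpeg"]))
```

A normalized schema like this would let the registry index and filter on individual dimensions server-side, rather than parsing frontmatter at query time.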

Practical outcome

With this, users could run:

  • local OpenSpace workers on multiple machines
  • a central skill registry shared across them
  • environment-compatible discovery/import/selection

That would make cross-machine skill sharing much safer and much more useful.

Question

Is this already on the roadmap in some form?

If not, I think this is an important missing layer for anyone using OpenSpace across heterogeneous machines rather than a single repeated environment.
