Weave

Weave is an agentic platform that takes boring, repetitive engineering work off your team's plate so engineers can keep building what they love.

It can investigate incidents by analyzing logs and metrics, execute runbooks, and handle ad-hoc engineering requests such as:

  • "Can you check if we already have a bug on our board for login failures?"
  • "Can you inspect this repo and tell me whether feature flags gate payments?"

Table of Contents

  • Why Weave
  • What Weave Can Do
  • Core Concepts
  • Quick Start
  • Example Requests
  • Configuration
  • Project Layout
  • Developing Locally
  • Contributing
  • License

Why Weave

Engineering teams lose time on repetitive operational work:

  • Triaging noisy alerts
  • Correlating logs and metrics across systems
  • Running the same investigation workflows repeatedly
  • Handling "quick checks" across tools like issue trackers and code repos

Weave turns these tasks into agent workflows with reusable skills and tools, so investigations are faster, more consistent, and easier to scale.

What Weave Can Do

  • Incident triage using logs, metrics, and structured investigation steps
  • Automated runbook execution for repeatable ops workflows
  • Ad-hoc engineering requests across integrated systems
  • Multi-step orchestration with planning, execution, and synthesis
  • Local tool + MCP server composition in one skill run

Core Concepts

  • skills: reusable YAML-defined workflows (instructions, schemas, model, and optional steps)
  • tools (capabilities): local callable functions used by skills
  • mcp_servers: remote capability providers attached to skills via MCP

A skill can call both local capabilities and remote mcp_servers in the same execution.
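For example, a skill that mixes both might declare a local capability alongside an MCP server. This is a sketch, not a shipped skill: the id and instructions are made up, while `opensearch_fetch_logs` and `github` are the capability and MCP server names used elsewhere in this README.

```yaml
id: repo_incident_check        # hypothetical skill id
name: Repo Incident Check
kind: simple
instructions: |
  Correlate recent error logs with recent repository changes.
capabilities:
  - opensearch_fetch_logs      # local callable function
mcp_servers:
  - github                     # remote capability provider over MCP
```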

Quick Start

Prerequisites

  • Python 3.11+
  • uv

1) Install dependencies

From orchestrator/:

uv sync

2) Configure environment

Create orchestrator/.env:

ORCHESTRATOR_DEBUG=true
OPENAI_API_KEY=<your_openai_api_key>

3) Configure integrations

Update orchestrator/config.yaml for your environment (MCP servers, log sources, auth headers, etc.).

4) Start the API

uv run orchestrator

Dev alternative:

uv run uvicorn src.main:app --reload

Server: http://localhost:9999

5) Verify health

curl http://localhost:9999/health

Expected response:

{"status":"ok","service":"orchestrator"}
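If you want to script the check, a minimal readiness probe might look like the following Python sketch. It assumes the default port 9999 shown above and uses only the standard library; the helper names are illustrative, not part of Weave.

```python
import json
import time
import urllib.request


def is_healthy(body: str) -> bool:
    """Return True if a /health response body reports status 'ok'."""
    try:
        return json.loads(body).get("status") == "ok"
    except json.JSONDecodeError:
        return False


def wait_for_health(url: str = "http://localhost:9999/health",
                    timeout: float = 30.0) -> bool:
    """Poll the health endpoint until it reports ok or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if is_healthy(resp.read().decode()):
                    return True
        except OSError:
            pass  # server not up yet; retry until the deadline
        time.sleep(1)
    return False
```

This is handy in CI or docker-compose wait loops where curl's exit code alone is not enough.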

Example Requests

Weave orchestration runs through POST /tasks/run.

Example: Repo investigation

curl -X POST http://localhost:9999/tasks/run \
  -H "Content-Type: application/json" \
  -d '{
    "skill_id": "git_inference",
    "task": "Check whether this repo has a payments service and whether feature flags control it.",
    "tenant_id": "default",
    "context": {},
    "input": {
      "repo": "https://github.com/open-telemetry/opentelemetry-demo",
      "question": "Check whether this repo has a payments service and whether feature flags control it."
    }
  }'

Example: Incident-oriented request

{
  "skill_id": "log_analysis",
  "task": "Investigate repeated login failures in production in the last 30 minutes.",
  "tenant_id": "default",
  "context": {
    "service": "auth-service",
    "environment": "prod"
  },
  "input": {
    "objective": "Find likely root cause and next action for login failures."
  }
}
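The same request can be sent from Python. The sketch below uses only the standard library and mirrors the incident payload above; the function names are illustrative, not a Weave client API.

```python
import json
import urllib.request

BASE_URL = "http://localhost:9999"  # default orchestrator address


def build_task_request(payload: dict,
                       base_url: str = BASE_URL) -> urllib.request.Request:
    """Build a POST /tasks/run request carrying a JSON task payload."""
    return urllib.request.Request(
        f"{base_url}/tasks/run",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def run_task(payload: dict, base_url: str = BASE_URL) -> dict:
    """Send a task to the orchestrator and return the decoded JSON response."""
    with urllib.request.urlopen(build_task_request(payload, base_url)) as resp:
        return json.loads(resp.read().decode("utf-8"))


incident_task = {
    "skill_id": "log_analysis",
    "task": "Investigate repeated login failures in production in the last 30 minutes.",
    "tenant_id": "default",
    "context": {"service": "auth-service", "environment": "prod"},
    "input": {"objective": "Find likely root cause and next action for login failures."},
}
# result = run_task(incident_task)  # requires a running orchestrator
```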

Notes:

  • skill_id lets you force a preferred skill instead of relying on planner fallback.
  • input must satisfy the selected skill's input_schema.
  • Output includes synthesized summary, step-level execution details, and token cost metadata.
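To illustrate the input_schema requirement, here is a hand-rolled check of required object properties against the minimal skill schema shown in the Configuration section. A real deployment would likely use a full JSON Schema validator; `check_required` is a hypothetical helper, not part of Weave.

```python
def check_required(payload: dict, schema: dict) -> list[str]:
    """Return the names of required schema properties missing from payload."""
    if schema.get("type") != "object":
        return []
    return [key for key in schema.get("required", []) if key not in payload]


# input_schema of the minimal skill from the Configuration section
objective_schema = {
    "type": "object",
    "properties": {"objective": {"type": "string"}},
    "required": ["objective"],
}
# check_required({"objective": "Find root cause"}, objective_schema) -> []
# check_required({}, objective_schema) -> ["objective"]
```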

Configuration

Skills

Add skills in:

  • orchestrator/skills/defaults/ for versioned default skills
  • orchestrator/skills/<tenant_id>/ for tenant-specific overrides

Minimal skill example:

id: my_new_skill
name: My New Skill
description: Summarize a production issue with key evidence.
kind: simple
instructions: |
  You are an SRE assistant. Analyze the input and return a concise, actionable summary.
capabilities:
  - opensearch_fetch_logs
model: gpt-5.1
input_schema:
  type: object
  properties:
    objective:
      type: string
  required: [objective]

MCP servers

Configure under orchestrator/config.yaml in top-level mcp:

mcp:
  github:
    enabled: true
    type: streamable_http
    url: https://api.githubcopilot.com/mcp/x/repos/readonly
    headers:
      Authorization: "Bearer <YOUR_PAT>"
    timeout: 15
    sse_read_timeout: 300
    cache_tools_list: true

Attach to a skill:

mcp_servers:
  - github

Transport requirements:

  • type: stdio requires command (optional args and env)
  • type: sse and type: streamable_http require url
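A stdio entry might look like the following sketch. The server name, command, and arguments are illustrative, not a shipped configuration; the field names follow the streamable_http example above.

```yaml
mcp:
  filesystem:                  # hypothetical stdio server
    enabled: true
    type: stdio
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    env:
      LOG_LEVEL: info
```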

Project Layout

weave/
  docker-compose.yml
  README.md
  orchestrator/
    config.yaml
    pyproject.toml
    skills/
      defaults/
      <tenant_id>/
    src/
      api/
      agent_factories/
      config/
      core/
      domain/
      infrastructure/
      integrations/
    tests/

Developing Locally

Run tests from orchestrator/:

uv run pytest

Contributing

Contributions are welcome. See CONTRIBUTING.md for setup, coding standards, testing expectations, and the PR checklist.

License

This repository is licensed under the terms in LICENSE.
