diff --git a/README.md b/README.md
index 5bd123c..4940a0c 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,28 @@
# Naftiko Framework
+Welcome to Naftiko Framework, the first Open Source project for **Spec-Driven AI Integration**. It reinvents API integration for the AI era with governed, versatile capabilities that tame the API sprawl created by massive SaaS and microservices growth.
+
+Each capability is a coarse-grained slice of a domain: it consumes existing HTTP-based APIs and converts data formats such as Protocol Buffers, XML, YAML, CSV, and Avro into JSON, enabling the Context Engineering and API reusability critical to AI integration.
+
+Capabilities are declared in **YAML** files that configure the Naftiko Engine, shipped as a **Docker** container. Clients then consume each capability through the exposed **MCP** or **API** servers.
+
+While the framework itself is written in Java and can be extended to support new protocols, developers only need YAML, JSONPath, and Mustache templates to take full advantage of it.
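+
+As an illustrative sketch only (the endpoint, namespaces, and values here are invented; see the Specification page below for the full schema), a capability is a single YAML file that maps an exposed operation onto a consumed API:
+
+```yaml
+naftiko: "0.4"
+info:
+  label: Weather
+  description: Exposes a simple weather forecast capability
+capability:
+  consumes:
+    - type: http
+      namespace: weather-api
+      baseUri: https://api.weather.com/v1/
+      resources:
+        - path: "forecast/{{location}}"
+          operations:
+            - method: GET
+              name: get-forecast
+  exposes:
+    - type: api
+      port: 3000
+      namespace: weather
+      resources:
+        - path: "forecast/{{city}}"
+          operations:
+            - method: GET
+              name: get-forecast
+              call: weather-api.get-forecast   # delegate to the consumed API
+              with:
+                location: "{{city}}"           # Mustache template binding
+```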
+
+- :rowboat: [Installation](https://github.com/naftiko/framework/wiki/Installation)
+- :sailboat: [Tutorial](https://github.com/naftiko/framework/wiki/Tutorial)
+- :ship: [Use cases](https://github.com/naftiko/framework/wiki/Use-Cases)
+- :anchor: [Specification](https://github.com/naftiko/framework/wiki/Specification)
+- :mega: [Releases](https://github.com/naftiko/framework/wiki/Releases)
+- :telescope: [Roadmap](https://github.com/naftiko/framework/wiki/Roadmap)
+- :nut_and_bolt: [Contribute](https://github.com/naftiko/framework/wiki/Contribute)
+- :ocean: [FAQ](https://github.com/naftiko/framework/wiki/FAQ)
+
+Please join the community of users and contributors in the [GitHub Discussions forum](https://github.com/orgs/naftiko/discussions)!
+
Welcome to Naftiko Framework, the first Open Source project for **Spec-Driven AI Integration**. It reinvents API integration for the AI era with a governed and versatile platform based on capabilities that streamlines API sprawl created by the massive SaaS growth and microservices.
Each capability is a coarse piece of domain that consumes existing HTTP-based APIs, converts into JSON data formats like Protocol Buffer, XML, YAML, CSV and Avro, enabling better Context Engineering and API reusability critical to AI integration.
@@ -19,6 +42,7 @@ While the framework itself is developed in Java and can be extended to support n
- :nut_and_bolt: [Contribute](https://github.com/naftiko/framework/wiki/Contribute)
- :ocean: [FAQ](https://github.com/naftiko/framework/wiki/FAQ)
+
Please join the community of users and contributors in [this GitHub Discussion forum!](https://github.com/orgs/naftiko/discussions).
diff --git a/src/main/resources/specs/agent-skills-support-proposal.md b/src/main/resources/blueprints/agent-skills-support-proposal.md
similarity index 94%
rename from src/main/resources/specs/agent-skills-support-proposal.md
rename to src/main/resources/blueprints/agent-skills-support-proposal.md
index a057e30..82873f2 100644
--- a/src/main/resources/specs/agent-skills-support-proposal.md
+++ b/src/main/resources/blueprints/agent-skills-support-proposal.md
@@ -114,19 +114,42 @@ capability:
namespace: "weather-api"
baseUri: "https://api.weather.com/v1/"
resources:
- - path: "forecast/{{location}}"
+ - name: "forecast"
+ path: "forecast/{{location}}"
+ inputParameters:
+ - name: "location"
+ in: "path"
operations:
- method: "GET"
name: "get-forecast"
+ outputParameters:
+ - name: "forecast"
+ type: "object"
+ value: "$.forecast"
- type: "http"
namespace: "geocoding-api"
baseUri: "https://geocode.example.com/"
resources:
- - path: "resolve/{{query}}"
+ - name: "resolve"
+ path: "resolve/{{query}}"
+ inputParameters:
+ - name: "query"
+ in: "path"
operations:
- method: "GET"
name: "resolve-location"
+ outputParameters:
+ - name: "coordinates"
+ type: "object"
+ value: "$.coordinates"
+ properties:
+ lat:
+ type: "number"
+ value: "$.lat"
+ lon:
+ type: "number"
+ value: "$.lon"
exposes:
# API adapter — owns tool execution via REST
@@ -139,9 +162,18 @@ capability:
operations:
- method: "GET"
name: "get-forecast"
+ inputParameters:
+ - name: "city"
+ in: "path"
+ type: "string"
+ description: "City name (e.g. 'London', 'New York')"
call: "weather-api.get-forecast"
with:
location: "{{city}}"
+ outputParameters:
+ - name: "forecast"
+ type: "object"
+ mapping: "$.forecast"
# MCP adapter — owns tool execution via MCP protocol
- type: "mcp"
@@ -152,6 +184,10 @@ capability:
tools:
- name: "resolve-and-forecast"
description: "Resolve a place name to coordinates, then fetch forecast"
+ inputParameters:
+ - name: "place"
+ type: "string"
+ description: "Place name to resolve"
steps:
- type: "call"
name: "geo"
@@ -163,10 +199,16 @@ capability:
call: "weather-api.get-forecast"
with:
location: "{{geo.coordinates.lat}},{{geo.coordinates.lon}}"
- inputParameters:
- - name: "place"
- type: "string"
- description: "Place name to resolve"
+ mappings:
+ - targetName: "location"
+ value: "$.geo.coordinates"
+ - targetName: "forecast"
+ value: "$.weather.forecast"
+ outputParameters:
+ - name: "location"
+ type: "object"
+ - name: "forecast"
+ type: "object"
# Skill adapter — metadata/catalog layer (no execution)
- type: "skill"
@@ -189,16 +231,16 @@ capability:
- name: "get-forecast"
description: "Get weather forecast for a city"
from:
- namespace: "weather-rest" # Sibling API adapter
- action: "get-forecast" # Operation name
+ sourceNamespace: "weather-rest" # Sibling API adapter
+ action: "get-forecast" # Operation name
- name: "resolve-and-forecast"
description: "Resolve a place name to coordinates, then fetch forecast"
from:
- namespace: "weather-mcp" # Sibling MCP adapter
- action: "resolve-and-forecast" # Tool name
+ sourceNamespace: "weather-mcp" # Sibling MCP adapter
+ action: "resolve-and-forecast" # Tool name
- name: "interpret-weather"
description: "Guide for reading and interpreting weather data"
- instruction: "interpret-weather.md" # Local file in location dir
+ instruction: "interpret-weather.md" # Local file in location dir
```
**How agents use this:**
@@ -259,18 +301,18 @@ skills:
- name: "list-orders"
description: "List all customer orders"
from:
- namespace: "public-api"
+ sourceNamespace: "public-api"
action: "list-orders"
- name: "create-order"
description: "Create a new customer order"
from:
- namespace: "public-api"
+ sourceNamespace: "public-api"
action: "create-order"
# Derived from sibling MCP adapter
- name: "summarize-order"
description: "Generate an AI summary of an order"
from:
- namespace: "assistant-mcp"
+ sourceNamespace: "assistant-mcp"
action: "summarize-order"
# Local file instruction
- name: "order-policies"
@@ -283,7 +325,7 @@ skills:
A tool with `from` references a specific operation or tool in a sibling adapter:
**Tool declaration rules:**
-1. `from.namespace` must resolve to a sibling `exposes[]` entry of type `api` or `mcp`
+1. `from.sourceNamespace` must resolve to a sibling `exposes[]` entry of type `api` or `mcp`
2. `from.action` must match an operation name (api) or tool name (mcp) in the resolved adapter
3. Adapter type is inferred from the resolved target
4. Each derived tool includes an `invocationRef` in the response so agents can invoke the source adapter directly
@@ -356,12 +398,12 @@ skills:
- name: "run-analysis"
description: "Run a full data analysis"
from:
- namespace: "analytics-rest"
+ sourceNamespace: "analytics-rest"
action: "run-analysis"
- name: "quick-stats"
description: "Run quick statistical analysis"
from:
- namespace: "analytics-mcp"
+ sourceNamespace: "analytics-mcp"
action: "quick-stats"
- name: "interpret-data"
description: "Guide for interpreting analysis results"
@@ -414,7 +456,7 @@ skills:
- name: "get-forecast"
description: "Get weather forecast"
from:
- namespace: "weather-rest"
+ sourceNamespace: "weather-rest"
action: "get-forecast"
- name: "interpret-weather"
description: "Guide for interpreting weather data"
@@ -494,10 +536,7 @@ The SKILL.md file at the location can contain the same frontmatter properties as
"description": "A skill definition. Declares tools derived from sibling api or mcp adapters or defined as local file instructions. Can also stand alone as purely descriptive (no tools). Supports full Agent Skills Spec frontmatter metadata. Skills describe tools — they do not execute them.",
"properties": {
"name": {
- "type": "string",
- "pattern": "^[a-z0-9][a-z0-9-]*[a-z0-9]$|^[a-z0-9]$",
- "minLength": 1,
- "maxLength": 64,
+ "$ref": "#/$defs/IdentifierKebab",
"description": "Skill identifier (kebab-case)"
},
"description": {
@@ -521,6 +560,7 @@ The SKILL.md file at the location can contain the same frontmatter properties as
},
"allowed-tools": {
"type": "string",
+ "maxLength": 1024,
"description": "Space-delimited list of pre-approved tool names (Agent Skills Spec)"
},
"argument-hint": {
@@ -563,10 +603,7 @@ The SKILL.md file at the location can contain the same frontmatter properties as
"description": "A tool declared within a skill. Derived from a sibling api or mcp adapter via 'from', or defined as a local file instruction.",
"properties": {
"name": {
- "type": "string",
- "pattern": "^[a-z0-9][a-z0-9-]*[a-z0-9]$|^[a-z0-9]$",
- "minLength": 1,
- "maxLength": 64,
+ "$ref": "#/$defs/IdentifierKebab",
"description": "Tool identifier (kebab-case)"
},
"description": {
@@ -577,7 +614,7 @@ The SKILL.md file at the location can contain the same frontmatter properties as
"type": "object",
"description": "Derive this tool from a sibling api or mcp adapter.",
"properties": {
- "namespace": {
+ "sourceNamespace": {
"type": "string",
"description": "Sibling exposes[].namespace (must be type api or mcp)"
},
@@ -586,7 +623,7 @@ The SKILL.md file at the location can contain the same frontmatter properties as
"description": "Operation name (api) or tool name (mcp) in the source adapter"
}
},
- "required": ["namespace", "action"],
+ "required": ["sourceNamespace", "action"],
"additionalProperties": false
},
"instruction": {
@@ -950,6 +987,9 @@ capability:
resources:
- path: "forecast/{{location}}"
name: "forecast"
+ inputParameters:
+ - name: "location"
+ in: "path"
operations:
- method: "GET"
name: "get-forecast"
@@ -964,13 +1004,23 @@ capability:
resources:
- path: "search/{{query}}"
name: "search"
+ inputParameters:
+ - name: "query"
+ in: "path"
operations:
- method: "GET"
name: "resolve-location"
outputParameters:
- name: "coordinates"
type: "object"
- value: "$.result"
+ value: "$.coordinates"
+ properties:
+ lat:
+ type: "number"
+ value: "$.lat"
+ lon:
+ type: "number"
+ value: "$.lon"
exposes:
# API adapter — executes the forecast tool via REST
@@ -983,14 +1033,18 @@ capability:
operations:
- method: "GET"
name: "get-forecast"
- call: "weather-api.get-forecast"
- with:
- location: "{{city}}"
inputParameters:
- name: "city"
in: "path"
type: "string"
description: "City name (e.g. 'London', 'New York')"
+ call: "weather-api.get-forecast"
+ with:
+ location: "{{city}}"
+ outputParameters:
+ - name: "forecast"
+ type: "object"
+ mapping: "$.forecast"
# MCP adapter — executes multi-step tools via MCP protocol
- type: "mcp"
@@ -1049,12 +1103,12 @@ capability:
- name: "get-forecast"
description: "Get weather forecast for a city"
from:
- namespace: "weather-rest"
+ sourceNamespace: "weather-rest"
action: "get-forecast"
- name: "resolve-and-forecast"
description: "Resolve a place name to coordinates, then fetch forecast"
from:
- namespace: "weather-mcp"
+ sourceNamespace: "weather-mcp"
action: "resolve-and-forecast"
- name: "interpret-weather"
description: "Guide for reading and interpreting weather data"
@@ -1243,17 +1297,17 @@ capability:
- name: "list-orders"
description: "List all customer orders"
from:
- namespace: "public-api"
+ sourceNamespace: "public-api"
action: "list-orders"
- name: "get-order"
description: "Get details of a specific order"
from:
- namespace: "public-api"
+ sourceNamespace: "public-api"
action: "get-order"
- name: "create-order"
description: "Create a new customer order"
from:
- namespace: "public-api"
+ sourceNamespace: "public-api"
action: "create-order"
- name: "order-admin"
@@ -1263,7 +1317,7 @@ capability:
- name: "cancel-order"
description: "Cancel an existing order"
from:
- namespace: "public-api"
+ sourceNamespace: "public-api"
action: "cancel-order"
```
@@ -1380,12 +1434,12 @@ capability:
- name: "run-analysis"
description: "Run a full data analysis via REST"
from:
- namespace: "analytics-rest"
+ sourceNamespace: "analytics-rest"
action: "run-analysis"
- name: "quick-stats"
description: "Run quick statistical analysis via MCP"
from:
- namespace: "analytics-mcp"
+ sourceNamespace: "analytics-mcp"
action: "quick-stats"
- name: "analysis-methodology"
description: "Guide for choosing the right analysis approach"
@@ -1537,8 +1591,8 @@ spec:
1. `tools` is optional — a skill can be purely descriptive (metadata + `location` only, no tools)
2. Each tool must specify exactly one source: `from` (derived) or `instruction` (local file)
-3. For derived tools (`from`), `namespace` must resolve to exactly one sibling `exposes[].namespace` of type `api` or `mcp`
-4. Referencing a `skill`-type adapter from `from.namespace` is invalid (no recursive derivation)
+3. For derived tools (`from`), `sourceNamespace` must resolve to exactly one sibling `exposes[].namespace` of type `api` or `mcp`
+4. Referencing a `skill`-type adapter from `from.sourceNamespace` is invalid (no recursive derivation)
5. For derived tools, `action` must exist as an operation name (api) or tool name (mcp) in the resolved adapter
6. For instruction tools, the skill must have a `location` configured — the instruction path is resolved relative to it
7. Tool `name` values must be unique within a skill
diff --git a/src/main/resources/specs/gap-analysis-report.md b/src/main/resources/blueprints/gap-analysis-report.md
similarity index 100%
rename from src/main/resources/specs/gap-analysis-report.md
rename to src/main/resources/blueprints/gap-analysis-report.md
diff --git a/src/main/resources/blueprints/kubernetes-backstage-governance-proposal.md b/src/main/resources/blueprints/kubernetes-backstage-governance-proposal.md
new file mode 100644
index 0000000..9e1665b
--- /dev/null
+++ b/src/main/resources/blueprints/kubernetes-backstage-governance-proposal.md
@@ -0,0 +1,924 @@
+# Naftiko Fabric Governance & Operations Proposal
+## Kubernetes CRDs, Spec Metadata Taxonomy, Governance Rules, and Owned Toolchain
+
+**Status**: Proposal
+**Date**: March 9, 2026
+**Key Concept**: A coherent, end-to-end proposal for operating Naftiko capabilities as Kubernetes Custom Resources, governing them at the spec layer via a shared rules engine, and surfacing fabric-wide visibility in Backstage — all driven by small, purposeful additions to the Naftiko specification itself.
+
+---
+
+## Table of Contents
+
+1. [Executive Summary](#executive-summary)
+2. [Spec Metadata Taxonomy](#spec-metadata-taxonomy)
+3. [Kubernetes CRD and Operator](#kubernetes-crd-and-operator)
+4. [SRE Experience](#sre-experience)
+5. [Governance Rules](#governance-rules)
+6. [Backstage Integration](#backstage-integration)
+7. [Owned Toolchain](#owned-toolchain)
+8. [Shared Rules Package](#shared-rules-package)
+9. [Complete Signal Chain](#complete-signal-chain)
+10. [Summary Tables](#summary-tables)
+
+---
+
+## Executive Summary
+
+### What This Proposes
+
+Four interconnected additions that compose into a complete fabric governance story:
+
+1. **Spec Metadata Taxonomy** — Small, purposeful additions to the Naftiko spec (`labels` on `Info`, `tags` on `Consumes`/`Exposes`/`ExposedOperation`/`ConsumedHttpOperation`, `lifecycle` on `Exposes`) that serve as the single source of truth for every downstream governance tool.
+
+2. **Kubernetes Native Operations** — A `NaftikoCapability` CRD and operator that turns every spec into a running workload with generated Deployments, Services, and ExternalSecrets — with zero imperative steps from SREs. Resilience patterns (circuit breakers, retries, bulkheads, rate limiters) are provided in-process via Resilience4j — no sidecar or service mesh required.
+
+3. **Governance Rules** — A split ruleset (blocking core rules + advisory governance rules) evaluated consistently at IDE authoring time, CI merge gates, and Backstage scorecard checks — from a single shared TypeScript package.
+
+4. **Owned Toolchain** — A VS Code extension, Docker Desktop extension, and Backstage plugin suite that replace third-party dependencies (Spectral, manual catalog entries) with first-class Naftiko experiences, all powered by the same shared rules package.
+
+### Design Principle
+
+> **The Naftiko spec is the source of truth, not Kubernetes annotations or Backstage metadata.** Every label, catalog entity, risk signal, and cost allocation entry is derived from the spec — ensuring they stay in sync without extra maintenance burden on teams.
+
+---
+
+## Spec Metadata Taxonomy
+
+### The Core Distinction: `tags` vs `labels`
+
+Two metadata types serve fundamentally different tool layers and must be kept distinct from the start:
+
+| Metadata type | Format | Consumer | Purpose |
+|---|---|---|---|
+| **tags** | `string[]` | Backstage catalog search, agent discovery, risk scorecards | Human-readable semantic classifiers |
+| **labels** | `map` | K8s operator → Kubernetes labels, Kubecost aggregation, Backstage entity filtering | Machine-readable key-value selectors |
+
+This intentionally mirrors Kubernetes' own `labels`/`annotations` split — a pattern SREs already know.
+
+---
+
+### 2.1 `Info` Object — Add `labels`
+
+`Info` already has `tags` for capability-level discovery. The new `labels` map is what the Kubernetes operator propagates verbatim onto every generated resource (Deployment, Service, ExternalSecret). It is the single source of truth for cost allocation and label selectors.
+
+**Proposed addition to `Info` fixed fields:**
+
+| Field Name | Type | Description |
+|---|---|---|
+| **labels** | `map` | Key-value pairs propagated to all Kubernetes resources generated by the Naftiko operator. Used for cost attribution (Kubecost), Backstage entity filtering, and network policy scoping. Keys MUST follow the pattern `^[a-zA-Z0-9./\-]+$`. |
+
+**Example:**
+
+```yaml
+info:
+ label: Notion Page Creator
+ description: Creates and manages Notion pages with rich content formatting
+ tags:
+ - notion
+ - automation
+ labels:
+ naftiko.io/domain: productivity
+ naftiko.io/cost-center: platform-team
+ naftiko.io/env: production
+ naftiko.io/tier: standard
+ stakeholders:
+ - role: owner
+ fullName: Jane Doe
+ email: jane.doe@example.com
+```
+
+---
+
+### 2.2 `Consumes` Object — Add `tags`
+
+Each consumed namespace is an upstream dependency with its own risk and cost profile. Tags here are where data classification and billing signals live. Backstage risk scorecards scan these directly.
+
+**Proposed addition to `Consumes` fixed fields:**
+
+| Field Name | Type | Description |
+|---|---|---|
+| **tags** | `string[]` | Tags classifying this consumed API's provenance, billing model, data sensitivity, and reliability. Used by governance rules and Backstage scorecards. |
+
+**Recommended vocabulary:**
+
+| Category | Values |
+|---|---|
+| Provenance | `internal`, `third-party`, `partner` |
+| Billing | `free`, `free-tier`, `paid-tier`, `metered`, `quota-limited` |
+| Data sensitivity | `pii`, `financial`, `health`, `public` |
+| Reliability | `sla-99`, `sla-999`, `best-effort` |
+
+**Example:**
+
+```yaml
+consumes:
+ - type: http
+ namespace: notion
+ baseUri: https://api.notion.com
+ description: Notion public API for page management
+ tags:
+ - third-party
+ - paid-tier
+ - pii
+ resources: [...]
+```
+
+---
+
+### 2.3 `Exposes` Object — Add `tags` and `lifecycle`
+
+The exposed adapter is a published API contract. Two new fields signal network topology intent and maturity stage to both the operator and Backstage.
+
+**Proposed additions to API Expose fixed fields:**
+
+| Field Name | Type | Description |
+|---|---|---|
+| **tags** | `string[]` | Tags classifying this adapter's visibility and access characteristics. Drive network policy generation (`public` → Ingress, `internal` → ClusterIP only) and Backstage catalog grouping. |
+| **lifecycle** | `string` | Lifecycle stage of this exposed adapter. One of: `experimental`, `production`, `deprecated`. Maps directly to Backstage's entity lifecycle field. |
+
+**Recommended `tags` vocabulary:**
+
+| Category | Values |
+|---|---|
+| Visibility | `public`, `internal`, `partner` |
+| Access | `authenticated`, `write-enabled` |
+| State | (use `lifecycle` field instead) |
+
+**Rule:** An `exposes` adapter tagged `public` in the `production` namespace MUST have `authentication` declared. A governance rule enforces this.
+
+**Example:**
+
+```yaml
+exposes:
+ - type: api
+ port: 3000
+ namespace: notion-writer
+ lifecycle: production
+ tags:
+ - internal
+ - write-enabled
+ authentication:
+ type: bearer
+ token: "{{api_token}}"
+ resources: [...]
+```
+
+**Operator behavior based on `tags` and `lifecycle`:**
+
+| Tag / lifecycle | Generated resources |
+|---|---|
+| `public` | ClusterIP Service + Ingress |
+| `internal` | ClusterIP Service only |
+| `lifecycle: deprecated` | ClusterIP Service + `Deprecated: true` response header injected |
+
+---
+
+### 2.4 `ExposedOperation` Object — Add `tags`
+
+Operation-level tags are the finest-grained risk and agent-safety signal. They determine whether an agent orchestrator requires human confirmation before invoking a tool derived from this operation.
+
+**Proposed addition to `ExposedOperation` fixed fields:**
+
+| Field Name | Type | Description |
+|---|---|---|
+| **tags** | `string[]` | Tags classifying this operation's effect, access requirements, and agent invocation policy. |
+
+**Recommended vocabulary:**
+
+| Category | Values |
+|---|---|
+| Effect | `read`, `write`, `mutating`, `idempotent`, `destructive`, `delete` |
+| Access | `admin-only`, `authenticated`, `public` |
+| Agent behavior | `requires-confirmation`, `no-auto-invoke`, `safe-to-retry` |
+
+**Example:**
+
+```yaml
+operations:
+ - method: DELETE
+ label: Archive Page
+ tags:
+ - mutating
+ - destructive
+ - requires-confirmation
+ call: notion.archive-page
+```
+
+The `requires-confirmation` and `no-auto-invoke` tags are particularly important for the MCP expose path — they signal to agent orchestrators that a tool derived from this operation should not be invoked without human approval.
+
+---
+
+### 2.5 `ConsumedHttpOperation` Object — Add `tags`
+
+This is where cost visibility and retry safety live at the most granular level. Each chargeable upstream call can be tagged, feeding directly into Backstage Cost Insights analysis.
+
+**Proposed addition to `ConsumedHttpOperation` fixed fields:**
+
+| Field Name | Type | Description |
+|---|---|---|
+| **tags** | `string[]` | Tags classifying this operation's billing impact, quota contribution, and retry safety. |
+
+**Recommended vocabulary:**
+
+| Category | Values |
+|---|---|
+| Billing | `chargeable`, `metered`, `free` |
+| Quota | `quota-limited`, `rate-limited` |
+| Safety | `idempotent`, `non-idempotent`, `retry-safe` |
+
+**Example:**
+
+```yaml
+operations:
+ - name: create-page
+ method: POST
+ tags:
+ - chargeable
+ - quota-limited
+ - non-idempotent
+```
+
+---
+
+### 2.6 What Is Deliberately Left Untagged
+
+| Object | Verdict | Rationale |
+|---|---|---|
+| `ConsumedHttpResource` | No tags | Resources are structural path groupings, not independent governance units; tag at the operation level |
+| `ExposedResource` | No tags | Same — the operation is the meaningful actor |
+| `ExternalRef` | No tags | Already carries semantic meaning via its `type` (file vs runtime) and `keys` map |
+| `Person` | No tags | Governance is expressed through `role`; tags would duplicate existing semantics |
+
+---
+
+### 2.7 Metadata Taxonomy Summary
+
+```
+Naftiko Spec Object tags labels lifecycle
+──────────────────────────────────────────────────────────────
+Info ✓ (existing) ✓ (new)
+Consumes ✓ (new)
+Exposes (API adapter) ✓ (new) ✓ (new)
+ExposedOperation ✓ (new)
+ConsumedHttpOperation ✓ (new)
+```
+
+Five objects. Two new field types (`labels` as a map, `lifecycle` as an enum). `tags` extended to four objects beyond `Info`. None of these require changes to the execution engine — they are pure metadata consumed by external tooling.
+
+---
+
+## Kubernetes CRD and Operator
+
+### 3.1 CRD Design: `NaftikoCapability`
+
+The natural mapping is a `NaftikoCapability` Custom Resource whose `spec` is the Naftiko YAML document itself. The operator is the only place that knows how to materialise a spec into running infrastructure.
+
+```yaml
+apiVersion: naftiko.io/v1alpha1
+kind: NaftikoCapability
+metadata:
+ name: notion-page-creator
+ namespace: integrations
+spec:
+ naftiko: "0.4"
+ info:
+ label: Notion Page Creator
+ description: Creates and manages Notion pages
+ tags: [notion, automation]
+ labels:
+ naftiko.io/cost-center: platform-team
+ naftiko.io/domain: productivity
+ stakeholders:
+ - role: owner
+ fullName: Jane Doe
+ email: jane.doe@example.com
+ capability:
+ exposes:
+ - type: api
+ port: 3000
+ namespace: notion-writer
+ lifecycle: production
+ tags: [internal]
+ authentication:
+ type: bearer
+ token: "{{api_token}}"
+ resources: [...]
+ consumes:
+ - type: http
+ namespace: notion
+ baseUri: https://api.notion.com
+ description: Notion public API
+ tags: [third-party, paid-tier, pii]
+ resources: [...]
+status:
+ phase: Running
+ endpoint: http://notion-page-creator.integrations.svc.cluster.local:3000
+ conditions:
+ - type: Ready
+ status: "True"
+ lastTransitionTime: "2026-03-08T10:00:00Z"
+ - type: SecretsSynced
+ status: "True"
+```
+
+---
+
+### 3.2 Operator Reconciliation Loop
+
+The operator watches `NaftikoCapability` CRs and for each one reconciles the following resources:
+
+| Resource | Description |
+|---|---|
+| **ConfigMap** | Serializes `spec` back to YAML and mounts it as `/capability.yaml` |
+| **Deployment** | Runs `naftiko/engine:latest` mounting the ConfigMap; resource requests/limits derived from `info.labels["naftiko.io/tier"]` |
+| **Service** | ClusterIP on the declared `exposes[].port`, named matching the CR |
+| **Ingress** | Generated only when `exposes[].tags` contains `public` |
+| **ExternalSecret** | One per `externalRefs` entry, resolved via External Secrets Operator targeting Vault/AWS SM/K8s Secrets |
+| **Resilience4j config** | In-process circuit breaker, retry, bulkhead, and rate limiter configuration injected as environment variables into the engine container; parameters derived from `CapabilityClass` |
+
+**Key detail — ExternalRef bridge:** The operator detects `externalRefs` entries in the spec and creates `ExternalSecret` resources automatically. SREs never hard-code secrets in specs — they reference a secret store and the operator wires the injection. This makes `externalRefs[].type: runtime` the production-safe pattern with no extra operational steps.
+
+**Label propagation:** Every resource generated by the operator is stamped with all entries from `info.labels`. This means Kubecost's cost aggregation by `naftiko.io/cost-center` works with zero additional configuration.
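+
+As a sketch (the selector label `naftiko.io/capability` and the mount layout are assumptions, not a committed operator contract), the Deployment generated for the `notion-page-creator` example above would carry the propagated labels like this:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: notion-page-creator
+  namespace: integrations
+  labels:                                    # stamped verbatim from info.labels
+    naftiko.io/cost-center: platform-team
+    naftiko.io/domain: productivity
+spec:
+  selector:
+    matchLabels:
+      naftiko.io/capability: notion-page-creator   # assumed operator-managed label
+  template:
+    metadata:
+      labels:
+        naftiko.io/capability: notion-page-creator
+        naftiko.io/cost-center: platform-team      # repeated on pods for Kubecost
+        naftiko.io/domain: productivity
+    spec:
+      containers:
+        - name: engine
+          image: naftiko/engine:latest
+          volumeMounts:
+            - name: spec                     # ConfigMap serialized from the CR spec
+              mountPath: /capability.yaml
+              subPath: capability.yaml
+      volumes:
+        - name: spec
+          configMap:
+            name: notion-page-creator
+```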
+
+---
+
+### 3.3 Status Conditions
+
+The operator writes standard Kubernetes conditions to `.status.conditions`:
+
+| Condition | Meaning |
+|---|---|
+| `Ready` | All generated resources are healthy |
+| `SecretsSynced` | All `ExternalSecret` resources have resolved |
+| `Degraded` | An upstream `externalRefs` secret store is unreachable |
+| `CircuitOpen` | At least one consumed namespace circuit breaker is open; written when the engine reports a tripped breaker via `/metrics` |
+
+---
+
+## SRE Experience
+
+### 4.1 GitOps-First
+
+`NaftikoCapability` resources live in a `capabilities/` directory in the platform GitOps repo. ArgoCD or Flux syncs them; the operator does the rest. Promotion across environments means moving a YAML file between namespace overlays (Kustomize). SREs never run `docker run` or `kubectl apply` manually.
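+
+A hypothetical overlay (directory and file names are illustrative, not prescribed) that promotes a capability to production by patching its `naftiko.io/env` label:
+
+```yaml
+# capabilities/overlays/production/kustomization.yaml (illustrative layout)
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+namespace: integrations
+resources:
+  - ../../base/notion-page-creator.yaml      # the NaftikoCapability CR
+patches:
+  - target:
+      kind: NaftikoCapability
+      name: notion-page-creator
+    patch: |-
+      - op: replace
+        path: /spec/info/labels/naftiko.io~1env   # "~1" escapes "/" (JSON Pointer)
+        value: production
+```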
+
+### 4.2 Observability Out-of-the-Box
+
+Every resource generated by the operator carries consistent labels from `info.labels`. The engine exposes:
+
+- `GET /health` — liveness/readiness (drives Kubernetes probes)
+- `GET /metrics` — Prometheus format: request count, latency histograms, upstream error rates per consumed namespace
+
+Structured JSON logs include `capability`, `namespace`, `operation`, `statusCode`, `durationMs` fields — feeding Prometheus and Grafana dashboards scoped per capability without any SRE configuration.
+
+### 4.3 Scaling and Resource Governance
+
+A `CapabilityClass` cluster-scoped resource (a companion CRD) defines resource tiers:
+
+```yaml
+apiVersion: naftiko.io/v1alpha1
+kind: CapabilityClass
+metadata:
+ name: standard
+spec:
+ resources:
+ requests: { memory: "256Mi", cpu: "250m" }
+ limits: { memory: "512Mi", cpu: "500m" }
+ hpa:
+ minReplicas: 1
+ maxReplicas: 4
+ targetRequestsPerSecond: 100
+```
+
+The `info.labels["naftiko.io/tier"]` value selects the class. SREs control blast radius at the class level without touching individual capability specs.
+
+### 4.4 Failure Isolation
+
+The Naftiko engine embeds [Resilience4j](https://resilience4j.readme.io/) and wraps every outbound HTTP call to a consumed namespace with a configurable resilience pipeline. Because the engine is a plain Java process identical across all capabilities, resilience runs in-process with no sidecar or service mesh required — working identically in Docker Desktop local development, CI containers, and Kubernetes.
+
+The operator derives resilience parameters from the `CapabilityClass` selected by `info.labels["naftiko.io/tier"]` and injects them into the engine container as typed environment variables:
+
+```yaml
+# CapabilityClass extended with resilience defaults
+apiVersion: naftiko.io/v1alpha1
+kind: CapabilityClass
+metadata:
+ name: standard
+spec:
+ resources:
+ requests: { memory: "256Mi", cpu: "250m" }
+ limits: { memory: "512Mi", cpu: "500m" }
+ hpa:
+ minReplicas: 1
+ maxReplicas: 4
+ targetRequestsPerSecond: 100
+ resilience:
+ circuitBreaker:
+ slidingWindowSize: 10 # calls in rolling window
+ failureRateThreshold: 50 # % failures to open circuit
+ waitDurationInOpenState: 30s
+ permittedCallsInHalfOpenState: 3
+ retry:
+ maxAttempts: 3
+ waitDuration: 500ms
+ retryOnResultPredicate: "statusCode >= 500"
+ bulkhead:
+ maxConcurrentCalls: 20 # per consumed namespace
+ maxWaitDuration: 100ms
+ rateLimit:
+ limitForPeriod: 100 # calls per refresh period
+ limitRefreshPeriod: 1s
+ timeoutDuration: 0ms # fail-fast if limit exceeded
+```
+
+The engine maps each `consumes[].namespace` to an independent Resilience4j instance, so a degraded upstream API (e.g. `notion`) trips only its own circuit breaker and does not affect the `github` namespace on the same capability.
+
+**Per-namespace override via `consumes` tags:** A `consumes` entry tagged `best-effort` receives a more aggressive circuit breaker (lower `waitDurationInOpenState`) than one tagged `sla-999`, allowing SREs to tune resilience policy at the spec layer without changing `CapabilityClass` defaults.
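+
+For instance (values illustrative), a capability consuming one best-effort and one SLA-backed API would carry:
+
+```yaml
+consumes:
+  - type: http
+    namespace: notion
+    baseUri: https://api.notion.com
+    tags: [third-party, best-effort]   # more aggressive breaker: shorter open state
+  - type: http
+    namespace: github
+    baseUri: https://api.github.com
+    tags: [third-party, sla-999]       # patient breaker: longer waitDurationInOpenState
+```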
+
+Resilience4j emits events that the engine exposes on `GET /metrics` as Prometheus counters and gauges:
+
+| Metric | Description |
+|---|---|
+| `naftiko_circuit_breaker_state{namespace}` | 0=closed, 1=open, 2=half-open |
+| `naftiko_circuit_breaker_failure_rate{namespace}` | Rolling failure rate (%) |
+| `naftiko_retry_calls_total{namespace,result}` | Retry attempts by outcome |
+| `naftiko_bulkhead_available_slots{namespace}` | Remaining concurrent call capacity |
+| `naftiko_rate_limiter_available_permissions{namespace}` | Remaining rate limit tokens |
+
+Grafana dashboards scoped to each capability show circuit breaker state transitions as annotations, making upstream degradation immediately visible without log diving.
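+
+A sketch of a Prometheus alerting rule over these metrics (thresholds and the resource name are illustrative):
+
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: PrometheusRule
+metadata:
+  name: naftiko-circuit-breakers
+spec:
+  groups:
+    - name: naftiko.resilience
+      rules:
+        - alert: NaftikoCircuitOpen
+          # naftiko_circuit_breaker_state: 0=closed, 1=open, 2=half-open
+          expr: naftiko_circuit_breaker_state == 1
+          for: 5m
+          labels:
+            severity: warning
+          annotations:
+            summary: "Circuit breaker open for consumed namespace {{ $labels.namespace }}"
+```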
+
+---
+
+## Governance Rules
+
+### 5.1 Architecture
+
+Two rulesets with a clear severity contract:
+
+| Ruleset | Severity | Effect |
+|---|---|---|
+| `naftiko-core-rules` | `error` | Blocks CI merge; mirrored by Kyverno at admission (cluster-level last-mile enforcement) |
+| `naftiko-governance-rules` | `warn` / `info` | Advisory only; feeds Backstage Tech Insights scorecard checks |
+
+Both are implemented in the shared `@naftiko/rules` TypeScript package (see [Section 8](#shared-rules-package)). The Spectral YAML format below is the **reference documentation** for rule intent — useful for teams writing Kyverno policies — but the execution format is the TypeScript package.
+
+---
+
+### 5.2 Core Rules (Error Severity — Blocking)
+
+#### `naftiko-exposes-require-authentication`
+Every exposed API adapter must declare `authentication`; unauthenticated exposed APIs are forbidden.
+
+- **Given**: `$.capability.exposes[*]`
+- **Check**: `authentication` field is present and truthy
+- **Kyverno mirror**: yes — enforced at admission in `production` namespace
+
+#### `naftiko-consumes-require-https`
+Consumed APIs must use HTTPS. Plain `http://` URIs are forbidden.
+
+- **Given**: `$.capability.consumes[*].baseUri`
+- **Check**: value matches `^https://`
+- **Kyverno mirror**: yes
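+
+For instance, the first entry below violates the rule while the second passes (the `legacy-crm` namespace is hypothetical):
+
+```yaml
+consumes:
+  - namespace: legacy-crm
+    baseUri: http://crm.internal.example.com    # blocked: plain HTTP
+  - namespace: github
+    baseUri: https://api.github.com             # allowed
+```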
+
+#### `naftiko-destructive-operation-no-public-expose`
+An `ExposedOperation` tagged `destructive` must not appear on an `exposes` adapter tagged `public`. Public destructive operations require explicit security review.
+
+- **Given**: operations on `public`-tagged exposes adapters
+- **Check**: operation does not have `destructive` tag
+- **Kyverno mirror**: yes
+
+#### `naftiko-version-required`
+The `naftiko` version field must be present at the root.
+
+#### `naftiko-info-description-minimum-length`
+`info.description` must be at least 30 characters. Short descriptions degrade agent discovery quality.
+
+#### `naftiko-consumes-description-required`
+Each `consumes` entry must have a `description`. Required for agent discovery and dependency tracking.
+
+#### `naftiko-exposes-lifecycle-valid`
+When `lifecycle` is present on an `exposes` adapter, it must be one of: `experimental`, `production`, `deprecated`.
+
+---
+
+### 5.3 Governance Rules (Warn/Info Severity — Advisory)
+
+#### Cost
+| Rule ID | Check | Severity |
+|---|---|---|
+| `naftiko-info-labels-required` | `info.labels` must be present | `warn` |
+| `naftiko-info-labels-cost-center` | `info.labels` must contain `naftiko.io/cost-center` | `warn` |
+| `naftiko-consumes-billing-tag` | Each `consumes` entry must have a billing tag (`free`, `free-tier`, `paid-tier`, `metered`, `quota-limited`) | `warn` |
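+
+A spec that satisfies all three cost rules might declare (values illustrative):
+
+```yaml
+info:
+  labels:
+    naftiko.io/cost-center: growth-eng
+    naftiko.io/domain: integrations
+consumes:
+  - namespace: notion
+    tags: [third-party, paid-tier]   # billing tag satisfies naftiko-consumes-billing-tag
+```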
+
+#### Risk
+| Rule ID | Check | Severity |
+|---|---|---|
+| `naftiko-info-stakeholders-required` | `info.stakeholders` must be present | `warn` |
+| `naftiko-consumes-provenance-tag` | Each `consumes` entry must have a provenance tag (`internal`, `third-party`, `partner`) | `warn` |
+| `naftiko-pii-consumes-requires-auth` | If any `consumes` entry is tagged `pii`, every `exposes` adapter must declare `authentication` | `warn` |
+
+#### Efficiency & Maturity
+| Rule ID | Check | Severity |
+|---|---|---|
+| `naftiko-exposes-lifecycle-required` | All `exposes` adapters should declare `lifecycle` | `warn` |
+| `naftiko-exposes-tags-required` | All `exposes` adapters should have tags | `warn` |
+| `naftiko-consumes-tags-required` | All `consumes` entries should have tags | `warn` |
+| `naftiko-info-tags-required` | `info.tags` should be present | `warn` |
+| `naftiko-deprecated-expose-has-sunset` | A `deprecated` `exposes` adapter should carry a `sunset:YYYY-MM-DD` tag | `info` |
+
+#### Agent Safety
+| Rule ID | Check | Severity |
+|---|---|---|
+| `naftiko-mutating-method-has-effect-tag` | `POST`/`PUT`/`PATCH`/`DELETE` exposed operations should declare an effect tag (`write`, `mutating`, `destructive`, `idempotent`) | `warn` |
+
+---
+
+### 5.4 Kyverno Mirror Policies
+
+Core rules have exact equivalents as Kyverno `ClusterPolicy` resources. The two layers enforce the same contracts at different times: governance rules at authoring/CI, Kyverno at cluster admission. They share the same rule vocabulary from the spec metadata taxonomy — the source of truth.
+
+Example for `naftiko-exposes-require-authentication`:
+
+```yaml
+apiVersion: kyverno.io/v1
+kind: ClusterPolicy
+metadata:
+ name: naftiko-exposes-require-authentication
+spec:
+ validationFailureAction: Enforce
+ rules:
+ - name: check-exposes-authentication
+ match:
+ any:
+ - resources:
+ kinds: [Capability]
+ namespaces: [production]
+ validate:
+ message: "All exposes adapters in production must declare authentication."
+ foreach:
+ - list: "request.object.spec.capability.exposes"
+ deny:
+ conditions:
+ any:
+ - key: "{{ element.authentication }}"
+ operator: Equals
+ value: ""
+```
+
+---
+
+## Backstage Integration
+
+### 6.1 Fabric-Level Governance Pillars
+
+Backstage governs the three pillars derived from spec metadata:
+
+| Pillar | Source signals | Tooling |
+|---|---|---|
+| **Cost** | `info.labels["naftiko.io/cost-center"]`, `consumes[].tags` billing category, Kubecost aggregation by labels | Cost Insights plugin |
+| **Risk** | `info.stakeholders`, `consumes[].tags` data sensitivity, `exposes[].authentication` presence | Tech Insights scorecards |
+| **Efficiency** | `exposes[].lifecycle`, `info.description` length, `ExposedOperation.tags` agent safety | Tech Insights scorecards |
+
+---
+
+### 6.2 Auto-Population from Kubernetes CRDs
+
+The `NaftikoCapabilityEntityProvider` (backend plugin) watches the K8s API for `naftiko.io/v1alpha1/Capability` CRDs across all namespaces and synthesizes Backstage `Component` entities of `spec.type: capability`:
+
+```yaml
+# Auto-generated Backstage entity
+apiVersion: backstage.io/v1alpha1
+kind: Component
+metadata:
+ name: notion-page-creator
+ annotations:
+ naftiko.io/capability-ref: integrations/notion-page-creator
+ backstage.io/kubernetes-label-selector: naftiko.io/capability=notion-page-creator
+ naftiko.io/endpoint: http://notion-page-creator.integrations.svc:3000
+ tags: [notion, automation]
+spec:
+ type: capability
+ lifecycle: production # derived from exposes[0].lifecycle
+ owner: platform-team # derived from info.labels["naftiko.io/cost-center"]
+ consumesApis:
+ - notion-api # derived from consumes[].namespace
+ providesApis:
+ - notion-writer # derived from exposes[].namespace
+ dependsOn:
+ - resource:default/notion-api-secret
+```
+
+The `consumesApis` and `providesApis` relations let Backstage render the **fabric topology graph** automatically — showing which capabilities depend on which upstream APIs and which downstream clients consume them.
+
+Two discovery modes are supported:
+- **Kubernetes mode** (production): watches CRDs across all namespaces
+- **Git mode** (simpler setups / monorepos): scans configured repositories for `*.capability.yaml` files
+
+Both modes produce identical entity shapes.
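+
+A hypothetical `app-config.yaml` fragment selecting a discovery mode could look like this (key names are illustrative, not final):
+
+```yaml
+naftiko:
+  discovery:
+    mode: kubernetes            # or "git"
+    kubernetes:
+      namespaces: ["*"]         # watch all namespaces
+    git:
+      repositories:
+        - https://github.com/acme/capabilities
+      filePattern: "*.capability.yaml"
+```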
+
+---
+
+### 6.3 Tech Insights Scorecard Checks
+
+The `NaftikoGovernanceFactRetriever` runs the `@naftiko/rules` governance rules against the spec from Git and maps findings to scorecard check facts:
+
+| Pillar | Scorecard check | Source rule |
+|---|---|---|
+| Cost | Cost center labeled | `naftiko-info-labels-cost-center` |
+| Cost | Dependency billing declared | `naftiko-consumes-billing-tag` |
+| Risk | Owner declared | `naftiko-info-stakeholders-required` |
+| Risk | PII surfaces protected | `naftiko-pii-consumes-requires-auth` |
+| Risk | Dependencies tagged with provenance | `naftiko-consumes-provenance-tag` |
+| Efficiency | Lifecycle stage declared | `naftiko-exposes-lifecycle-required` |
+| Efficiency | Agent safety tags on mutating ops | `naftiko-mutating-method-has-effect-tag` |
+| Efficiency | Description meets minimum length | `naftiko-info-description-minimum-length` |
+
+Clicking a failing check in the Backstage UI opens the spec file in GitHub at the responsible field.
+
+---
+
+### 6.4 Fabric Explorer
+
+A top-level `NaftikoFabricExplorerPage` renders the entire fabric as an interactive dependency graph. Nodes are sized by consumer count. Edges represent consume/expose relationships from entity relations.
+
+Filter controls:
+- By `lifecycle` (hide `deprecated`, focus `production`)
+- By `tags` (e.g. show only capabilities with `pii` in their consumes)
+- By `info.labels` (filter to a cost-center or domain)
+
+**Incident use case**: During an upstream API outage, selecting an API node immediately shows every capability in the fabric that depends on it and the downstream clients they serve.
+
+---
+
+### 6.5 Duplicate Detection
+
+The entity provider flags pairs of capabilities whose `consumes[].baseUri` entries overlap significantly and whose `exposes` adapters serve similar paths. These pairs are surfaced as a "potential consolidation" insight — the efficiency signal that prevents API sprawl.
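+
+One way to make "overlap significantly" concrete (a sketch, not necessarily the provider's actual algorithm) is Jaccard similarity over the sets of consumed base URIs:
+
+```typescript
+// Jaccard similarity of two capabilities' consumed base URIs:
+// |intersection| / |union|, in [0, 1].
+function baseUriOverlap(a: string[], b: string[]): number {
+  const setA = new Set(a);
+  const setB = new Set(b);
+  const intersection = [...setA].filter((uri) => setB.has(uri)).length;
+  const union = new Set([...setA, ...setB]).size;
+  return union === 0 ? 0 : intersection / union;
+}
+
+const overlap = baseUriOverlap(
+  ["https://api.notion.com", "https://api.github.com"],
+  ["https://api.notion.com", "https://api.slack.com"]
+);
+// One shared URI out of three distinct: overlap = 1/3.
+// A hypothetical 0.25 threshold would flag this pair for review.
+const flagged = overlap >= 0.25;
+```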
+
+---
+
+## Owned Toolchain
+
+The three extensions form a coherent lifecycle story, with no mandatory third-party extension dependencies:
+
+```
+VS Code Extension Docker Desktop Extension Backstage Plugins
+───────────────── ──────────────────────── ─────────────────
+Author → Validate Run → Debug → Inspect Discover → Govern → Track
+ (spec layer) (runtime layer) (fabric layer)
+```
+
+---
+
+### 7.1 VS Code Extension (`naftiko.vscode-naftiko`)
+
+The authoring companion. All governance feedback happens here before any CI or cluster involvement.
+
+**Schema-native YAML authoring**
+
+The extension registers as the language server for `*.capability.yaml` files and any YAML with a `naftiko:` root key:
+- Full IntelliSense based on `capability-schema.json` — field completion, enum values, pattern hints
+- Hover documentation: shows the spec description, rules, and spec section link for every field
+- Friendly error messages (not raw JSON Schema errors)
+
+**Inline governance diagnostics (no external tools required)**
+
+Governance and core rules from `@naftiko/rules` evaluate on every document change (debounced):
+- `error` severity → red underlines (blocking: missing auth, plain HTTP, etc.)
+- `warn` severity → yellow underlines (governance gaps: missing labels, no lifecycle, etc.)
+- `info` severity → blue underlines (suggestions: sunset date on deprecated adapters, etc.)
+
+Each diagnostic includes the rule ID (e.g. `naftiko-exposes-require-authentication`) for traceability to CI and Backstage scorecard checks, plus a **Quick Fix** code action where the fix is deterministic.
+
+**Capability topology panel**
+
+A webview panel (`Naftiko: Show Topology`) renders the consume/expose graph of the current spec as a live-updating diagram:
+- `consumes` entries as source nodes (color-coded by provenance tag: `third-party` = amber, `internal` = blue)
+- `exposes` adapters as sink nodes (with `lifecycle` badge)
+- Edges labeled with `namespace` routing
+- Clicking a node navigates to the relevant YAML section
+
+**ExternalRef resolution preview**
+
+When `externalRefs` uses `type: file`, declared variable values are shown inline as ghost text — the same values the runtime will have. This closes the feedback loop for debugging Mustache template expressions without running anything.
+
+**Run / Stop capability**
+
+A status bar button and command palette entry (`Naftiko: Run Capability`):
+1. Checks Docker is running; prompts to open Docker Desktop if not
+2. Detects `externalRefs` and prompts for unresolved runtime variables
+3. Starts the `naftiko/engine` container with correct port mapping from `exposes[].port`
+4. Streams logs into a dedicated Output Channel
+5. Shows a clickable endpoint URL in the status bar once ready
+
+**Snippet library**
+
+Built-in snippets for common patterns — `naftiko-simple-op`, `naftiko-orchestrated-op`, `naftiko-forward`, `naftiko-pii-consumes`, `naftiko-mcp-expose` — with placeholders guiding the author through required fields in order.
+
+---
+
+### 7.2 Docker Desktop Extension (`naftiko/docker-extension`)
+
+The runtime companion. Makes local capability operation feel like a first-class experience.
+
+**Fabric dashboard**
+
+All running Naftiko capability containers displayed as a tile grid. Each tile shows:
+- Capability `info.label`
+- `lifecycle` badge (`experimental` / `production` / `deprecated`)
+- Health indicator (polled from `/health`)
+- Exposed ports as clickable links
+- Uptime and restart count
+
+**Spec viewer**
+
+Clicking a tile renders the full capability topology graph (shared web component with the VS Code extension). Teams can inspect what a running container consumes and exposes without reading the YAML.
+
+**Log streaming and filtering**
+
+Per-capability structured log viewer with filter controls for:
+- Log level
+- Consumed namespace (filters to logs for a specific upstream)
+- Operation name
+
+Powered by the structured JSON logs emitted by the Naftiko engine (`capability`, `namespace`, `operation`, `statusCode`, `durationMs` fields).
+
+**ExternalRef secret injection wizard**
+
+For specs with `externalRefs[].type: runtime` entries, a form pre-populated with the declared `keys` lets developers enter values injected as environment variables into the container. Values are stored in Docker Desktop's secret store — not in the spec.
+
+**Multi-capability compose generation**
+
+An "Add to Compose" action generates a `docker-compose.yml` fragment for the selected capability, including volume mount, port mapping, and environment variable stubs.
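+
+The generated fragment might look like this (service name, paths, and variable stubs illustrative):
+
+```yaml
+services:
+  notion-page-creator:
+    image: naftiko/engine:latest
+    command: ["/app/capability.yaml"]
+    ports:
+      - "8081:8081"             # from exposes[].port
+    volumes:
+      - ./notion-page-creator.capability.yaml:/app/capability.yaml:ro
+    environment:
+      NOTION_TOKEN: ""          # stub; fill from your secret store
+```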
+
+---
+
+### 7.3 Backstage Plugins
+
+Two packages following Backstage's frontend/backend convention.
+
+#### `@naftiko/backstage-plugin-backend`
+
+**`NaftikoCapabilityEntityProvider`**
+Synthesizes Backstage `Component` entities from Kubernetes CRDs (Kubernetes mode) or Git files (Git mode). Both modes produce identical entity shapes — the rest of the system is discovery-mode agnostic.
+
+**`NaftikoGovernanceFactRetriever`**
+Runs `@naftiko/rules` against the spec from Git for each `capability` entity. Produces boolean/numeric facts keyed by rule ID, stored in the Tech Insights facts store. Runs on schedule or webhook trigger.
+
+**Pre-built scorecard check definitions**
+The backend plugin registers `CheckDefinition` entries for all governance rules against the three pillars. Platform teams can extend or override checks without touching plugin code.
+
+#### `@naftiko/backstage-plugin`
+
+**`NaftikoCapabilityCard`**
+Entity page card rendering:
+- Topology graph (shared web component)
+- `lifecycle` badge with deprecation banner
+- Stakeholders list with role badges
+- `tags` as chips (color-coded: `third-party` = amber, `pii` = red)
+
+**`NaftikoScorecardCard`**
+Three-pillar scorecard (Cost / Risk / Efficiency) from Tech Insights facts, displayed as score gauges with expandable failing check lists.
+
+**`NaftikoFabricExplorerPage`**
+Top-level fabric dependency graph page (see [Section 6.4](#64-fabric-explorer)).
+
+---
+
+## Shared Rules Package
+
+### 8.1 Package Structure
+
+All governance rule logic lives in a single, framework-agnostic TypeScript package:
+
+```
+@naftiko/rules
+ src/
+ core-rules.ts ← error-severity rules (blocking)
+ governance-rules.ts ← warn/info-severity rules (advisory)
+ evaluate.ts ← runs rules against a parsed spec, returns Finding[]
+ types.ts ← Finding, Severity, Rule types
+ dist/
+ index.js ← CommonJS build for Node.js consumers
+ index.esm.js ← ESM build for browser consumers (VS Code webview, Docker Desktop)
+```
+
+### 8.2 Finding Type
+
+```typescript
+interface Finding {
+ ruleId: string; // e.g. "naftiko-exposes-require-authentication"
+ severity: "error" | "warn" | "info";
+ message: string;
+ path: string; // JSONPath to the violating node, e.g. "$.capability.exposes[0]"
+ // Optional: character range for IDE inline rendering
+ range?: { start: number; end: number };
+}
+```
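+
+To show how a rule produces such findings, here is a simplified, self-contained sketch of the HTTPS core rule against this type (the real `core-rules.ts` internals may differ):
+
+```typescript
+// Finding type from above, minus the optional range
+interface Finding {
+  ruleId: string;
+  severity: "error" | "warn" | "info";
+  message: string;
+  path: string;
+}
+
+// Simplified stand-in for naftiko-consumes-require-https
+function consumesRequireHttps(spec: any): Finding[] {
+  const consumes: any[] = spec?.capability?.consumes ?? [];
+  return consumes.flatMap((entry, i) =>
+    /^https:\/\//.test(entry?.baseUri ?? "")
+      ? []
+      : [{
+          ruleId: "naftiko-consumes-require-https",
+          severity: "error" as const,
+          message: `consumes[${i}].baseUri must use https://`,
+          path: `$.capability.consumes[${i}].baseUri`,
+        }]
+  );
+}
+
+// A plain-HTTP baseUri yields exactly one error-severity finding
+const findings = consumesRequireHttps({
+  capability: { consumes: [{ baseUri: "http://api.example.com" }] },
+});
+```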
+
+### 8.3 Consumers
+
+| Consumer | Usage |
+|---|---|
+| VS Code extension | `evaluate()` on document change → inline diagnostics with Quick Fix |
+| `naftiko-cli lint` | `evaluate()` in CI → exit 1 on error-severity findings |
+| Backstage backend plugin | `evaluate()` in `FactRetriever` → boolean/numeric facts per rule |
+| Docker Desktop extension | `evaluate()` on spec load → governance badge on tile |
+
+The Spectral YAML ruleset format (used in earlier drafts of this proposal) is replaced entirely by this package. It served as useful reference documentation but the execution format is now TypeScript — versioned, tested, and controlled by the Naftiko project.
+
+---
+
+## Complete Signal Chain
+
+```
+Developer opens *.capability.yaml in VS Code
+ │
+ ▼
+@naftiko/rules evaluates on change
+ → inline error/warn/info diagnostics (no external tools required)
+ → topology panel updates live
+ → ExternalRef ghost text preview
+ │
+ ▼
+Developer presses "Naftiko: Run Capability"
+ → VS Code extension starts naftiko/engine container
+ → Docker Desktop extension picks it up in the fabric dashboard
+ → Spec viewer renders topology graph
+ → Logs stream to VS Code output channel
+ │
+ ▼
+Developer pushes → CI: naftiko-cli lint
+ → @naftiko/rules evaluates, exit 1 on error findings
+ → warn findings annotate the PR (no merge block)
+ │
+ ▼
+ArgoCD/Flux syncs CRD to cluster
+ │
+ ▼
+Kyverno validates at admission (mirrors core rules)
+ → last-mile enforcement for anything that bypassed CI
+ │
+ ▼
+Naftiko operator reconciles:
+ ├── ConfigMap (spec YAML)
+ ├── Deployment (naftiko/engine, resources from CapabilityClass)
+ ├── Service (ClusterIP; +Ingress if tags contains "public")
+ ├── ExternalSecret (ESO → Vault / AWS SM / K8s Secrets)
+ ├── Resilience4j config injected as env vars (circuit breaker, retry, bulkhead, rate limiter)
+ └── Status conditions (Ready, SecretsSynced, Degraded)
+ │
+ ▼
+Prometheus scrapes /metrics → Grafana dashboards per capability
+ (includes naftiko_circuit_breaker_state, naftiko_retry_calls_total, etc.)
+Kubecost aggregates by info.labels (cost-center, domain, tier)
+ │
+ ▼
+Backstage NaftikoCapabilityEntityProvider syncs CRDs → Catalog
+ ├── consumesApis / providesApis relations → topology graph
+ ├── NaftikoGovernanceFactRetriever runs @naftiko/rules against spec from Git
+ │ → facts stored in Tech Insights store
+ ├── NaftikoScorecardCard: Cost / Risk / Efficiency pillar scores
+ ├── NaftikoCapabilityCard: topology + stakeholders + tags
+ └── NaftikoFabricExplorerPage: fabric dependency graph + incident impact
+```
+
+---
+
+## Summary Tables
+
+### Spec Changes Required
+
+| Field | Object | Type | New? | Purpose |
+|---|---|---|---|---|
+| `labels` | `Info` | `map` | Yes | K8s label propagation, Kubecost cost allocation |
+| `labels["naftiko.io/tier"]` | `Info` | `string` | Yes | Selects `CapabilityClass` for resource limits and Resilience4j defaults |
+| `tags` | `Consumes` | `string[]` | Yes | Provenance, billing, data sensitivity classification |
+| `tags` (`best-effort`, `sla-999`) | `Consumes` | `string[]` | Yes | Selects Resilience4j circuit breaker aggressiveness for a specific consumed namespace |
+| `tags` | `Exposes` (API) | `string[]` | Yes | Network topology intent, Backstage grouping |
+| `lifecycle` | `Exposes` (API) | `string` (enum) | Yes | Backstage lifecycle, operator Ingress decision |
+| `tags` | `ExposedOperation` | `string[]` | Yes | Agent safety policy, risk scoring |
+| `tags` | `ConsumedHttpOperation` | `string[]` | Yes | Billing granularity, retry safety |
+
+### New Artifacts Required
+
+| Artifact | Type | Purpose |
+|---|---|---|
+| `@naftiko/rules` | npm package | Shared governance evaluation engine |
+| `naftiko-cli` | npm package | CI wrapper around `@naftiko/rules` |
+| `naftiko.io/v1alpha1/Capability` | Kubernetes CRD | Spec-native K8s resource |
+| `naftiko.io/v1alpha1/CapabilityClass` | Kubernetes CRD | Resource tier governance + Resilience4j defaults |
+| Naftiko Operator | Kubernetes controller | Reconciles CRDs into running workloads |
+| Resilience4j (embedded in engine) | Java library | In-process circuit breaker, retry, bulkhead, rate limiter per consumed namespace |
+| `naftiko.vscode-naftiko` | VS Code extension | Authoring + inline governance + run |
+| `naftiko/docker-extension` | Docker Desktop extension | Runtime dashboard + log streaming |
+| `@naftiko/backstage-plugin-backend` | Backstage plugin | Entity provider + fact retriever |
+| `@naftiko/backstage-plugin` | Backstage plugin | Capability card + scorecard + fabric explorer |
+
+### Governance Signal Source Matrix
+
+| Tool | Evaluates | Source | When |
+|---|---|---|---|
+| VS Code extension | `@naftiko/rules` | Open document | On every change |
+| `naftiko-cli lint` | `@naftiko/rules` | Committed file | CI / pre-commit |
+| Kyverno | Mirror of core rules | CRD at admission | K8s apply time |
+| Backstage FactRetriever | `@naftiko/rules` | Spec from Git | Scheduled / webhook |
+| Docker Desktop ext. | `@naftiko/rules` | Mounted spec file | Container start |
diff --git a/src/main/resources/specs/mcp-resources-prompts-proposal.md b/src/main/resources/blueprints/mcp-resources-prompts-proposal.md
similarity index 100%
rename from src/main/resources/specs/mcp-resources-prompts-proposal.md
rename to src/main/resources/blueprints/mcp-resources-prompts-proposal.md
diff --git a/src/main/resources/wiki/Contribute.md b/src/main/resources/wiki/Contribute.md
new file mode 100644
index 0000000..6d82779
--- /dev/null
+++ b/src/main/resources/wiki/Contribute.md
@@ -0,0 +1,11 @@
+We welcome ALL contributions to Naftiko Framework, from the smallest to the largest; they all make a positive impact.
+
+ - **Bugs** and **Enhancements** should be entered in the [Issue Tracker](/naftiko/framework/issues) and discussed there directly
+ - :beginner: _Please search existing issues to limit the creation of duplicates_
+
+ - **Code contributions** should be prepared in a local branch and submitted via a [Pull Request](/naftiko/framework/pulls)
+ - :beginner: _Please make sure your code passes all build validations and is rebased on the "main" branch before asking for a review_
+
+ - **All contributions** are accepted under the [Apache 2.0 License](/naftiko/framework/blob/main/LICENSE)
+ - ⚠️ : _You need to ensure you have full rights to the code you are submitting, for example rights granted by your employer_
+
\ No newline at end of file
diff --git a/src/main/resources/wiki/FAQ.md b/src/main/resources/wiki/FAQ.md
new file mode 100644
index 0000000..0d5d82a
--- /dev/null
+++ b/src/main/resources/wiki/FAQ.md
@@ -0,0 +1,822 @@
+Welcome to the Naftiko Framework FAQ! This guide answers common questions from developers who are learning, using, and contributing to Naftiko. For comprehensive technical details, see the [Specification](https://github.com/naftiko/framework/wiki/Specification).
+
+---
+
+## ⛵ Getting Started
+
+### Q: What is Naftiko Framework and why would I use it?
+**A:** Naftiko Framework is the first open-source platform for **Spec-Driven AI Integration**. Instead of writing boilerplate code to consume APIs and expose unified interfaces, you declare them in YAML. This enables:
+- **API composability**: Combine multiple APIs into a single capability
+- **Format conversion**: Convert between JSON, XML, Avro, Protobuf, CSV, and YAML
+- **AI-ready integration**: Better context engineering for AI systems
+- **Reduced API sprawl**: Manage microservices and SaaS complexity
+
+Use it when you need to integrate multiple APIs, standardize data formats, or expose simplified interfaces to AI agents.
+
+### Q: What skills do I need to create a capability?
+**A:** You only need to know:
+- **YAML syntax** - the configuration language for capabilities
+- **JSONPath** - for extracting values from JSON responses
+- **Mustache templates** - for injecting parameters (optional, if using advanced features)
+
+You don't need to write Java or other code unless you want to extend the framework itself.
+
+### Q: Is Naftiko a code generator or a runtime engine?
+**A:** It's a **runtime engine**. The Naftiko Engine, provided as a Docker container, reads your YAML capability file at startup and immediately exposes HTTP or MCP interfaces. There's no compilation step - declare your capability, start the engine, and it works.
+
+---
+
+## 🚢 Installation & Setup
+
+### Q: How do I install Naftiko?
+**A:** There are two ways:
+
+1. **Docker (recommended)**
+ ```bash
+ docker pull ghcr.io/naftiko/framework:v0.4
+ docker run -p 8081:8081 -v /path/to/capability.yaml:/app/capability.yaml ghcr.io/naftiko/framework:v0.4 /app/capability.yaml
+ ```
+
+2. **CLI tool** (for configuration and validation)
+ Download the binary for [macOS](https://github.com/naftiko/framework/releases/download/v0.4/naftiko-cli-macos-arm64), [Linux](https://github.com/naftiko/framework/releases/download/v0.4/naftiko-cli-linux-amd64), or [Windows](https://github.com/naftiko/framework/releases/download/v0.4/naftiko-cli-windows-amd64.exe)
+
+See the [Installation guide](https://github.com/naftiko/framework/wiki/Installation) for detailed setup instructions.
+
+### Q: How do I validate my capability file before running it?
+**A:** Use the CLI validation command:
+```bash
+naftiko validate path/to/capability.yaml
+naftiko validate path/to/capability.yaml 0.4 # Specify schema version
+```
+
+This checks your YAML against the Naftiko schema and reports any errors.
+
+### Q: Which version of the schema should I use?
+**A:** Use the latest stable version: **0.4** (as of March 2026).
+
+Set it in your YAML:
+```yaml
+naftiko: "0.4"
+```
+
+---
+
+## 🧭 Core Concepts
+
+### Q: What are "exposes" and "consumes"?
+**A:** These are the two essential parts of every capability:
+
+- **Exposes** - What your capability *provides* to callers (HTTP API or MCP server)
+- **Consumes** - What external APIs your capability *uses internally*
+
+Example: A capability that consumes the Notion API and GitHub API, then exposes them as a single unified REST endpoint or MCP tool.
+
+### Q: What's the difference between API and MCP exposure?
+**A:**
+
+| Feature | API (REST) | MCP |
+|---------|-----------|-----|
+| **Protocol** | HTTP/REST | Model Context Protocol (JSON-RPC) |
+| **Best for** | General-purpose integrations, web apps | AI agent-native integrations |
+| **Tool discovery** | Manual or via OpenAPI | Automatic via MCP protocol |
+| **Configuration** | `type: "api"` with resources/operations | `type: "mcp"` with tools |
+| **Default transport** | HTTP | stdio or HTTP (streamable) |
+
+**Use API** for traditional REST clients, web applications, or when you want standard HTTP semantics.
+**Use MCP** when exposing capabilities to AI agents or Claude.
+
+### Q: What is a "namespace"?
+**A:** A namespace is a **unique identifier** for a consumed or exposed source, used for routing and references.
+
+- **In consumes**: `namespace: github` means "this is the GitHub API I'm consuming"
+- **In exposes**: `namespace: app` means "my exposed API is called `app`"
+- **In steps**: `call: github.get-user` routes to the consumed `github` namespace
+
+Namespaces must be unique within their scope (all consumed namespaces must differ, all exposed namespaces must differ).
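+
+Putting the three uses together in one abbreviated sketch (see the [Specification](https://github.com/naftiko/framework/wiki/Specification) for the full shape):
+
+```yaml
+naftiko: "0.4"
+consumes:
+  - type: http
+    namespace: github             # unique among consumed namespaces
+    baseUri: https://api.github.com
+exposes:
+  - type: api
+    namespace: app                # unique among exposed namespaces
+    port: 8081
+    resources:
+      - path: /user
+        operations:
+          - method: GET
+            call: github.get-user   # routes through the consumed "github" namespace
+```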
+
+### Q: What are "steps" and when do I use them?
+**A:** Steps enable **multi-step orchestration** - calling multiple APIs in sequence and combining their results.
+
+**Simple mode** (direct call):
+```yaml
+operations:
+ - method: GET
+ call: github.get-user # Call one consumed operation directly
+ with:
+ username: "{{github_username}}"
+```
+
+**Orchestrated mode** (multi-step):
+```yaml
+operations:
+ - name: complex-operation
+ method: GET
+ steps:
+ - type: call
+ name: step1
+ call: github.list-users
+ - type: call
+ name: step2
+ call: github.get-user
+ with:
+ username: $step1.result # Use output from step1
+ mappings:
+ - targetName: output_field
+ value: $.step2.userId
+```
+
+Use steps when your capability needs to combine data from multiple sources or perform lookups.
+
+### Q: What's the difference between "call" and "lookup" steps?
+**A:**
+
+- **`call` steps**: Execute a consumed operation (HTTP request)
+- **`lookup` steps**: Search through a previous step's output for matching records
+
+Example: Call an API to list all users, then lookup which one matches a given email:
+```yaml
+steps:
+ - type: call
+ name: list-all-users
+ call: hr.list-employees
+ - type: lookup
+ name: find-user-by-email
+ index: list-all-users
+ match: email
+ lookupValue: "{{email_to_find}}"
+ outputParameters:
+ - fullName
+ - department
+```
+
+---
+
+## 🔩 Configuration & Parameters
+
+### Q: How do I inject input parameters into a consumed operation?
+**A:** Use the `with` injector in your exposed operation:
+
+**Simple mode:**
+```yaml
+operations:
+ - method: GET
+ call: github.get-user
+ with:
+ username: "{{github_username}}" # From externalRefs
+ accept: "application/json" # Static value
+```
+
+**Orchestrated mode (steps):**
+```yaml
+steps:
+ - type: call
+ name: fetch-user
+ call: github.get-user
+ with:
+ username: "{{github_username}}"
+```
+
+The `with` object maps consumed operation parameter names to:
+- Variable references like `{{variable_name}}` injected from externalRefs
+- Static strings or numbers - literal values
+
+### Q: How do I extract values from API responses (output parameters)?
+**A:** Use **JsonPath expressions** in the `value` field of `outputParameters`:
+
+```yaml
+consumes:
+ - resources:
+ - operations:
+ - outputParameters:
+ - name: userId
+ type: string
+ value: $.id # Top-level field
+ - name: email
+ type: string
+ value: $.contact.email # Nested field
+ - name: allNames
+ type: array
+ value: $.users[*].name # Array extraction
+```
+
+Common JsonPath patterns:
+- `$.fieldName` - access a field
+- `$.users[0].name` - access array element
+- `$.users[*].name` - extract all `.name` values from array
+- `$.data.user.email` - nested path
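+
+For intuition, here are plain TypeScript equivalents of those patterns on sample data (illustrative only; the engine evaluates JsonPath for you):
+
+```typescript
+const response = {
+  id: "u-42",
+  contact: { email: "ada@example.com" },
+  users: [{ name: "Ada" }, { name: "Grace" }],
+};
+
+const id = response.id;                              // $.id
+const email = response.contact.email;                // $.contact.email
+const firstUser = response.users[0].name;            // $.users[0].name
+const allNames = response.users.map((u) => u.name);  // $.users[*].name
+```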
+
+### Q: What are "mappings" and when do I use them?
+**A:** Mappings connect step outputs to exposed operation outputs in multi-step orchestrations.
+
+```yaml
+steps:
+ - type: call
+ name: fetch-db
+ call: notion.get-database
+ - type: call
+ name: query-db
+ call: notion.query-database
+
+mappings:
+ - targetName: database_name # Exposed output parameter
+ value: $.fetch-db.dbName # From first step's output
+ - targetName: row_count
+ value: $.query-db.resultCount # From second step
+
+outputParameters:
+ - name: database_name
+ type: string
+ - name: row_count
+ type: number
+```
+
+Mappings tell Naftiko how to wire step outputs to your final response.
+
+---
+
+## 🗝️ Authentication & Security
+
+### Q: How do I authenticate to external APIs?
+**A:** Add an `authentication` block to your `consumes` section:
+
+```yaml
+consumes:
+ - type: http
+ namespace: github
+ baseUri: https://api.github.com
+ authentication:
+ type: bearer
+ token: "{{github_token}}" # Use token from externalRefs
+```
+
+**Supported authentication types:**
+- `bearer` - Bearer token
+- `basic` - Username/password
+- `apikey` - Header or query parameter API key
+- `digest` - HTTP Digest authentication
+
+### Q: How do I manage secrets like API tokens?
+**A:** Use **externalRefs** to declare variables that are injected at runtime:
+
+```yaml
+externalRefs:
+ - name: secrets
+ type: environment
+ resolution: runtime
+ keys:
+ github_token: GITHUB_TOKEN # Maps env var to template variable
+ notion_token: NOTION_TOKEN
+
+consumes:
+ - namespace: github
+ authentication:
+ type: bearer
+ token: "{{github_token}}" # Use the injected variable
+ - namespace: notion
+ authentication:
+ type: bearer
+ token: "{{notion_token}}"
+```
+
+**At runtime, provide environment variables:**
+```bash
+docker run -e GITHUB_TOKEN=ghp_xxx -e NOTION_TOKEN=secret_xxx ...
+```
+
+> ⚠️ **Security note**: Use `resolution: runtime` in production (not `file`). Never commit secrets to your repository.
+
+### Q: Can I authenticate to exposed endpoints (API/MCP)?
+**A:** Yes, add `authentication` to your `exposes` block:
+
+```yaml
+exposes:
+ - type: api
+ port: 8081
+ namespace: my-api
+ authentication:
+ type: apikey
+ in: header
+ name: X-Api-Key
+ value: "{{api_key}}"
+ resources:
+ - path: /data
+ description: Protected data endpoint
+```
+
+**Supported authentication types for exposed endpoints:**
+- `apikey` - API key via header or query parameter
+- `bearer` - Bearer token validation
+- `basic` - Username/password via HTTP Basic Auth
+
+### Q: Can I send complex request bodies (JSON, XML, etc.)?
+**A:** Yes, use the `body` field for request bodies:
+
+```yaml
+consumes:
+ - resources:
+ - operations:
+ - method: POST
+ body:
+ type: json
+ data:
+ filter:
+ status: "active"
+```
+
+**Body types:**
+- `json` - JSON object or string
+- `text`, `xml`, `sparql` - Plain text payloads
+- `formUrlEncoded` - URL-encoded form
+- `multipartForm` - Multipart file upload
+
+---
+
+## 🗺️ API Design
+
+### Q: How do I define resource paths with parameters?
+**A:** Use path parameters with curly braces:
+
+```yaml
+exposes:
+ - resources:
+ - path: /users/{userId}/projects/{projectId}
+ description: Get a specific project for a user
+ inputParameters:
+ - name: userId
+ in: path
+ type: string
+ description: The user ID
+ - name: projectId
+ in: path
+ type: string
+ description: The project ID
+```
+
+Callers access it as: `GET /users/123/projects/456`
+
+### Q: How do I support query parameters and headers?
+**A:** Use `in` field in `inputParameters`:
+
+```yaml
+inputParameters:
+ - name: filter
+ in: query
+ type: string
+ description: Filter results
+ - name: Authorization
+ in: header
+ type: string
+ description: Auth header
+ - name: X-Custom
+ in: header
+ type: string
+ description: Custom header
+```
+
+Callers send: `GET /endpoint?filter=value` with custom headers.
+
+### Q: How do forward proxies work?
+**A:** Use `forward` to pass requests through to a consumed API without transformation:
+
+```yaml
+exposes:
+ - resources:
+ - path: /github/{path}
+ description: Pass-through proxy to GitHub API
+ forward:
+ targetNamespace: github
+ trustedHeaders:
+ - Authorization
+ - Accept
+```
+
+This forwards `GET /github/repos/owner/name` to GitHub's `/repos/owner/name`.
+
+**Trusted headers** must be explicitly listed for security.
+
+---
+
+## 📡 MCP-Specific
+
+### Q: How do I expose a capability as an MCP tool?
+**A:** Use `type: mcp` in exposes instead of `type: api`:
+
+```yaml
+exposes:
+ - type: mcp
+ address: localhost
+ port: 9091
+ namespace: my-mcp
+ description: My MCP server
+ tools:
+ - name: query-database
+ description: Query the database
+ call: notion.query-db
+ with:
+ db_id: "fixed-db-id"
+ outputParameters:
+ - type: array
+ mapping: $.results
+```
+
+### Q: What's the difference between HTTP and stdio MCP transports?
+**A:**
+
+| Transport | Use Case | Setup |
+|-----------|----------|-------|
+| **HTTP** | Streamable HTTP transport, integrates with existing infrastructure | Specify `address` and `port` |
+| **stdio** | Direct process communication, native integration with Claude Desktop | No address/port needed |
+
+For Claude integration, stdio is typically preferred. HTTP is useful for remote or containerized deployments.
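+
+As an illustration, a stdio-based MCP expose can be sketched by simply omitting `address` and `port` (a minimal, hypothetical configuration reusing the tool shown above):
+
+```yaml
+exposes:
+  - type: mcp                  # no address/port: stdio transport
+    namespace: my-stdio-mcp
+    description: MCP server for Claude Desktop
+    tools:
+      - name: query-database
+        description: Query the database
+        call: notion.query-db
+```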
+
+### Q: How do I expose MCP resources and prompts?
+**A:** Add `resources` and `prompts` sections to your MCP server:
+
+```yaml
+exposes:
+ - type: mcp
+ resources:
+ - uri: file:///docs/guide.md
+ name: User Guide
+ description: API usage guide
+ prompts:
+ - name: analyze-code
+ description: Analyze code snippet
+ template: "Analyze this code:\n{{code}}"
+```
+
+MCP clients can then discover and use these resources dynamically.
+
+---
+
+## 🔭 Troubleshooting & Debugging
+
+### Q: My capability won't start. How do I debug it?
+**A:**
+
+1. **Validate your YAML first:**
+ ```bash
+ naftiko validate capability.yaml
+ ```
+
+2. **Check the Docker logs:**
+ ```bash
+ docker run ... ghcr.io/naftiko/framework:v0.4 /app/capability.yaml
+ # Look for error messages in the output
+ ```
+
+3. **Verify your file path** - if using Docker, ensure:
+ - The volume mount is correct: `-v /absolute/path:/app/capability.yaml`
+ - The file exists and is readable
+ - For Docker on Windows/Mac, use the correct host path format when mounting the volume
+
+4. **Check external services** - ensure:
+ - APIs you're consuming are reachable
+ - Network connectivity is available
+ - Authentication credentials are correct
+
+### Q: Requests to my exposed endpoint return errors. How do I debug?
+**A:**
+
+1. **Check the request format** - ensure headers, parameters, and body match your definition
+2. **Verify consumed API availability** - test the underlying API directly
+3. **Inspect JsonPath mappings** - ensure your extraction paths match the API response
+4. **Use Docker logs** - see server-side error messages
+
+### Q: JsonPath expressions aren't extracting the data I expect. How do I fix it?
+**A:**
+
+1. **Test your JsonPath** - use an online tool like [jsonpath.com](https://jsonpath.com)
+2. **Inspect the actual response** - add an operation without filtering to see raw data
+3. **Understand array syntax**:
+ - `$.users[0]` - first element
+ - `$.users[*]` - all elements (creates array output)
+ - `$.users[*].name` - all names
+
+4. **For nested objects**, trace the path step-by-step: `$.data.user.profile.email`
+
+### Q: My parameters aren't being passed to the consumed API. What's wrong?
+**A:**
+
+1. **Check parameter names match** - consumed parameter names must match keys in `with`
+2. **Verify parameter location** (`in: path`, `in: query`, `in: header`, etc.)
+3. **Check variable references** - ensure `{{variable_name}}` variables are defined in externalRefs
+4. **Test without transformation** - use `forward` to proxy the request and see if underlying API works
+
+### Q: Authentication is failing. How do I debug it?
+**A:**
+
+1. **Test credentials directly** - verify your token/key works with the API
+2. **Check token format** - ensure it's a valid token (not expired, wrong format, etc.)
+3. **Verify placement** - is the token in the right header/query/body?
+4. **Environment variables** - ensure the Docker environment variable matches the key name in `externalRefs`
+5. **Quotes** - make sure tokens with special characters are properly quoted in YAML
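+
+As a sketch for point 5, quoting keeps YAML from misparsing values with special characters:
+
+```yaml
+authentication:
+  type: bearer
+  token: "abc:123#xyz"   # hypothetical literal token; quotes keep ':' and '#' from being misparsed
+```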
+
+---
+
+## 🚣 Contributing
+
+### Q: How do I contribute to Naftiko Framework?
+**A:** We welcome all contributions! Here's how:
+
+1. **Report bugs or request features** - [GitHub Issues](https://github.com/naftiko/framework/issues)
+ - Search for existing issues first to avoid duplicates
+
+2. **Submit code changes** - [GitHub Pull Requests](https://github.com/naftiko/framework/pulls)
+ - Create a local branch
+ - Ensure your code passes all build validation
+ - Rebase on `main` before submitting
+
+3. **Contribute examples** - Add capability examples to the repository
+ - Document your use case in the example
+ - Include comments explaining key features
+
+4. **Improve documentation** - Fix typos, clarify docs, add examples
+
+### Q: What's the code structure and how do I set up a development environment?
+**A:** Naftiko is a **Java project** using Maven. To build and develop:
+
+```bash
+# Clone the repository
+git clone https://github.com/naftiko/framework.git
+cd framework
+
+# Build the project
+mvn clean install
+
+# Run tests
+mvn test
+
+# Build Docker image
+docker build -t naftiko:local .
+```
+
+Key directories:
+- `src/main/java/io/naftiko/` - Core engine code
+- `src/main/resources/schemas/` - JSON Schema definitions
+- `src/test/` - Unit and integration tests
+- `src/main/resources/specs/` - Specification proposals and examples
+
+### Q: What are the design guidelines for creating capabilities?
+**A:**
+
+1. **Keep the Naftiko Specification as a first-class citizen** - refer to it often
+2. **Don't expose unused input parameters** - every parameter should be used in steps
+3. **Don't declare consumed outputs you don't use** - be precise in mappings
+4. **Don't prefix variables unnecessarily** - let scope provide clarity
+
+Example:
+```yaml
+# Good: expose only used input
+inputParameters:
+ - name: database_id # Used in step below
+ in: path
+
+# Bad: expose unused input
+inputParameters:
+ - name: database_id
+ - name: unused_param # Never used anywhere
+
+# Good: output only consumed outputs you map
+outputParameters:
+ - name: result
+ value: $.step1.output # Clearly mapped
+
+# Bad: declare outputs you don't use
+outputParameters:
+ - name: unused_result
+ value: $.step1.unused
+```
+
+### Q: How do I test my capability changes?
+**A:**
+
+1. **Unit tests** - Add tests in `src/test/java`
+2. **Integration tests** - Test against real or mock APIs
+3. **Validation** - Use the CLI tool: `naftiko validate capability.yaml`
+4. **Docker testing** - Build and run the Docker image with your capability
+
+### Q: Which version of Java is required?
+**A:** Naftiko requires **Java 21 LTS or later**, as specified in the Maven configuration.
+
+---
+
+## ⛴️ Advanced Topics
+
+### Q: Can I use templates/variables in my capability definition?
+**A:** Yes, use **Mustache-style `{{variable}}`** expressions:
+
+```yaml
+externalRefs:
+ - name: env
+ type: environment
+ keys:
+ api_key: API_KEY
+ base_url: API_BASE_URL
+
+consumes:
+ - baseUri: "{{base_url}}"
+ authentication:
+ type: apikey
+ key: X-API-Key
+ value: "{{api_key}}"
+```
+
+Variables come from `externalRefs` and are injected at runtime.
+
+
+### Q: Can I compose capabilities (capability calling another capability)?
+**A:** Indirectly - by referencing the exposed URL/port as a consumed API:
+
+```yaml
+# Capability B "consumes" the exposed endpoint from Capability A
+consumes:
+ - baseUri: http://localhost:8081 # Capability A's port
+ namespace: capability-a
+```
+
+This way, Capability B can combine Capability A with other APIs.
+
+### Q: How do I handle errors or retries?
+**A:** Naftiko does not have built-in retry logic as of v0.4. Options:
+
+1. **At the HTTP client level** - use an API gateway or reverse proxy with retry policies
+2. **Wait for native support** - retry and error handling are on the roadmap
+
+Check the [Roadmap](https://github.com/naftiko/framework/wiki/Roadmap) for planned features.
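+
+As an illustration of option 1, an nginx front end can retry failed requests across Naftiko replicas (a sketch using standard nginx directives; the replica hostnames are hypothetical):
+
+```nginx
+upstream naftiko {
+    server naftiko-1:8081;
+    server naftiko-2:8081;
+}
+
+server {
+    listen 80;
+    location / {
+        proxy_pass http://naftiko;
+        # Retry on the next replica for connection errors, timeouts, and 5xx responses
+        proxy_next_upstream error timeout http_502 http_503;
+        proxy_next_upstream_tries 2;
+    }
+}
+```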
+
+### Q: Can I expose the same capability on both API and MCP?
+**A:** Yes! Add multiple entries to `exposes`:
+
+```yaml
+exposes:
+ - type: api
+ port: 8081
+ namespace: rest-api
+ resources: [...]
+
+ - type: mcp
+ port: 9091
+ namespace: mcp-server
+ tools: [...]
+
+consumes: [...] # Shared between both
+```
+
+Both adapters consume the same sources but expose different interfaces.
+
+---
+
+## 💨 Performance & Deployment
+
+### Q: How scalable is Naftiko for high-load scenarios?
+**A:** Naftiko is suitable for moderate to high loads depending on:
+- **Your consumed APIs' performance** - Naftiko's overhead is minimal
+- **Docker/Kubernetes scaling** - deploy multiple instances behind a load balancer
+- **Orchestration complexity** - simpler capabilities (forward, single calls) are faster
+
+For production workloads:
+- Use Kubernetes for auto-scaling
+- Monitor the latencies of both exposed and consumed APIs
+- Consider caching strategies above Naftiko
+
+### Q: How do I deploy Naftiko to production?
+**A:**
+
+1. **Kubernetes** (recommended):
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: naftiko-engine
+ spec:
+ replicas: 3
+ template:
+ spec:
+ containers:
+ - name: naftiko
+ image: ghcr.io/naftiko/framework:v0.4
+ volumeMounts:
+ - name: capability
+ mountPath: /app/capability.yaml
+ subPath: capability.yaml
+ env:
+ - name: GITHUB_TOKEN
+ valueFrom:
+ secretKeyRef:
+ name: naftiko-secrets
+ key: github-token
+ ```
+
+2. **Docker Compose** - for simpler setups
+3. **Environment Variables** - inject secrets via `externalRefs` with `resolution: runtime`
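+
+For option 2, a minimal Docker Compose sketch (file names and the environment variable are placeholders, following the `docker run` example above):
+
+```yaml
+services:
+  naftiko:
+    image: ghcr.io/naftiko/framework:v0.4
+    command: /app/capability.yaml
+    ports:
+      - "8081:8081"
+    volumes:
+      - ./capability.yaml:/app/capability.yaml
+    environment:
+      GITHUB_TOKEN: ${GITHUB_TOKEN}   # resolved via externalRefs at runtime
+```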
+
+### Q: Can I use Naftiko behind a reverse proxy (nginx, Envoy)?
+**A:** Yes, absolutely. Naftiko exposes standard HTTP endpoints, so it works with any reverse proxy.
+
+Example (nginx):
+```nginx
+server {
+ listen 80;
+ location / {
+ proxy_pass http://naftiko:8081;
+ }
+}
+```
+
+---
+
+## 📜 Specifications & Standards
+
+### Q: How does Naftiko compare to OpenAPI, AsyncAPI, or Arazzo?
+**A:** Naftiko is **complementary** to these specifications and combines their strengths into a single runtime model:
+- **Consume/expose duality** - like OpenAPI's interface description, but bidirectional
+- **Orchestration** - like Arazzo's workflow sequencing
+- **AI-driven discovery** - beyond what all three cover natively
+- **Namespace-based routing** - unique to Naftiko's runtime approach
+
+See the [Specification](https://github.com/naftiko/framework/wiki/Specification#13-related-specifications) for a detailed comparison.
+
+### Q: Is the Naftiko Specification stable?
+**A:** Yes, v0.4 is stable as of March 2026. The specification follows semantic versioning:
+- **Major versions** (x.0.0) - breaking changes
+- **Minor versions** (x.y.0) - new features, backward-compatible
+- **Patch versions** (x.y.z) - bug fixes
+
+Set the `naftiko` field in your YAML to specify the version.
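+
+For example, at the top of a capability file:
+
+```yaml
+naftiko: "0.4"
+```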
+
+---
+
+## 📣 Community & Support
+
+### Q: Where can I ask questions or discuss ideas?
+**A:** Join the community at:
+- **[GitHub Discussions](https://github.com/orgs/naftiko/discussions)** - Ask questions and share ideas
+- **[GitHub Issues](https://github.com/naftiko/framework/issues)** - Report bugs or request features
+- **Pull Requests** - Review and discuss code changes
+
+### Q: Are there examples I can reference?
+**A:** Yes! Several resources:
+
+- **[Tutorial](https://github.com/naftiko/framework/wiki/Tutorial)** - Step-by-step guides
+- **[Use Cases](https://github.com/naftiko/framework/wiki/Use-cases)** - Real-world examples
+- **Repository examples** - In `src/main/resources/specs/` and test resources
+- **Specification examples** - In the [Specification](https://github.com/naftiko/framework/wiki/Specification#4-complete-examples) (Section 4)
+
+### Q: How often is Naftiko updated?
+**A:** Check the [Releases](https://github.com/naftiko/framework/wiki/Releases) page for version history. The project follows a regular release cadence with security updates prioritized.
+
+---
+
+## 🚤 Common Use Cases
+
+### Q: I want to create a unified API that combines Notion + GitHub. How do I start?
+**A:**
+
+1. **Read the Tutorial** - particularly steps 2-5 on forwarding and orchestration
+2. **Define consumed sources** - GitHub and Notion APIs with auth
+3. **Design exposed resources** - endpoints that combine their data
+4. **Use multi-step orchestration** - call both APIs and map results
+5. **Test locally** - use Docker to run your capability
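+
+The orchestration step can be sketched like the multi-step example earlier in this FAQ (operation names and JsonPath fields here are illustrative, not real endpoints):
+
+```yaml
+steps:
+  - type: call
+    name: fetch-issues
+    call: github.list-issues        # hypothetical consumed GitHub operation
+  - type: call
+    name: fetch-tasks
+    call: notion.query-database
+
+mappings:
+  - targetName: issue_count
+    value: $.fetch-issues.count     # illustrative JsonPath into each step's output
+  - targetName: task_count
+    value: $.fetch-tasks.resultCount
+```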
+
+### Q: I want to expose my capability as an MCP tool for Claude. How do I do this?
+**A:**
+
+1. **Use `type: mcp`** in `exposes`
+2. **Define `tools`** - each tool is an MCP tool your capability provides
+3. **Use stdio transport** - for native Claude Desktop integration
+4. **Test with Claude** - configure Claude Desktop with your MCP server
+5. **Publish** - share your capability spec with the community
+
+See the [Tutorial](https://github.com/naftiko/framework/wiki/Tutorial) Section 6 (MCP) for a full example.
+
+### Q: I want to standardize data from multiple SaaS tools. How do I use Naftiko?
+**A:**
+
+1. **Consume multiple SaaS APIs** - define each in `consumes`
+2. **Normalize outputs** - use `outputParameters` to extract and structure data consistently
+3. **Expose unified interface** - create a single API with harmonized formats
+4. **Use orchestration** - combine data from multiple sources if needed
+
+This is Naftiko's core strength for managing API sprawl.
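+
+For step 2, the idea is to map source-specific fields onto one shared shape (field names here are illustrative):
+
+```yaml
+mappings:
+  - targetName: title
+    value: $.fetch-crm.dealName     # upstream-specific field mapped to a common name
+
+outputParameters:
+  - name: title                     # same exposed field, whatever the source calls it
+    type: string
+```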
+
+---
+
+## 🏝️ Additional Resources
+
+- **[Specification](https://github.com/naftiko/framework/wiki/Specification)** - Complete technical reference
+- **[Tutorial](https://github.com/naftiko/framework/wiki/Tutorial)** - Step-by-step learning guide
+- **[Installation](https://github.com/naftiko/framework/wiki/Installation)** - Setup instructions
+- **[Use Cases](https://github.com/naftiko/framework/wiki/Use-cases)** - Real-world examples
+- **[Roadmap](https://github.com/naftiko/framework/wiki/Roadmap)** - Future plans
+- **[Contribute](https://github.com/naftiko/framework/wiki/Contribute)** - How to contribute
+- **[Discussions](https://github.com/orgs/naftiko/discussions)** - Community Q&A
+
+---
+
+## 🔔 Feedback
+
+Did this FAQ help you? Have questions not covered here?
+- **Add an issue** - [GitHub Issues](https://github.com/naftiko/framework/issues)
+- **Start a discussion** - [GitHub Discussions](https://github.com/orgs/naftiko/discussions)
+- **Submit a PR** - Help us improve this FAQ!
\ No newline at end of file
diff --git a/src/main/resources/wiki/Installation.md b/src/main/resources/wiki/Installation.md
new file mode 100644
index 0000000..0f74e82
--- /dev/null
+++ b/src/main/resources/wiki/Installation.md
@@ -0,0 +1,136 @@
+To use Naftiko Framework, you must install and then run its engine.
+
+## Docker usage
+### Prerequisites
+* You need Docker or, on macOS and Windows, Docker Desktop. To install it, follow the official documentation:
+ * [For Mac](https://docs.docker.com/desktop/setup/install/mac-install/)
+ * [For Linux](https://docs.docker.com/desktop/setup/install/linux/)
+ * [For Windows](https://docs.docker.com/desktop/setup/install/windows-install/)
+
+* Be sure that Docker (or Docker Desktop) is running
+
+### Pull Naftiko's Docker image
+* Naftiko provides a Docker image hosted on the GitHub Packages platform. It is public, so you can easily pull it locally.
+ ```bash
+ # v0.4
+ docker pull ghcr.io/naftiko/framework:sha-86377ea
+
+ # If you want to play with the latest snapshot
+ docker pull ghcr.io/naftiko/framework:latest
+ ```
+ Then, you should see the image 'ghcr.io/naftiko/framework' in Docker Desktop with the corresponding tag. You can also list local images with this command:
+ ```bash
+ docker image ls
+ ```
+
+### Configure your own capability
+* Create your capability configuration file.\
+ The Naftiko Engine runs capabilities configured through a capability configuration file, which you first have to create locally. You can start from [this "Hello, World!" example](https://github.com/naftiko/framework/blob/main/src/main/resources/schemas/tutorial/step1-hello-world.yml), then move on to the [Tutorial](https://github.com/naftiko/framework/wiki/Tutorial) and later to the comprehensive [Naftiko Specification](https://github.com/naftiko/framework/wiki/Specification). This file must be a YAML file (both `.yaml` and `.yml` extensions are supported).
+
+* Localhost in your capability configuration file.
+ * If your capability refers to local hosts, be careful not to use 'localhost'; use 'host.docker.internal' instead. Your capability runs in an isolated Docker container, so 'localhost' would refer to the container rather than your local machine.\
+ For example:
+ ```yaml
+ baseUri: "http://host.docker.internal:8080/api/"
+ ```
+ * Likewise, if your capability exposes a local address, be careful not to use 'localhost'; use '0.0.0.0' instead. Otherwise, requests to localhost coming from outside the container won't succeed.\
+ For example:
+ ```yaml
+ address: "0.0.0.0"
+ ```
+
+### Run Naftiko Engine as a Docker container
+* Use a Docker volume.\
+ Since you have to provide your local capability configuration file to the Docker container, you must use a volume, via the '-v' option of the docker run command.
+
+* Use port forwarding.\
+ Depending on your configuration file, your capability is exposed on a given port. Keep in mind that the engine runs in a container, so this port won't be accessible from your local machine unless you use port forwarding, via the '-p' option of the docker run command.
+
+* Run your capability with Naftiko Engine.\
+ Given a capability configuration file 'test.capability.yaml' exposing port 8081, here is the command to run the engine:
+ ```bash
+ docker run -p 8081:8081 -v full_path_to_your_capability_folder/test.capability.yaml:/app/test.capability.yaml ghcr.io/naftiko/framework:latest /app/test.capability.yaml
+ ```
+ Then you should be able to reach your capability at http://localhost:8081.
+
+## CLI tool
+The Naftiko Framework provides a CLI tool.\
+Its goal is to simplify configuration and validation. While everything can be done manually, the CLI provides helper commands.
+
+## Installation
+### macOS
+For the moment, the CLI is only provided for Apple Silicon.
+**Apple Silicon (M1/M2/M3/M4):**
+```bash
+# Download the binary
+curl -L https://github.com/naftiko/framework/releases/download/v0.4/naftiko-cli-macos-arm64 -o naftiko
+
+# Set binary as executable
+chmod +x naftiko
+
+# Delete the macOS quarantine (temporary step, because the binary is not signed yet)
+xattr -d com.apple.quarantine naftiko
+
+# Install
+sudo mv naftiko /usr/local/bin/
+```
+### Linux
+```bash
+# Download the binary
+curl -L https://github.com/naftiko/framework/releases/download/v0.4/naftiko-cli-linux-amd64 -o naftiko
+
+# Set binary as executable
+chmod +x naftiko
+
+# Install
+sudo mv naftiko /usr/local/bin/
+```
+### Windows
+PowerShell installation is recommended.
+
+**Open PowerShell as admin and execute:**
+```powershell
+# Create installation folder
+New-Item -ItemType Directory -Force -Path "C:\Program Files\Naftiko"
+
+# Download the binary
+Invoke-WebRequest -Uri "https://github.com/naftiko/framework/releases/download/v0.4/naftiko-cli-windows-amd64.exe" -OutFile "C:\Program Files\Naftiko\naftiko.exe"
+
+# Add to the system PATH
+$oldPath = [Environment]::GetEnvironmentVariable('Path', 'Machine')
+$newPath = $oldPath + ';C:\Program Files\Naftiko'
+[Environment]::SetEnvironmentVariable('Path', $newPath, 'Machine')
+```
+
+## Test
+After installation, you may need to restart your terminal. Then run this command to verify the CLI is correctly installed:
+```bash
+naftiko --help
+```
+You should see the command's help output.
+
+## Use
+Two features are available at the moment: creating a minimal valid capability configuration file, and validating a capability file.
+### Create a capability configuration file
+```bash
+naftiko create capability
+# You can also use aliases like:
+naftiko cr cap
+naftiko c cap
+```
+The terminal will then ask you several questions. Finally, the file will be generated in your current directory.
+### Validate a capability configuration file
+The capability configuration file generated by the previous command should be valid. However, you can extend it, or even create one from scratch.\
+The validation command lets you check your file.
+```bash
+naftiko validate path_to_your_capability_file
+# You can also use aliases like:
+naftiko val path_to_your_capability_file
+naftiko v path_to_your_capability_file
+```
+By default, validation is performed against the latest schema version. To validate against a previous schema version, specify it as a second argument.
+```bash
+# Validate the capability configuration file with the schema v0.3
+naftiko validate path_to_your_capability_file 0.3
+```
+The result will tell you if the file is valid or if there are any errors.
diff --git a/src/main/resources/wiki/Releases.md b/src/main/resources/wiki/Releases.md
new file mode 100644
index 0000000..09ca1c4
--- /dev/null
+++ b/src/main/resources/wiki/Releases.md
@@ -0,0 +1,7 @@
+| Version | Requirements | Release | EOL | Java EOL | Maven Group ID | Maven Repo | Docker Repo |
+| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
+| [0.4](https://github.com/naftiko/framework/releases/tag/v0.4) | Java 21 LTS | 2026-03-03 | TBD | Sept. 2028 | io.naftiko | GitHub Packages | GitHub Packages |
+| 0.5 * | Java 21 LTS | 2026-03-16 * | TBC | Sept. 2028 | io.naftiko | GitHub Packages | GitHub Packages |
+| 1.0 Alpha 1 * | Java 21 LTS | 2026-03-30 * | TBC | Sept. 2028 | io.naftiko | Maven Central | Docker Hub |
+
+\* Future release
\ No newline at end of file
diff --git a/src/main/resources/wiki/Roadmap.md b/src/main/resources/wiki/Roadmap.md
new file mode 100644
index 0000000..00a7919
--- /dev/null
+++ b/src/main/resources/wiki/Roadmap.md
@@ -0,0 +1,52 @@
+## Version 1.0 - First Alpha - March 30th, 2026 :seedling:
+
+The goal of this version is to deliver an MVP that enables common AI integration use cases and grows our community.
+
+### Rightsize AI context
+- [x] Declarative applied capability exposing Agent Skills
+- [x] Declarative MCP exposing Resources and Prompts (Tools only so far)
+
+### Enable API reuse
+- [x] Support for lookups as part of API call steps
+- [x] Authenticate API and MCP Server consumers and manage permissions
+- [ ] Reusable source HTTP adapter declaration across capabilities
+ - [ ] Declarative applied capabilities with reused source capabilities
+
+### Core developer experience
+- [x] Publish FAQ in the wiki
+- [ ] Provide GitHub Action template based on [Super Linter](https://github.com/super-linter/super-linter)
+- [ ] Publish Maven Artifacts to [Maven Central](https://central.sonatype.com/)
+- [ ] Publish Javadocs to [Javadoc.io](https://javadoc.io)
+- [ ] Publish Docker Image to [Docker Hub](https://hub.docker.com/)
+- [ ] Publish Naftiko JSON Structure
+
+## Version 1.0 - Second Alpha - May 11th :deciduous_tree:
+
+The goal of this version is to extend the MVP with agent orchestration, enhanced security, and per-capability control tooling.
+
+- [ ] Enable agent orchestration use cases
+ - [ ] Declarative applied capability exposing A2A
+- [ ] Provide enhanced security
+ - [ ] Facilitate integration with various API/MCP/AI gateways
+ - [ ] Facilitate integration with [Keycloak](https://www.keycloak.org/), [OpenFGA](https://openfga.dev/)
+- [ ] Provide per-capability Control API and MCP adapters, aligned with CLI
+- [ ] Provide Control webapp (per Capability)
+- [ ] Publish Docker Desktop Extension to Docker Hub
+- [ ] Fabric discovery of published capabilities for consumers
+
+## Version 1.0 - First Beta - June :blossom:
+
+The goal of this version is to deliver a stable MVP, including a stable Naftiko Specification.
+
+- [ ] Incorporate community feedback
+- [ ] Solidify the existing alpha version scope
+- [ ] Increase test coverage and overall quality
+
+## Version 1.0 - General Availability - September :apple:
+
+The goal of this version is to release our first version ready for production.
+
+- [ ] Incorporate community feedback
+- [ ] Solidify the existing beta version scope
+- [ ] Increase test coverage and overall quality
+- [ ] Publish JSON Schema to [JSON Schema Store](https://www.schemastore.org/)
diff --git a/src/main/resources/specs/naftiko-specification-v0.4.md b/src/main/resources/wiki/Specification-v0.4.md
similarity index 100%
rename from src/main/resources/specs/naftiko-specification-v0.4.md
rename to src/main/resources/wiki/Specification-v0.4.md
diff --git a/src/main/resources/specs/naftiko-specification-v0.5.md b/src/main/resources/wiki/Specification.md
similarity index 84%
rename from src/main/resources/specs/naftiko-specification-v0.5.md
rename to src/main/resources/wiki/Specification.md
index 1362a4e..c490040 100644
--- a/src/main/resources/specs/naftiko-specification-v0.5.md
+++ b/src/main/resources/wiki/Specification.md
@@ -1,17 +1,8 @@
# Naftiko Specification
-Version: 0.5
-Created by: Thomas Eskenazi
-Category: Sepcification
-Last updated time: March 5, 2026 12:40 PM
-Reviewers: Kin Lane, Jerome Louvel, Jérémie Tarnaud, Antoine Buhl
-Status: Draft
+**Version 0.4**
-# Naftiko Specification v0.5
-
-**Version 0.5**
-
-**Publication Date:** March 2026
+**Publication Date:** February 2026
---
@@ -49,8 +40,6 @@ The JSON Schema for the Naftiko Specification is available in two forms:
**Namespace**: A unique identifier for consumed sources, used for routing and mapping with the expose layer.
-**MCP Server**: An exposition adapter that exposes capability operations as MCP tools, enabling AI agent integration via Streamable HTTP or stdio transport.
-
**ExternalRef**: A declaration of an external reference providing variables to the capability. Two variants: file-resolved (for development) and runtime-resolved (for production). Variables are explicitly declared via a `keys` map.
### 1.3 Related Specifications.
@@ -68,7 +57,7 @@ Three specifications that work better together.
| --- | --- | --- | --- | --- |
| **Focus** | Defines *what* your API is — the contract, the schema, the structure. | Defines *how* API calls are sequenced — the workflows between endpoints. | Defines *how* to use your API — the scenarios, the runnable collections. | Defines *what* a capability consumes and exposes — the integration intent. |
| **Scope** | Single API surface | Workflows across one or more APIs | Runnable collections of API calls | Modular capability spanning multiple APIs |
-| **Key strengths** | ✓ Endpoints & HTTP methods, ✓ Request/response schemas, ✓ Authentication requirements, ✓ Data types & validation, ✓ SDK & docs generation | ✓ Multi-step sequences, ✓ Step dependencies & data flow, ✓ Success/failure criteria, ✓ Reusable workflow definitions | ✓ Runnable, shareable collections, ✓ Pre-request scripts & tests, ✓ Environment variables, ✓ Living, executable docs | ✓ Consume/expose duality, ✓ Namespace-based routing, ✓ Orchestration & forwarding, ✓ AI-driven discovery, ✓ Composable capabilities |
+| **Key strengths** | ✓ Endpoints & HTTP methods ✓ Request/response schemas ✓ Authentication requirements ✓ Data types & validation ✓ SDK & docs generation | ✓ Multi-step sequences ✓ Step dependencies & data flow ✓ Success/failure criteria ✓ Reusable workflow definitions | ✓ Runnable, shareable collections ✓ Pre-request scripts & tests ✓ Environment variables ✓ Living, executable docs | ✓ Consume/expose duality ✓ Namespace-based routing ✓ Orchestration & forwarding ✓ AI-driven discovery ✓ Composable capabilities |
| **Analogy** | The *parts list* and dimensions | The *assembly sequence* between parts | The *step-by-step assembly guide* you can run | The *product blueprint* — what goes in, what comes out |
| **Best used when you need to…** | Define & document an API contract, generate SDKs, validate payloads | Describe multi-step API workflows with dependencies | Share runnable API examples, test workflows, onboard developers | Declare a composable capability that consumes sources and exposes unified interfaces |
@@ -99,15 +88,15 @@ This is the root object of the Naftiko document.
| Field Name | Type | Description |
| --- | --- | --- |
-| **naftiko** | `string` | **REQUIRED**. Version of the Naftiko schema. MUST be `"0.5"` for this version. |
-| **info** | `Info` | *Recommended*. Metadata about the capability. |
+| **naftiko** | `string` | **REQUIRED**. Version of the Naftiko schema. MUST be `"0.4"` for this version. |
+| **info** | `Info` | **REQUIRED**. Metadata about the capability. |
| **capability** | `Capability` | **REQUIRED**. Technical configuration of the capability including sources and adapters. |
| **externalRefs** | `ExternalRef[]` | List of external references for variable injection. Each entry declares injected variables via a `keys` map. |
#### 3.1.2 Rules
-- The `naftiko` field MUST be present and MUST have the value `"0.5"` for documents conforming to this version of the specification.
-- The `capability` object MUST be present. The `info` object is recommended.
+- The `naftiko` field MUST be present and MUST have the value `"0.4"` for documents conforming to this version of the specification.
+- Both `info` and `capability` objects MUST be present.
- The `externalRefs` field is OPTIONAL. When present, it MUST contain at least one entry.
- No additional properties are allowed at the root level.
@@ -122,7 +111,7 @@ Provides metadata about the capability.
| Field Name | Type | Description |
| --- | --- | --- |
| **label** | `string` | **REQUIRED**. The display name of the capability. |
-| **description** | `string` | *Recommended*. A description of the capability. The more meaningful it is, the easier for agent discovery. |
+| **description** | `string` | **REQUIRED**. A description of the capability. The more meaningful it is, the easier for agent discovery. |
| **tags** | `string[]` | List of tags to help categorize the capability for discovery and filtering. |
| **created** | `string` | Date the capability was created (format: `YYYY-MM-DD`). |
| **modified** | `string` | Date the capability was last modified (format: `YYYY-MM-DD`). |
@@ -130,7 +119,7 @@ Provides metadata about the capability.
#### 3.2.2 Rules
-- The `label` field is mandatory. The `description` field is recommended to improve agent discovery.
+- Both `label` and `description` are mandatory.
- No additional properties are allowed.
#### 3.2.3 Info Object Example
@@ -187,14 +176,13 @@ Defines the technical configuration of the capability.
| Field Name | Type | Description |
| --- | --- | --- |
-| **exposes** | `Exposes[]` | List of exposed server adapters. Each entry is an API Expose (`type: "api"`) or an MCP Expose (`type: "mcp"`). |
-| **consumes** | `Consumes[]` | List of consumed client adapters. |
+| **exposes** | `Exposes[]` | **REQUIRED**. List of exposed server adapters. |
+| **consumes** | `Consumes[]` | **REQUIRED**. List of consumed client adapters. |
#### 3.4.2 Rules
-- At least one of `exposes` or `consumes` MUST be present.
-- When present, the `exposes` array MUST contain at least one entry.
-- When present, the `consumes` array MUST contain at least one entry.
+- The `exposes` array MUST contain at least one entry.
+- The `consumes` array MUST contain at least one entry.
- Each `consumes` entry MUST include both `baseUri` and `namespace` fields.
- There are several types of exposed adapter and consumed source objects; each is described in the sections that follow.
- No additional properties are allowed.
@@ -252,16 +240,13 @@ capability:
Describes a server adapter that exposes functionality.
-> Update (schema v0.5): Two exposition adapter types are now supported — **API** (`type: "api"`) and **MCP** (`type: "mcp"`). Legacy `httpProxy` / `rest` exposition types are not part of the JSON Schema anymore.
+> Update (schema v0.4): The exposition adapter is **API** with `type: "api"` (and a required `namespace`). Legacy `httpProxy` / `rest` exposition types are not part of the JSON Schema anymore.
>
#### 3.5.1 API Expose
API exposition configuration.
-> Update (schema v0.5): The Exposes object is now a discriminated union (`oneOf`) between **API** (`type: "api"`, this section) and **MCP** (`type: "mcp"`, see §3.5.4). The `type` field acts as discriminator.
->
-
**Fixed Fields:**
| Field Name | Type | Description |
@@ -282,7 +267,7 @@ An exposed resource with **operations** and/or **forward** configuration.
| Field Name | Type | Description |
| --- | --- | --- |
| **path** | `string` | **REQUIRED**. Path of the resource (supports `param` placeholders). |
-| **description** | `string` | *Recommended*. Used to provide *meaningful* information about the resource. In a world of agents, context is king. |
+| **description** | `string` | **REQUIRED**. Used to provide *meaningful* information about the resource. In a world of agents, context is king. |
| **name** | `string` | Technical name for the resource (used for references, pattern `^[a-zA-Z0-9-]+$`). |
| **label** | `string` | Display name for the resource (likely used in UIs). |
| **inputParameters** | `ExposedInputParameter[]` | Input parameters attached to the resource. |
@@ -291,127 +276,18 @@ An exposed resource with **operations** and/or **forward** configuration.
#### 3.5.3 Rules
-- The `path` field is mandatory. The `description` field is recommended to provide meaningful context for agent discovery.
+- Both `description` and `path` are mandatory.
- At least one of `operations` or `forward` MUST be present. Both can coexist on the same resource.
- If both `operations` and `forward` are present, `operations` takes precedence over `forward` in case of conflict.
- No additional properties are allowed.
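As an illustration, a resource can combine explicit `operations` with a `forward` fallback. In this hypothetical sketch, the expose namespace `api`, the consumed namespace `acme`, the operation `get-order`, the parameter names, and the `{order-id}` placeholder syntax are all assumptions, and other ExposedOperation fields are omitted:

```yaml
path: "orders/{order-id}"
description: A single order, resolved from the consumed Acme API.
operations:
  - call: acme.get-order            # simple mode: direct call to a consumed operation
    with:
      order-id: "$this.api.order-id"
forward:
  targetNamespace: acme             # unmatched requests pass through to the consumed namespace
```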
-#### 3.5.4 MCP Expose
-
-MCP Server exposition configuration. Exposes capability operations as MCP tools over Streamable HTTP or stdio transport.
-
-> New in schema v0.5.
->
-
-**Fixed Fields:**
-
-| Field Name | Type | Description |
-| --- | --- | --- |
-| **type** | `string` | **REQUIRED**. MUST be `"mcp"`. |
-| **transport** | `string` | Transport protocol. One of: `"http"` (default), `"stdio"`. `"http"` exposes a Streamable HTTP server; `"stdio"` uses stdin/stdout JSON-RPC for local IDE integration. |
-| **address** | `string` | Server address. Can be a hostname, IPv4, or IPv6 address. |
-| **port** | `integer` | **REQUIRED when transport is `"http"`**. Port number (1–65535). MUST NOT be present when transport is `"stdio"`. |
-| **namespace** | `string` | **REQUIRED**. Unique identifier for this exposed MCP server. |
-| **description** | `string` | *Recommended*. A meaningful description of the MCP server's purpose. Sent as server instructions during MCP initialization. |
-| **tools** | `McpTool[]` | **REQUIRED**. List of MCP tools exposed by this server (minimum 1). |
-
-**Rules:**
-
-- The `type` field MUST be `"mcp"`.
-- The `namespace` field is mandatory and MUST be unique across all exposes entries.
-- The `tools` array is mandatory and MUST contain at least one entry.
-- When `transport` is `"http"` (or omitted, since `"http"` is the default), the `port` field is required.
-- When `transport` is `"stdio"`, the `port` field MUST NOT be present.
-- No additional properties are allowed.
-
-#### 3.5.5 McpTool Object
-
-An MCP tool definition. Each tool maps to one or more consumed HTTP operations, similar to ExposedOperation but adapted for the MCP protocol (no HTTP method, tool-oriented input schema).
-
-> The McpTool supports the same two modes as ExposedOperation: **simple** (direct `call` + `with`) and **orchestrated** (multi-step with `steps` + `mappings`).
->
-
-**Fixed Fields:**
-
-| Field Name | Type | Description |
-| --- | --- | --- |
-| **name** | `string` | **REQUIRED**. Technical name for the tool. Used as the MCP tool name. MUST match pattern `^[a-zA-Z0-9-]+$`. |
-| **description** | `string` | **REQUIRED**. A meaningful description of the tool. Essential for agent discovery. |
-| **inputParameters** | `McpToolInputParameter[]` | Tool input parameters. These become the MCP tool's input schema (JSON Schema). |
-| **call** | `string` | **Simple mode only**. Reference to a consumed operation. Format: `{namespace}.{operationId}`. MUST match pattern `^[a-zA-Z0-9-]+\.[a-zA-Z0-9-]+$`. |
-| **with** | `WithInjector` | **Simple mode only**. Parameter injection for the called operation. |
-| **steps** | `OperationStep[]` | **Orchestrated mode only. REQUIRED** (at least 1 step). Sequence of calls to consumed operations. |
-| **mappings** | `StepOutputMapping[]` | **Orchestrated mode only**. Maps step outputs to the tool's output parameters. |
-| **outputParameters** (simple) | `MappedOutputParameter[]` | **Simple mode**. Output parameters mapped from the consumed operation response. |
-| **outputParameters** (orchestrated) | `OrchestratedOutputParameter[]` | **Orchestrated mode**. Output parameters with name and type. |
-
-**Modes:**
-
-**Simple mode** — direct call to a single consumed operation:
-
-- `call` is **REQUIRED**
-- `with` is optional
-- `outputParameters` are `MappedOutputParameter[]`
-- `steps` MUST NOT be present
-
-**Orchestrated mode** — multi-step orchestration:
-
-- `steps` is **REQUIRED** (at least 1 entry)
-- `mappings` is optional
-- `outputParameters` are `OrchestratedOutputParameter[]`
-- `call` and `with` MUST NOT be present
-
-**Rules:**
-
-- Both `name` and `description` are mandatory.
-- Exactly one of the two modes MUST be used (simple or orchestrated).
-- In simple mode, `call` MUST follow the format `{namespace}.{operationId}` and reference a valid consumed operation.
-- In orchestrated mode, the `steps` array MUST contain at least one entry.
-- The `$this` context reference works the same as for ExposedOperation: `$this.{mcpNamespace}.{paramName}` accesses the tool's input parameters.
-- No additional properties are allowed.
-
-#### 3.5.6 McpToolInputParameter Object
-
-Declares an input parameter for an MCP tool. These become properties in the tool's JSON Schema input definition.
-
-> Unlike `ExposedInputParameter`, MCP tool parameters have no `in` field (no HTTP location concept) and include a `required` flag.
->
-
-**Fixed Fields:**
-
-| Field Name | Type | Description |
-| --- | --- | --- |
-| **name** | `string` | **REQUIRED**. Parameter name. Becomes a property name in the tool's input schema. MUST match pattern `^[a-zA-Z0-9-_*]+$`. |
-| **type** | `string` | **REQUIRED**. Data type. One of: `string`, `number`, `integer`, `boolean`, `array`, `object`. |
-| **description** | `string` | **REQUIRED**. A meaningful description of the parameter. Used for agent discovery and tool documentation. |
-| **required** | `boolean` | Whether the parameter is required. Defaults to `true`. |
-
-**Rules:**
-
-- The `name`, `type`, and `description` fields are all mandatory.
-- The `type` field MUST be one of: `"string"`, `"number"`, `"integer"`, `"boolean"`, `"array"`, `"object"`.
-- The `required` field defaults to `true` when omitted.
-- No additional properties are allowed.
-
-**McpToolInputParameter Example:**
-
-```yaml
-- name: database_id
- type: string
- description: The unique identifier of the Notion database
-- name: page_size
- type: number
- description: Number of results per page (max 100)
- required: false
-```
-
-#### 3.5.7 Address Validation Patterns
+#### 3.5.4 Address Validation Patterns
- **Hostname**: `^([a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)(\\.[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$`
- **IPv4**: `^((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$`
- **IPv6**: `^([0-9a-fA-F]{0,4}:){2,7}[0-9a-fA-F]{0,4}$`
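For reference, each pattern accepts values like the following (illustrative only):

```yaml
address: api.example.com    # hostname
# address: 192.168.0.10     # IPv4
# address: "2001:db8::1"    # IPv6
```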
-#### 3.5.8 Exposes Object Examples
+#### 3.5.5 Exposes Object Examples
**API Expose with operations:**
@@ -466,32 +342,13 @@ resources:
- Authorization
```
-**MCP Expose with a single tool:**
-
-```yaml
-type: mcp
-port: 3001
-namespace: tools
-description: "AI-facing tools for database operations"
-tools:
- - name: get-database
- description: "Retrieve metadata about a database by its ID"
- inputParameters:
- - name: database_id
- type: string
- description: "The unique identifier of the database"
- call: api.get-database
- with:
- database_id: "$this.tools.database_id"
-```
-
---
### 3.6 Consumes Object
Describes a client adapter for consuming external APIs.
-> Update (schema v0.5): `targetUri` is now `baseUri`. The `headers` field has been removed — use `inputParameters` with `in: "header"` instead.
+> Update (schema v0.4): `targetUri` is now `baseUri`. The `headers` field has been removed — use `inputParameters` with `in: "header"` instead.
>
#### 3.6.1 Fixed Fields
@@ -502,7 +359,7 @@ Describes a client adapter for consuming external APIs.
| **namespace** | `string` | Path suffix used for routing from exposes. MUST match pattern `^[a-zA-Z0-9-]+$`. |
| **baseUri** | `string` | **REQUIRED**. Base URI for the consumed API. Must be a valid http(s) URL (no `path` placeholder in the schema). |
| **authentication** | Authentication Object | Authentication configuration. Defaults to `"inherit"`. |
-| **description** | `string` | *Recommended*. A description of the consumed API. The more meaningful it is, the easier for agent discovery. |
+| **description** | `string` | **REQUIRED**. A description of the consumed API. The more meaningful it is, the easier for agent discovery. |
| **inputParameters** | `ConsumedInputParameter[]` | Input parameters applied to all operations in this consumed API. |
| **resources** | `ConsumedHttpResource[]` | **REQUIRED**. List of API resources. |
@@ -512,7 +369,7 @@ Describes a client adapter for consuming external APIs.
- The `baseUri` field is required.
- The `namespace` field is required and MUST be unique across all consumes entries.
- The `namespace` value MUST match the pattern `^[a-zA-Z0-9-]+$` (alphanumeric and hyphens only).
-- The `description` field is recommended to improve agent discovery.
+- The `description` field is required.
- The `resources` array is required and MUST contain at least one entry.
#### 3.6.3 Base URI Format
@@ -662,7 +519,7 @@ outputParameters:
Describes an operation exposed on an exposed resource.
-> Update (schema v0.5): ExposedOperation now supports two modes via `oneOf` — **simple** (direct call with mapped output) and **orchestrated** (multi-step with named operation). The `call` and `with` fields are new. The `name` and `steps` fields are only required in orchestrated mode.
+> Update (schema v0.4): ExposedOperation now supports two modes via `oneOf` — **simple** (direct call with mapped output) and **orchestrated** (multi-step with named operation). The `call` and `with` fields are new. The `name` and `steps` fields are only required in orchestrated mode.
>
#### 3.9.1 Fixed Fields
@@ -865,7 +722,7 @@ body: |
### 3.11 InputParameter Objects
-> Update (schema v0.5): The single `InputParameter` object has been split into two distinct types: **ConsumedInputParameter** (used in consumes) and **ExposedInputParameter** (used in exposes, with additional `type` and `description` fields required).
+> Update (schema v0.4): The single `InputParameter` object has been split into two distinct types: **ConsumedInputParameter** (used in consumes) and **ExposedInputParameter** (used in exposes, with additional `type` and `description` fields required).
>
#### 3.11.1 ConsumedInputParameter Object
@@ -902,7 +759,7 @@ Used in consumed resources and operations.
#### 3.11.2 ExposedInputParameter Object
-Used in exposed resources and operations. Extends the consumed variant with `type` (required) and `description` (recommended) for agent discoverability, plus an optional `pattern` for validation.
+Used in exposed resources and operations. Extends the consumed variant with `type` and `description` (both required) for agent discoverability, plus an optional `pattern` for validation.
**Fixed Fields:**
@@ -910,17 +767,17 @@ Used in exposed resources and operations. Extends the consumed variant with `typ
| --- | --- | --- |
| **name** | `string` | **REQUIRED**. Parameter name. MUST match pattern `^[a-zA-Z0-9-*]+$`. |
| **in** | `string` | **REQUIRED**. Parameter location. Valid values: `"query"`, `"header"`, `"path"`, `"cookie"`, `"body"`. |
-| **type** | `string` | **REQUIRED**. Data type of the parameter. One of: `string`, `number`, `integer`, `boolean`, `object`, `array`. |
-| **description** | `string` | *Recommended*. Human-readable description of the parameter. Provides valuable context for agent discovery. |
+| **type** | `string` | **REQUIRED**. Data type of the parameter. One of: `string`, `number`, `boolean`, `object`, `array`. |
+| **description** | `string` | **REQUIRED**. Human-readable description of the parameter. Essential for agent discovery. |
| **pattern** | `string` | Optional regex pattern for parameter value validation. |
| **value** | `string` | Default value or JSONPath reference. |
**Rules:**
-- The `name`, `in`, and `type` fields are mandatory. The `description` field is recommended for agent discovery.
+- All of `name`, `in`, `type`, and `description` are mandatory.
- The `name` field MUST match the pattern `^[a-zA-Z0-9-*]+$`.
- The `in` field MUST be one of: `"query"`, `"header"`, `"path"`, `"cookie"`, `"body"`.
-- The `type` field MUST be one of: `"string"`, `"number"`, `"integer"`, `"boolean"`, `"object"`, `"array"`.
+- The `type` field MUST be one of: `"string"`, `"number"`, `"boolean"`, `"object"`, `"array"`.
- No additional properties are allowed.
**ExposedInputParameter Example:**
@@ -941,7 +798,7 @@ Used in exposed resources and operations. Extends the consumed variant with `typ
### 3.12 OutputParameter Objects
-> Update (schema v0.5): The single `OutputParameter` object has been split into three distinct types: **ConsumedOutputParameter** (used in consumed operations), **MappedOutputParameter** (used in simple-mode exposed operations), and **OrchestratedOutputParameter** (used in orchestrated-mode exposed operations).
+> Update (schema v0.4): The single `OutputParameter` object has been split into three distinct types: **ConsumedOutputParameter** (used in consumed operations), **MappedOutputParameter** (used in simple-mode exposed operations), and **OrchestratedOutputParameter** (used in orchestrated-mode exposed operations).
>
#### 3.12.1 ConsumedOutputParameter Object
@@ -988,7 +845,7 @@ Used in **simple mode** exposed operations. Maps a value from the consumed respo
**Subtypes by type:**
- **`string`**, **`number`**, **`boolean`**: `mapping` is a JSONPath string (e.g. `$.login`)
- **`object`**: `mapping` is `{ properties: { key: MappedOutputParameter, ... } }` — recursive
- **`array`**: `mapping` is `{ items: MappedOutputParameter }` — recursive
@@ -1073,7 +930,7 @@ outputParameters:
type: string
```
#### 3.12.4 JSONPath roots (extensions)
In a consumed resource, **`$`** refers to the *raw response payload* of the consumed operation (after decoding based on `outputRawFormat`). The root `$` gives direct access to the JSON response body.
@@ -1110,7 +967,7 @@ Example, if you consider the following JSON response :
Describes a single step in an orchestrated operation. `OperationStep` is a `oneOf` between two subtypes: **OperationStepCall** and **OperationStepLookup**, both sharing a common **OperationStepBase**.
-> Update (schema v0.5): OperationStep is now a discriminated union (`oneOf`) with a required `type` field (`"call"` or `"lookup"`) and a required `name` field. `OperationStepCall` uses `with` (WithInjector) instead of `inputParameters`. `OperationStepLookup` is entirely new.
+> Update (schema v0.4): OperationStep is now a discriminated union (`oneOf`) with a required `type` field (`"call"` or `"lookup"`) and a required `name` field. `OperationStepCall` uses `with` (WithInjector) instead of `inputParameters`. `OperationStepLookup` is entirely new.
>
#### 3.13.1 OperationStepBase (shared fields)
@@ -1156,7 +1013,7 @@ Performs a lookup against the output of a previous call step, matching values an
| **name** | `string` | **REQUIRED**. Step name (from base). |
| **index** | `string` | **REQUIRED**. Name of a previous call step whose output serves as the lookup table. MUST match pattern `^[a-zA-Z0-9-]+$`. |
| **match** | `string` | **REQUIRED**. Name of the key field in the index to match against. MUST match pattern `^[a-zA-Z0-9-]+$`. |
| **lookupValue** | `string` | **REQUIRED**. JSONPath expression resolving to the value(s) to look up. |
| **outputParameters** | `string[]` | **REQUIRED**. List of field names to extract from the matched index entries (minimum 1 entry). |
**Rules:**
@@ -1238,7 +1095,7 @@ Describes how to map the output of an operation step to the input of another ste
| Field Name | Type | Description |
| --- | --- | --- |
| **targetName** | `string` | **REQUIRED**. The name of the parameter to map to. It can be an input parameter of a next step or an output parameter of the exposed operation. |
| **value** | `string` | **REQUIRED**. A JSONPath reference to the value to map from. E.g. `$.get-database.database_id`. |
#### 3.14.2 Rules
@@ -1250,7 +1107,7 @@ Describes how to map the output of an operation step to the input of another ste
A StepOutputMapping connects the **output parameters of a consumed operation** (called by the step) to the **output parameters of the exposed operation** (or to input parameters of subsequent steps).
- **`targetName`** — refers to the `name` of an output parameter declared on the exposed operation, or the `name` of an input parameter of a subsequent step. The target parameter receives its value from the mapping.
- **`value`** — a JSONPath expression where **`$`** is the root of the consumed operation's output parameters. The syntax `$.{outputParameterName}` references a named output parameter of the consumed operation called in this step.
#### 3.14.4 End-to-end example
@@ -1323,7 +1180,7 @@ mappings:
Describes how `$this` references work in `with` (WithInjector) and other expression contexts.
-> Update (schema v0.5): The former `OperationStepParameter` object (with `name` and `value` fields) has been replaced by `WithInjector` (see §3.18). This section now documents the `$this` expression root, which is used within `WithInjector` values.
+> Update (schema v0.4): The former `OperationStepParameter` object (with `name` and `value` fields) has been replaced by `WithInjector` (see §3.18). This section now documents the `$this` expression root, which is used within `WithInjector` values.
>
#### 3.15.1 The `$this` root
@@ -1452,7 +1309,7 @@ authentication:
Defines forwarding configuration for an exposed resource to pass requests through to a consumed namespace.
-> Update (schema v0.5): Renamed from `ForwardHeaders` to `ForwardConfig`. The `targetNamespaces` array has been replaced by a single `targetNamespace` string.
+> Update (schema v0.4): Renamed from `ForwardHeaders` to `ForwardConfig`. The `targetNamespaces` array has been replaced by a single `targetNamespace` string.
>
#### 3.17.1 Fixed Fields
@@ -1486,7 +1343,7 @@ forward:
Defines parameter injection for simple-mode exposed operations. Used with the `with` field on an ExposedOperation to inject values into the called consumed operation.
-> New in schema v0.5.
+> New in schema v0.4.
>
#### 3.18.1 Shape
@@ -1533,7 +1390,7 @@ Loads variables from a local file. Intended for **local development only**.
| Field Name | Type | Description |
| --- | --- | --- |
| **name** | `string` | **REQUIRED**. Unique identifier (kebab-case). MUST match pattern `^[a-zA-Z0-9-]+$`. |
-| **description** | `string` | *Recommended*. Used to provide *meaningful* information about the external reference. In a world of agents, context is king. |
+| **description** | `string` | **REQUIRED**. Used to provide *meaningful* information about the external reference. In a world of agents, context is king. |
| **type** | `string` | **REQUIRED**. MUST be `"environment"`. |
| **resolution** | `string` | **REQUIRED**. MUST be `"file"`. |
| **uri** | `string` | **REQUIRED**. URI pointing to the file (e.g. `file:///path/to/env.json`). |
@@ -1541,7 +1398,7 @@ Loads variables from a local file. Intended for **local development only**.
**Rules:**
-- The `name`, `type`, `resolution`, `uri`, and `keys` fields are mandatory. The `description` field is recommended.
+- All fields (`name`, `description`, `type`, `resolution`, `uri`, `keys`) are mandatory.
- No additional properties are allowed.
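For example, a file-based external reference that injects a single variable could be declared like this (the file path, variable name and environment key are illustrative):

```yaml
externalRefs:
  - name: local-env
    description: Local development secrets for the Acme API, loaded from a JSON file.
    type: environment
    resolution: file
    uri: file:///path/to/env.json
    keys:
      acme_token: ACME_API_TOKEN   # {{acme_token}} resolves to the value stored under ACME_API_TOKEN
```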
#### 3.19.2 Runtime-Resolved ExternalRef
@@ -1596,7 +1453,6 @@ Example: `{"notion_token": "NOTION_INTEGRATION_TOKEN"}` means the value of `NOTI
- Each `name` value MUST be unique across all `externalRefs` entries.
- The `name` value MUST NOT collide with any `consumes` namespace to avoid ambiguity.
- The `keys` map MUST contain at least one entry.
-- Variable names (keys in the `keys` map) SHOULD be unique across all `externalRefs` entries. If the same variable name appears in multiple entries, the expression MUST use the qualified form `{{name.variable}}` (where `name` is the `name` of the `externalRefs` entry) to disambiguate which source provides the value.
- No additional properties are allowed on either variant.