42 changes: 42 additions & 0 deletions .github/workflows/openapi_spectral.yaml
@@ -0,0 +1,42 @@
# OpenAPI: regenerate check (Spectral + committed docs/openapi.json drift).
# - scripts/generate_openapi_schema.py builds the spec from FastAPI (app.main).
# - CI fails if docs/openapi.json does not match generator output (run locally:
# uv run scripts/generate_openapi_schema.py docs/openapi.json).
name: OpenAPI (Spectral)

on:
- push
- pull_request

jobs:
spectral:
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: read
steps:
- uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v5
with:
python-version: "3.12"
- name: Install dependencies
# Same pattern as local dev (CONTRIBUTING.md): dev + llslibdev for a full app import.
run: uv sync --group dev --group llslibdev
- name: Install PDM
# scripts/generate_openapi_schema.py asserts OpenAPI info.version matches `pdm show --version`.
run: uv pip install pdm
- name: Verify docs/openapi.json matches generator
run: |
set -euo pipefail
uv run python scripts/generate_openapi_schema.py /tmp/openapi-generated.json
if ! diff -u docs/openapi.json /tmp/openapi-generated.json; then
echo "::error::docs/openapi.json is out of date. Regenerate with: uv run scripts/generate_openapi_schema.py docs/openapi.json"
exit 1
fi
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: "22"
- name: Spectral lint
run: npx --yes @stoplight/spectral-cli@6 lint docs/openapi.json --fail-severity error --display-only-failures
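Note that the drift step above compares files byte-for-byte with `diff -u`: a document that parses to the same JSON but is formatted differently still fails the check, which is why `docs/openapi.json` must be the generator's verbatim output. A small illustration of the distinction:

```python
import json

committed = '{"openapi": "3.1.0"}'
generated = '{ "openapi": "3.1.0" }'  # same content, different formatting

# Byte-level comparison, as `diff -u` does in CI: formatting counts.
print(committed == generated)  # False

# Semantic comparison after parsing: the documents are equal.
print(json.loads(committed) == json.loads(generated))  # True
```

Because CI checks bytes, hand-editing `docs/openapi.json` is never safe; always regenerate it with the script.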
9 changes: 9 additions & 0 deletions .spectral.yaml
@@ -0,0 +1,9 @@
# Spectral (OpenAPI) — https://meta.stoplight.io/docs/spectral/
#
# Full Stoplight OAS ruleset. `oas3-valid-media-example` is off: examples in
# docs/openapi.json are often partial and produced ~600 warnings with little
# signal until examples are aligned with schemas. Re-enable when tightening
# examples (set to "warn" or "error").
extends: spectral:oas
rules:
oas3-valid-media-example: off
7 changes: 7 additions & 0 deletions CONTRIBUTING.md
@@ -18,6 +18,7 @@
* [Pre-commit hook settings](#pre-commit-hook-settings)
* [Code coverage measurement](#code-coverage-measurement)
* [Linters](#linters)
* [OpenAPI (Spectral)](#openapi-spectral)
* [Type hints checks](#type-hints-checks)
* [Ruff](#ruff)
* [Pylint](#pylint)
@@ -48,6 +49,7 @@
- git
- Python 3.12 or 3.13
- pip
- **Node.js** (18 or newer; **npm** and **`npx`** ship with Node). **`make verify`** runs **`lint-openapi`**, which calls **`npx --yes @stoplight/spectral-cli@6`** (see `Makefile`). If **`npx`** is not installed, **`lint-openapi` is skipped** with a message so **`make verify` still succeeds** locally; install Node to run the OpenAPI check. **CI** always runs Spectral (see `.github/workflows/openapi_spectral.yaml`).

Development requires at least [Python 3.12](https://docs.python.org/3/whatsnew/3.12.html) due to its significant performance improvements, optimizations that benefit modern ML, AI, LLM, and NLP stacks, and improved asynchronous processing capabilities. Python 3.13 can also be used.

@@ -57,6 +59,7 @@ The development requires at least [Python 3.12](https://docs.python.org/3/whatsn

1. `pip install --user uv`
1. `uv --version` -- should return no error
1. Install [Node.js](https://nodejs.org/en/download) (LTS is fine), either from the download page or via your OS package manager (Fedora: `sudo dnf install nodejs`; macOS with [Homebrew](https://brew.sh/): `brew install node`). Confirm that `node --version` and `npx --version` work. CI uses Node 22 for Spectral (see `.github/workflows/openapi_spectral.yaml`).



@@ -217,6 +220,10 @@ Code coverage reports are generated in JSON and also in format compatible with [

_Black_, _Ruff_, _Pyright_, _Pylint_, _Pydocstyle_, _Mypy_, and _Bandit_ are used as linters. A number of linter rules are enabled for this repository; all of them are specified in `pyproject.toml`, in sections such as `[tool.ruff]` and `[tool.pylint."MESSAGES CONTROL"]`. Specific rules can be disabled using the `ignore` parameter (currently empty).

### OpenAPI (Spectral)

OpenAPI is linted with [Spectral](https://stoplight.io/open-api/) via **`npx --yes @stoplight/spectral-cli@6`** in the **`lint-openapi`** target (`make lint-openapi`, part of **`make verify`**). If **`npx`** is missing, **`lint-openapi`** skips Spectral locally; install **Node.js** to run it (see [Prerequisites](#prerequisites) and [Tooling installation](#tooling-installation)). **CI** always runs the Spectral step. If you introduce a **new** router tag (`APIRouter(tags=[...])`), you must also extend the global tag list in `src/app/main.py` and regenerate `docs/openapi.json`. See **[docs/contributing/openapi-tags-and-spectral.md](docs/contributing/openapi-tags-and-spectral.md)**.

### Type hints checks

8 changes: 8 additions & 0 deletions Makefile
@@ -113,13 +113,21 @@ docstyle: ## Check the docstring style using Docstyle checker
ruff: ## Check source code using Ruff linter
uv run ruff check src tests --per-file-ignores=tests/*:S101 --per-file-ignores=scripts/*:S101

lint-openapi: ## Lint docs/openapi.json (Spectral OAS ruleset; fail on error)
@if command -v npx >/dev/null 2>&1; then \
npx --yes @stoplight/spectral-cli@6 lint docs/openapi.json --fail-severity error --display-only-failures; \
else \
echo "lint-openapi: skipping Spectral (npx not found). Install Node.js for OpenAPI lint locally; CI still runs it."; \
fi

verify: ## Run all linters
$(MAKE) black
$(MAKE) pylint
$(MAKE) pyright
$(MAKE) ruff
$(MAKE) docstyle
$(MAKE) check-types
$(MAKE) lint-openapi

distribution-archives: ## Generate distribution archives to be uploaded into Python registry
rm -rf dist
27 changes: 27 additions & 0 deletions docs/contributing/openapi-tags-and-spectral.md
@@ -0,0 +1,27 @@
# OpenAPI tags and Spectral

## Global tag list (`_OPENAPI_TAGS`)

In `src/app/endpoints/`, route tags come from **`APIRouter(tags=[...])`**, which FastAPI uses when it builds the OpenAPI description for each operation.

The OpenAPI document must list those tags at the top level for [Spectral](https://stoplight.io/open-api/)'s **`operation-tag-defined`** rule to pass, so we keep **`_OPENAPI_TAGS`** in **`src/app/main.py`** and pass it into the **`FastAPI`** app as **`openapi_tags`**.

**When you add a new router or change `tags=[...]` to use a new tag name**, add a matching entry to **`_OPENAPI_TAGS`** (same `name` string, plus a short `description` for the docs).
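The invariant behind this rule can be sketched in plain Python. This is an illustrative, hypothetical helper (the tag names and the `undefined_tags` function are made up for the example; the real registry is `_OPENAPI_TAGS` in `src/app/main.py`):

```python
# Hypothetical sketch: a global tag registry shaped like _OPENAPI_TAGS and a
# check for the invariant Spectral's operation-tag-defined rule enforces:
# every tag used by an operation must be declared at the document top level.
_OPENAPI_TAGS = [
    {"name": "health", "description": "Health and readiness probes."},
    {"name": "models", "description": "LLM models."},
]

def undefined_tags(router_tags, global_tags=_OPENAPI_TAGS):
    """Return router tags that are missing from the global tag list."""
    declared = {tag["name"] for tag in global_tags}
    return sorted(set(router_tags) - declared)

print(undefined_tags(["health", "models"]))  # []
print(undefined_tags(["health", "rags"]))    # ['rags']
```

A non-empty result corresponds to a Spectral failure; the fix is to add the matching entry to the global list and regenerate the document.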

The schema generator **`scripts/generate_openapi_schema.py`** passes **`tags=app.openapi_tags`** into **`get_openapi()`** so **`docs/openapi.json`** includes the top-level `tags` array. Regenerate after tag changes:

```bash
uv run scripts/generate_openapi_schema.py docs/openapi.json
```

## Linting (`make lint-openapi`)

Spectral is configured in **`.spectral.yaml`** (extends `spectral:oas`). Run:

```bash
make lint-openapi
```

This is part of **`make verify`**. If **`npx`** is not on your **`PATH`**, the Makefile **skips** Spectral and prints a short message so **`make verify`** can still pass; install Node.js to run the check locally. **CI** (`.github/workflows/openapi_spectral.yaml`) always runs Spectral. Failures are driven by **error**-severity rules.
The rule **`oas3-valid-media-example`** (examples must match schemas) is **turned off** in **`.spectral.yaml`** because the generated spec carries many partial examples that produced hundreds of noisy warnings. Turn it back on when examples are brought in line with schemas.
155 changes: 142 additions & 13 deletions docs/openapi.json
@@ -17,7 +17,7 @@
},
"servers": [
{
"url": "http://localhost:8080/",
"url": "http://localhost:8080",
"description": "Locally running service"
}
],
@@ -9262,8 +9262,7 @@
"schema": {
"type": "string"
},
"example": "event: response.created\ndata: {\"type\":\"response.created\",\"sequence_number\":0,\"response\":{\"id\":\"resp_abc\",\"object\":\"response\",\"created_at\":1704067200,\"status\":\"in_progress\",\"model\":\"openai/gpt-4o-mini\",\"output\":[],\"store\":true,\"text\":{\"format\":{\"type\":\"text\"}},\"conversation\":\"0d21ba731f21f798dc9680125d5d6f49\",\"available_quotas\":{},\"output_text\":\"\"}}\n\nevent: response.output_item.added\ndata: {\"type\":\"response.output_item.added\",\"sequence_number\":1,\"response_id\":\"resp_abc\",\"output_index\":0,\"item\":{\"id\":\"msg_abc\",\"type\":\"message\",\"status\":\"in_progress\",\"role\":\"assistant\",\"content\":[]}}\n\n...\n\nevent: response.completed\ndata: {\"type\":\"response.completed\",\"sequence_number\":30,\"response\":{\"id\":\"resp_abc\",\"object\":\"response\",\"created_at\":1704067200,\"status\":\"completed\",\"model\":\"openai/gpt-4o-mini\",\"output\":[{\"id\":\"msg_abc\",\"type\":\"message\",\"status\":\"completed\",\"role\":\"assistant\",\"content\":[{\"type\":\"output_text\",\"text\":\"Hello! How can I help?\",\"annotations\":[]}]}],\"store\":true,\"text\":{\"format\":{\"type\":\"text\"}},\"usage\":{\"input_tokens\":10,\"output_tokens\":6,\"total_tokens\":16,\"input_tokens_details\":{\"cached_tokens\":0},\"output_tokens_details\":{\"reasoning_tokens\":0}},\"conversation\":\"0d21ba731f21f798dc9680125d5d6f49\",\"available_quotas\":{\"daily\":1000,\"monthly\":50000},\"output_text\":\"Hello! How can I help?\"}}\n\ndata: [DONE]\n\n",
"description": "SSE stream of events"
"example": "event: response.created\ndata: {\"type\":\"response.created\",\"sequence_number\":0,\"response\":{\"id\":\"resp_abc\",\"object\":\"response\",\"created_at\":1704067200,\"status\":\"in_progress\",\"model\":\"openai/gpt-4o-mini\",\"output\":[],\"store\":true,\"text\":{\"format\":{\"type\":\"text\"}},\"conversation\":\"0d21ba731f21f798dc9680125d5d6f49\",\"available_quotas\":{},\"output_text\":\"\"}}\n\nevent: response.output_item.added\ndata: {\"type\":\"response.output_item.added\",\"sequence_number\":1,\"response_id\":\"resp_abc\",\"output_index\":0,\"item\":{\"id\":\"msg_abc\",\"type\":\"message\",\"status\":\"in_progress\",\"role\":\"assistant\",\"content\":[]}}\n\n...\n\nevent: response.completed\ndata: {\"type\":\"response.completed\",\"sequence_number\":30,\"response\":{\"id\":\"resp_abc\",\"object\":\"response\",\"created_at\":1704067200,\"status\":\"completed\",\"model\":\"openai/gpt-4o-mini\",\"output\":[{\"id\":\"msg_abc\",\"type\":\"message\",\"status\":\"completed\",\"role\":\"assistant\",\"content\":[{\"type\":\"output_text\",\"text\":\"Hello! How can I help?\",\"annotations\":[]}]}],\"store\":true,\"text\":{\"format\":{\"type\":\"text\"}},\"usage\":{\"input_tokens\":10,\"output_tokens\":6,\"total_tokens\":16,\"input_tokens_details\":{\"cached_tokens\":0},\"output_tokens_details\":{\"reasoning_tokens\":0}},\"conversation\":\"0d21ba731f21f798dc9680125d5d6f49\",\"available_quotas\":{\"daily\":1000,\"monthly\":50000},\"output_text\":\"Hello! How can I help?\"}}\n\ndata: [DONE]\n\n"
}
}
},
@@ -10600,15 +10599,31 @@
"tags": [
"a2a"
],
"summary": "Handle A2A Jsonrpc",
"description": "Handle A2A JSON-RPC requests following the A2A protocol specification.\n\nThis endpoint uses the DefaultRequestHandler from the A2A SDK to handle\nall JSON-RPC requests including message/send, message/stream, etc.\n\nThe A2A SDK application is created per-request to include authentication\ncontext while still leveraging FastAPI's authorization middleware.\n\nAutomatically detects streaming requests (message/stream JSON-RPC method)\nand returns a StreamingResponse to enable real-time chunk delivery.\n\nArgs:\n request: FastAPI request object\n auth: Authentication tuple\n mcp_headers: MCP headers for context propagation\n\nReturns:\n JSON-RPC response or streaming response",
"summary": "Handle A2A JSON-RPC GET",
"description": "Handle GET on /a2a for A2A JSON-RPC requests following the A2A protocol specification.",
"operationId": "handle_a2a_jsonrpc_a2a_get",
"responses": {
"200": {
"description": "Successful Response",
"description": "Successful response",
"content": {
"application/json": {
"schema": {}
"schema": {
"type": "object",
"description": "JSON-RPC 2.0 response or A2A-over-HTTP payload"
},
"example": {
"jsonrpc": "2.0",
"id": "1",
"result": {}
}
},
"text/event-stream": {
"schema": {
"type": "string",
"format": "text/event-stream",
"description": "Server-Sent Events stream when the JSON-RPC method is message/stream"
},
"example": "data: {\"jsonrpc\":\"2.0\",\"id\":\"1\",\"result\":{}}\n\n"
}
}
}
@@ -10618,15 +10633,31 @@
"tags": [
"a2a"
],
"summary": "Handle A2A Jsonrpc",
"description": "Handle A2A JSON-RPC requests following the A2A protocol specification.\n\nThis endpoint uses the DefaultRequestHandler from the A2A SDK to handle\nall JSON-RPC requests including message/send, message/stream, etc.\n\nThe A2A SDK application is created per-request to include authentication\ncontext while still leveraging FastAPI's authorization middleware.\n\nAutomatically detects streaming requests (message/stream JSON-RPC method)\nand returns a StreamingResponse to enable real-time chunk delivery.\n\nArgs:\n request: FastAPI request object\n auth: Authentication tuple\n mcp_headers: MCP headers for context propagation\n\nReturns:\n JSON-RPC response or streaming response",
"operationId": "handle_a2a_jsonrpc_a2a_get",
"summary": "Handle A2A JSON-RPC POST",
"description": "Handle POST on /a2a for A2A JSON-RPC requests following the A2A protocol specification.",
"operationId": "handle_a2a_jsonrpc_a2a_post",
"responses": {
"200": {
"description": "Successful Response",
"description": "Successful response",
"content": {
"application/json": {
"schema": {}
"schema": {
"type": "object",
"description": "JSON-RPC 2.0 response or A2A-over-HTTP payload"
},
"example": {
"jsonrpc": "2.0",
"id": "1",
"result": {}
}
},
"text/event-stream": {
"schema": {
"type": "string",
"format": "text/event-stream",
"description": "Server-Sent Events stream when the JSON-RPC method is message/stream"
},
"example": "data: {\"jsonrpc\":\"2.0\",\"id\":\"1\",\"result\":{}}\n\n"
}
}
}
@@ -20478,5 +20509,103 @@
]
}
}
}
},
"tags": [
{
"name": "a2a",
"description": "Agent-to-Agent (A2A) protocol."
},
{
"name": "authorized",
"description": "Authorization probe."
},
{
"name": "config",
"description": "Service configuration."
},
{
"name": "conversations_v1",
"description": "Conversations API v1."
},
{
"name": "conversations_v2",
"description": "Conversations API v2."
},
{
"name": "feedback",
"description": "User feedback."
},
{
"name": "health",
"description": "Health and readiness probes."
},
{
"name": "info",
"description": "Service information."
},
{
"name": "mcp-auth",
"description": "MCP client authentication options."
},
{
"name": "mcp-servers",
"description": "MCP server registration."
},
{
"name": "metrics",
"description": "Prometheus metrics."
},
{
"name": "models",
"description": "LLM models."
},
{
"name": "prompts",
"description": "Prompt management."
},
{
"name": "providers",
"description": "Inference providers."
},
{
"name": "query",
"description": "Non-streaming query."
},
{
"name": "rags",
"description": "RAG configuration."
},
{
"name": "responses",
"description": "OpenAI-compatible Responses API."
},
{
"name": "rlsapi-v1",
"description": "RLS API v1 (inference)."
},
{
"name": "root",
"description": "Service root."
},
{
"name": "shields",
"description": "Safety shields."
},
{
"name": "streaming_query",
"description": "Streaming query (SSE)."
},
{
"name": "streaming_query_interrupt",
"description": "Streaming interrupt."
},
{
"name": "tools",
"description": "Tools."
},
{
"name": "vector-stores",
"description": "Vector stores and files."
}
]
}
1 change: 1 addition & 0 deletions scripts/generate_openapi_schema.py
@@ -85,6 +85,7 @@ def read_version_from_pyproject():
license_info=app.license_info,
servers=app.servers,
contact=app.contact,
tags=app.openapi_tags,
)

# dump the schema into file