LCORE-1958: Correct OpenAPI spec violations in docs/openapi.json, add OpenAPI linting to CI #1565
syedriko wants to merge 3 commits into lightspeed-core:main from
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings. Use the following commands to manage reviews:
Walkthrough: Adds CI and local Spectral OpenAPI linting, centralizes OpenAPI tags in the app and generated schema, and splits the `/a2a` route into dedicated GET and POST handlers.
Changes
Sequence Diagram(s)
sequenceDiagram
participant Client as Client
participant Route as FastAPI Route (GET/POST /a2a)
participant Auth as Auth Dependency
participant MCP as mcp_headers Dependency
participant Handler as _handle_a2a_jsonrpc
participant ASGI as Internal ASGI A2A App
participant Response as Response / StreamingResponse
Client->>Route: HTTP GET/POST /a2a
Route->>Auth: resolve auth dependency
Auth-->>Route: AuthTuple
Route->>MCP: resolve mcp headers
MCP-->>Route: McpHeaders
Route->>Handler: call _handle_a2a_jsonrpc(request, AuthTuple, McpHeaders)
Handler->>Handler: read/parse body, detect message/stream
Handler->>ASGI: replay request via receive() into A2A ASGI app
ASGI-->>Handler: streamed events or buffered response
Handler-->>Response: return StreamingResponse or Response
Response-->>Client: HTTP response (application/json or text/event-stream)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/openapi_spectral.yaml:
- Around line 3-5: The workflow is triggering twice because the top-level "on"
includes both "push" and "pull_request" with no branch filters; update the "on"
configuration so "push" is scoped (e.g., set "push" to only run on protected
branches like main or tags) while keeping "pull_request" for PR checks to avoid
duplicate runs — modify the "on" block (the "push" and "pull_request" keys in
.github/workflows/openapi_spectral.yaml) to include appropriate branch filters
such as push: branches: [main] or pull_request: branches: [main], or use
branches-ignore on push to exclude feature branches.
In `@CONTRIBUTING.md`:
- Around line 221-223: Update the Prerequisites / Tooling section to list
Node.js (and npx) as a required prerequisite so contributors know it's needed
for make verify; mention that make verify runs the lint-openapi target which
invokes npx `@stoplight/spectral-cli`@6, and add a short note or link on how to
install Node/npx (or the minimum Node version) so contributors won't see "npx:
command not found" when running make verify.
In `@docs/contributing/openapi-tags-and-spectral.md`:
- Line 5: The sentence in docs/contributing/openapi-tags-and-spectral.md is too
dense; split it into two sentences for clarity: first explain that FastAPI
assigns route tags via APIRouter(tags=[...]) in src/app/endpoints/, then explain
the OpenAPI requirement and our solution—maintaining _OPENAPI_TAGS in
src/app/main.py and passing it as openapi_tags into the FastAPI app so
Spectral's operation-tag-defined rule passes. Keep references to
APIRouter(tags=[...]), src/app/endpoints/, _OPENAPI_TAGS, src/app/main.py, and
openapi_tags in the rewritten text.
In `@docs/openapi.json`:
- Around line 9938-9939: Update the public OpenAPI texts to use standard
JSON-RPC casing and remove implementation details: change the summary field
"Handle A2A Jsonrpc Get" to "Handle A2A JSON-RPC GET" and replace the
description that says "Handle GET on ``/a2a`` for A2A JSON-RPC (same handler as
POST)." with a concise client-focused sentence such as "Handle GET on /a2a to
perform a JSON-RPC request; returns JSON-RPC formatted responses."—edit the
summary and description values (the JSON keys shown in the diff) and apply the
same change at the other occurrence around lines 9956–9958.
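Applying the suggested wording, the operation metadata would read roughly as follows (fragment only; surrounding keys elided):

```json
{
  "summary": "Handle A2A JSON-RPC GET",
  "description": "Handle GET on /a2a to perform a JSON-RPC request; returns JSON-RPC formatted responses."
}
```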
In `@Makefile`:
- Around line 116-126: The verify target now depends on lint-openapi which
shells out to npx, adding an implicit Node/npx requirement; either document
Node/npx in CONTRIBUTING.md tooling or make lint-openapi tolerate missing npx so
local make verify won't fail—update the Makefile's lint-openapi target to check
for npx (using command -v npx or similar) and skip with an explanatory message
if not found, while leaving the CI workflow that runs the Spectral lint
unchanged so CI still enforces the check.
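One way to write such a guard (a sketch; the exact lint invocation and flags are assumptions based on the review's description of the target):

```make
# Makefile sketch: skip Spectral locally when npx is unavailable; CI still enforces it.
lint-openapi:
	@if command -v npx >/dev/null 2>&1; then \
		npx @stoplight/spectral-cli@6 lint --fail-severity=error docs/openapi.json; \
	else \
		echo "npx not found; skipping OpenAPI lint (CI will still run it)"; \
	fi
```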
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: e9710063-bf71-404d-a65e-296270bb9913
📒 Files selected for processing (10)
.github/workflows/openapi_spectral.yaml, .spectral.yaml, CONTRIBUTING.md, Makefile, docs/contributing/openapi-tags-and-spectral.md, docs/openapi.json, scripts/generate_openapi_schema.py, src/app/endpoints/a2a.py, src/app/main.py, src/models/responses.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (14)
- GitHub Check: unit_tests (3.12)
- GitHub Check: mypy
- GitHub Check: radon
- GitHub Check: black
- GitHub Check: bandit
- GitHub Check: build-pr
- GitHub Check: E2E: server mode / ci / group 1
- GitHub Check: E2E: server mode / ci / group 3
- GitHub Check: E2E: server mode / ci / group 2
- GitHub Check: E2E: library mode / ci / group 1
- GitHub Check: E2E Tests for Lightspeed Evaluation job
- GitHub Check: E2E: library mode / ci / group 2
- GitHub Check: Pylinter
- GitHub Check: E2E: library mode / ci / group 3
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Use absolute imports for internal modules: `from authentication import get_auth_dependency`
Import FastAPI dependencies with: `from fastapi import APIRouter, HTTPException, Request, status, Depends`
Import Llama Stack client with: `from llama_stack_client import AsyncLlamaStackClient`
Check `constants.py` for shared constants before defining new ones
All modules start with descriptive docstrings explaining purpose
Use `logger = get_logger(__name__)` from `log.py` for module logging
Type aliases defined at module level for clarity
Use `Final[type]` as type hint for all constants
All functions require docstrings with brief descriptions
Complete type annotations for parameters and return types in functions
Use `typing_extensions.Self` for model validators in Pydantic models
Use modern union type syntax `str | int` instead of `Union[str, int]`
Use `Optional[Type]` for optional type hints
Use snake_case with descriptive, action-oriented function names (get_, validate_, check_)
Avoid in-place parameter modification anti-patterns; return new data structures instead
Use `async def` for I/O operations and external API calls
Handle `APIConnectionError` from Llama Stack in error handling
Use standard log levels with clear purposes: debug, info, warning, error
All classes require descriptive docstrings explaining purpose
Use PascalCase for class names with standard suffixes: Configuration, Error/Exception, Resolver, Interface
Use ABC for abstract base classes with `@abstractmethod` decorators
Use `@model_validator` and `@field_validator` for Pydantic model validation
Complete type annotations for all class attributes; use specific types, not `Any`
Follow Google Python docstring conventions with Parameters, Returns, Raises, and Attributes sections
Files:
scripts/generate_openapi_schema.py, src/models/responses.py, src/app/main.py, src/app/endpoints/a2a.py
src/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Pydantic models extend `ConfigurationBase` for config, `BaseModel` for data models
Files:
src/models/responses.py, src/app/main.py, src/app/endpoints/a2a.py
src/app/endpoints/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Use FastAPI `HTTPException` with appropriate status codes for API endpoints
Files:
src/app/endpoints/a2a.py
🧠 Learnings (7)
📚 Learning: 2026-04-15T18:53:54.785Z
Learnt from: Lifto
Repo: lightspeed-core/lightspeed-stack PR: 1510
File: src/models/requests.py:742-769
Timestamp: 2026-04-15T18:53:54.785Z
Learning: In `src/models/requests.py`, keep `ResponsesRequest.validate_body_size` using `json.dumps(values)` with Python defaults (including ASCII escaping and the default separators). This default formatting is intentional to make the 65,536-character limit conservatively strict (accounting for small encoding/format overhead). Do not recommend changing it to `ensure_ascii=False` or using compact separators for this validator, since an exact wire-format byte match with the client payload is not achievable via `json.dumps` anyway.
Applied to files:
.spectral.yaml
📚 Learning: 2026-04-20T15:09:45.119Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1548
File: src/app/endpoints/rlsapi_v1.py:56-56
Timestamp: 2026-04-20T15:09:45.119Z
Learning: In `src/app/endpoints/rlsapi_v1.py`, treat the line `_get_rh_identity_context = get_rh_identity_context` as an intentional temporary backward-compatibility shim (introduced for PR `#1548`, Splunk HEC telemetry work). Do not flag it as dead/unnecessary/unused code during review until the planned part 3 is merged and the responses endpoint is fully wired up such that no tests or external consumers reference the underscore-prefixed name.
Applied to files:
.spectral.yaml
📚 Learning: 2026-03-02T15:57:15.929Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-stack PR: 1250
File: AGENTS.md:11-12
Timestamp: 2026-03-02T15:57:15.929Z
Learning: In lightspeed-stack's AGENTS.md, the Linting Tools section lists actual tool names (e.g., mypy, pylint, pyright, ruff, black), not make target names. For example, `check-types` is a make target that runs mypy, but mypy is the tool name documented in the Linting Tools section.
Applied to files:
Makefile, CONTRIBUTING.md
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Use `uv run make verify` to run all linters (black, pylint, pyright, ruff, docstyle, check-types)
Applied to files:
Makefile, CONTRIBUTING.md
📚 Learning: 2026-01-12T10:58:40.230Z
Learnt from: blublinsky
Repo: lightspeed-core/lightspeed-stack PR: 972
File: src/models/config.py:459-513
Timestamp: 2026-01-12T10:58:40.230Z
Learning: In lightspeed-core/lightspeed-stack, for Python files under src/models, when a user claims a fix is done but the issue persists, verify the current code state before accepting the fix. Steps: review the diff, fetch the latest changes, run relevant tests, reproduce the issue, search the codebase for lingering references to the original problem, confirm the fix is applied and not undone by subsequent commits, and validate with local checks to ensure the issue is resolved.
Applied to files:
src/models/responses.py
📚 Learning: 2026-02-25T07:46:33.545Z
Learnt from: asimurka
Repo: lightspeed-core/lightspeed-stack PR: 1211
File: src/models/responses.py:8-16
Timestamp: 2026-02-25T07:46:33.545Z
Learning: In the Python codebase, requests.py should use OpenAIResponseInputTool as Tool while responses.py uses OpenAIResponseTool as Tool. This difference is intentional due to differing schemas for input vs output tools in llama-stack-api. Apply this distinction consistently to other models under src/models (e.g., ensure request-related tools use the InputTool variant and response-related tools use the ResponseTool variant). If adding new tools, choose the corresponding InputTool or Tool class based on whether the tool represents input or output, and document the rationale in code comments.
Applied to files:
src/models/responses.py
📚 Learning: 2026-04-06T20:18:07.852Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1463
File: src/app/endpoints/rlsapi_v1.py:266-271
Timestamp: 2026-04-06T20:18:07.852Z
Learning: In the lightspeed-stack codebase, within `src/app/endpoints/` inference/MCP endpoints, treat `tools: Optional[list[Any]]` in MCP tool definitions as an intentional, consistent typing pattern (used across `query`, `responses`, `streaming_query`, `rlsapi_v1`). Do not raise or suggest this as a typing issue during code review; changing it in isolation could break endpoint typing consistency across the codebase.
Applied to files:
src/app/endpoints/a2a.py
🪛 checkmake (0.2.2)
Makefile
[warning] 119-119: Target body for "verify" exceeds allowed length of 5 (7).
(maxbodylength)
🪛 Checkov (3.2.519)
docs/openapi.json
[high] 1-19584: Ensure that the global security field has rules defined
(CKV_OPENAPI_4)
[high] 1-19584: Ensure that security operations is not empty.
(CKV_OPENAPI_5)
🔇 Additional comments (8)
src/app/main.py (1)
35-61: The tag list in `_OPENAPI_TAGS` is complete and synchronized. Verification confirms all 24 router tags used across the codebase are declared in `_OPENAPI_TAGS`. However, the lack of automated checking remains a concern; consider adding a pre-commit hook or CI check that validates tag declarations against actual router usage to prevent drift as new routers are added.
docs/contributing/openapi-tags-and-spectral.md (3)
9-13: Clear regeneration instructions. The schema generation command and workflow are well-documented with appropriate context about when regeneration is needed.
25-25: Excellent justification for the disabled rule. The explanation clearly documents why `oas3-valid-media-example` is disabled and provides guidance for when it should be re-enabled. This will help future maintainers understand the trade-off.
23-23: The documentation statement is accurate. The Makefile confirms that `lint-openapi` is called as part of the `verify` target (line 126 of the Makefile shows `$(MAKE) lint-openapi` within the verify recipe).
20-20: LGTM — server base URL is cleaner without the trailing slash. This avoids accidental double slashes when tools concatenate path templates.
8583-8585: LGTM — SSE behavior is documented at the response level. This keeps the OpenAPI document valid while still describing the `text/event-stream` payload semantics.
8638-8642: LGTM — SSE example is concrete and useful. The sample shows event framing, payload shape, completion, and the terminal `[DONE]` marker.
19486-19584: LGTM — top-level tag metadata improves generated API docs. The tag list is valid OpenAPI metadata and should make grouped documentation clearer.
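The drift concern raised for `_OPENAPI_TAGS` in the `src/app/main.py` comment could be addressed with a small consistency check. A minimal, self-contained sketch; the tag names and route data here are invented for illustration, and in the real app the route tags would be collected from `app.routes` / `APIRouter(tags=[...])`:

```python
# Sketch: fail fast if any router tag is missing from the declared OpenAPI tags.
OPENAPI_TAGS = [
    {"name": "a2a", "description": "Agent-to-agent JSON-RPC endpoints"},
    {"name": "health", "description": "Liveness and readiness probes"},
]

# Stand-in for tags gathered from the app's routers at startup or in CI.
ROUTE_TAGS = {"a2a", "health"}

def find_undeclared_tags(route_tags: set[str], openapi_tags: list[dict[str, str]]) -> set[str]:
    """Return route tags that have no entry in the top-level OpenAPI tag list."""
    declared = {tag["name"] for tag in openapi_tags}
    return route_tags - declared

missing = find_undeclared_tags(ROUTE_TAGS, OPENAPI_TAGS)
assert not missing, f"Routers use undeclared OpenAPI tags: {sorted(missing)}"
```

Run as a pre-commit hook or CI step, this would catch a new router whose tag was never mirrored into the central tag list.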
Force-pushed from 55a492e to ffcecf4 (compare)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/app/main.py (1)
3-7: 🛠️ Refactor suggestion | 🟠 Major: Mark `_OPENAPI_TAGS` as `Final`. This is a new module-level constant, but it is not annotated with `Final[...]`.
♻️ Proposed fix:

 import os
 from collections.abc import AsyncIterator
 from contextlib import asynccontextmanager
+from typing import Final
@@
-_OPENAPI_TAGS: list[dict[str, str]] = [
+_OPENAPI_TAGS: Final[list[dict[str, str]]] = [

As per coding guidelines, use `Final[type]` as type hint for all constants.
Also applies to: 35-61
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/main.py` around lines 3 - 7, Add a Final annotation to the module-level constant _OPENAPI_TAGS and any other constants defined in the same region (lines ~35-61): import Final from typing (or typing_extensions if needed) and change declarations like _OPENAPI_TAGS = ... to include an explicit Final[...] type hint (e.g., _OPENAPI_TAGS: Final[list[dict[str, str]]]) so the constants are properly typed as immutable; apply the same pattern to the other module-level constants in that block using accurate concrete types for each constant.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/openapi.json`:
- Around line 19486-19584: CI currently lints the committed docs/openapi.json
but doesn't regenerate it, so stale tags can slip through; update the
openapi_spectral workflow to run scripts/generate_openapi_schema.py (using the
same Python environment/requirements as local dev), write its output to
docs/openapi.json (or a temp file) and then fail the job if the generated file
differs from the committed docs/openapi.json (e.g., by running a git diff
--exit-code or comparing the two files); reference the module/entrypoint
scripts/generate_openapi_schema.py and the constant _OPENAPI_TAGS in
src/app/main.py when implementing this step.
---
Outside diff comments:
In `@src/app/main.py`:
- Around line 3-7: Add a Final annotation to the module-level constant
_OPENAPI_TAGS and any other constants defined in the same region (lines ~35-61):
import Final from typing (or typing_extensions if needed) and change
declarations like _OPENAPI_TAGS = ... to include an explicit Final[...] type
hint (e.g., _OPENAPI_TAGS: Final[list[dict[str, str]]]) so the constants are
properly typed as immutable; apply the same pattern to the other module-level
constants in that block using accurate concrete types for each constant.
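The regenerate-and-compare step requested for the workflow could be sketched as follows (step name and the `uv` invocation are assumptions based on the repo's tooling described above):

```yaml
# Sketch: fail CI when the committed spec is stale.
- name: Check docs/openapi.json is up to date
  run: |
    uv run python scripts/generate_openapi_schema.py /tmp/openapi-generated.json
    diff -u docs/openapi.json /tmp/openapi-generated.json \
      || { echo "docs/openapi.json is stale; regenerate and commit it"; exit 1; }
```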
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 5e970a94-b60e-4c0f-9645-5b89db14baf9
📒 Files selected for processing (10)
.github/workflows/openapi_spectral.yaml, .spectral.yaml, CONTRIBUTING.md, Makefile, docs/contributing/openapi-tags-and-spectral.md, docs/openapi.json, scripts/generate_openapi_schema.py, src/app/endpoints/a2a.py, src/app/main.py, src/models/responses.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: build-pr
- GitHub Check: E2E: server mode / ci / group 3
- GitHub Check: E2E: server mode / ci / group 1
- GitHub Check: E2E: library mode / ci / group 2
- GitHub Check: E2E: server mode / ci / group 2
- GitHub Check: E2E: library mode / ci / group 1
- GitHub Check: E2E Tests for Lightspeed Evaluation job
- GitHub Check: E2E: library mode / ci / group 3
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Use absolute imports for internal modules: `from authentication import get_auth_dependency`
Import FastAPI dependencies with: `from fastapi import APIRouter, HTTPException, Request, status, Depends`
Import Llama Stack client with: `from llama_stack_client import AsyncLlamaStackClient`
Check `constants.py` for shared constants before defining new ones
All modules start with descriptive docstrings explaining purpose
Use `logger = get_logger(__name__)` from `log.py` for module logging
Type aliases defined at module level for clarity
Use `Final[type]` as type hint for all constants
All functions require docstrings with brief descriptions
Complete type annotations for parameters and return types in functions
Use `typing_extensions.Self` for model validators in Pydantic models
Use modern union type syntax `str | int` instead of `Union[str, int]`
Use `Optional[Type]` for optional type hints
Use snake_case with descriptive, action-oriented function names (get_, validate_, check_)
Avoid in-place parameter modification anti-patterns; return new data structures instead
Use `async def` for I/O operations and external API calls
Handle `APIConnectionError` from Llama Stack in error handling
Use standard log levels with clear purposes: debug, info, warning, error
All classes require descriptive docstrings explaining purpose
Use PascalCase for class names with standard suffixes: Configuration, Error/Exception, Resolver, Interface
Use ABC for abstract base classes with `@abstractmethod` decorators
Use `@model_validator` and `@field_validator` for Pydantic model validation
Complete type annotations for all class attributes; use specific types, not `Any`
Follow Google Python docstring conventions with Parameters, Returns, Raises, and Attributes sections
Files:
scripts/generate_openapi_schema.py, src/models/responses.py, src/app/main.py, src/app/endpoints/a2a.py
src/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Pydantic models extend `ConfigurationBase` for config, `BaseModel` for data models
Files:
src/models/responses.py, src/app/main.py, src/app/endpoints/a2a.py
src/app/endpoints/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Use FastAPI `HTTPException` with appropriate status codes for API endpoints
Files:
src/app/endpoints/a2a.py
🧠 Learnings (11)
📚 Learning: 2026-04-15T18:53:54.785Z
Learnt from: Lifto
Repo: lightspeed-core/lightspeed-stack PR: 1510
File: src/models/requests.py:742-769
Timestamp: 2026-04-15T18:53:54.785Z
Learning: In `src/models/requests.py`, keep `ResponsesRequest.validate_body_size` using `json.dumps(values)` with Python defaults (including ASCII escaping and the default separators). This default formatting is intentional to make the 65,536-character limit conservatively strict (accounting for small encoding/format overhead). Do not recommend changing it to `ensure_ascii=False` or using compact separators for this validator, since an exact wire-format byte match with the client payload is not achievable via `json.dumps` anyway.
Applied to files:
.spectral.yaml
📚 Learning: 2026-04-20T15:09:45.119Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1548
File: src/app/endpoints/rlsapi_v1.py:56-56
Timestamp: 2026-04-20T15:09:45.119Z
Learning: In `src/app/endpoints/rlsapi_v1.py`, treat the line `_get_rh_identity_context = get_rh_identity_context` as an intentional temporary backward-compatibility shim (introduced for PR `#1548`, Splunk HEC telemetry work). Do not flag it as dead/unnecessary/unused code during review until the planned part 3 is merged and the responses endpoint is fully wired up such that no tests or external consumers reference the underscore-prefixed name.
Applied to files:
.spectral.yaml
📚 Learning: 2025-07-28T13:15:54.895Z
Learnt from: radofuchs
Repo: lightspeed-core/lightspeed-stack PR: 306
File: .github/workflows/e2e_tests.yaml:4-4
Timestamp: 2025-07-28T13:15:54.895Z
Learning: The user radofuchs prefers unrestricted pull_request_target workflows with secrets access for comprehensive PR testing, accepting the security trade-offs to ensure all problems are caught during the PR process.
Applied to files:
.github/workflows/openapi_spectral.yaml
📚 Learning: 2026-01-12T10:58:40.230Z
Learnt from: blublinsky
Repo: lightspeed-core/lightspeed-stack PR: 972
File: src/models/config.py:459-513
Timestamp: 2026-01-12T10:58:40.230Z
Learning: In lightspeed-core/lightspeed-stack, for Python files under src/models, when a user claims a fix is done but the issue persists, verify the current code state before accepting the fix. Steps: review the diff, fetch the latest changes, run relevant tests, reproduce the issue, search the codebase for lingering references to the original problem, confirm the fix is applied and not undone by subsequent commits, and validate with local checks to ensure the issue is resolved.
Applied to files:
src/models/responses.py
📚 Learning: 2026-02-25T07:46:33.545Z
Learnt from: asimurka
Repo: lightspeed-core/lightspeed-stack PR: 1211
File: src/models/responses.py:8-16
Timestamp: 2026-02-25T07:46:33.545Z
Learning: In the Python codebase, requests.py should use OpenAIResponseInputTool as Tool while responses.py uses OpenAIResponseTool as Tool. This difference is intentional due to differing schemas for input vs output tools in llama-stack-api. Apply this distinction consistently to other models under src/models (e.g., ensure request-related tools use the InputTool variant and response-related tools use the ResponseTool variant). If adding new tools, choose the corresponding InputTool or Tool class based on whether the tool represents input or output, and document the rationale in code comments.
Applied to files:
src/models/responses.py
📚 Learning: 2026-03-02T15:57:15.929Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-stack PR: 1250
File: AGENTS.md:11-12
Timestamp: 2026-03-02T15:57:15.929Z
Learning: In lightspeed-stack's AGENTS.md, the Linting Tools section lists actual tool names (e.g., mypy, pylint, pyright, ruff, black), not make target names. For example, `check-types` is a make target that runs mypy, but mypy is the tool name documented in the Linting Tools section.
Applied to files:
Makefile, CONTRIBUTING.md
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Use `uv run make verify` to run all linters (black, pylint, pyright, ruff, docstyle, check-types)
Applied to files:
Makefile, CONTRIBUTING.md
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Use `uv run make format` to auto-format code with black + ruff
Applied to files:
Makefile
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Use `uv` package manager with `uv run` prefix for all commands
Applied to files:
CONTRIBUTING.md
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Use `uv sync --group dev --group llslibdev` for dependencies
Applied to files:
CONTRIBUTING.md
📚 Learning: 2026-04-06T20:18:07.852Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1463
File: src/app/endpoints/rlsapi_v1.py:266-271
Timestamp: 2026-04-06T20:18:07.852Z
Learning: In the lightspeed-stack codebase, within `src/app/endpoints/` inference/MCP endpoints, treat `tools: Optional[list[Any]]` in MCP tool definitions as an intentional, consistent typing pattern (used across `query`, `responses`, `streaming_query`, `rlsapi_v1`). Do not raise or suggest this as a typing issue during code review; changing it in isolation could break endpoint typing consistency across the codebase.
Applied to files:
src/app/endpoints/a2a.py
🪛 checkmake (0.2.2)
Makefile
[warning] 123-123: Target body for "verify" exceeds allowed length of 5 (7).
(maxbodylength)
🪛 Checkov (3.2.519)
docs/openapi.json
[high] 1-19584: Ensure that the global security field has rules defined
(CKV_OPENAPI_4)
[high] 1-19584: Ensure that security operations is not empty.
(CKV_OPENAPI_5)
🪛 LanguageTool
docs/contributing/openapi-tags-and-spectral.md
[style] ~25-~25: Consider using the synonym “brief” (= concise, using a few words, not lasting long) to strengthen your wording.
Context: ...akefile skips Spectral and prints a short message so make verify can still ...
(QUICK_BRIEF)
[uncategorized] ~25-~25: The official name of this software platform is spelled with a capital “H”.
Context: ...de.js to run the check locally. CI (.github/workflows/openapi_spectral.yaml) alway...
(GITHUB)
CONTRIBUTING.md
[uncategorized] ~52-~52: The official name of this software platform is spelled with a capital “H”.
Context: ...check. CI always runs Spectral (see .github/workflows/openapi_spectral.yaml). The...
(GITHUB)
[uncategorized] ~62-~62: The official name of this software platform is spelled with a capital “H”.
Context: ...work. CI uses Node 22 for Spectral (see .github/workflows/openapi_spectral.yaml). #...
(GITHUB)
🔇 Additional comments (12)
.github/workflows/openapi_spectral.yaml (2)
3-5: Avoid duplicate Spectral runs for same-repo PR branches. With both unfiltered `push` and `pull_request`, same-repo PRs run this workflow twice. Scope `push` to protected branches or tags if that duplication is not intentional.
🧹 Proposed trigger scope:

 on:
-  - push
-  - pull_request
+  push:
+    branches: [main]
+  pull_request:
7-20: LGTM — the Spectral job is minimal and read-only. The workflow uses least-privilege read permissions and runs the same error-severity lint command as the local target.
src/models/responses.py (1)
1707-1714: LGTM — SSE documentation is now OpenAPI-compliant. The response-level description keeps the `text/event-stream` clarification without using unsupported media-type `description` metadata.
src/app/main.py (1)
132-135: LGTM — OpenAPI metadata is wired into the app. The server URL normalization and `openapi_tags` wiring align the generated schema with the new Spectral tag checks.
.spectral.yaml (1)
1-9: LGTM — Spectral configuration is clear and scoped. The file enables the OAS ruleset and documents why media example validation is temporarily disabled.
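A plausible shape for such a configuration (illustrative sketch; `spectral:oas` and `oas3-valid-media-example` are standard Spectral ruleset names, but the repo's exact file may differ):

```yaml
# .spectral.yaml sketch
extends: ["spectral:oas"]
rules:
  # Temporarily disabled: some committed examples fail media-type validation.
  # Re-enable once the examples are fixed.
  oas3-valid-media-example: "off"
```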
scripts/generate_openapi_schema.py (1)
78-89: LGTM — generated schemas now include global tags. Passing `app.openapi_tags` keeps `docs/openapi.json` in sync with the FastAPI tag declarations.
Makefile (1)
116-130: LGTM — local verification stays usable while CI enforces OpenAPI linting. The `npx` guard avoids surprising local failures, and adding the target to `verify` makes the check discoverable.
CONTRIBUTING.md (1)
20-25: LGTM — contributor docs cover the new OpenAPI lint workflow. The prerequisites, install guidance, and Spectral section now explain Node/npx, local skip behavior, CI enforcement, and tag-list maintenance.
Also applies to: 47-62, 219-225
docs/contributing/openapi-tags-and-spectral.md (1)
1-27: LGTM — tag ownership and Spectral behavior are well documented. The page clearly states where tags come from, where to mirror them, how schema generation preserves them, and how local/CI linting behaves.
docs/openapi.json (1)
9938-9939: Past polish applied — LGTM. The summaries now use proper "JSON-RPC" casing and the descriptions no longer reference the other handler, addressing the earlier review feedback on both GET and POST operations.
src/app/endpoints/a2a.py (2)
695-731: LGTM — clean split into dedicated GET/POST handlers. The split preserves the original behavior (both paths delegate to `_handle_a2a_jsonrpc`), gives each operation a distinct `operation_id`, `summary`, and `description` for OpenAPI uniqueness, and keeps `@authorize(Action.A2A_JSONRPC)` enforcement on both. This directly addresses the duplicate `operationId` violation called out in the PR objective.
733-737: Helper extraction is appropriate. Moving the shared ASGI bridging/streaming logic into `_handle_a2a_jsonrpc` avoids duplication between the GET and POST handlers. Signature and return type are consistent with the public handlers.
Force-pushed from ffcecf4 to 27f69f5 (compare)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
scripts/generate_openapi_schema.py (1)
78-89: ⚠️ Potential issue | 🟠 Major: Regenerate and commit the OpenAPI artifacts.
The new `tags=app.openapi_tags` changes generated schema output, and CI reports `docs/openapi.json` differs from `uv run python scripts/generate_openapi_schema.py /tmp/openapi-generated.json`. Regenerate `docs/openapi.json` and the derived `openapi2.md`, then commit both.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/generate_openapi_schema.py` around lines 78 - 89, Run the OpenAPI generation script to update committed artifacts: execute the existing script (python scripts/generate_openapi_schema.py /tmp/openapi-generated.json) which uses get_openapi with the new tags=app.openapi_tags, then overwrite docs/openapi.json with the generated output and regenerate the derived openapi2.md (the project’s conversion step that consumes /tmp/openapi-generated.json), verify diffs, and commit both docs/openapi.json and openapi2.md to the repo.
♻️ Duplicate comments (1)
.github/workflows/openapi_spectral.yaml (1)
7-9: ⚠️ Potential issue | 🟡 Minor
Scope the `push` trigger to avoid duplicate PR runs. With unfiltered `push` and `pull_request`, same-repo PR branches run this workflow twice. Keep PR checks, but restrict push runs to protected branches such as `main`.
🧹 Proposed workflow trigger cleanup

```diff
 on:
-  - push
-  - pull_request
+  push:
+    branches: [main]
+  pull_request:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/openapi_spectral.yaml around lines 7 - 9, The workflow trigger currently fires on both unfiltered push and pull_request causing duplicate runs for PR branches; change the top-level trigger so push is restricted to protected branches (e.g., replace "- push" with "push: branches: - main" or similar protected branch names) while keeping "pull_request" intact so PR checks still run; update the YAML "on:" block to list "pull_request" and a scoped "push: branches: [main]" to avoid duplicate PR runs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/contributing/openapi-tags-and-spectral.md`:
- Line 25: The long sentence that begins "This is part of **`make verify`**. If
**`npx`** is not on your **`PATH`**, the Makefile **skips** Spectral and prints
a short message so **`make verify`** can still pass; install Node.js to run the
check locally. **CI** (`.github/workflows/openapi_spectral.yaml`) always runs
Spectral. Failures are driven by **error**-severity rules." should be split into
shorter sentences for clarity: one stating Spectral is part of make verify, one
explaining the local fallback when npx is missing and the suggested fix (install
Node.js), one noting CI always runs Spectral, and a final short sentence stating
failures are driven by error-severity rules; update the paragraph in
docs/contributing/openapi-tags-and-spectral.md accordingly.
In `@docs/openapi.json`:
- Around line 19486-19584: The committed docs/openapi.json is out of sync with
the generator; run the OpenAPI generator and recommit the regenerated file.
Execute the script scripts/generate_openapi_schema.py (as in uv run
scripts/generate_openapi_schema.py docs/openapi.json) to regenerate
docs/openapi.json so it matches get_openapi(...) output derived from
_OPENAPI_TAGS and the updated ResponsesResponse.openapi_response(); verify the
top-level tags ordering/content and any SSE/response metadata match, then commit
the regenerated file.
In `@src/app/main.py`:
- Around line 35-61: The module-level constant _OPENAPI_TAGS should be annotated
with typing.Final to follow the repo's typing convention: import Final from
typing and change the annotation to use Final[...] (e.g., Final[list[dict[str,
str]]]) for the _OPENAPI_TAGS declaration so the constant is statically
recognized as immutable; update the import if necessary and keep the same value
assignment.
---
Outside diff comments:
In `@scripts/generate_openapi_schema.py`:
- Around line 78-89: Run the OpenAPI generation script to update committed
artifacts: execute the existing script (python
scripts/generate_openapi_schema.py /tmp/openapi-generated.json) which uses
get_openapi with the new tags=app.openapi_tags, then overwrite docs/openapi.json
with the generated output and regenerate the derived openapi2.md (the project’s
conversion step that consumes /tmp/openapi-generated.json), verify diffs, and
commit both docs/openapi.json and openapi2.md to the repo.
---
Duplicate comments:
In @.github/workflows/openapi_spectral.yaml:
- Around line 7-9: The workflow trigger currently fires on both unfiltered push
and pull_request causing duplicate runs for PR branches; change the top-level
trigger so push is restricted to protected branches (e.g., replace "- push" with
"push: branches: - main" or similar protected branch names) while keeping
"pull_request" intact so PR checks still run; update the YAML "on:" block to
list "pull_request" and a scoped "push: branches: [main]" to avoid duplicate PR
runs.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 95259f47-886a-4b48-a46f-d15c3832365e
📒 Files selected for processing (10)
.github/workflows/openapi_spectral.yaml, .spectral.yaml, CONTRIBUTING.md, Makefile, docs/contributing/openapi-tags-and-spectral.md, docs/openapi.json, scripts/generate_openapi_schema.py, src/app/endpoints/a2a.py, src/app/main.py, src/models/responses.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
- GitHub Check: Pylinter
- GitHub Check: integration_tests (3.12)
- GitHub Check: unit_tests (3.12)
- GitHub Check: unit_tests (3.13)
- GitHub Check: build-pr
- GitHub Check: E2E: library mode / ci / group 1
- GitHub Check: E2E: server mode / ci / group 1
- GitHub Check: E2E: library mode / ci / group 3
- GitHub Check: E2E: server mode / ci / group 3
- GitHub Check: E2E: library mode / ci / group 2
- GitHub Check: E2E: server mode / ci / group 2
- GitHub Check: E2E Tests for Lightspeed Evaluation job
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Use absolute imports for internal modules: `from authentication import get_auth_dependency`
Import FastAPI dependencies with: `from fastapi import APIRouter, HTTPException, Request, status, Depends`
Import the Llama Stack client with: `from llama_stack_client import AsyncLlamaStackClient`
Check `constants.py` for shared constants before defining new ones
All modules start with descriptive docstrings explaining purpose
Use `logger = get_logger(__name__)` from `log.py` for module logging
Type aliases defined at module level for clarity
Use `Final[type]` as type hint for all constants
All functions require docstrings with brief descriptions
Complete type annotations for parameters and return types in functions
Use `typing_extensions.Self` for model validators in Pydantic models
Use modern union type syntax `str | int` instead of `Union[str, int]`
Use `Optional[Type]` for optional type hints
Use snake_case with descriptive, action-oriented function names (get_, validate_, check_)
Avoid in-place parameter modification anti-patterns; return new data structures instead
Use `async def` for I/O operations and external API calls
Handle `APIConnectionError` from Llama Stack in error handling
Use standard log levels with clear purposes: debug, info, warning, error
All classes require descriptive docstrings explaining purpose
Use PascalCase for class names with standard suffixes: Configuration, Error/Exception, Resolver, Interface
Use ABC for abstract base classes with `@abstractmethod` decorators
Use `@model_validator` and `@field_validator` for Pydantic model validation
Complete type annotations for all class attributes; use specific types, not `Any`
Follow Google Python docstring conventions with Parameters, Returns, Raises, and Attributes sections
Files:
scripts/generate_openapi_schema.py, src/models/responses.py, src/app/main.py, src/app/endpoints/a2a.py
src/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Pydantic models extend `ConfigurationBase` for config, `BaseModel` for data models
Files:
src/models/responses.py, src/app/main.py, src/app/endpoints/a2a.py
src/app/endpoints/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Use FastAPI `HTTPException` with appropriate status codes for API endpoints
Files:
src/app/endpoints/a2a.py
🧠 Learnings (11)
📚 Learning: 2026-04-15T18:53:54.785Z
Learnt from: Lifto
Repo: lightspeed-core/lightspeed-stack PR: 1510
File: src/models/requests.py:742-769
Timestamp: 2026-04-15T18:53:54.785Z
Learning: In `src/models/requests.py`, keep `ResponsesRequest.validate_body_size` using `json.dumps(values)` with Python defaults (including ASCII escaping and the default separators). This default formatting is intentional to make the 65,536-character limit conservatively strict (accounting for small encoding/format overhead). Do not recommend changing it to `ensure_ascii=False` or using compact separators for this validator, since an exact wire-format byte match with the client payload is not achievable via `json.dumps` anyway.
Applied to files:
.spectral.yaml
📚 Learning: 2026-04-20T15:09:45.119Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1548
File: src/app/endpoints/rlsapi_v1.py:56-56
Timestamp: 2026-04-20T15:09:45.119Z
Learning: In `src/app/endpoints/rlsapi_v1.py`, treat the line `_get_rh_identity_context = get_rh_identity_context` as an intentional temporary backward-compatibility shim (introduced for PR `#1548`, Splunk HEC telemetry work). Do not flag it as dead/unnecessary/unused code during review until the planned part 3 is merged and the responses endpoint is fully wired up such that no tests or external consumers reference the underscore-prefixed name.
Applied to files:
.spectral.yaml
📚 Learning: 2026-01-12T10:58:40.230Z
Learnt from: blublinsky
Repo: lightspeed-core/lightspeed-stack PR: 972
File: src/models/config.py:459-513
Timestamp: 2026-01-12T10:58:40.230Z
Learning: In lightspeed-core/lightspeed-stack, for Python files under src/models, when a user claims a fix is done but the issue persists, verify the current code state before accepting the fix. Steps: review the diff, fetch the latest changes, run relevant tests, reproduce the issue, search the codebase for lingering references to the original problem, confirm the fix is applied and not undone by subsequent commits, and validate with local checks to ensure the issue is resolved.
Applied to files:
src/models/responses.py
📚 Learning: 2026-02-25T07:46:33.545Z
Learnt from: asimurka
Repo: lightspeed-core/lightspeed-stack PR: 1211
File: src/models/responses.py:8-16
Timestamp: 2026-02-25T07:46:33.545Z
Learning: In the Python codebase, requests.py should use OpenAIResponseInputTool as Tool while responses.py uses OpenAIResponseTool as Tool. This difference is intentional due to differing schemas for input vs output tools in llama-stack-api. Apply this distinction consistently to other models under src/models (e.g., ensure request-related tools use the InputTool variant and response-related tools use the ResponseTool variant). If adding new tools, choose the corresponding InputTool or Tool class based on whether the tool represents input or output, and document the rationale in code comments.
Applied to files:
src/models/responses.py
📚 Learning: 2026-03-02T15:57:15.929Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-stack PR: 1250
File: AGENTS.md:11-12
Timestamp: 2026-03-02T15:57:15.929Z
Learning: In lightspeed-stack's AGENTS.md, the Linting Tools section lists actual tool names (e.g., mypy, pylint, pyright, ruff, black), not make target names. For example, `check-types` is a make target that runs mypy, but mypy is the tool name documented in the Linting Tools section.
Applied to files:
Makefile, CONTRIBUTING.md
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Use `uv run make verify` to run all linters (black, pylint, pyright, ruff, docstyle, check-types)
Applied to files:
Makefile, CONTRIBUTING.md
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Use `uv run make format` to auto-format code with black + ruff
Applied to files:
Makefile
📚 Learning: 2025-07-28T13:15:54.895Z
Learnt from: radofuchs
Repo: lightspeed-core/lightspeed-stack PR: 306
File: .github/workflows/e2e_tests.yaml:4-4
Timestamp: 2025-07-28T13:15:54.895Z
Learning: The user radofuchs prefers unrestricted pull_request_target workflows with secrets access for comprehensive PR testing, accepting the security trade-offs to ensure all problems are caught during the PR process.
Applied to files:
.github/workflows/openapi_spectral.yaml
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Use `uv` package manager with `uv run` prefix for all commands
Applied to files:
CONTRIBUTING.md
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Use `uv sync --group dev --group llslibdev` for dependencies
Applied to files:
CONTRIBUTING.md
📚 Learning: 2026-04-06T20:18:07.852Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1463
File: src/app/endpoints/rlsapi_v1.py:266-271
Timestamp: 2026-04-06T20:18:07.852Z
Learning: In the lightspeed-stack codebase, within `src/app/endpoints/` inference/MCP endpoints, treat `tools: Optional[list[Any]]` in MCP tool definitions as an intentional, consistent typing pattern (used across `query`, `responses`, `streaming_query`, `rlsapi_v1`). Do not raise or suggest this as a typing issue during code review; changing it in isolation could break endpoint typing consistency across the codebase.
Applied to files:
src/app/endpoints/a2a.py
🪛 checkmake (0.2.2)
Makefile
[warning] 123-123: Target body for "verify" exceeds allowed length of 5 (7).
(maxbodylength)
🪛 Checkov (3.2.519)
docs/openapi.json
[high] 1-19584: Ensure that the global security field has rules defined
(CKV_OPENAPI_4)
[high] 1-19584: Ensure that security operations is not empty.
(CKV_OPENAPI_5)
🪛 GitHub Actions: OpenAPI (Spectral)
scripts/generate_openapi_schema.py
[error] 1-1: Generation command was run and produced a schema different from the committed docs. Command: 'uv run python scripts/generate_openapi_schema.py /tmp/openapi-generated.json'.
docs/openapi.json
[error] 1-1: OpenAPI schema is out of date. Diff failed between 'docs/openapi.json' and '/tmp/openapi-generated.json'. Regenerate with: 'uv run scripts/generate_openapi_schema.py docs/openapi.json'.
🪛 LanguageTool
docs/contributing/openapi-tags-and-spectral.md
[style] ~25-~25: Consider using the synonym “brief” (= concise, using a few words, not lasting long) to strengthen your wording.
Context: ...akefile skips Spectral and prints a short message so make verify can still ...
(QUICK_BRIEF)
[uncategorized] ~25-~25: The official name of this software platform is spelled with a capital “H”.
Context: ...de.js to run the check locally. CI (.github/workflows/openapi_spectral.yaml) alway...
(GITHUB)
CONTRIBUTING.md
[uncategorized] ~52-~52: The official name of this software platform is spelled with a capital “H”.
Context: ...check. CI always runs Spectral (see .github/workflows/openapi_spectral.yaml). The...
(GITHUB)
[uncategorized] ~62-~62: The official name of this software platform is spelled with a capital “H”.
Context: ...work. CI uses Node 22 for Spectral (see .github/workflows/openapi_spectral.yaml). #...
(GITHUB)
🔇 Additional comments (10)
src/models/responses.py (1)
1708-1711: LGTM. The SSE note is now placed on the OpenAPI response description instead of the media-type object, which keeps the generated spec valid while preserving the documentation.
src/app/endpoints/a2a.py (2)
695-730: LGTM. The split GET/POST handlers give `/a2a` unique OpenAPI operation IDs while preserving the shared authorization and request handling path.
733-913: LGTM. The extracted helper keeps the JSON-RPC forwarding logic centralized, reducing drift between the GET and POST handlers.
.spectral.yaml (1)
1-9: LGTM. The config enables the OAS ruleset while explicitly documenting why media-example validation is temporarily disabled.
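A Spectral config of the shape described above might look like the following sketch. The rule name comes from Spectral's built-in `spectral:oas` ruleset; whether `oas3-valid-media-example` is the exact rule disabled in this PR is an assumption:

```yaml
# .spectral.yaml (sketch)
extends: ["spectral:oas"]
rules:
  # Temporarily disabled: generated media examples do not yet validate
  # against their schemas; re-enable once the examples are fixed.
  oas3-valid-media-example: off
```

Running `npx @stoplight/spectral-cli lint docs/openapi.json` with this file then enforces the remaining error-severity OAS rules.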
src/app/main.py (1)
132-135: LGTM. Passing `openapi_tags=_OPENAPI_TAGS` correctly exposes the declared operation tags to schema generation.
Makefile (1)
116-130: LGTM. The new target makes OpenAPI linting available locally, and the `npx` guard keeps `make verify` usable on Python-only developer setups while CI still enforces Spectral.
CONTRIBUTING.md (1)
21-62: LGTM. The contributor docs now cover the Node/npx prerequisite, local skip behavior, CI enforcement, and the `_OPENAPI_TAGS` update requirement.
Also applies to: 223-225
.github/workflows/openapi_spectral.yaml (1)
23-52: LGTM. The workflow validates generated OpenAPI drift, derived markdown drift, and Spectral linting in one focused CI job.
docs/contributing/openapi-tags-and-spectral.md (1)
1-27: Well-structured documentation with clear instructions. The documentation effectively explains the OpenAPI tag synchronization requirement, provides actionable guidance for developers, and includes necessary context about CI behavior and disabled rules. The separation into logical sections (tag management and linting) makes it easy to navigate.
docs/openapi.json (1)
9938-9958: Past feedback addressed. A2A GET/POST summaries now use standard "JSON-RPC" casing and the descriptions no longer leak handler-implementation details.
Force-pushed 27f69f5 to 785d35e.
tisnik
left a comment
Probably would be nice to split into:
- the fix only + openapi.json changes
- the validator in place
- the things about tags
What are tags, btw? Why do we need them? It looks like people will need to worry about yet another place to look/edit.
tisnik
left a comment
At least, the title is wrong, improper Jira issue I guess?
What's wrong with the title (of the PR)?
Declaring global tags is optional according to the OpenAPI spec, but encouraged. Spectral, the linter, has a rule that checks whether a tag attached to an operation has been declared: https://meta.stoplight.io/docs/spectral/4dec24461f3af-open-api-rules#operation-tag-defined
Force-pushed 785d35e to 43bcfb9.
Split one commit into three.
Actionable comments posted: 3
♻️ Duplicate comments (2)
.github/workflows/openapi_spectral.yaml (1)
7-9: ⚠️ Potential issue | 🟡 Minor
Avoid duplicate workflow runs for same-repo PRs. With both unfiltered `push` and `pull_request`, a PR branch in this repository runs the same Spectral job twice. Scope `push` to protected branches while keeping PR validation.
🛠️ Proposed trigger adjustment

```diff
-on:
-  - push
-  - pull_request
+on:
+  push:
+    branches: [main]
+  pull_request:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/openapi_spectral.yaml around lines 7 - 9, The workflow currently triggers on both unfiltered "push" and "pull_request", causing duplicate runs for same-repo PRs; update the "on" triggers in openapi_spectral.yaml to keep "pull_request" as-is but scope "push" to only protected branches (e.g., list "main", "release/*" or your protected branch names under "push: branches:") so pushes to PR branches won't also trigger the Spectral job, ensuring a single run per PR.
src/app/main.py (1)
35-61: 🛠️ Refactor suggestion | 🟠 Major
Mark `_OPENAPI_TAGS` as `Final`. This is a module-level constant and should follow the repository constant typing convention. As per coding guidelines, use `Final[type]` as the type hint for all constants.
♻️ Proposed fix

```diff
 import os
 from collections.abc import AsyncIterator
 from contextlib import asynccontextmanager
+from typing import Final
@@
-_OPENAPI_TAGS: list[dict[str, str]] = [
+_OPENAPI_TAGS: Final[list[dict[str, str]]] = [
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/main.py` around lines 35 - 61, The module-level constant _OPENAPI_TAGS should be annotated as Final: import Final from typing and change the type annotation of _OPENAPI_TAGS from list[dict[str, str]] to Final[list[dict[str, str]]] so the constant follows repository convention; update the import block to include Final and keep the existing variable name and value unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/app/endpoints/a2a.py`:
- Around line 705-710: The one-line docstring for the endpoint wrapper
handle_a2a_jsonrpc_get (and the similar wrapper at lines referenced 724-729)
must be expanded to a Google-style docstring: add a short summary line, a
"Args:" section documenting request (Request), auth (AuthTuple), and mcp_headers
(McpHeaders) including types and purpose, and a "Returns:" section describing
the Response | StreamingResponse return type and what is returned on
success/failure; include a "Raises:" entry if the function can raise specific
exceptions. Update only the docstrings for these wrapper functions
(handle_a2a_jsonrpc_get and the other new endpoint wrapper) to follow the
repository's Google Python docstring conventions.
- Around line 714-722: The POST and GET /a2a endpoint decorators currently omit
a responses declaration, so FastAPI/OpenAPI can't represent conditional
StreamingResponse output with media_type=MEDIA_TYPE_EVENT_STREAM; update the
`@router.post` (and the corresponding `@router.get`) decorators in
src/app/endpoints/a2a.py to include a responses parameter mirroring streaming
behavior (like streaming_query_responses used in streaming_query.py) that
declares both "application/json" and "text/event-stream"
(MEDIA_TYPE_EVENT_STREAM) content types so the OpenAPI schema reflects both
non-streaming JSON and streaming SSE outputs (StreamingResponse).
- Around line 695-711: The GET route handle_a2a_jsonrpc_get currently advertises
JSON-RPC via GET (contradicting A2A spec); make it explicit compatibility-only
by updating the router decorator to exclude it from OpenAPI (set
include_in_schema=False) and change its summary/description to state it is a
non-spec, backward-compatibility GET that does not claim A2A JSON-RPC POST
compliance; keep the function name handle_a2a_jsonrpc_get and its call to
_handle_a2a_jsonrpc intact so behavior is unchanged, or alternatively remove the
router.get decorator entirely if you want to drop the GET route.
---
Duplicate comments:
In @.github/workflows/openapi_spectral.yaml:
- Around line 7-9: The workflow currently triggers on both unfiltered "push" and
"pull_request", causing duplicate runs for same-repo PRs; update the "on"
triggers in openapi_spectral.yaml to keep "pull_request" as-is but scope "push"
to only protected branches (e.g., list "main", "release/*" or your protected
branch names under "push: branches:") so pushes to PR branches won't also
trigger the Spectral job, ensuring a single run per PR.
In `@src/app/main.py`:
- Around line 35-61: The module-level constant _OPENAPI_TAGS should be annotated
as Final: import Final from typing and change the type annotation of
_OPENAPI_TAGS from list[dict[str, str]] to Final[list[dict[str, str]]] so the
constant follows repository convention; update the import block to include Final
and keep the existing variable name and value unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 17d7a513-e902-4629-a5e8-7505b4fc01ad
📒 Files selected for processing (10)
.github/workflows/openapi_spectral.yaml, .spectral.yaml, CONTRIBUTING.md, Makefile, docs/contributing/openapi-tags-and-spectral.md, docs/openapi.json, scripts/generate_openapi_schema.py, src/app/endpoints/a2a.py, src/app/main.py, src/models/responses.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
- GitHub Check: build-pr
- GitHub Check: pydocstyle
- GitHub Check: unit_tests (3.13)
- GitHub Check: unit_tests (3.12)
- GitHub Check: list_outdated_dependencies
- GitHub Check: E2E: server mode / ci / group 1
- GitHub Check: E2E: server mode / ci / group 2
- GitHub Check: E2E: library mode / ci / group 1
- GitHub Check: E2E: library mode / ci / group 2
- GitHub Check: E2E: library mode / ci / group 3
- GitHub Check: E2E: server mode / ci / group 3
- GitHub Check: E2E Tests for Lightspeed Evaluation job
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: Use absolute imports for internal modules: `from authentication import get_auth_dependency`
Import FastAPI dependencies with: `from fastapi import APIRouter, HTTPException, Request, status, Depends`
Import the Llama Stack client with: `from llama_stack_client import AsyncLlamaStackClient`
Check `constants.py` for shared constants before defining new ones
All modules start with descriptive docstrings explaining purpose
Use `logger = get_logger(__name__)` from `log.py` for module logging
Type aliases defined at module level for clarity
Use `Final[type]` as type hint for all constants
All functions require docstrings with brief descriptions
Complete type annotations for parameters and return types in functions
Use `typing_extensions.Self` for model validators in Pydantic models
Use modern union type syntax `str | int` instead of `Union[str, int]`
Use `Optional[Type]` for optional type hints
Use snake_case with descriptive, action-oriented function names (get_, validate_, check_)
Avoid in-place parameter modification anti-patterns; return new data structures instead
Use `async def` for I/O operations and external API calls
Handle `APIConnectionError` from Llama Stack in error handling
Use standard log levels with clear purposes: debug, info, warning, error
All classes require descriptive docstrings explaining purpose
Use PascalCase for class names with standard suffixes: Configuration, Error/Exception, Resolver, Interface
Use ABC for abstract base classes with `@abstractmethod` decorators
Use `@model_validator` and `@field_validator` for Pydantic model validation
Complete type annotations for all class attributes; use specific types, not `Any`
Follow Google Python docstring conventions with Parameters, Returns, Raises, and Attributes sections
Files:
scripts/generate_openapi_schema.py, src/app/main.py, src/app/endpoints/a2a.py, src/models/responses.py
src/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Pydantic models extend `ConfigurationBase` for config, `BaseModel` for data models
Files:
src/app/main.py, src/app/endpoints/a2a.py, src/models/responses.py
src/app/endpoints/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Use FastAPI `HTTPException` with appropriate status codes for API endpoints
Files:
src/app/endpoints/a2a.py
🧠 Learnings (12)
📚 Learning: 2026-04-15T18:53:54.785Z
Learnt from: Lifto
Repo: lightspeed-core/lightspeed-stack PR: 1510
File: src/models/requests.py:742-769
Timestamp: 2026-04-15T18:53:54.785Z
Learning: In `src/models/requests.py`, keep `ResponsesRequest.validate_body_size` using `json.dumps(values)` with Python defaults (including ASCII escaping and the default separators). This default formatting is intentional to make the 65,536-character limit conservatively strict (accounting for small encoding/format overhead). Do not recommend changing it to `ensure_ascii=False` or using compact separators for this validator, since an exact wire-format byte match with the client payload is not achievable via `json.dumps` anyway.
Applied to files:
.spectral.yaml
📚 Learning: 2026-04-20T15:09:45.119Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1548
File: src/app/endpoints/rlsapi_v1.py:56-56
Timestamp: 2026-04-20T15:09:45.119Z
Learning: In `src/app/endpoints/rlsapi_v1.py`, treat the line `_get_rh_identity_context = get_rh_identity_context` as an intentional temporary backward-compatibility shim (introduced for PR `#1548`, Splunk HEC telemetry work). Do not flag it as dead/unnecessary/unused code during review until the planned part 3 is merged and the responses endpoint is fully wired up such that no tests or external consumers reference the underscore-prefixed name.
Applied to files:
.spectral.yaml
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Applies to **/*.py : Use Final[type] as type hint for all constants
Applied to files:
src/app/main.py
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Applies to **/constants.py : Central `constants.py` for shared constants with descriptive comments
Applied to files:
src/app/main.py
📚 Learning: 2026-03-02T15:57:15.929Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-stack PR: 1250
File: AGENTS.md:11-12
Timestamp: 2026-03-02T15:57:15.929Z
Learning: In lightspeed-stack's AGENTS.md, the Linting Tools section lists actual tool names (e.g., mypy, pylint, pyright, ruff, black), not make target names. For example, `check-types` is a make target that runs mypy, but mypy is the tool name documented in the Linting Tools section.
Applied to files:
Makefile, CONTRIBUTING.md
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Use `uv run make verify` to run all linters (black, pylint, pyright, ruff, docstyle, check-types)
Applied to files:
Makefile, CONTRIBUTING.md
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Use `uv run make format` to auto-format code with black + ruff
Applied to files:
Makefile
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Use `uv` package manager with `uv run` prefix for all commands
Applied to files:
CONTRIBUTING.md
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Use `uv sync --group dev --group llslibdev` for dependencies
Applied to files:
CONTRIBUTING.md
📚 Learning: 2026-04-06T20:18:07.852Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1463
File: src/app/endpoints/rlsapi_v1.py:266-271
Timestamp: 2026-04-06T20:18:07.852Z
Learning: In the lightspeed-stack codebase, within `src/app/endpoints/` inference/MCP endpoints, treat `tools: Optional[list[Any]]` in MCP tool definitions as an intentional, consistent typing pattern (used across `query`, `responses`, `streaming_query`, `rlsapi_v1`). Do not raise or suggest this as a typing issue during code review; changing it in isolation could break endpoint typing consistency across the codebase.
Applied to files:
src/app/endpoints/a2a.py
📚 Learning: 2026-01-12T10:58:40.230Z
Learnt from: blublinsky
Repo: lightspeed-core/lightspeed-stack PR: 972
File: src/models/config.py:459-513
Timestamp: 2026-01-12T10:58:40.230Z
Learning: In lightspeed-core/lightspeed-stack, for Python files under src/models, when a user claims a fix is done but the issue persists, verify the current code state before accepting the fix. Steps: review the diff, fetch the latest changes, run relevant tests, reproduce the issue, search the codebase for lingering references to the original problem, confirm the fix is applied and not undone by subsequent commits, and validate with local checks to ensure the issue is resolved.
Applied to files:
src/models/responses.py
📚 Learning: 2026-02-25T07:46:33.545Z
Learnt from: asimurka
Repo: lightspeed-core/lightspeed-stack PR: 1211
File: src/models/responses.py:8-16
Timestamp: 2026-02-25T07:46:33.545Z
Learning: In the Python codebase, requests.py should use OpenAIResponseInputTool as Tool while responses.py uses OpenAIResponseTool as Tool. This difference is intentional due to differing schemas for input vs output tools in llama-stack-api. Apply this distinction consistently to other models under src/models (e.g., ensure request-related tools use the InputTool variant and response-related tools use the ResponseTool variant). If adding new tools, choose the corresponding InputTool or Tool class based on whether the tool represents input or output, and document the rationale in code comments.
Applied to files:
src/models/responses.py
🪛 checkmake (0.3.2)
Makefile
[warning] 116-116: Target body for "lint-openapi" exceeds allowed length of 5 lines (6).
(maxbodylength)
[warning] 123-123: Target body for "verify" exceeds allowed length of 5 lines (7).
(maxbodylength)
🪛 Checkov (3.2.519)
docs/openapi.json
[high] 1-19584: Ensure that the global security field has rules defined
(CKV_OPENAPI_4)
[high] 1-19584: Ensure that security operations is not empty.
(CKV_OPENAPI_5)
🪛 LanguageTool
CONTRIBUTING.md
[uncategorized] ~52-~52: The official name of this software platform is spelled with a capital “H”.
Context: ...check. CI always runs Spectral (see .github/workflows/openapi_spectral.yaml). The...
(GITHUB)
[uncategorized] ~62-~62: The official name of this software platform is spelled with a capital “H”.
Context: ...work. CI uses Node 22 for Spectral (see .github/workflows/openapi_spectral.yaml). #...
(GITHUB)
docs/contributing/openapi-tags-and-spectral.md
[style] ~25-~25: Consider using the synonym “brief” (= concise, using a few words, not lasting long) to strengthen your wording.
Context: ...akefile skips Spectral and prints a short message so make verify can still ...
(QUICK_BRIEF)
[uncategorized] ~25-~25: The official name of this software platform is spelled with a capital “H”.
Context: ...de.js to run the check locally. CI (.github/workflows/openapi_spectral.yaml) alway...
(GITHUB)
🔇 Additional comments (14)
src/models/responses.py (1)
1707-1714: LGTM — SSE metadata is now OpenAPI-safe. Moving the SSE explanation into the response-level `description` avoids the invalid media-type-level `description` while preserving useful docs for `text/event-stream`.

.spectral.yaml (1)

1-9: LGTM — focused Spectral configuration. The config enables the OAS ruleset and clearly documents why media-example validation is temporarily disabled.

scripts/generate_openapi_schema.py (1)

78-89: LGTM — generated OpenAPI now preserves global tags. Passing `tags=app.openapi_tags` keeps `docs/openapi.json` aligned with FastAPI's configured `openapi_tags`.

Makefile (1)

116-130: LGTM — local fallback keeps `verify` usable. The `npx` guard prevents local `make verify` from failing solely because Node.js is absent, while still allowing CI to enforce Spectral.

src/app/main.py (1)

132-135: LGTM — FastAPI is wired to emit declared tags. Removing the trailing slash from the local server URL and passing `openapi_tags=_OPENAPI_TAGS` aligns the app metadata with the generated OpenAPI output.

.github/workflows/openapi_spectral.yaml (1)

18-42: LGTM — CI checks both schema drift and Spectral errors. The workflow regenerates the schema, diffs it against the committed file, and then runs the same Spectral CLI path CI should enforce.

CONTRIBUTING.md (1)

21-62: LGTM — contributor docs cover the new OpenAPI lint flow. The docs now explain the Node/npx prerequisite, local skip behavior, CI enforcement, and the tag-regeneration process.

Also applies to: 223-226

docs/contributing/openapi-tags-and-spectral.md (1)

1-27: LGTM — tag maintenance and Spectral behavior are documented. This gives contributors the missing operational context for updating router tags, regenerating `docs/openapi.json`, and running the new lint target.

docs/openapi.json (6)

20-20: LGTM — server URL is valid and consistent. Removing the trailing slash keeps the local server URL normalized without changing the intended endpoint base.

8583-8585: LGTM — SSE metadata is documented in a valid OpenAPI location. The response-level description clearly distinguishes the `text/event-stream` behavior.

8638-8642: LGTM — SSE example matches the declared string payload. The example preserves SSE framing while keeping the media type schema OpenAPI-compatible.

9938-9940: LGTM — A2A GET documentation is clearer. The summary uses standard JSON-RPC casing and the description avoids handler implementation details.

9956-9958: LGTM — POST operation now has a distinct operationId. This addresses the duplicate-operationId class of OpenAPI lint failures while keeping the public text consistent with GET.

19486-19584: LGTM — top-level tags are declared consistently. The added tag catalog should satisfy operation-tag declaration linting and improve generated API navigation.
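The constraint these review notes keep returning to — OpenAPI 3.x allows a `description` on the response object but not inside a media-type object — can be sketched as a small stdlib check. This is a simplified illustration of the rule, not a substitute for Spectral, and the example payloads below are illustrative:

```python
from typing import Any

# Keys OpenAPI 3.x permits inside a media-type object (responses[code].content[type]).
ALLOWED_MEDIA_TYPE_KEYS = {"schema", "example", "examples", "encoding"}


def find_bad_media_type_keys(response: dict[str, Any]) -> list[str]:
    """Return 'media-type.key' entries that are invalid inside media-type objects."""
    violations: list[str] = []
    for media_type, media_obj in response.get("content", {}).items():
        for key in media_obj:
            if key not in ALLOWED_MEDIA_TYPE_KEYS:
                violations.append(f"{media_type}.{key}")
    return violations


# Shape matching this PR's fix: the SSE note lives on the response-level description.
good = {
    "description": "Successful response. For text/event-stream, the body is an SSE stream.",
    "content": {
        "application/json": {"schema": {"type": "object"}},
        "text/event-stream": {"schema": {"type": "string"}},
    },
}

# The pre-fix shape: a description nested inside the media-type object.
bad = {
    "description": "Successful response",
    "content": {
        "text/event-stream": {"schema": {"type": "string"}, "description": "SSE stream"},
    },
}

print(find_bad_media_type_keys(good))  # []
print(find_bad_media_type_keys(bad))   # ['text/event-stream.description']
```

The same structural idea is what the moved `description` in `docs/openapi.json` satisfies: only schema/example metadata remains under each media type.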
Force-pushed 43bcfb9 to 97f498e
Actionable comments posted: 3
♻️ Duplicate comments (4)
src/app/main.py (1)
35-61: 🛠️ Refactor suggestion | 🟠 Major

Mark `_OPENAPI_TAGS` as `Final`. This module-level tag list is a constant and should follow the repository typing convention. As per coding guidelines, use `Final[type]` as the type hint for all constants.
Proposed fix

```diff
 import os
 from collections.abc import AsyncIterator
 from contextlib import asynccontextmanager
+from typing import Final
@@
-_OPENAPI_TAGS: list[dict[str, str]] = [
+_OPENAPI_TAGS: Final[list[dict[str, str]]] = [
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/main.py` around lines 35 - 61, The module-level constant _OPENAPI_TAGS should be annotated as Final; import Final from typing and change its annotation to use Final (e.g., _OPENAPI_TAGS: Final[list[dict[str, str]]]) so the constant follows the repo typing convention; update the top-level imports to include Final if missing and keep the existing value unchanged.

.github/workflows/openapi_spectral.yaml (1)
7-9: ⚠️ Potential issue | 🟡 Minor

Avoid duplicate Spectral runs for same-repo PR branches.
With both unfiltered `push` and `pull_request` triggers, PRs from repository branches run this workflow twice. Scope `push` to protected branches while keeping PR checks.

Suggested workflow trigger update
```diff
-on:
-  - push
-  - pull_request
+on:
+  push:
+    branches: [main]
+  pull_request:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/openapi_spectral.yaml around lines 7 - 9, The workflow currently triggers on both an unfiltered push and pull_request, causing duplicate runs for PRs; update the triggers so that push is scoped to protected branches only (e.g., main, master, release/*) while leaving pull_request intact — modify the top-level on block to replace the bare push entry with a push: branches: [...] list and keep the pull_request entry as-is so PRs still run but branch pushes do not duplicate PR runs.

docs/openapi.json (1)
17517-19614: ⚠️ Potential issue | 🔴 Critical

CI failure: `docs/openapi.json` is again out of sync with the generator. The Spectral workflow's regenerate-and-diff step is failing:

OpenAPI schema is out of date. 'diff -u docs/openapi.json /tmp/openapi-generated.json' failed; regenerate with: 'uv run scripts/generate_openapi_schema.py docs/openapi.json'.

Although a prior regeneration was applied in 785d35e, subsequent changes to `_OPENAPI_TAGS`, the `/a2a` endpoint metadata, and `ResponsesResponse.openapi_response()` have drifted the committed artifact again. Please regenerate and recommit: `uv run scripts/generate_openapi_schema.py docs/openapi.json`

Likely drift points to check after regeneration: the top-level `tags` array (hunk 6, ordering/content from `_OPENAPI_TAGS`), the `/a2a` GET/POST response metadata (hunks 4–5), and the SSE response description/example (hunks 2–3).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/openapi.json` around lines 17517 - 19614, The committed OpenAPI schema is out of date with the generator: regenerate the JSON using the provided script to incorporate recent changes to _OPENAPI_TAGS, the /a2a endpoint metadata, and ResponsesResponse.openapi_response(); run the generator (uv run scripts/generate_openapi_schema.py docs/openapi.json) locally, verify the top-level tags array and the /a2a GET/POST response metadata plus the SSE response/example reflect current code, then commit the regenerated docs/openapi.json.

src/app/endpoints/a2a.py (1)
696-735: ⚠️ Potential issue | 🟠 Major

Do not advertise JSON-RPC over GET as A2A spec-compliant.
The A2A v0.3.0 spec states JSON-RPC methods are invoked via HTTP POST to the Agent Card URL, while GET is used by other bindings/discovery flows. If this GET route is compatibility-only, keep behavior but exclude it from OpenAPI or reword it as non-spec compatibility. Source: A2A v0.3.0 specification.
🛠️ Proposed compatibility-only OpenAPI fix
```diff
 @router.get(
     "/a2a",
     response_model=None,
-    responses=a2a_jsonrpc_responses,
-    operation_id="handle_a2a_jsonrpc_a2a_get",
-    summary="Handle A2A JSON-RPC GET",
-    description=(
-        "Handle GET on /a2a for A2A JSON-RPC requests following the A2A protocol specification."
-    ),
+    include_in_schema=False,
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/endpoints/a2a.py` around lines 696 - 735, The GET handler handle_a2a_jsonrpc_get advertises JSON-RPC over HTTP GET as A2A-spec behavior; change it to be explicitly compatibility-only and remove it from the public OpenAPI surface: update the router.get decorator on handle_a2a_jsonrpc_get to include include_in_schema=False (so it is not advertised in OpenAPI) and/or edit the summary/description to clearly state "compatibility-only, not A2A spec-compliant; use POST for JSON-RPC" while keeping the handler delegating to _handle_a2a_jsonrpc for runtime compatibility.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@CONTRIBUTING.md`:
- Around line 221-225: The CONTRIBUTING.md linter list omits Spectral: update
the Linting Tools summary paragraph (the line listing Black, Ruff, Pyright,
Pylint, Pydocstyle, Mypy, and Bandit) to also include Spectral so it matches the
later "OpenAPI (Spectral)" section; mention Spectral by name and, if helpful, a
parenthetical reference to the existing lint-openapi target (npx --yes
`@stoplight/spectral-cli`@6 / make lint-openapi) to keep the overview consistent
with the OpenAPI documentation.
In `@docs/contributing/openapi-tags-and-spectral.md`:
- Around line 21-25: Update the doc so the local lint command uses the "uv run"
prefix: replace the shown command "make lint-openapi" with the uv-run form (use
"uv run make lint-openapi") and update the surrounding text to recommend using
"uv run" for all local OpenAPI/Spectral invocations; keep notes about
npx/Node.js, that this is part of "make verify", and that CI
(.github/workflows/openapi_spectral.yaml) always runs Spectral and failures are
driven by error-severity rules.
In `@src/app/endpoints/a2a_openapi.py`:
- Around line 3-8: The module-level OpenAPI metadata variable
a2a_jsonrpc_responses should be declared as a constant using Final: import Final
from typing and change the type hint to Final[dict[int | str, dict[str, Any]]],
and update the declaration of a2a_jsonrpc_responses accordingly so the name
remains the same but is statically typed as Final; ensure the import for Final
is added to the existing typing imports and no runtime behavior is changed.
---
Duplicate comments:
In @.github/workflows/openapi_spectral.yaml:
- Around line 7-9: The workflow currently triggers on both an unfiltered push
and pull_request, causing duplicate runs for PRs; update the triggers so that
push is scoped to protected branches only (e.g., main, master, release/*) while
leaving pull_request intact — modify the top-level on block to replace the bare
push entry with a push: branches: [...] list and keep the pull_request entry
as-is so PRs still run but branch pushes do not duplicate PR runs.
In `@docs/openapi.json`:
- Around line 17517-19614: The committed OpenAPI schema is out of date with the
generator: regenerate the JSON using the provided script to incorporate recent
changes to _OPENAPI_TAGS, the /a2a endpoint metadata, and
ResponsesResponse.openapi_response(); run the generator (uv run
scripts/generate_openapi_schema.py docs/openapi.json) locally, verify the
top-level tags array and the /a2a GET/POST response metadata plus the SSE
response/example reflect current code, then commit the regenerated
docs/openapi.json.
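The regenerate-and-diff step described above boils down to "regenerate, then compare". A hedged stdlib approximation is below (file paths are placeholders; the actual workflow shells out to `diff -u` for a byte-level comparison, whereas this sketch compares parsed JSON, which ignores key order and whitespace):

```python
import json
import tempfile
from pathlib import Path


def schema_in_sync(committed: Path, generated: Path) -> bool:
    """Compare two OpenAPI JSON files structurally (key order and whitespace ignored)."""
    return json.loads(committed.read_text()) == json.loads(generated.read_text())


# Throwaway files standing in for docs/openapi.json and the freshly
# generated /tmp/openapi-generated.json.
with tempfile.TemporaryDirectory() as tmp:
    committed = Path(tmp) / "committed.json"
    generated = Path(tmp) / "generated.json"
    committed.write_text(json.dumps({"openapi": "3.1.0", "tags": [{"name": "a2a"}]}))
    generated.write_text(json.dumps({"tags": [{"name": "a2a"}], "openapi": "3.1.0"}))
    print(schema_in_sync(committed, generated))  # True: same document, different key order
```

A structural comparison like this is more forgiving than the CI diff; if CI fails, regenerating with the committed script and recommitting is still the fix.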
In `@src/app/endpoints/a2a.py`:
- Around line 696-735: The GET handler handle_a2a_jsonrpc_get advertises
JSON-RPC over HTTP GET as A2A-spec behavior; change it to be explicitly
compatibility-only and remove it from the public OpenAPI surface: update the
router.get decorator on handle_a2a_jsonrpc_get to include
include_in_schema=False (so it is not advertised in OpenAPI) and/or edit the
summary/description to clearly state "compatibility-only, not A2A
spec-compliant; use POST for JSON-RPC" while keeping the handler delegating to
_handle_a2a_jsonrpc for runtime compatibility.
In `@src/app/main.py`:
- Around line 35-61: The module-level constant _OPENAPI_TAGS should be annotated
as Final; import Final from typing and change its annotation to use Final (e.g.,
_OPENAPI_TAGS: Final[list[dict[str, str]]]) so the constant follows the repo
typing convention; update the top-level imports to include Final if missing and
keep the existing value unchanged.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 634b071c-bcad-499a-bbf8-a9e4ea10570b
📒 Files selected for processing (11)
- .github/workflows/openapi_spectral.yaml
- .spectral.yaml
- CONTRIBUTING.md
- Makefile
- docs/contributing/openapi-tags-and-spectral.md
- docs/openapi.json
- scripts/generate_openapi_schema.py
- src/app/endpoints/a2a.py
- src/app/endpoints/a2a_openapi.py
- src/app/main.py
- src/models/responses.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
- GitHub Check: build-pr
- GitHub Check: Pylinter
- GitHub Check: unit_tests (3.12)
- GitHub Check: E2E Tests for Lightspeed Evaluation job
- GitHub Check: E2E: server mode / ci / group 2
- GitHub Check: E2E: library mode / ci / group 3
- GitHub Check: E2E: library mode / ci / group 2
- GitHub Check: E2E: library mode / ci / group 1
- GitHub Check: E2E: server mode / ci / group 3
- GitHub Check: E2E: server mode / ci / group 1
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Use absolute imports for internal modules: `from authentication import get_auth_dependency`
Import FastAPI dependencies with: `from fastapi import APIRouter, HTTPException, Request, status, Depends`
Import Llama Stack client with: `from llama_stack_client import AsyncLlamaStackClient`
Check `constants.py` for shared constants before defining new ones
All modules start with descriptive docstrings explaining purpose
Use `logger = get_logger(__name__)` from `log.py` for module logging
Type aliases defined at module level for clarity
Use `Final[type]` as type hint for all constants
All functions require docstrings with brief descriptions
Complete type annotations for parameters and return types in functions
Use `typing_extensions.Self` for model validators in Pydantic models
Use modern union type syntax `str | int` instead of `Union[str, int]`
Use `Optional[Type]` for optional type hints
Use snake_case with descriptive, action-oriented function names (get_, validate_, check_)
Avoid in-place parameter modification anti-patterns; return new data structures instead
Use `async def` for I/O operations and external API calls
Handle `APIConnectionError` from Llama Stack in error handling
Use standard log levels with clear purposes: debug, info, warning, error
All classes require descriptive docstrings explaining purpose
Use PascalCase for class names with standard suffixes: Configuration, Error/Exception, Resolver, Interface
Use ABC for abstract base classes with `@abstractmethod` decorators
Use `@model_validator` and `@field_validator` for Pydantic model validation
Complete type annotations for all class attributes; use specific types, not `Any`
Follow Google Python docstring conventions with Parameters, Returns, Raises, and Attributes sections

Files:

- scripts/generate_openapi_schema.py
- src/app/endpoints/a2a_openapi.py
- src/app/main.py
- src/models/responses.py
- src/app/endpoints/a2a.py

src/app/endpoints/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Use FastAPI `HTTPException` with appropriate status codes for API endpoints

Files:

- src/app/endpoints/a2a_openapi.py
- src/app/endpoints/a2a.py

src/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Pydantic models extend `ConfigurationBase` for config, `BaseModel` for data models

Files:

- src/app/endpoints/a2a_openapi.py
- src/app/main.py
- src/models/responses.py
- src/app/endpoints/a2a.py
🧠 Learnings (14)
📚 Learning: 2026-04-15T18:53:54.785Z
Learnt from: Lifto
Repo: lightspeed-core/lightspeed-stack PR: 1510
File: src/models/requests.py:742-769
Timestamp: 2026-04-15T18:53:54.785Z
Learning: In `src/models/requests.py`, keep `ResponsesRequest.validate_body_size` using `json.dumps(values)` with Python defaults (including ASCII escaping and the default separators). This default formatting is intentional to make the 65,536-character limit conservatively strict (accounting for small encoding/format overhead). Do not recommend changing it to `ensure_ascii=False` or using compact separators for this validator, since an exact wire-format byte match with the client payload is not achievable via `json.dumps` anyway.
Applied to files:
.spectral.yaml
📚 Learning: 2026-04-20T15:09:45.119Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1548
File: src/app/endpoints/rlsapi_v1.py:56-56
Timestamp: 2026-04-20T15:09:45.119Z
Learning: In `src/app/endpoints/rlsapi_v1.py`, treat the line `_get_rh_identity_context = get_rh_identity_context` as an intentional temporary backward-compatibility shim (introduced for PR `#1548`, Splunk HEC telemetry work). Do not flag it as dead/unnecessary/unused code during review until the planned part 3 is merged and the responses endpoint is fully wired up such that no tests or external consumers reference the underscore-prefixed name.
Applied to files:
.spectral.yaml
📚 Learning: 2026-04-06T20:18:07.852Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1463
File: src/app/endpoints/rlsapi_v1.py:266-271
Timestamp: 2026-04-06T20:18:07.852Z
Learning: In the lightspeed-stack codebase, within `src/app/endpoints/` inference/MCP endpoints, treat `tools: Optional[list[Any]]` in MCP tool definitions as an intentional, consistent typing pattern (used across `query`, `responses`, `streaming_query`, `rlsapi_v1`). Do not raise or suggest this as a typing issue during code review; changing it in isolation could break endpoint typing consistency across the codebase.
Applied to files:
- src/app/endpoints/a2a_openapi.py
- src/app/endpoints/a2a.py
📚 Learning: 2026-03-02T15:57:15.929Z
Learnt from: asamal4
Repo: lightspeed-core/lightspeed-stack PR: 1250
File: AGENTS.md:11-12
Timestamp: 2026-03-02T15:57:15.929Z
Learning: In lightspeed-stack's AGENTS.md, the Linting Tools section lists actual tool names (e.g., mypy, pylint, pyright, ruff, black), not make target names. For example, `check-types` is a make target that runs mypy, but mypy is the tool name documented in the Linting Tools section.
Applied to files:
- Makefile
- CONTRIBUTING.md
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Use `uv run make verify` to run all linters (black, pylint, pyright, ruff, docstyle, check-types)
Applied to files:
- Makefile
- CONTRIBUTING.md
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Use `uv run make format` to auto-format code with black + ruff
Applied to files:
Makefile
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Applies to **/*.py : Use Final[type] as type hint for all constants
Applied to files:
src/app/main.py
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Applies to **/constants.py : Central `constants.py` for shared constants with descriptive comments
Applied to files:
src/app/main.py
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Use `uv` package manager with `uv run` prefix for all commands
Applied to files:
CONTRIBUTING.md
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Use `uv sync --group dev --group llslibdev` for dependencies
Applied to files:
CONTRIBUTING.md
📚 Learning: 2026-01-12T10:58:40.230Z
Learnt from: blublinsky
Repo: lightspeed-core/lightspeed-stack PR: 972
File: src/models/config.py:459-513
Timestamp: 2026-01-12T10:58:40.230Z
Learning: In lightspeed-core/lightspeed-stack, for Python files under src/models, when a user claims a fix is done but the issue persists, verify the current code state before accepting the fix. Steps: review the diff, fetch the latest changes, run relevant tests, reproduce the issue, search the codebase for lingering references to the original problem, confirm the fix is applied and not undone by subsequent commits, and validate with local checks to ensure the issue is resolved.
Applied to files:
src/models/responses.py
📚 Learning: 2026-02-25T07:46:33.545Z
Learnt from: asimurka
Repo: lightspeed-core/lightspeed-stack PR: 1211
File: src/models/responses.py:8-16
Timestamp: 2026-02-25T07:46:33.545Z
Learning: In the Python codebase, requests.py should use OpenAIResponseInputTool as Tool while responses.py uses OpenAIResponseTool as Tool. This difference is intentional due to differing schemas for input vs output tools in llama-stack-api. Apply this distinction consistently to other models under src/models (e.g., ensure request-related tools use the InputTool variant and response-related tools use the ResponseTool variant). If adding new tools, choose the corresponding InputTool or Tool class based on whether the tool represents input or output, and document the rationale in code comments.
Applied to files:
src/models/responses.py
📚 Learning: 2026-04-20T15:09:48.726Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1548
File: src/app/endpoints/rlsapi_v1.py:56-56
Timestamp: 2026-04-20T15:09:48.726Z
Learning: In `src/app/endpoints/rlsapi_v1.py`, the `_get_rh_identity_context = get_rh_identity_context` alias is a deliberate, temporary backward-compatibility shim introduced in PR `#1548` (part 1/3 of Splunk HEC telemetry work). It is planned for removal in part 3 once the responses endpoint is fully wired up and no tests/consumers reference the underscore-prefixed name. Do not flag this alias as unnecessary or dead code until part 3 is merged.
Applied to files:
src/app/endpoints/a2a.py
📚 Learning: 2026-04-19T15:40:25.624Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-19T15:40:25.624Z
Learning: Applies to **/*.py : Follow Google Python docstring conventions with Parameters, Returns, Raises, and Attributes sections
Applied to files:
src/app/endpoints/a2a.py
🪛 checkmake (0.3.2)
Makefile
[warning] 116-116: Target body for "lint-openapi" exceeds allowed length of 5 lines (6).
(maxbodylength)
[warning] 123-123: Target body for "verify" exceeds allowed length of 5 lines (7).
(maxbodylength)
🪛 Checkov (3.2.519)
docs/openapi.json
[high] 1-19614: Ensure that the global security field has rules defined
(CKV_OPENAPI_4)
[high] 1-19614: Ensure that security operations is not empty.
(CKV_OPENAPI_5)
🪛 GitHub Actions: OpenAPI (Spectral)
docs/openapi.json
[error] 1-1: OpenAPI schema is out of date. 'diff -u docs/openapi.json /tmp/openapi-generated.json' failed; regenerate with: 'uv run scripts/generate_openapi_schema.py docs/openapi.json'.
🪛 LanguageTool
CONTRIBUTING.md
[uncategorized] ~52-~52: The official name of this software platform is spelled with a capital “H”.
Context: ...check. CI always runs Spectral (see .github/workflows/openapi_spectral.yaml). The...
(GITHUB)
[uncategorized] ~62-~62: The official name of this software platform is spelled with a capital “H”.
Context: ...work. CI uses Node 22 for Spectral (see .github/workflows/openapi_spectral.yaml). #...
(GITHUB)
docs/contributing/openapi-tags-and-spectral.md
[style] ~25-~25: Consider using the synonym “brief” (= concise, using a few words, not lasting long) to strengthen your wording.
Context: ...akefile skips Spectral and prints a short message so make verify can still ...
(QUICK_BRIEF)
[uncategorized] ~25-~25: The official name of this software platform is spelled with a capital “H”.
Context: ...de.js to run the check locally. CI (.github/workflows/openapi_spectral.yaml) alway...
(GITHUB)
🔇 Additional comments (7)
src/models/responses.py (1)
1707-1714: LGTM. The SSE note is now on the response-level `description`, while the media type entry keeps only schema/example metadata.

.spectral.yaml (1)

1-9: LGTM. The ruleset is minimal, documented, and scoped to the current OpenAPI linting goal.

scripts/generate_openapi_schema.py (1)

78-89: LGTM. This keeps generated OpenAPI tag metadata in sync with the FastAPI app configuration.

.github/workflows/openapi_spectral.yaml (1)

18-42: LGTM. The job verifies generated OpenAPI drift before running Spectral, so CI covers both stale artifacts and lint violations.

src/app/main.py (1)

132-135: LGTM. The normalized local server URL and `openapi_tags` wiring correctly expose the declared tag metadata to FastAPI.

Makefile (1)

116-130: LGTM. The local target avoids breaking contributors without Node installed, while `verify` still includes OpenAPI linting when `npx` is available.

src/app/endpoints/a2a.py (1)

738-777: POST `/a2a` OpenAPI metadata looks good. The distinct `operation_id`, shared `a2a_jsonrpc_responses`, and expanded Google-style wrapper docstring align the POST route with the generated schema and SSE response documentation.
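The duplicate-operationId failure this PR fixes is what Spectral's operationId-uniqueness rule catches. A stdlib sketch of the same check over an OpenAPI `paths` fragment follows; the GET operationId matches the diff shown earlier in the review, while the POST value is a hypothetical stand-in:

```python
from collections import Counter
from typing import Any


def duplicate_operation_ids(paths: dict[str, dict[str, Any]]) -> list[str]:
    """Return operationIds used by more than one operation in an OpenAPI paths object."""
    counts = Counter(
        op.get("operationId")
        for methods in paths.values()
        for op in methods.values()
        if op.get("operationId")
    )
    return sorted(oid for oid, n in counts.items() if n > 1)


# Before the fix: GET and POST /a2a shared a single operationId.
before = {"/a2a": {
    "get": {"operationId": "handle_a2a_jsonrpc"},
    "post": {"operationId": "handle_a2a_jsonrpc"},
}}

# After: each method carries a distinct operationId (POST name is illustrative).
after = {"/a2a": {
    "get": {"operationId": "handle_a2a_jsonrpc_a2a_get"},
    "post": {"operationId": "handle_a2a_jsonrpc_a2a_post"},
}}

print(duplicate_operation_ids(before))  # ['handle_a2a_jsonrpc']
print(duplicate_operation_ids(after))   # []
```

Unique operationIds matter beyond linting: client generators use them as method names, so collisions produce broken or overwritten client code.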
```bash
make lint-openapi
```

This is part of **`make verify`**. If **`npx`** is not on your **`PATH`**, the Makefile **skips** Spectral and prints a short message so **`make verify`** can still pass; install Node.js to run the check locally. **CI** (`.github/workflows/openapi_spectral.yaml`) always runs Spectral. Failures are driven by **error**-severity rules.
Use the uv run form for local commands.
This page already uses uv run for schema generation on Line 14; use the same convention for OpenAPI linting and verification. Based on learnings, “Use uv package manager with uv run prefix for all commands.”
📝 Proposed update
```diff
-make lint-openapi
+uv run make lint-openapi
```

```diff
-This is part of **`make verify`**.
+This is part of **`uv run make verify`**.
```

🧰 Tools
🪛 LanguageTool
[style] ~25-~25: Consider using the synonym “brief” (= concise, using a few words, not lasting long) to strengthen your wording.
Context: ...akefile skips Spectral and prints a short message so make verify can still ...
(QUICK_BRIEF)
[uncategorized] ~25-~25: The official name of this software platform is spelled with a capital “H”.
Context: ...de.js to run the check locally. CI (.github/workflows/openapi_spectral.yaml) alway...
(GITHUB)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/contributing/openapi-tags-and-spectral.md` around lines 21 - 25, Update
the doc so the local lint command uses the "uv run" prefix: replace the shown
command "make lint-openapi" with the uv-run form (use "uv run make
lint-openapi") and update the surrounding text to recommend using "uv run" for
all local OpenAPI/Spectral invocations; keep notes about npx/Node.js, that this
is part of "make verify", and that CI (.github/workflows/openapi_spectral.yaml)
always runs Spectral and failures are driven by error-severity rules.
Force-pushed bcae72f to cfa1268
@tisnik I fixed the JIRA issues (s/OLS/LCORE) in the commit messages, but I don't have permissions to change the PR title. Could you fix the PR title, please?
done
tisnik left a comment
Also, please rebase. The Makefile (dev-tools) changes were already made in a different PR and should not show up here.
Force-pushed cfa1268 to 9e49da8
Done.
asimurka left a comment
LGTM overall. Suggested some nits. Needs another rebase before merge.
```python
"description": (
    f"{SUCCESSFUL_RESPONSE_DESCRIPTION}. "
    "For `text/event-stream`, the body is a Server-Sent Events stream."
),
```
Keep the description short as it is. I don't see any benefit of this. The description of application/json and text/event-stream is already captured by content that exposes both response formats.
```python
# 200 may be buffered JSON-RPC (application/json) or SSE (text/event-stream).
a2a_jsonrpc_responses: Final[dict[int | str, dict[str, Any]]] = {
    200: {
        "description": (
```
Same here — I think this description should be very brief, and the descriptions of the individual content types could be more specific.
Separate GET/POST operationId values on /a2a, tighten route summaries and descriptions for the spec, and fix Responses streaming OpenAPI (200 text/event-stream: drop invalid sibling description; clarify JSON success description). Regenerate docs/openapi.json for those paths only (no global tag list yet).
Force-pushed 9e49da8 to d0f82ba
…fined Add _OPENAPI_TAGS / openapi_tags on FastAPI, pass tags through scripts/generate_openapi_schema.py, and regenerate docs/openapi.json with a root-level tags array and normalized server URL. Document tag maintenance in docs/contributing/openapi-tags-and-spectral.md (tag list section).
Add .spectral.yaml, make lint-openapi (optional npx locally), wire into verify, GitHub workflow to regenerate-check docs/openapi.json and run Spectral, and CONTRIBUTING prerequisites for Node. Extend contributing doc with Spectral section.
Force-pushed d0f82ba to c2b7827
```python
"application/json": {"example": json_example} if json_example else {},
"text/event-stream": {
    "schema": {"type": "string"},
    "description": "SSE stream of events",
```
Description
Removed OpenAPI spec violations in docs/openapi.json - duplicate operation ids and an unexpected "description" attribute.
Added OpenAPI linting to CI as a GitHub action via the 'make lint-openapi' target.
Type of change
Tools used to create PR
Identify any AI code assistants used in this PR (for transparency and review context)
Related Tickets & Documents
Checklist before requesting a review
Testing
Please provide detailed steps to perform tests related to this code change.
Run 'make lint-openapi', which should produce no errors.
How were the fix/results from this change verified? Please provide relevant screenshots or results.
Summary by CodeRabbit
New Features
Refactor
Documentation