
feat(server): add GET /sandboxes/{id}/logs endpoint for log streaming#397

Open
zerone0x wants to merge 1 commit into alibaba:main from zerone0x:feat/sandbox-log-stream

Conversation

@zerone0x
Contributor

Summary

Closes #34

Adds a GET /sandboxes/{sandboxId}/logs endpoint that streams the combined stdout/stderr of a running sandbox, addressing the question raised in #34.

Changes

  • SandboxService – new abstract get_logs(sandbox_id, follow, tail, timestamps) method
  • DockerSandboxService – implementation using container.logs(stream=True) from the Docker SDK
  • KubernetesSandboxService – implementation using read_namespaced_pod_log(); follow=True sets _preload_content=False for real streaming via urllib3
  • server/src/api/lifecycle.py – new route with follow, tail, and timestamps query parameters; returns StreamingResponse with text/plain
  • specs/sandbox-lifecycle.yml – documents the new endpoint in the OpenAPI spec
  • server/tests/test_routes_logs.py – 6 unit tests: happy path, parameter forwarding, empty stream, 404 propagation, auth enforcement, and tail validation
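The service contract listed above can be sketched roughly as follows. This is a hedged illustration based only on the PR summary: the method and parameter names match the description, but the exact signatures in the repository may differ, and `FakeSandboxService` is a hypothetical in-memory stand-in, not code from the PR.

```python
from abc import ABC, abstractmethod
from typing import Iterator, Optional


class SandboxService(ABC):
    @abstractmethod
    def get_logs(
        self,
        sandbox_id: str,
        follow: bool = False,
        tail: Optional[int] = None,
        timestamps: bool = False,
    ) -> Iterator[bytes]:
        """Yield chunks of the sandbox's combined stdout/stderr."""


class FakeSandboxService(SandboxService):
    """Hypothetical in-memory implementation, useful for unit-testing the contract."""

    def __init__(self, lines: list[bytes]):
        self._lines = lines

    def get_logs(self, sandbox_id, follow=False, tail=None, timestamps=False):
        # tail=None returns everything; tail=N returns only the last N lines.
        lines = self._lines if tail is None else self._lines[-tail:]
        yield from lines


svc = FakeSandboxService([b"line1\n", b"line2\n", b"line3\n"])
print(b"".join(svc.get_logs("sb-123", tail=2)))  # b'line2\nline3\n'
```

A generator-based contract like this lets the route layer hand the iterator straight to a streaming HTTP response without buffering the full log in memory.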

API

GET /sandboxes/{sandboxId}/logs?follow=false&tail=100&timestamps=false
Parameter    Type       Default   Description
follow       bool       false     Keep streaming until the sandbox exits
tail         int (≥1)   (unset)   Return only the last N lines
timestamps   bool       false     Prefix each line with an RFC3339 timestamp

Response: 200 text/plain – streaming log output
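A request to the endpoint above can be assembled like this. The host and sandbox id are hypothetical placeholders; the parameter names follow the documented query string.

```python
from urllib.parse import urlencode

base = "http://localhost:8000"  # assumed server address, not specified in the PR
sandbox_id = "sb-123"           # hypothetical sandbox id
params = {"follow": "true", "tail": 100, "timestamps": "false"}
url = f"{base}/sandboxes/{sandbox_id}/logs?{urlencode(params)}"
print(url)
# http://localhost:8000/sandboxes/sb-123/logs?follow=true&tail=100&timestamps=false
```

With `follow=true`, a client should read the response body incrementally rather than waiting for it to complete, since the stream stays open until the sandbox exits.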

Testing

  • Unit tests (6 new tests, all passing)
  • ruff check – no errors
  • Integration tests (requires running sandbox environment)

Breaking Changes

  • None

Checklist

  • Linked Issue (Sandbox log stream support #34)
  • Added/updated tests
  • Added/updated docs (OpenAPI spec)
  • Security impact considered (no new attack surface; subject to existing API key auth)
  • Backward compatibility considered

Closes alibaba#34

Add a `GET /sandboxes/{sandboxId}/logs` endpoint that streams the
combined stdout/stderr of a running sandbox to the caller.

Changes:
- `SandboxService`: add abstract `get_logs()` method
- `DockerSandboxService`: implement via `container.logs(stream=True)`
- `KubernetesSandboxService`: implement via `read_namespaced_pod_log()`;
  `follow=True` uses `_preload_content=False` for true streaming
- `lifecycle.py`: add route with `follow`, `tail`, `timestamps` query params
- `specs/sandbox-lifecycle.yml`: document the new endpoint
- `tests/test_routes_logs.py`: 6 unit tests covering happy path, params
  forwarding, empty stream, 404 propagation, auth, and validation

Co-Authored-By: Claude <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings March 10, 2026 02:47
@chatgpt-codex-connector

Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.
Credits must be used to enable repository wide code reviews.

@CLAassistant

CLAassistant commented Mar 10, 2026

CLA assistant check
All committers have signed the CLA.


Copilot AI left a comment


Pull request overview

Adds a new API endpoint to stream sandbox logs, backed by Docker/Kubernetes service implementations and documented in the OpenAPI spec.

Changes:

  • Introduces GET /sandboxes/{sandboxId}/logs with follow, tail, and timestamps query params returning a StreamingResponse.
  • Adds SandboxService.get_logs(...) plus Docker and Kubernetes implementations.
  • Adds unit tests and updates the OpenAPI spec for the new endpoint.

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 3 comments.

File                                           Description
specs/sandbox-lifecycle.yml                    Documents the new log-streaming endpoint and its query params/responses
server/src/api/lifecycle.py                    Adds the FastAPI route returning StreamingResponse (text/plain)
server/src/services/sandbox_service.py         Adds the abstract get_logs(...) contract to the service interface
server/src/services/docker.py                  Implements log streaming via Docker SDK container.logs(stream=True)
server/src/services/k8s/kubernetes_service.py  Implements log streaming via read_namespaced_pod_log(), including streaming mode
server/tests/test_routes_logs.py               Adds unit tests for happy path, param forwarding, auth, and validation


        in: query
        description: |
          Return only the last *N* lines of existing log output before
          streaming new lines. Omit or set to 0 to return all lines.

Copilot AI Mar 10, 2026


The tail description says “omit or set to 0 to return all lines”, but the schema enforces minimum: 1 (and the route/tests also reject tail=0). Update the description to match the implemented validation (e.g., “Omit to return all lines”).

Suggested change
streaming new lines. Omit or set to 0 to return all lines.
streaming new lines. Omit to return all lines.

Comment on lines +1578 to +1586
log_gen = container.logs(
    stream=True,
    follow=follow,
    stdout=True,
    stderr=True,
    timestamps=timestamps,
    tail=tail_arg,
)
yield from log_gen

Copilot AI Mar 10, 2026


With Docker containers that are not running with a TTY, container.logs(stdout=True, stderr=True) returns the Docker multiplexed stream format (binary framing), but the API advertises text/plain combined stdout/stderr. This will produce non-plain-text bytes for real containers. Consider demuxing/parsing the multiplexed stream (or using an option/approach that returns plain log bytes) before yielding, so the HTTP response body is actually plain-text.
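The multiplexed format the reviewer describes can be demultiplexed with plain byte parsing: each frame is an 8-byte header (one stream-id byte, three zero bytes, a big-endian 4-byte payload length) followed by the payload. The sketch below is a hedged illustration of that framing, not code from the PR; real code would also need to handle frames split across chunk boundaries, which this version assumes away.

```python
import struct


def demux(raw: bytes):
    """Yield (stream_id, payload) pairs from a Docker multiplexed log buffer.

    stream_id 1 is stdout, 2 is stderr. Assumes `raw` contains whole frames.
    """
    offset = 0
    while offset + 8 <= len(raw):
        # ">B3xI": 1-byte stream id, 3 padding bytes, big-endian 4-byte length.
        stream_id, length = struct.unpack_from(">B3xI", raw, offset)
        offset += 8
        yield stream_id, raw[offset:offset + length]
        offset += length


# Build one stdout frame and demux it back.
frame = struct.pack(">B3xI", 1, 6) + b"hello\n"
print(list(demux(frame)))  # [(1, b'hello\n')]
```

An alternative, if the SDK version supports it, is to let the Docker SDK do the demultiplexing rather than parsing frames by hand.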

Comment on lines +737 to +739
for chunk in response.stream(amt=4096):
if chunk:
yield chunk

Copilot AI Mar 10, 2026


In streaming mode, the raw HTTP response should be explicitly closed/released when the generator ends (including on client disconnect) to avoid leaking connections. Wrap the streaming loop in a try/finally (or context manager) and call response.close() (and/or response.release_conn() depending on the returned type) in the finally block.

Suggested change
    for chunk in response.stream(amt=4096):
        if chunk:
            yield chunk
    try:
        for chunk in response.stream(amt=4096):
            if chunk:
                yield chunk
    finally:
        # Ensure the underlying HTTP response/connection is properly cleaned up.
        try:
            response.close()
        except Exception:
            # Best-effort close; avoid masking original exceptions.
            pass
        # Some response types (e.g., urllib3.HTTPResponse) expose release_conn().
        release_conn = getattr(response, "release_conn", None)
        if callable(release_conn):
            try:
                release_conn()
            except Exception:
                pass
