
Add LogfireSink for pydantic-evals online evaluation integration#1804

Closed
Kludex wants to merge 9 commits into main from feat/logfire-evaluation-sink

Conversation

Member

@Kludex Kludex commented Mar 25, 2026

Summary

  • Add LogfireSink implementing pydantic-evals' EvaluationSink protocol, sending online eval results to the new /v1/annotations HTTP API
  • Add AnnotationsClient async HTTP client with retry on 5xx/timeout, using write token auth (same as OTLP ingest)
  • Auto-configure LogfireSink as the default sink in logfire.configure() when pydantic-evals is installed and a token is present
  • Add create_annotation() / create_annotation_sync() as user-facing HTTP-based annotation API
  • Deprecate raw_annotate_span() and record_feedback() in favor of the new HTTP path
  • Add logfire-api stubs for new modules

Depends on the platform /v1/annotations endpoint (the two can be developed in parallel, but this must be released after the backend).
Based on dmontagu/online-eval-capability branch in pydantic-ai for the EvaluationSink protocol.

See plan.local.md for the full design context.

Test plan

  • Unit tests for AnnotationsClient (auth header, retry on 5xx, no retry on 4xx, close)
  • Unit tests for LogfireSink (result/failure serialization, idempotency keys, None span_reference no-op, exception catching)
  • Unit tests for create_annotation() API
  • Existing test_annotations.py updated to handle deprecation warnings
  • All pyright/ruff checks pass

🤖 Generated with Claude Code


Summary by cubic

Adds LogfireSink to send pydantic-evals online evaluation results to Logfire via the new /v1/annotations HTTP API. Also adds a simple annotations API for manual feedback and auto-configures the sink during logfire.configure() when a token is set.

  • New Features

    • LogfireSink implements EvaluationSink, serializes values (assertion/score/label), includes failures, and uses deterministic idempotency keys.
    • AnnotationsClient async HTTP client with write-token auth and one retry on 5xx/timeout.
    • New HTTP helpers: logfire.experimental.annotations_api.create_annotation() and create_annotation_sync().
  • Migration

    • raw_annotate_span() and record_feedback() are deprecated; use the new HTTP APIs.
    • Depends on the platform /v1/annotations endpoint; release after backend is live.

Written for commit 489a62a. Summary will update on new commits.

Introduce `LogfireSink` that implements the `EvaluationSink` protocol from
pydantic-evals, sending online evaluation results to the Logfire annotations
HTTP API. Auto-configured by `logfire.configure()` when pydantic-evals is
installed.

- `AnnotationsClient`: async HTTP client for `POST /v1/annotations` with
  retry on 5xx/timeout
- `LogfireSink`: maps `EvaluationResult`/`EvaluatorFailure` to annotation
  payloads with idempotency keys
- `create_annotation()` / `create_annotation_sync()`: user-facing HTTP-based
  annotation API as a non-OTEL alternative to `record_feedback()`
- Deprecate `raw_annotate_span()` and `record_feedback()` in favor of the
  new HTTP-based API

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

cloudflare-workers-and-pages bot commented Mar 25, 2026

Deploying logfire-docs with Cloudflare Pages

Latest commit: e9d090b
Status: ✅  Deploy successful!
Preview URL: https://2c41881a.logfire-docs.pages.dev
Branch Preview URL: https://feat-logfire-evaluation-sink.logfire-docs.pages.dev


Comment thread logfire/_internal/annotations_client.py Outdated
devin-ai-integration[bot]

This comment was marked as resolved.

cubic-dev-ai[bot]

This comment was marked as resolved.

…logging, and null comment

- Extract _raw_annotate_span_impl to avoid duplicate deprecation warnings when
  record_feedback() calls raw_annotate_span()
- Bind retry exception properly in annotations_client retry handler
- Only include comment in failure annotations when error_stacktrace is present
devin-ai-integration[bot]

This comment was marked as resolved.

dmontagu and others added 3 commits March 29, 2026 22:09
Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
- Use `values` dict format (name → value) instead of individual name/value fields
- Source: 'automated' for evals, 'app' for SDK (matching platform enum)
- Embed reason/comment in value as {"value": v, "reason": r}
- Remove annotation_type, source_name, idempotency_key (platform uses natural key for upsert)
- Fix CI: catch Exception (not just ImportError) in _try_configure_online_evals
  for Pydantic 2.4 compatibility (transitive dep uses pydantic.Tag)
- Fix reconfiguration: update sink on subsequent logfire.configure() calls
- Fix create_annotation_sync: use sync httpx.Client instead of asyncio.run()
  to work safely within running event loops
- Update tests and logfire-api stubs
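Putting the pieces of that commit message together, the request body looks roughly like this (a hypothetical payload assembled from the description above, not an official API spec):

```python
# Hypothetical V1 annotation payload, assembled from the commit message above.
annotation = {
    'trace_id': 'a' * 32,  # 32-hex-char trace id
    'span_id': 'b' * 16,   # 16-hex-char span id
    'values': {
        # reason/comment is embedded in the value when present...
        'quality': {'value': 0.95, 'reason': 'Great response'},
        # ...otherwise the bare value is stored
        'accuracy': 1,
    },
    'source': 'automated',  # 'automated' for evals, 'app' for the SDK
    'metadata': {'reviewer': 'alice'},
}
body = {'annotations': [annotation]}
```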
Contributor

@devin-ai-integration devin-ai-integration bot left a comment

Devin Review found 4 new potential issues.

View 6 additional findings in Devin Review.


) -> None:
"""Sync version of `create_annotation`.

Safe to call from both sync contexts and within running event loops.
Contributor

@devin-ai-integration devin-ai-integration bot Mar 30, 2026

🟡 create_annotation_sync docstring falsely claims safety in running event loops

The docstring at logfire/experimental/annotations_api.py:98 states "Safe to call from both sync contexts and within running event loops." However, the implementation uses a synchronous httpx.Client (lines 119-125) which performs blocking I/O. Calling this from within an async handler would block the event loop thread for up to 30 seconds (the configured DEFAULT_TIMEOUT), causing severe performance degradation for all concurrent async tasks. Users who trust this claim and call it from async code (instead of using the proper create_annotation async version) will silently degrade their application's performance.
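One way to make the sync helper genuinely safe to call from async code would be to push the blocking call onto a worker thread, e.g. via `asyncio.to_thread`. A minimal sketch (`create_annotation_sync` below is a stand-in for the real blocking function, not the actual implementation):

```python
import asyncio
import time


def create_annotation_sync() -> str:
    """Stand-in for the blocking httpx.Client-based call."""
    time.sleep(0.01)  # simulates blocking network I/O
    return 'ok'


async def create_annotation_from_async() -> str:
    # Runs the blocking call on a worker thread so the event loop stays free.
    return await asyncio.to_thread(create_annotation_sync)


result = asyncio.run(create_annotation_from_async())
```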


Comment thread logfire/_internal/config.py Outdated
Comment on lines +615 to +616
if config.send_to_logfire and config.token:
_try_configure_online_evals(config.token, config.advanced)
Contributor

@devin-ai-integration devin-ai-integration bot Mar 30, 2026

🟡 _try_configure_online_evals modifies global pydantic-evals state even for local=True configurations

At logfire/_internal/config.py:615-616, _try_configure_online_evals is called regardless of whether local=True or local=False. When local=True, the purpose is to create an isolated LogfireConfig that doesn't affect global state (line 577: config = LogfireConfig()). However, _try_configure_online_evals unconditionally sets evals_config.default_sink (line 643) on the global pydantic_evals.online.DEFAULT_CONFIG singleton. This means a local logfire configuration leaks into global pydantic-evals state, which is inconsistent with the semantics of local=True.


Comment thread logfire/_internal/annotations_client.py Outdated
…PIClient

- Remove standalone AnnotationsClient; add create_annotations() to
  LogfireAPIClient and AsyncLogfireAPIClient
- Auth now uses API keys (Bearer token) instead of write tokens,
  matching the platform's updated V1 annotations endpoint
- Auto-config requires LOGFIRE_API_KEY (already exists in config)
  in addition to LOGFIRE_TOKEN
- LogfireSink now uses AsyncLogfireAPIClient with retry logic
  (single retry on 5xx/timeout)
- create_annotation_sync uses sync LogfireAPIClient
- Update tests, stubs, remove obsolete annotations_client tests
Contributor

@devin-ai-integration devin-ai-integration bot left a comment

Devin Review found 2 new potential issues.

View 6 additional findings in Devin Review.


Comment on lines +73 to +84
except httpx.HTTPStatusError as exc:
if exc.response.status_code >= 500:
try:
await self._client.create_annotations([annotation])
except Exception as retry_exc:
logfire.error('Annotations batch retry failed: {error}', error=str(retry_exc), _exc_info=retry_exc)
else:
logfire.error(
'Annotations batch request failed: {status} {error}',
status=exc.response.status_code,
error=str(exc),
)
Contributor

🔴 5xx retry logic is dead code because _handle_response raises DatasetApiError, not httpx.HTTPStatusError

The error handling in LogfireSink.submit() catches httpx.HTTPStatusError to detect server errors (>= 500) and retry the request. However, AsyncLogfireAPIClient.create_annotations() at logfire/experimental/api_client.py:979-980 calls self._handle_response(response), which raises DatasetApiError for any HTTP status >= 400 (logfire/experimental/api_client.py:231-233), not httpx.HTTPStatusError. Since httpx's AsyncClient.post() also does not auto-raise HTTPStatusError, the except httpx.HTTPStatusError block is unreachable. All API errors (including retriable 5xx) fall through to the generic except Exception on line 90, which only logs but never retries. The same applies to the status-code-specific error message on lines 79-84.

Prompt for agents
In logfire/experimental/evaluation.py, the except blocks on lines 73-84 catch httpx.HTTPStatusError, but this exception is never raised by create_annotations(). The _handle_response() method in logfire/experimental/api_client.py (line 224-236) raises DatasetApiError for HTTP errors, not httpx.HTTPStatusError. To fix the retry logic:

Option A: Change the except clause on line 73 from `except httpx.HTTPStatusError as exc` to `except DatasetApiError as exc`, and update `exc.response.status_code` to `exc.status_code` on line 74. Import DatasetApiError from logfire.experimental.api_client.

Option B: Alternatively, in the create_annotations methods of both LogfireAPIClient and AsyncLogfireAPIClient (api_client.py lines 715-726 and 974-980), call response.raise_for_status() before _handle_response() so that httpx.HTTPStatusError is raised for error status codes. But this would be inconsistent with other methods in the client.
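Option A can be sketched as follows (with a stand-in `DatasetApiError` carrying `status_code`, and `create_annotations` passed in as a callable; names are illustrative, not the actual implementation):

```python
from typing import Any, Callable


class DatasetApiError(Exception):
    """Stand-in for logfire.experimental.api_client.DatasetApiError."""

    def __init__(self, status_code: int, message: str = ''):
        super().__init__(message or f'HTTP {status_code}')
        self.status_code = status_code


def submit_with_retry(create_annotations: Callable[[list[Any]], None], annotation: Any) -> bool:
    """Send one annotation, retrying exactly once on a 5xx response."""
    try:
        create_annotations([annotation])
        return True
    except DatasetApiError as exc:  # note: not httpx.HTTPStatusError
        if exc.status_code >= 500:
            try:
                create_annotations([annotation])
                return True
            except Exception:
                return False  # retry failed; in the real sink this would be logged
        return False  # 4xx errors are not retried
```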

Comment on lines +615 to +616
if config.send_to_logfire and config.token and config.api_key:
_try_configure_online_evals(config.api_key, config.advanced)
Contributor

@devin-ai-integration devin-ai-integration bot Mar 30, 2026

🚩 Online evals auto-configuration runs even for local configs

At logfire/_internal/config.py:615-616, _try_configure_online_evals is called regardless of whether local=True. This modifies the global pydantic_evals.online.DEFAULT_CONFIG.default_sink, which is process-wide state. When local=True, the user explicitly opts for a non-global Logfire config, but this side-effect still mutates global pydantic-evals state. This may be intentional since pydantic-evals only has a single global config, but it's worth documenting or guarding against.


Collaborator

This may be intentional since pydantic-evals only has a single global config

seems this isn't true

Contributor

@devin-ai-integration devin-ai-integration bot left a comment

Devin Review found 1 new potential issue.

View 6 additional findings in Devin Review.


Returns:
The API response.
"""
response = self.client.post('/v1/annotations', json={'annotations': annotations})
Contributor

@devin-ai-integration devin-ai-integration bot Apr 9, 2026

🚩 Annotation endpoints use no trailing slash unlike all dataset endpoints

The new annotation endpoint paths (/v1/annotations at api_client.py:725 and api_client.py:979) do not use trailing slashes, while every dataset endpoint consistently uses trailing slashes (e.g., /v1/datasets/, /v1/datasets/{id}/cases/). This may be intentional if the server-side annotation API is defined without trailing slashes, but if the server enforces trailing slashes (returning 307 redirects), this could cause issues depending on httpx's redirect-following behavior. Worth confirming against the actual API specification.


Contributor

@devin-ai-integration devin-ai-integration bot left a comment

Devin Review found 1 new potential issue.

View 9 additional findings in Devin Review.


Comment on lines +52 to +57
for failure in failures:
error_value = json.dumps({'error': True, 'error_message': failure.error_message})
if failure.error_stacktrace:
values[failure.name] = {'value': error_value, 'reason': failure.error_stacktrace[:1000]}
else:
values[failure.name] = error_value
Contributor

🚩 LogfireSink error values for failures are JSON strings, not dicts

In evaluation.py:53-57, failure error values are serialized as JSON strings via json.dumps(...) rather than as dicts. When there's no stacktrace, the value is a bare JSON string ('{"error": true, ...}'). When there IS a stacktrace, the value is a dict with {'value': <json_string>, 'reason': ...}. This means the value field within the dict is a JSON-encoded string, not a structured object, creating an inconsistency with how results are stored (where value is a native Python type). This may be intentional for error representation but could cause confusion in the Logfire UI.


Collaborator

yeah this makes me really want nice types

Collaborator

what about values[failure.name] = {'value': error_value} for consistency?

Collaborator

Same for the non-error case, why not {'value': value} when there's no reason?

Collaborator

Or separate value and reason in the backend?

Comment on lines +615 to +616
if config.send_to_logfire and config.token and config.api_key:
_try_configure_online_evals(config.api_key, config.advanced)
Collaborator

This may be intentional since pydantic-evals only has a single global config

seems this isn't true

def _try_configure_online_evals(api_key: str, advanced: AdvancedOptions | None) -> None:
"""Auto-configure pydantic-evals LogfireSink if pydantic-evals is installed."""
try:
_online_mod = __import__('pydantic_evals.online', fromlist=['DEFAULT_CONFIG'])
Collaborator

can't a normal import be used?


client = AsyncLogfireAPIClient(api_key=api_key, base_url=base_url)
sink = LogfireSink(client=client)
evals_config.default_sink = sink
Collaborator

maybe best to use configure(default_sink=sink), even if it's the same right now


from logfire.experimental.api_client import AsyncLogfireAPIClient

base_url = advanced.base_url if advanced and advanced.base_url else get_base_url_from_token(api_key)
Collaborator

@alexmojaki alexmojaki Apr 14, 2026

Suggested change
base_url = advanced.base_url if advanced and advanced.base_url else get_base_url_from_token(api_key)
base_url = advanced.generate_base_url(api_key)

advanced can't actually be None here

Comment on lines +26 to +29
results: Sequence[Any],
failures: Sequence[Any],
context: Any,
span_reference: Any | None,
Collaborator

these could be real type hints

Collaborator

why a new module? the name annotations is nice.

value: Any = result.value
if result.reason is not None:
value = {'value': value, 'reason': result.reason}
values[result.name] = value
Collaborator

are names guaranteed to be unique?

for failure in failures:
error_value = json.dumps({'error': True, 'error_message': failure.error_message})
if failure.error_stacktrace:
values[failure.name] = {'value': error_value, 'reason': failure.error_stacktrace[:1000]}
Collaborator

Suggested change
values[failure.name] = {'value': error_value, 'reason': failure.error_stacktrace[:1000]}
values[failure.name] = {'value': error_value, 'reason': truncate_string(failure.error_stacktrace, max_length=1000)}

But why doesn't the error_message go in the reason?

Comment on lines +52 to +57
for failure in failures:
error_value = json.dumps({'error': True, 'error_message': failure.error_message})
if failure.error_stacktrace:
values[failure.name] = {'value': error_value, 'reason': failure.error_stacktrace[:1000]}
else:
values[failure.name] = error_value
Collaborator

yeah this makes me really want nice types

Comment on lines +52 to +57
for failure in failures:
error_value = json.dumps({'error': True, 'error_message': failure.error_message})
if failure.error_stacktrace:
values[failure.name] = {'value': error_value, 'reason': failure.error_stacktrace[:1000]}
else:
values[failure.name] = error_value
Collaborator

what about values[failure.name] = {'value': error_value} for consistency?

try:
await self._client.create_annotations([annotation])
except Exception as retry_exc:
logfire.error('Annotations batch retry failed: {error}', error=str(retry_exc), _exc_info=retry_exc)
Collaborator

Suggested change
logfire.error('Annotations batch retry failed: {error}', error=str(retry_exc), _exc_info=retry_exc)
logfire.exception('Annotations batch retry failed: {error}', error=str(retry_exc))

Collaborator

maybe this should be a warning, not an error?

Comment thread tests/test_annotations.py
Comment on lines +20 to +21
with warnings.catch_warnings():
warnings.simplefilter('ignore', DeprecationWarning)
Collaborator

Suggested change
with warnings.catch_warnings():
warnings.simplefilter('ignore', DeprecationWarning)
with pytest.warns(DeprecationWarning):

Comment on lines +13 to +16
mock_client = AsyncMock()
mock_client.create_annotations = AsyncMock()
mock_client.__aenter__ = AsyncMock(return_value=mock_client)
mock_client.__aexit__ = AsyncMock(return_value=None)
Collaborator

way too much mocking. needs vcr.

mock_client.__aexit__ = AsyncMock(return_value=None)

with patch(
'logfire.experimental.annotations_api._get_api_key_and_base_url',
Collaborator

just configure an api key, no reason to mock

Comment on lines +35 to +40
assert len(annotations) == 1
assert annotations[0]['trace_id'] == 'a' * 32
assert annotations[0]['span_id'] == 'b' * 16
assert annotations[0]['values'] == {'quality': {'value': 0.95, 'reason': 'Great response'}}
assert annotations[0]['source'] == 'app'
assert annotations[0]['metadata'] == {'reviewer': 'alice'}
Collaborator

Suggested change
assert len(annotations) == 1
assert annotations[0]['trace_id'] == 'a' * 32
assert annotations[0]['span_id'] == 'b' * 16
assert annotations[0]['values'] == {'quality': {'value': 0.95, 'reason': 'Great response'}}
assert annotations[0]['source'] == 'app'
assert annotations[0]['metadata'] == {'reviewer': 'alice'}
assert annotations == snapshot(...)

Comment on lines +52 to +57
for failure in failures:
error_value = json.dumps({'error': True, 'error_message': failure.error_message})
if failure.error_stacktrace:
values[failure.name] = {'value': error_value, 'reason': failure.error_stacktrace[:1000]}
else:
values[failure.name] = error_value
Collaborator

Same for the non-error case, why not {'value': value} when there's no reason?

values[result.name] = value

for failure in failures:
error_value = json.dumps({'error': True, 'error_message': failure.error_message})
Collaborator

Seems like there should be something proper in the backend for distinguishing failures from results. Imagine writing a SQL WHERE clause that filters for errors.

Comment on lines +52 to +57
for failure in failures:
error_value = json.dumps({'error': True, 'error_message': failure.error_message})
if failure.error_stacktrace:
values[failure.name] = {'value': error_value, 'reason': failure.error_stacktrace[:1000]}
else:
values[failure.name] = error_value
Collaborator

Or separate value and reason in the backend?

'source': 'automated',
}
if context.metadata is not None:
annotation['metadata'] = context.metadata
Collaborator

does 'metadata': None mean something different from it being absent?


annotation: dict[str, Any] = {
'trace_id': trace_id,
'span_id': span_id,
Collaborator

What's the plan for retrieving annotation data? Is it only in the UI for now? Will we add API client methods? Will we add SQL query support? Will users be able to join annotations against records? Or combine the two types of data in some other way?

Attaching annotation data to span attributes is hard but maybe possible. Attaching the span data to the annotation data is probably straightforward.

"""Build the annotation request body in the platform V1 API format."""
annotation_value: Any = value
if comment is not None:
annotation_value = {'value': value, 'reason': comment}
Collaborator

the backend seems to have explicit support for comment

'values': values,
'source': 'automated',
}
if context.metadata is not None:
Collaborator

Why do we attach metadata, but not other stuff from context, like inputs and outputs?

return api_key, base_url


def _build_annotation_body(
Collaborator

What about annotation_stream and timestamp as seen in https://github.com/pydantic/platform/pull/19387/?

@dmontagu
Contributor

Closing — retired in favor of pydantic-evals emitting gen_ai.evaluation.result events by default (no sink needed). Net diff for this branch is empty after that refactor.

@dmontagu dmontagu closed this Apr 16, 2026
@dmontagu dmontagu deleted the feat/logfire-evaluation-sink branch April 16, 2026 21:05