feat: Log failed LLM calls to PromptLayer before re-raising (#296)
Merged
adagradschool merged 4 commits into master on Feb 24, 2026
Conversation
hasaan21 (Contributor) requested changes on Feb 23, 2026:
Update package version in promptlayer/__init__.py and pyproject
hasaan21 requested changes on Feb 24, 2026
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Records a real round-trip: template fetch (success) → OpenAI call (401 auth failure) → track-request with error fields (success). Verified against local promptlayer-app server. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Cloudflare tracking cookies (__cf_bm, _cfuvid) were being recorded in VCR cassettes. Added set-cookie/cookie to VCR filter_headers and stripped existing cookies from the error tracking cassette. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
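As context for the cookie-scrubbing commit above, here is a minimal sketch of the kind of vcrpy header-filter configuration it describes. This is illustrative only, not the repository's actual test config; the constant name is invented, and the usage assumes vcrpy is installed:

```python
# Sketch (illustrative names): strip the Authorization header and the
# Cloudflare tracking cookies (__cf_bm and _cfuvid arrive via set-cookie)
# so they never land in recorded VCR cassettes.
VCR_FILTER_HEADERS = ["authorization", "set-cookie", "cookie"]

# Usage with vcrpy (assumes the library is installed):
#   import vcr
#   my_vcr = vcr.VCR(filter_headers=VCR_FILTER_HEADERS)
```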
c70b1ce to a8b0251
hasaan21 approved these changes on Feb 24, 2026
Summary

When .run() throws on an LLM call, the SDK now catches the exception, logs it to PromptLayer with status=ERROR, error_type, and error_message, then re-raises. Also adds these fields to log_request() for manual callers. Error categorization is duck-typed: no provider imports in runtime code.

Edge cases tested
- status_code=402 quota detection
- Message keyword matching ("quota", "timeout") gated to known provider modules only

Known concerns
- The except block uses track_request/atrack_request, which have retry logic. If the PromptLayer API is down, this adds latency before the original exception propagates.
- throw_on_error interaction: when throw_on_error=True, a failure in the error-tracking call itself (e.g. the PromptLayer API rejects the payload) would raise a PromptLayerAPIError inside the inner try/except, which we suppress in order to re-raise the original LLM error. This is intentional (the LLM error always takes priority), but it means tracking failures are only visible at logger.debug level regardless of throw_on_error.

🤖 Generated with Claude Code
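The catch, log, and re-raise flow described in the summary can be sketched as follows. This is a minimal sketch with hypothetical helper names; the real SDK wires the tracking through its own track_request/atrack_request internals:

```python
import logging

logger = logging.getLogger("promptlayer")

def run_with_error_tracking(llm_call, track_request):
    """Sketch: call the LLM; on failure, log the error to PromptLayer,
    then re-raise the original exception."""
    try:
        return llm_call()
    except Exception as exc:
        try:
            track_request(
                status="ERROR",
                error_type=type(exc).__name__,
                error_message=str(exc),
            )
        except Exception as track_exc:
            # A tracking failure must never mask the original LLM error,
            # so it is only surfaced at debug level.
            logger.debug("error tracking failed: %s", track_exc)
        raise  # the original LLM exception always propagates
```

Note the nested try/except: this is what makes the "tracking failures are only visible at logger.debug level" trade-off from the known-concerns list concrete.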
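The duck-typed categorization exercised by the edge-case list can be sketched like this. It is a sketch under assumptions: the function name, return labels, and the set of gated provider modules are illustrative, not the SDK's actual implementation:

```python
def categorize_error(exc: Exception) -> str:
    """Sketch of duck-typed error categorization: inspect attributes and
    the exception's module name, never import provider SDKs."""
    # status_code=402 means quota, via a duck-typed attribute check.
    if getattr(exc, "status_code", None) == 402:
        return "quota"
    # Keyword matching is gated to known provider modules only, so a plain
    # ValueError("timeout") raised by user code is not misclassified.
    root_module = (type(exc).__module__ or "").split(".")[0]
    if root_module in {"openai", "anthropic"}:  # assumption: example gate set
        message = str(exc).lower()
        if "quota" in message:
            return "quota"
        if "timeout" in message:
            return "timeout"
    return "unknown"
```

The module-name gate is why no provider imports are needed at runtime: the check compares strings rather than exception classes.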