Anthropic endpoint for Claude Code #45
base: dev
Conversation
Starting my review of the Anthropic endpoint implementation. This is a significant addition bringing Claude Code compatibility via format translation between Anthropic and OpenAI APIs. Analyzing the endpoint handlers, model definitions, and streaming wrapper now.
Overall Assessment
This PR delivers a well-structured Anthropic API compatibility layer, enabling Claude Code and other Anthropic clients to use the proxy via format translation. The implementation demonstrates solid understanding of both API formats and handles edge cases thoughtfully (e.g., forcing high thinking for Opus models, defensive error handling in streaming).
Verdict: The code quality is good with comprehensive error handling and proper Anthropic error format responses. There are two minor improvements suggested inline, but neither is blocking.
Architectural Feedback
The approach of translating Anthropic format → OpenAI format → processing → translating back is sound and aligns with the existing proxy architecture. The streaming wrapper correctly maintains state across the SSE event lifecycle, and the block index tracking for tool calls is well-implemented.
One architectural note: The code adds ~940 lines to main.py, which is already substantial. Consider whether these Anthropic-specific handlers could be extracted to a separate module (e.g., anthropic_compat.py) in a future refactor to improve maintainability.
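For readers new to the format gap, a minimal sketch of the request-direction translation is below. The function name and field handling are illustrative assumptions (tools, images, and thinking config omitted), not the PR's exact code.

```python
from typing import Any, Dict, List

def anthropic_to_openai_request(body: Dict[str, Any]) -> Dict[str, Any]:
    """Illustrative only: map core Anthropic Messages fields onto an OpenAI chat request."""
    messages: List[Dict[str, Any]] = []
    if body.get("system"):
        messages.append({"role": "system", "content": body["system"]})
    for msg in body.get("messages", []):
        content = msg["content"]
        if isinstance(content, list):
            # Anthropic content blocks; keep only text blocks in this simplified sketch
            content = "".join(b.get("text", "") for b in content if b.get("type") == "text")
        messages.append({"role": msg["role"], "content": content})
    return {
        "model": body["model"],
        "messages": messages,
        "max_tokens": body.get("max_tokens", 1024),
        "stream": body.get("stream", False),
    }
```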
Key Suggestions
- Tool ID collision risk (inline comment): The fallback `int(time.time())` for tool IDs could collide with parallel calls. Using a UUID is recommended.
- Unused state variables (inline comment): `accumulated_text` and `accumulated_thinking` in the streaming wrapper are tracked but never used.
Minor Observations (Not Inline)
- Token counting for tools: The `count_tokens` endpoint serializes tool definitions to JSON text and counts that. This is an approximation since tokenizers may handle function schemas differently. Consider documenting this limitation.
- Reasoning effort thresholds: The budget_tokens → reasoning_effort mapping (32000/10000/5000) works but lacks inline documentation explaining the rationale (see the sketch after this list).
- Testing: No tests are included in this PR for the new endpoints. Consider adding test coverage for the format translation functions and endpoint handlers.
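A rough sketch of what that mapping could look like, using the thresholds quoted above; the helper name and the behavior below the lowest tier are assumptions, not the PR's code.

```python
def budget_to_reasoning_effort(budget_tokens: int) -> str:
    # Thresholds as described in the review (32000 / 10000 / 5000); assumed helper, not the PR's code.
    if budget_tokens >= 32000:
        return "high"
    if budget_tokens >= 10000:
        return "medium"
    # 5000 is the lowest documented threshold; smaller budgets also fall back to "low" here (a guess).
    return "low"
```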
Questions for the Author
- Are the `accumulated_text` and `accumulated_thinking` variables in the streaming wrapper intended for future use (e.g., logging), or are they leftover from development?
- Is there a plan to add integration tests for the Anthropic endpoint compatibility?
This review was generated by an AI assistant.
src/proxy_app/main.py
Outdated
```python
            content_blocks.append(
                {
                    "type": "tool_use",
                    "id": tc.get("id", f"toolu_{int(time.time())}"),
```
Using int(time.time()) as a fallback tool ID could cause collisions if multiple tool calls are processed within the same second. Since uuid is already imported in this file, consider using a UUID for more robust uniqueness.
| "id": tc.get("id", f"toolu_{int(time.time())}"), | |
| "id": tc.get("id", f"toolu_{uuid.uuid4().hex[:12]}"), |
src/proxy_app/main.py
Outdated
```python
    thinking_block_started = False
    current_block_index = 0
    accumulated_text = ""
    accumulated_thinking = ""
```
The variables accumulated_text and accumulated_thinking are initialized and updated throughout the streaming wrapper but are never read. If these are intended for debugging or future use (e.g., logging the complete response), consider adding a comment explaining their purpose. Otherwise, removing them would reduce cognitive overhead.
Pull request overview
This PR adds Anthropic API endpoint support to enable compatibility with Claude Code and other Anthropic API clients. The implementation translates between Anthropic's message format and the existing OpenAI-compatible format used internally.
Key changes:
- Added Anthropic Messages API endpoint (`/v1/messages`) with streaming and non-streaming support
- Implemented format translation between Anthropic and OpenAI request/response structures
- Added Claude extended thinking parameter validation to ensure max_tokens > thinking_budget
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 14 comments.
| File | Description |
|---|---|
| src/rotator_library/providers/antigravity_provider.py | Adds validation logic to ensure max_tokens exceeds thinking_budget for Claude models with extended thinking enabled |
| src/proxy_app/main.py | Implements Anthropic Messages API endpoint with request/response translation, streaming support, token counting endpoint, and dual authentication (x-api-key and Bearer token) |
src/proxy_app/main.py
Outdated
```python
            anthropic_usage["cache_read_input_tokens"] = details["cached_tokens"]

    return {
        "id": openai_response.get("id", f"msg_{int(time.time())}"),
```
Copilot AI · Dec 19, 2025
Using time.time() as a fallback for message IDs can generate duplicate IDs if multiple messages are processed within the same second. Consider using uuid or a more robust unique ID generation strategy for the fallback case, similar to request_id generation at line 1672.
```python
        # CRITICAL: For Claude with extended thinking, max_tokens MUST be > thinking.budget_tokens
        # Per Claude docs: https://docs.claude.com/en/docs/build-with-claude/extended-thinking
        # If this constraint is violated, the API returns 400 INVALID_ARGUMENT
```
Copilot AI · Dec 19, 2025
The comment references Claude documentation, but the logic is being applied to all Claude models via the antigravity provider. Consider clarifying which specific models or model versions this constraint applies to, as extended thinking may not be available for all Claude models.
Suggested change:

```diff
-        # CRITICAL: For Claude with extended thinking, max_tokens MUST be > thinking.budget_tokens
-        # Per Claude docs: https://docs.claude.com/en/docs/build-with-claude/extended-thinking
-        # If this constraint is violated, the API returns 400 INVALID_ARGUMENT
+        # CRITICAL: For Claude models that support extended thinking (e.g., Sonnet 4.5 and Opus 4.5
+        # via the Antigravity proxy), max_tokens MUST be > thinking.budget_tokens when
+        # extended thinking is enabled. Per Claude docs:
+        # https://docs.claude.com/en/docs/build-with-claude/extended-thinking
+        # If this constraint is violated for those models, the API can return 400 INVALID_ARGUMENT
```
```python
    # Check x-api-key first (Anthropic style)
    if x_api_key and x_api_key == PROXY_API_KEY:
        return x_api_key
    # Fall back to Bearer token (OpenAI style)
    if auth and auth == f"Bearer {PROXY_API_KEY}":
        return auth
    raise HTTPException(status_code=401, detail="Invalid or missing API Key")
```
Copilot AI · Dec 19, 2025
When PROXY_API_KEY is not set or empty (open access mode), this function will always raise an HTTPException because neither condition will match. This is inconsistent with verify_api_key at line 794 which allows access when PROXY_API_KEY is not set. Consider adding a check similar to line 794 to allow open access mode.
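A sketch of the suggested fix, mirroring the open-access behavior of verify_api_key; the function name and signature here are assumptions, and PROXY_API_KEY is taken from the snippet above.

```python
from typing import Optional
from fastapi import HTTPException

def verify_anthropic_auth(x_api_key: Optional[str], auth: Optional[str]) -> Optional[str]:
    if not PROXY_API_KEY:  # open access mode: no key configured, allow the request
        return None
    # Check x-api-key first (Anthropic style)
    if x_api_key and x_api_key == PROXY_API_KEY:
        return x_api_key
    # Fall back to Bearer token (OpenAI style)
    if auth and auth == f"Bearer {PROXY_API_KEY}":
        return auth
    raise HTTPException(status_code=401, detail="Invalid or missing API Key")
```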
src/proxy_app/main.py
Outdated
| elif "opus" in body.model.lower(): | ||
| # Force high thinking for Opus models when no thinking config is provided | ||
| # Opus 4.5 always uses the -thinking variant, so we want maximum thinking budget | ||
| # Without this, the backend defaults to thinkingBudget: -1 (auto) instead of high | ||
| openai_request["reasoning_effort"] = "high" | ||
| openai_request["custom_reasoning_budget"] = True |
Copilot AI · Dec 19, 2025
The model name matching logic uses a simple substring check ("opus" in body.model.lower()), which could match unintended model names. Consider using a more specific pattern or a list of known Opus model names to avoid false positives with models that might contain "opus" in their name but aren't Claude Opus models.
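One possible tightening is sketched below; the helper name is hypothetical, and a later commit mentions switching to a regex-based check instead.

```python
def is_claude_opus(model: str) -> bool:
    """Hypothetical helper: require a Claude/Opus naming shape rather than a bare substring."""
    name = model.lower().split("/")[-1]  # strip a provider prefix such as "antigravity/"
    # "claude-opus-4-5-thinking" matches; "magnum-opus-model" does not.
    return name.startswith("claude-opus") or ("claude" in name and "opus" in name)
```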
src/proxy_app/main.py
Outdated
```python
    # Build OpenAI-compatible request
    openai_request = {
        "model": body.model,
        "messages": openai_messages,
        "max_tokens": body.max_tokens,
        "stream": body.stream or False,
    }

    if body.temperature is not None:
        openai_request["temperature"] = body.temperature
    if body.top_p is not None:
        openai_request["top_p"] = body.top_p
    if body.stop_sequences:
        openai_request["stop"] = body.stop_sequences
    if openai_tools:
        openai_request["tools"] = openai_tools
    if openai_tool_choice:
        openai_request["tool_choice"] = openai_tool_choice
```
Copilot AI · Dec 19, 2025
The top_k parameter from the Anthropic request (defined in AnthropicMessagesRequest at line 301) is not being mapped to the OpenAI request. Anthropic's top_k parameter controls sampling diversity and should be included if provided. Consider adding a check similar to lines 1699-1702 to include top_k in the openai_request when it's not None.
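Concretely, the missing mapping would follow the same pattern as the other optional parameters (a sketch; whether downstream providers honor top_k depends on their implementations).

```python
if body.top_k is not None:
    openai_request["top_k"] = body.top_k
```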
src/proxy_app/main.py
Outdated
| openai_request["tools"] = openai_tools | ||
| if openai_tool_choice: | ||
| openai_request["tool_choice"] = openai_tool_choice | ||
|
|
Copilot AI · Dec 19, 2025
The metadata parameter from the Anthropic request (defined in AnthropicMessagesRequest at line 306) is not being mapped or handled. While metadata may not directly map to OpenAI format, it could be useful for logging or tracking purposes. Consider whether this parameter should be preserved for request logging or passed through as custom fields.
Suggested change:

```python
    # Preserve Anthropic metadata for logging/tracking or downstream use
    metadata = anthropic_request.get("metadata")
    if metadata is not None:
        openai_request["metadata"] = metadata
```
```python
        if current_max_tokens <= thinking_budget:
            lib_logger.warning(
                f"max_tokens ({current_max_tokens}) must be > thinkingBudget ({thinking_budget}). "
                f"Adjusting to {min_required_tokens}"
            )
            gen_config["maxOutputTokens"] = min_required_tokens
```
Copilot AI · Dec 19, 2025
The condition checks if current_max_tokens <= thinking_budget, but the adjustment only occurs in this case. However, the comment states that max_tokens MUST be > thinking_budget. If current_max_tokens is between thinking_budget + 1 and thinking_budget + 1024 (exclusive), it satisfies the API requirement (> thinking_budget) but might not provide enough buffer for the actual response content. Consider changing the condition to check if current_max_tokens < min_required_tokens instead.
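A sketch of the stricter check described above; variable names follow the snippet, and the 1024-token buffer implied by min_required_tokens is an assumption.

```python
min_required_tokens = thinking_budget + 1024  # assumed buffer for actual response content
if current_max_tokens < min_required_tokens:
    lib_logger.warning(
        f"max_tokens ({current_max_tokens}) leaves too little headroom above "
        f"thinkingBudget ({thinking_budget}). Adjusting to {min_required_tokens}"
    )
    gen_config["maxOutputTokens"] = min_required_tokens
```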
src/proxy_app/main.py
Outdated
```python
            # Close any open thinking block
            if thinking_block_started:
                yield f'event: content_block_stop\ndata: {{"type": "content_block_stop", "index": {current_block_index}}}\n\n'
                current_block_index += 1
                thinking_block_started = False
```
Copilot AI · Dec 19, 2025
Variable thinking_block_started is not used.
src/proxy_app/main.py
Outdated
```python
            if content_block_started:
                yield f'event: content_block_stop\ndata: {{"type": "content_block_stop", "index": {current_block_index}}}\n\n'
                current_block_index += 1
                content_block_started = False
```
Copilot AI · Dec 19, 2025
Variable content_block_started is not used.
src/proxy_app/main.py
Outdated
```python
                continue

            delta = choices[0].get("delta", {})
            finish_reason = choices[0].get("finish_reason")
```
Copilot AI · Dec 19, 2025
Variable finish_reason is not used.
Problem I noticed from the start - only files edited are antigravity and main.py. main.py does not do anything by itself - logic should be in client.py or helper files called by it. Tldr the logic should be in the library.
Sure, go ahead

Edit: oh you want me to make these changes, maybe I misunderstood?
Preferably you, but it can go either way. I didn't set up any guidance for issues or PRs, contributing guidelines, etc. - so it is technically my fault you didn't do that. Not sure when I'll get around to this, but soon-ish I'll review and work on making this a part of the library proper. This is a full-on Anthropic translation layer, right? If it works like the bot says, it should be compatible with all providers automatically.
@mirrobot-agent what will it take to implement what I said? To move it to the library and make it a separate method for the Anthropic endpoint proper. Converting to and from Anthropic and OpenAI, so that from OpenAI any existing provider can work as normal.
@Mirrowel, I'm analyzing the codebase to provide a detailed plan for moving the Anthropic translation layer from main.py into the library. Give me a moment to review the architecture...
@Mirrowel, I've completed my analysis of the codebase. Here's a detailed plan for moving the Anthropic translation layer from main.py into the library.

Summary

The goal is to extract the ~700 lines of Anthropic format translation code from main.py.

What Needs to Move

From main.py:

1. Pydantic Models (~130 lines, lines 218-348)
2. Format Translation Functions (~200 lines, lines 822-1068)
3. Streaming Wrapper (~280 lines, lines 1071-1351)

Proposed Library Architecture

New Files to Create / Integration Point:
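The detail under these headings was lost in rendering; a rough sketch of the intended layout and client integration point follows, with names taken from this plan and the later refactor commits (signatures are assumptions).

```python
# src/rotator_library/anthropic_compat/
#     __init__.py   # optional public re-exports
#     models.py     # Pydantic models (AnthropicMessagesRequest, AnthropicMessagesResponse, ...)
#     translator.py # anthropic_to_openai_* / openai_to_anthropic_response helpers
#     streaming.py  # OpenAI SSE -> Anthropic SSE wrapper, free of FastAPI dependencies

class RotatingClient:  # sketch of the integration point only
    async def anthropic_messages(self, request: dict) -> dict:
        """Translate to OpenAI format, run the usual provider path, translate the result back."""
        ...

    async def anthropic_count_tokens(self, request: dict) -> dict:
        """Approximate token counting for an Anthropic-format request."""
        ...
```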
| Component | Lines of Code | Complexity |
|---|---|---|
| Create models.py | ~130 | Low (copy/move) |
| Create translator.py | ~200 | Medium (refactor for reuse) |
| Create streaming.py | ~280 | Medium (remove FastAPI deps) |
| Update client.py | ~80 | Medium (new methods) |
| Simplify main.py | -650 / +50 | Low (remove/replace) |
| Total | ~560 new, -650 removed | Medium |
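On the proxy side, the endpoints would then reduce to thin wrappers around those client methods; a non-streaming sketch with assumed names (app, rotating_client, AnthropicMessagesRequest, verify_anthropic_auth):

```python
from fastapi import Depends
from fastapi.responses import JSONResponse

@app.post("/v1/messages")
async def anthropic_messages_endpoint(
    body: AnthropicMessagesRequest,
    _key: str = Depends(verify_anthropic_auth),
):
    # Library does the Anthropic <-> OpenAI translation; the proxy only handles HTTP concerns.
    result = await rotating_client.anthropic_messages(body.model_dump())
    return JSONResponse(content=result)
```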
Questions

- Should the Anthropic models/functions be exposed in the library's public API (via `__init__.py`), or kept internal?
- Do you want me to implement this refactoring, or just provide this plan for FammasMaz or yourself to implement?
Let me know how you'd like to proceed!
This analysis was generated by an AI assistant.
@mirrobot-agent implement this, and open a separate PR with that against this PR. Make sure it is comprehensive and integrated with any provider the library and proxy work with. Maintain OpenAI-compatible format internally, and convert to and from it for providers, so they can essentially be unchanged. Try to do this iteratively, so make a plan and go along with it, committing as you go. You are not allowed to commit to this PR - only the new one you will open.
@Mirrowel, I'm on it! I'll implement the refactoring plan to move the Anthropic translation layer from main.py into the library.

My Plan

I'll work iteratively, committing as I go:

Starting now... 🚀

This task is being handled by an AI assistant.
Refactoring PR Created

I've created PR #47, which refactors the Anthropic translation layer from this PR into the rotator_library.

Key Changes in #47:

Benefits:

See: #47
…atibility

- Add /v1/messages endpoint with Anthropic-format request/response
- Support both x-api-key and Bearer token authentication
- Implement Anthropic <-> OpenAI format translation for messages, tools, and responses
- Add streaming wrapper converting OpenAI SSE to Anthropic SSE events
- Handle tool_use blocks with proper stop_reason detection
- Fix NoneType iteration bug in tool_calls handling
- Add AnthropicThinkingConfig model and thinking parameter to request
- Translate Anthropic thinking config to reasoning_effort for providers
- Handle reasoning_content in streaming wrapper (thinking_delta events)
- Convert reasoning_content to thinking blocks in non-streaming responses
When no thinking config is provided in the request, Opus models now automatically use reasoning_effort=high with custom_reasoning_budget=True. This ensures Opus 4.5 uses the full 32768 token thinking budget instead of the backend's auto mode (thinkingBudget: -1) which may use less. Opus always uses the -thinking variant regardless, but this change guarantees maximum thinking capacity for better reasoning quality.
…ling

- Add validation to ensure maxOutputTokens > thinkingBudget for Claude extended thinking (prevents 400 INVALID_ARGUMENT API errors)
- Improve streaming error handling to send proper message_start and content blocks before error event for better client compatibility
- Minor code formatting improvements
Track each tool_use block index separately and emit content_block_stop for all blocks (thinking, text, and each tool_use) when stream ends. Fixes Claude Code stopping mid-action due to malformed streaming events.
…nabled

- Fixed bug where budget_tokens between 10000-32000 would get ÷4 reduction
- Now any explicit thinking request sets custom_reasoning_budget=True
- Added logging to show thinking budget, effort level, and custom_budget flag
- Simplified budget tier logic (removed redundant >= 32000 check)

Before: 31999 tokens requested → 8192 tokens actual (÷4 applied)
After: 31999 tokens requested → 32768 tokens actual (full "high" budget)
When using /v1/chat/completions with Opus and reasoning_effort="high" or "medium", automatically set custom_reasoning_budget=true to get full thinking tokens instead of the ÷4 reduced default. This makes the OpenAI endpoint behave consistently with the Anthropic endpoint for Opus models - if you're using Opus with high reasoning, you want the full thinking budget. Adds logging: "🧠 Thinking: auto-enabled custom_reasoning_budget for Opus"
…treaming

Claude Code and other Anthropic SDK clients require message_start to be sent before any other SSE events. When a stream completed quickly without content chunks, the wrapper would send message_stop without message_start, causing clients to silently discard all output.
Signed-off-by: Moeeze Hassan <fammas.maz@gmail.com>
This reverts commit e80645e.
…ing is enabled" This reverts commit 2ee549d.
- Create rotator_library/anthropic_compat module with models, translator, and streaming
- Add anthropic_messages() and anthropic_count_tokens() methods to RotatingClient
- Simplify main.py endpoints to use library methods
- Remove ~762 lines of duplicate code from main.py
- Fix: Use UUID instead of time.time() for tool/message IDs (avoids collisions)
- Fix: Remove unused accumulated_text/accumulated_thinking variables
- Fix: Map top_k parameter from Anthropic to OpenAI format
- Add comment explaining metadata parameter is intentionally not mapped (OpenAI doesn't have an equivalent field)
- Use safer regex pattern matching for Opus model detection (avoids false positives like "magnum-opus-model")
- Document reasoning budget thresholds and // 4 reduction behavior
- Conserve thinking tokens for Opus auto-detection (use // 4 like other models); only set custom_reasoning_budget=True when user explicitly requests 32000+ tokens
Force-pushed from aef492b to d91f98b
Follow-up Review
Previous Feedback Status
Both suggestions from the first review have been addressed:
- ✅ Tool ID collision risk: The fallback tool ID generation now uses `uuid.uuid4().hex[:12]` instead of `int(time.time())` (translator.py:253). This eliminates the collision risk for parallel requests.
- ✅ Unused state variables: The `accumulated_text` and `accumulated_thinking` variables have been removed from the streaming wrapper. The code now only tracks the variables it actually uses.
New Changes Review
This batch of commits includes several well-implemented features:
Anthropic Compat Module Refactoring:
- Clean extraction of translation logic into `rotator_library/anthropic_compat/`
- Proper separation: `models.py` (Pydantic), `translator.py` (format conversion), `streaming.py` (SSE wrapper)
- The refactor significantly improves maintainability by reducing `main.py` complexity
Quota Tracking System:
- `AntigravityQuotaTracker` mixin provides API-based quota baseline tracking
- Background job system allows provider-specific periodic tasks
- `TransientQuotaError` correctly handles bare 429s (without retry info) as transient errors, enabling internal retries before credential rotation
Model Support Expansion:
- New models: `gemini-2.5-flash`, `gemini-2.5-flash-lite`, `gemini-3-flash`, `gpt-oss-120b-medium`
- Updated quota groups reflect empirical quota sharing behavior
Minor Observations
- Token counting approximation (from first review): Still uses JSON serialization for tools. This is acknowledged in the implementation and is a reasonable trade-off.
- Model Filter GUI & Quota Viewer TUI: Large additions (~5K lines combined) but contained to their own files. The virtual canvas implementation for performance is a good choice for large model lists.
Overall, this is a solid incremental update that improves the codebase structure while adding valuable functionality.
This review was generated by an AI assistant.
Maybe this is better now... was a rebase
Tool results with images (e.g., from Read tool) were being dropped during Anthropic→OpenAI translation, and not properly converted to Gemini format.

- translator.py: Extract image blocks from tool_result content and convert to OpenAI image_url format
- antigravity_provider.py: Handle multimodal tool responses by converting image_url to Gemini inlineData format
Follow-up Review
This incremental update adds multimodal support for tool responses across the Anthropic translation layer and the Antigravity provider. The changes enable images from tool results to be properly converted between API formats.
Assessment of New Changes:
The implementation is well-structured:
- translator.py: Now handles Anthropic tool_result blocks that contain mixed text and images, converting them to OpenAI's multimodal format (see the sketch after this list). The code properly handles all cases: string content, list of text blocks, list with images, and empty content.
- antigravity_provider.py: The `_transform_tool_message` method now converts OpenAI's `image_url` format to Gemini's `inlineData` format. The docstring update clearly documents the new behavior.
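As an illustration of the translator.py change, a tool_result whose content mixes text and a base64 image would map onto OpenAI's multimodal content parts roughly like this (a sketch; the PR's actual helper names and output structure may differ).

```python
def tool_result_blocks_to_openai_content(blocks: list[dict]) -> list[dict]:
    """Sketch: convert Anthropic tool_result content blocks to OpenAI multimodal parts."""
    parts: list[dict] = []
    for block in blocks:
        if block.get("type") == "text":
            parts.append({"type": "text", "text": block.get("text", "")})
        elif block.get("type") == "image":
            src = block.get("source", {})
            if src.get("type") == "base64":
                media_type = src.get("media_type", "image/png")
                parts.append({
                    "type": "image_url",
                    "image_url": {"url": f"data:{media_type};base64,{src.get('data', '')}"},
                })
    return parts
```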
Minor Suggestion:
One inline comment regarding logging for debugging when non-data URLs are encountered.
Overall Status:
The changes are solid and ready to merge. The multimodal handling completes the tool response flow across the translation layers.
This review was generated by an AI assistant.
```python
                        elif item_type == "image_url":
                            # Convert OpenAI image_url format to Gemini inlineData
                            image_url = item.get("image_url", {}).get("url", "")
                            if image_url.startswith("data:"):
                                try:
                                    # Parse: data:image/png;base64,iVBORw0KG...
                                    header, data = image_url.split(",", 1)
                                    mime_type = header.split(":")[1].split(";")[0]
                                    image_parts.append({
                                        "inlineData": {
                                            "mimeType": mime_type,
                                            "data": data,
                                        }
                                    })
                                except Exception as e:
                                    lib_logger.warning(f"Failed to parse image data URL in tool response: {e}")
```
Consider logging a warning when URL-based images are encountered but cannot be converted. Currently, only data: URLs are parsed; external URLs (e.g., https://...) are silently skipped. While this may be intentional since external URLs cannot be converted to inlineData without fetching them, a debug/warning log would help when debugging image handling issues.
Suggested change:

```python
                        if item_type == "text":
                            text_parts.append(item.get("text", ""))
                        elif item_type == "image_url":
                            # Convert OpenAI image_url format to Gemini inlineData
                            image_url = item.get("image_url", {}).get("url", "")
                            if image_url.startswith("data:"):
                                try:
                                    # Parse: data:image/png;base64,iVBORw0KG...
                                    header, data = image_url.split(",", 1)
                                    mime_type = header.split(":")[1].split(";")[0]
                                    image_parts.append({
                                        "inlineData": {
                                            "mimeType": mime_type,
                                            "data": data,
                                        }
                                    })
                                except Exception as e:
                                    lib_logger.warning(f"Failed to parse image data URL in tool response: {e}")
                            else:
                                lib_logger.debug(f"Skipping non-data URL image in tool response (cannot convert to inlineData): {image_url[:50]}...")
```
Important
Adds Anthropic endpoint for Claude Code with format translation and updates to the Antigravity provider for compatibility.

- `/v1/messages` and `/v1/messages/count_tokens` endpoints in `main.py` for Anthropic API compatibility.
- `anthropic_messages` and `anthropic_count_tokens` functions to handle requests and responses in Anthropic format.
- `AnthropicTextBlock`, `AnthropicImageSource`, `AnthropicImageBlock`, `AnthropicToolUseBlock`, `AnthropicToolResultBlock`, `AnthropicMessage`, `AnthropicTool`, `AnthropicThinkingConfig`, `AnthropicMessagesRequest`, `AnthropicUsage`, `AnthropicMessagesResponse`, `AnthropicCountTokensRequest`, and `AnthropicCountTokensResponse` models in `main.py`.
- `anthropic_to_openai_messages`, `anthropic_to_openai_tools`, `anthropic_to_openai_tool_choice`, and `openai_to_anthropic_response` in `main.py` for format conversion.
- `anthropic_streaming_wrapper` in `main.py` to convert OpenAI streaming format to Anthropic streaming format.
- `_transform_to_antigravity_format` in `antigravity_provider.py` to ensure `max_tokens` is greater than `thinking.budget_tokens` for Claude models.

This description was created by
for aef492b. You can customize this summary. It will automatically update as commits are pushed.