Conversation

@nestharus nestharus commented Nov 27, 2025

Add support for gemini-3-pro-preview-low and gemini-3-pro-preview-high model variants via suffix parsing
and normalization.

  • Parse -low and -high suffixes from Gemini 3 model names
  • Store reasoning effort in request metadata during model normalization
  • Inject reasoning_effort into payload at executor level
  • Routes through base gemini-3-pro-preview model

How it works

  1. gemini-3-pro-preview-low → normalized to gemini-3-pro-preview with {"gemini3_reasoning_effort": "low"} in metadata
  2. Executor reads metadata and injects reasoning_effort: "low" into the payload
  3. Request routes through the base model with reasoning effort applied
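The normalization flow above can be sketched in Go. This is a hedged illustration only: the function names mirror those mentioned in the commit message, but the actual signatures and types in this PR may differ.

```go
package main

import "strings"

// parseGemini3ReasoningEffortSuffix splits a "-low"/"-high" suffix off a
// Gemini 3 model ID, returning the base model and the effort ("" if none).
// Sketch of the suffix parsing described in steps 1-3; not the PR's exact code.
func parseGemini3ReasoningEffortSuffix(model string) (base, effort string) {
	if !strings.HasPrefix(model, "gemini-3-") {
		return model, ""
	}
	for _, e := range []string{"low", "high"} {
		if strings.HasSuffix(model, "-"+e) {
			return strings.TrimSuffix(model, "-"+e), e
		}
	}
	return model, ""
}

// normalizeGemini3Model routes the variant through the base model and stores
// the effort in request metadata, mirroring steps 1 and 2 above.
func normalizeGemini3Model(model string, metadata map[string]any) string {
	base, effort := parseGemini3ReasoningEffortSuffix(model)
	if effort != "" {
		metadata["gemini3_reasoning_effort"] = effort
	}
	return base
}
```

With this shape, `gemini-3-pro-preview-low` normalizes to `gemini-3-pro-preview` and the metadata map gains `gemini3_reasoning_effort: "low"`, which the executor later reads.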

Test plan

  • Test gemini-3-pro-preview-low request - uses ~143 reasoning tokens
  • Test gemini-3-pro-preview-high request - uses ~271 reasoning tokens
  • Verify both route through gemini-3-pro-preview model

@chatgpt-codex-connector

Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits. Credits must be used to enable repository-wide code reviews.

@gemini-code-assist

Summary of Changes

Hello @nestharus, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces new Gemini 3 Pro Preview model variants that allow for explicit low or high reasoning effort. It also integrates the necessary functionality to automatically apply the corresponding reasoning_effort setting to API requests when these specific model identifiers are used, ensuring proper interaction with the updated models.

Highlights

  • New Gemini 3 Pro Preview Models: Added definitions for gemini-3-pro-preview-low and gemini-3-pro-preview-high models, enabling explicit reasoning effort settings.
  • Automated Reasoning Effort Injection: Implemented logic to automatically inject the reasoning_effort parameter (low or high) into the request payload for Gemini 3 model variants based on their model ID suffix.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for Gemini 3 Pro Preview's low and high reasoning effort modes by adding new model definitions and injecting the reasoning_effort parameter based on the model ID suffix. The implementation is mostly correct, but I've identified a critical bug in the new injectGemini3ReasoningEffort function where an error is ignored, potentially leading to corrupted request payloads. I've also included a medium-severity suggestion to address code duplication in the model definitions to improve maintainability.

@nestharus nestharus force-pushed the feat/gemini-3-reasoning-effort-models branch 2 times, most recently from 4a0b6c0 to 23919f3 on November 27, 2025 at 06:57
Add support for gemini-3-pro-preview-low and gemini-3-pro-preview-high
model variants via suffix parsing and normalization.

Changes:
- Add ParseGemini3ReasoningEffortSuffix to parse -low/-high suffixes
- Add Gemini3ReasoningEffortFromMetadata to read effort from metadata
- Update NormalizeGeminiThinkingModel to handle reasoning effort first
- Add injectGemini3ReasoningEffort to inject reasoning_effort into payload
- Add IsGemini3Model utility function

The -low and -high suffixes:
1. Get normalized to base model for routing
2. Store reasoning effort in metadata
3. Executor injects reasoning_effort into payload

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
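
The two remaining helpers named in the change list, Gemini3ReasoningEffortFromMetadata and IsGemini3Model, can be sketched as follows. These are hedged illustrations: the PR's real signatures, receiver types, and metadata key may differ.

```go
package main

import "strings"

// isGemini3Model reports whether a model ID belongs to the Gemini 3 family.
// Assumed prefix check; the PR's IsGemini3Model may match differently.
func isGemini3Model(model string) bool {
	return strings.HasPrefix(model, "gemini-3-")
}

// gemini3ReasoningEffortFromMetadata reads the effort stored during model
// normalization, returning "" when no effort was recorded.
func gemini3ReasoningEffortFromMetadata(metadata map[string]any) string {
	effort, _ := metadata["gemini3_reasoning_effort"].(string)
	return effort
}
```

At the executor level, a non-empty return from the metadata reader is the signal to inject reasoning_effort into the outgoing payload.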
@nestharus nestharus force-pushed the feat/gemini-3-reasoning-effort-models branch from 23919f3 to 3f6220f on December 1, 2025 at 08:36