Conversation
Summary of Changes
Hello @abhijitjavelin, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request fixes the `langchain_azure_universal.py` example. The changes ensure the example interacts correctly with Azure OpenAI and Javelin by updating the model name, adjusting API endpoint paths, and refining a provider header, making the example functional and aligned with current deployment configurations.
Highlights
- Example Configuration Update: Updated the `model_choice` variable from `gpt-4` to `gpt35` in the `langchain_azure_universal.py` example, reflecting a change in the default model used.
- API Endpoint Adjustments: Modified the `base_url` for both the non-streaming and streaming `AzureChatOpenAI` instances to correctly reference the `gpt35` deployment path. The non-streaming path also changed from `/v1/azureopenai/` to `/v1/openai/`.
- Header Correction: Corrected the `x-javelin-provider` header by removing the trailing `/openai` from its URL, ensuring the correct provider endpoint is used.
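Taken together, the changes above can be sketched as plain URL and header construction. This is a minimal sketch, assuming the `base_url` and `model_choice` variable names from the example; the gateway and Azure hosts below are placeholders, not real endpoints:

```python
# Placeholder values; in the example these come from configuration.
base_url = "https://your-javelin-gateway.example.com"
model_choice = "gpt35"  # previously "gpt-4"

# Non-streaming route now uses /v1/openai/ and the gpt35 deployment.
non_streaming_url = f"{base_url}/v1/openai/deployments/{model_choice}/"

# The provider header no longer carries a trailing /openai segment.
headers = {
    "x-javelin-provider": "https://your-resource.openai.azure.com",
    "x-javelin-model": model_choice,
}

print(non_streaming_url)
```

Deriving the deployment segment from `model_choice` keeps the URL and the `x-javelin-model` header in sync, which is the maintainability point raised in the review below.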
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution. ↩
Code Review
This pull request fixes an example for using LangChain with Azure OpenAI through Javelin. It updates the model name and corrects URL paths. My review identifies a couple of areas for improvement: using a variable for the model name in URL construction for better maintainability, and fixing an inconsistency in the URL path between the streaming and non-streaming examples which is likely a bug.
```diff
     openai_api_key=azure_openai_api_key,
     api_version="2024-08-01-preview",
-    base_url=f"{base_url}/v1/azureopenai/deployments/gpt-4/",
+    base_url=f"{base_url}/v1/azureopenai/deployments/gpt35/",
```
This `base_url` has two issues:

- Hardcoded Model Name: Similar to the non-streaming function, the model name `gpt35` is hardcoded. It should use the `model_choice` variable to avoid inconsistencies.
- Inconsistent URL Path: The path segment `azureopenai` is used here, while the non-streaming function on line 37 uses `openai`. This inconsistency is likely a bug and may cause this streaming example to fail. It should probably be `openai` to match the other function.

Combining these fixes will improve correctness and maintainability.
```diff
-    base_url=f"{base_url}/v1/azureopenai/deployments/gpt35/",
+    base_url=f"{base_url}/v1/openai/deployments/{model_choice}/",
```
```diff
     api_version="2024-08-01-preview",
     # The base_url is Javelin’s universal route
-    base_url=f"{base_url}/v1/azureopenai/deployments/gpt-4/",
+    base_url=f"{base_url}/v1/openai/deployments/gpt35/",
```
For better maintainability, it's recommended to use the `model_choice` variable defined on line 21 instead of hardcoding the model name `gpt35` in the `base_url`. This ensures that if the model is changed in one place, it's updated everywhere, preventing potential mismatches between the `x-javelin-model` header and the deployment name in the URL.
```diff
-    base_url=f"{base_url}/v1/openai/deployments/gpt35/",
+    base_url=f"{base_url}/v1/openai/deployments/{model_choice}/",
```
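The suggested change can be sketched as a small helper so the deployment segment always comes from `model_choice`. The helper name is hypothetical; the path shape is the one from the suggestion, and the gateway host is a placeholder:

```python
def deployment_url(base_url: str, model_choice: str) -> str:
    """Build the Javelin route for an Azure OpenAI deployment.

    Deriving the deployment segment from `model_choice` means the URL
    and the x-javelin-model header cannot drift out of sync.
    """
    return f"{base_url}/v1/openai/deployments/{model_choice}/"

# Placeholder gateway host, not a real endpoint.
url = deployment_url("https://your-javelin-gateway.example.com", "gpt35")
print(url)
```

Both the streaming and non-streaming constructors could then call this one helper, so changing `model_choice` on line 21 updates every route at once.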
No description provided.