✨ Feature: Support for New GPT-5 Models
This Pull Request introduces full compatibility with OpenAI's new model series: gpt-5 and gpt-5-mini.
This upgrade is essential as the latest models use a revised API schema, making them incompatible with the current version of TopicGPT.
⚙️ Technical Changes
The primary changes address necessary parameter updates to correctly invoke the new gpt-5 models:
The max_tokens argument is deprecated and no longer supported when invoking the gpt-5 series.
Action: I have globally replaced instances of max_tokens with the new, required parameter: max_completion_tokens.
The gpt-5 model series uses a default temperature of 1.0 and does not accept the lower custom values previously passed by TopicGPT.
Action: I have updated all relevant model invocation code to explicitly set the temperature to 1.0 to ensure optimal and consistent model behavior.
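The two changes above can be sketched together as a small helper that assembles the keyword arguments for a chat completion call. The helper name and structure are illustrative only (not TopicGPT's actual code); the parameter names `max_completion_tokens` and `temperature` are the ones from the OpenAI Chat Completions API:

```python
def build_chat_kwargs(model: str, messages: list, max_output: int) -> dict:
    """Build kwargs for client.chat.completions.create() (illustrative sketch)."""
    return {
        "model": model,
        "messages": messages,
        # max_tokens is no longer supported by the gpt-5 series;
        # max_completion_tokens is the replacement parameter.
        "max_completion_tokens": max_output,
        # gpt-5 models expect the default temperature of 1.0,
        # so it is set explicitly for consistent behavior.
        "temperature": 1.0,
    }
```

A call site would then simply unpack the result, e.g. `client.chat.completions.create(**build_chat_kwargs("gpt-5-mini", messages, 512))`.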
💡 Future Improvement (Follow-up Task)
While the current implementation hardcodes the temperature to 1.0, it would significantly improve the user experience to make this configurable.
Proposal: I suggest creating a follow-up issue (which I am happy to open) to make the temperature configurable, e.g., via an environment variable or a configuration file, so users can adjust the model's temperature without modifying the codebase.
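One possible shape for that follow-up, sketched under assumptions: the environment variable name `TOPICGPT_TEMPERATURE` and the helper below are hypothetical, and the hardcoded 1.0 remains the fallback:

```python
import os

def get_temperature(default: float = 1.0) -> float:
    """Read the sampling temperature from an env var, falling back to 1.0.

    TOPICGPT_TEMPERATURE is a hypothetical variable name for this proposal.
    """
    raw = os.environ.get("TOPICGPT_TEMPERATURE")
    if raw is None:
        return default
    try:
        return float(raw)
    except ValueError:
        # Ignore malformed values rather than crashing the pipeline.
        return default
```

This keeps the current behavior as the default while letting users override it per run, e.g. `TOPICGPT_TEMPERATURE=0.3 python script.py`.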
✅ Testing Notes
I have verified successful completion calls using the following models:
gpt-4o, gpt-4o-mini, gpt-5, and gpt-5-mini, as well as Ollama models, including gemma3:latest and llama3.1:latest.