Add support for OpenAI, Cohere, and improve Ollama integration #1

Open
maddygoround wants to merge 1 commit into main from feat/multi-llm-provider-support

Conversation

@maddygoround
Owner

This change extends DeepReview to support multiple Large Language Model providers, including local Ollama instances, OpenAI API, and Cohere API.

Key changes:

  • Modified the `config.yaml` structure to support provider-specific configurations (Ollama, OpenAI, Cohere, and a generic API placeholder).
  • Added an `llm_provider` setting to select the active LLM provider.
  • Updated `BranchDiffAnalyzer` to load the new configurations and initialize the appropriate LLM client (Ollama, ChatOpenAI, Cohere) via a unified `get_llm(model_type)` method.
  • API keys for OpenAI and Cohere can be sourced from config or environment variables (`OPENAI_API_KEY`, `COHERE_API_KEY`).
  • The `analyze_file` method now routes requests to the correct LLM provider, using chat-style interaction for OpenAI and `LLMChain` for Ollama/Cohere.
  • Updated `interactive_qa` to use the configured QA model from the chosen provider, adapting the interaction style (chat vs. `LLMChain`).
  • Enhanced command-line arguments to allow overriding config settings for LLM provider, models, main branch, output options, etc.
  • Improved error handling and logging for API calls, configuration loading, and git operations.
  • Updated `README.md` to document the new features, command-line arguments, and detailed configuration for each provider.
  • Added `langchain-community`, `langchain-openai`, and `langchain-cohere` to the `setup.py` dependencies.
  • Initial unit tests were added but ran into environment issues; these can be revisited in a follow-up.
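The diff itself isn't shown here, so the exact schema is unknown; a plausible sketch of the provider-specific `config.yaml` layout described above, with every field name being an assumption, might look like:

```yaml
# Illustrative only: key names below are assumptions, not the PR's exact schema.
llm_provider: ollama      # one of: ollama, openai, cohere, api

ollama:
  base_url: http://localhost:11434
  analysis_model: llama3
  qa_model: llama3

openai:
  api_key: ""             # falls back to the OPENAI_API_KEY env var if empty
  analysis_model: gpt-4o
  qa_model: gpt-4o-mini

cohere:
  api_key: ""             # falls back to the COHERE_API_KEY env var if empty
  analysis_model: command-r

api:                      # generic API placeholder mentioned in the PR
  base_url: ""
```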

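The key-sourcing and routing behavior described above can be sketched as follows. This is a simplified stand-in, not the PR's implementation: the config keys (`analysis_model`, `qa_model`) are assumptions, and the real `get_llm` would construct langchain clients (`Ollama`, `ChatOpenAI`, `Cohere`) instead of returning plain dicts.

```python
import os

# Env vars the PR names for the two hosted providers.
ENV_VARS = {"openai": "OPENAI_API_KEY", "cohere": "COHERE_API_KEY"}


def resolve_api_key(config: dict, provider: str):
    """Prefer the key from config.yaml; fall back to the environment."""
    key = config.get(provider, {}).get("api_key")
    if key:
        return key
    env_var = ENV_VARS.get(provider)
    return os.environ.get(env_var) if env_var else None


def get_llm(config: dict, model_type: str) -> dict:
    """Select the configured provider and the model for the given role.

    model_type distinguishes e.g. the analysis model from the QA model;
    a real implementation would return an initialized langchain client here.
    """
    provider = config.get("llm_provider", "ollama")
    model = config.get(provider, {}).get(model_type)
    if provider == "ollama":
        # Local Ollama instance: no API key required.
        return {"provider": "ollama", "model": model}
    api_key = resolve_api_key(config, provider)
    if api_key is None:
        raise ValueError(f"No API key found for provider '{provider}'")
    return {"provider": provider, "model": model, "api_key": api_key}
```

A caller such as `analyze_file` could then branch on the returned provider to pick chat-style interaction (OpenAI) versus an `LLMChain` (Ollama/Cohere).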