fix(llm): fix errant default provider from anthropic to openai for voice transcription #184

neurocis wants to merge 11 commits into spacedriveapp:main from fix/voice-transcription-default-provider
Conversation
fix(llm): fix errant default provider from anthropic to openai for voice transcription

- resolve_model() now defaults to 'openai' instead of 'anthropic' when no provider prefix is specified, since anthropic doesn't support input_audio
- Add debug logging for voice transcription to help diagnose issues

Co-authored-by: KiloCodium <KiloCoder@neurocis.ai>
Co-authored-by: tembo[bot] <208362400+tembo[bot]@users.noreply.github.com>
Walkthrough

Added debug logging around voice transcription resolution and provider retrieval, and changed the implicit provider default for unprefixed model names from anthropic to openai.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/llm/manager.rs`:
- Around line 292-304: The doc and inline comments incorrectly qualify the
default behavior as "for voice models"; update the documentation and the comment
inside the resolve_model function to state that the function is general-purpose
and that when no provider prefix is specified it defaults to "openai" (used as
the fallback provider for cases like Whisper/vision/chat), e.g., change the doc
comment on resolve_model and the inline comment that currently mentions "for
voice models" to a neutral wording such as "Default to 'openai' when no provider
is specified" and keep the existing tracing::debug message but remove the "for
voice models" phrase so maintainers of chat/vision code paths are not misled.
📒 Files selected for processing (2)
- src/agent/channel.rs
- src/llm/manager.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- src/agent/channel.rs
```diff
 /// Format: "provider/model-name" or just "model-name" (defaults to openai for voice models).
 pub fn resolve_model(&self, model_name: &str) -> Result<(String, String)> {
     if let Some((provider, model)) = model_name.split_once('/') {
         tracing::debug!(provider = %provider, model = %model, "resolved model with explicit provider");
         Ok((provider.to_string(), model.to_string()))
     } else {
-        Ok(("anthropic".into(), model_name.into()))
+        // Default to openai for voice models (most common for Whisper/vision)
+        // rather than anthropic, since anthropic doesn't support input_audio
+        tracing::debug!(
+            model = %model_name,
+            "no provider prefix specified in model name, defaulting to 'openai'. \
+             Specify as 'provider/model' (e.g., 'openai/whisper-1') to use a different provider."
+        );
```
"for voice models" qualifier in doc and inline comment is inaccurate.
resolve_model is general-purpose — it resolves model names for all usages (chat, vision, voice, etc.), not just voice. The "for voice models" qualifier in both the doc comment (Line 292) and the inline comment (Line 298) will mislead developers maintaining non-voice code paths that hit this fallback.
📝 Proposed fix

```diff
-/// Format: "provider/model-name" or just "model-name" (defaults to openai for voice models).
+/// Format: "provider/model-name" or just "model-name" (defaults to openai when no provider prefix is given).
 pub fn resolve_model(&self, model_name: &str) -> Result<(String, String)> {
     if let Some((provider, model)) = model_name.split_once('/') {
         tracing::debug!(provider = %provider, model = %model, "resolved model with explicit provider");
         Ok((provider.to_string(), model.to_string()))
     } else {
-        // Default to openai for voice models (most common for Whisper/vision)
-        // rather than anthropic, since anthropic doesn't support input_audio
+        // Default to openai for all unprefixed model names.
+        // Previously defaulted to anthropic, which lacks input_audio support for voice.
         tracing::debug!(
```
Root Cause

In `src/llm/manager.rs`, the `resolve_model()` function was defaulting to `"anthropic"` when no provider prefix was specified in the model name. Since Anthropic doesn't support `input_audio` (the feature used for voice transcription), all transcription requests failed with an error.

Changes

- `src/llm/manager.rs`: Changed the default provider from `"anthropic"` to `"openai"` so that bare model names like `"whisper"` work out of the box with OpenAI-compatible APIs.
- `src/agent/channel.rs`: Added diagnostic logging to help debug voice transcription issues.

For Custom Providers

Users with custom providers (like localai) should still specify the provider prefix in the model name. The fix ensures that bare model names default to OpenAI-compatible rather than Anthropic.
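As an illustration of the prefix rule, here is a minimal standalone sketch: the real `resolve_model` is a method on the LLM manager and returns a `Result`, and the `localai/whisper-1` name below is a placeholder, not a value from this PR.

```rust
// Simplified stand-in for the manager's resolve_model; error handling omitted.
fn resolve_model(model_name: &str) -> (String, String) {
    match model_name.split_once('/') {
        // Explicit prefix wins, so custom providers like localai keep working.
        Some((provider, model)) => (provider.to_string(), model.to_string()),
        // No prefix: fall back to openai (the change made in this PR).
        None => ("openai".to_string(), model_name.to_string()),
    }
}

fn main() {
    // Custom provider: keep the prefix in the config.
    assert_eq!(
        resolve_model("localai/whisper-1"),
        ("localai".to_string(), "whisper-1".to_string())
    );
    // Bare name: now routes to openai instead of anthropic.
    assert_eq!(
        resolve_model("whisper-1"),
        ("openai".to_string(), "whisper-1".to_string())
    );
}
```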
Note
This PR fixes a critical bug where voice transcription requests would fail because the LLM manager defaulted to Anthropic for unprefixed model names, but Anthropic's API doesn't support the `input_audio` feature required for transcription. The fix changes the default to OpenAI and adds diagnostic logging to the transcription flow for easier debugging. Users with custom providers should continue specifying the provider prefix in their config.

Written by Tembo for commit 663d385.
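To make the behavior change concrete, here is a hedged before/after sketch; both functions are simplified stand-ins for the manager method, not the literal code in the PR.

```rust
// Before the fix: unprefixed names silently routed to anthropic,
// whose API rejects input_audio, so transcription always failed.
fn resolve_before(model_name: &str) -> String {
    model_name
        .split_once('/')
        .map(|(provider, _)| provider.to_string())
        .unwrap_or_else(|| "anthropic".to_string())
}

// After the fix: unprefixed names route to openai, which accepts audio input.
fn resolve_after(model_name: &str) -> String {
    model_name
        .split_once('/')
        .map(|(provider, _)| provider.to_string())
        .unwrap_or_else(|| "openai".to_string())
}

fn main() {
    assert_eq!(resolve_before("whisper-1"), "anthropic"); // the bug
    assert_eq!(resolve_after("whisper-1"), "openai"); // the fix
    // Explicit prefixes are unaffected either way.
    assert_eq!(resolve_after("anthropic/claude-3-5-sonnet"), "anthropic");
}
```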