
fix(llm): fix errant default provider from anthropic to openai for voice …#184

Open
neurocis wants to merge 11 commits into spacedriveapp:main from neurocis:fix/voice-transcription-default-provider

Conversation


@neurocis neurocis commented Feb 24, 2026

Root Cause

In src/llm/manager.rs, the resolve_model() function was defaulting to "anthropic" when no provider prefix was specified in the model name. Since Anthropic doesn't support input_audio (the feature used for voice transcription), all transcription requests failed with an error.

Changes

  1. src/llm/manager.rs: Changed the default provider from "anthropic" to "openai" so that bare model names like "whisper" work out of the box with OpenAI-compatible APIs.

  2. src/agent/channel.rs: Added diagnostic logging to help debug voice transcription issues by logging:

    • Resolved voice model (provider + model)
    • Provider configuration (base_url, api_type)
    • Endpoint being called
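
As a rough illustration, the three diagnostics above could be condensed into a single formatted line. This is a hypothetical sketch: the real code in src/agent/channel.rs emits separate tracing::debug! events, and the endpoint path shown here is an assumption about the OpenAI-compatible layout.

```rust
// Hypothetical helper summarizing the diagnostics described above.
// Field names follow the PR description; the endpoint shape is assumed.
fn voice_debug_line(provider_id: &str, model: &str, base_url: &str, api_type: &str) -> String {
    // Assumed OpenAI-compatible transcription endpoint layout.
    let endpoint = format!("{base_url}/audio/transcriptions");
    format!(
        "voice transcription: provider={provider_id} model={model} \
         base_url={base_url} api_type={api_type} endpoint={endpoint}"
    )
}
```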

For Custom Providers

Users with custom providers (like localai) should still specify the provider prefix:

voice = "localai/whisper-large-turbo"

The fix ensures that bare model names default to OpenAI-compatible rather than Anthropic.
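
The resolution rule can be sketched as a standalone function. This is a simplified, hypothetical version of resolve_model in src/llm/manager.rs, which additionally returns a Result and emits debug logs:

```rust
// Simplified resolution rule: an explicit "provider/model" prefix wins;
// otherwise fall back to "openai" (previously "anthropic").
fn resolve(model_name: &str) -> (String, String) {
    match model_name.split_once('/') {
        Some((provider, model)) => (provider.to_string(), model.to_string()),
        None => ("openai".to_string(), model_name.to_string()),
    }
}
```

With this rule, "whisper" resolves to ("openai", "whisper"), while "localai/whisper-large-turbo" keeps its explicit provider.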

Note

This PR fixes a critical bug where voice transcription requests would fail because the LLM manager defaulted to Anthropic for unprefixed model names, but Anthropic's API doesn't support the input_audio feature required for transcription. The fix changes the default to OpenAI and adds diagnostic logging to the transcription flow for easier debugging. Users with custom providers should continue specifying the provider prefix in their config.

Written by Tembo for commit 663d385. This will update automatically on new commits.

…transcription

- resolve_model() now defaults to 'openai' instead of 'anthropic' when no
  provider prefix is specified, since anthropic doesn't support input_audio
- Add debug logging for voice transcription to help diagnose issues

Co-authored-by: KiloCodium <KiloCoder@neurocis.ai>
neurocis and others added 4 commits February 23, 2026 16:13
Co-authored-by: tembo[bot] <208362400+tembo[bot]@users.noreply.github.com>
Co-authored-by: tembo[bot] <208362400+tembo[bot]@users.noreply.github.com>
Co-authored-by: tembo[bot] <208362400+tembo[bot]@users.noreply.github.com>
Co-authored-by: tembo[bot] <208362400+tembo[bot]@users.noreply.github.com>
Co-authored-by: KiloCodium <KiloCoder@neurocis.ai>
@neurocis neurocis force-pushed the fix/voice-transcription-default-provider branch from 038bf14 to e6b1847 Compare February 24, 2026 00:17
jamiepine
jamiepine previously approved these changes Feb 24, 2026
Co-authored-by: KiloCodium <KiloCoder@neurocis.ai>
…provider' into fix/voice-transcription-default-provider

Co-authored-by: KiloCodium <KiloCoder@neurocis.ai>
@neurocis neurocis force-pushed the fix/voice-transcription-default-provider branch from 7572919 to 71b2b15 Compare February 24, 2026 01:39
Co-authored-by: KiloCodium <KiloCoder@neurocis.ai>
@neurocis neurocis requested a review from jamiepine February 24, 2026 03:33

coderabbitai bot commented Feb 24, 2026

Walkthrough

Added debug logging around voice transcription resolution and provider retrieval, and changed the implicit provider default for unprefixed model names from anthropic to openai, with explanatory debug logs.

Changes

  • src/agent/channel.rs (voice transcription debug tracing): Inserted debug logs in transcribe_audio_attachment to report successful voice model resolution (provider, model, voice_config), provider retrieval details (provider_id, base_url, api_type), and the constructed transcription endpoint before the request. No control-flow or error-handling changes.
  • src/llm/manager.rs (provider default and resolution logging): Changed the implicit provider for unprefixed model names to return ("openai", model_name) instead of ("anthropic", model_name); added a debug log when a provider is specified explicitly and an explanatory debug message when falling back to the new default.
  • Cargo.toml (manifest update): Minor change(s) to the manifest (lines added/removed); no API/signature changes.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 I nibble on logs in the moonlight glow,

tracing models where the debug streams flow.
OpenAI now leads the unmarked way,
I thump my foot and cheer the clearer day. 🥕🐇

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Title check: ✅ Passed. The title accurately describes the main change: fixing the default provider for voice models from anthropic to openai, which directly matches the primary change in src/llm/manager.rs.
  • Description check: ✅ Passed. The description is well-related to the changeset, clearly explaining the root cause of the bug, the specific changes made to both files, and guidance for custom providers.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Co-authored-by: KiloCodium <KiloCoder@neurocis.ai>
@neurocis neurocis force-pushed the fix/voice-transcription-default-provider branch from b1787d1 to 29c5120 Compare February 24, 2026 17:33

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/llm/manager.rs`:
- Around line 292-304: The doc and inline comments incorrectly qualify the
default behavior as "for voice models"; update the documentation and the comment
inside the resolve_model function to state that the function is general-purpose
and that when no provider prefix is specified it defaults to "openai" (used as
the fallback provider for cases like Whisper/vision/chat), e.g., change the doc
comment on resolve_model and the inline comment that currently mentions "for
voice models" to a neutral wording such as "Default to 'openai' when no provider
is specified" and keep the existing tracing::debug message but remove the "for
voice models" phrase so maintainers of chat/vision code paths are not misled.

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b1787d1 and 29c5120.

📒 Files selected for processing (2)
  • src/agent/channel.rs
  • src/llm/manager.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/agent/channel.rs

Comment on lines +292 to +304

    /// Format: "provider/model-name" or just "model-name" (defaults to openai for voice models).
    pub fn resolve_model(&self, model_name: &str) -> Result<(String, String)> {
        if let Some((provider, model)) = model_name.split_once('/') {
            tracing::debug!(provider = %provider, model = %model, "resolved model with explicit provider");
            Ok((provider.to_string(), model.to_string()))
        } else {
            // Default to openai for voice models (most common for Whisper/vision)
            // rather than anthropic, since anthropic doesn't support input_audio
            tracing::debug!(
                model = %model_name,
                "no provider prefix specified in model name, defaulting to 'openai'. \
                 Specify as 'provider/model' (e.g., 'openai/whisper-1') to use a different provider."
            );
            Ok(("openai".into(), model_name.into()))
        }
    }

⚠️ Potential issue | 🟡 Minor

"for voice models" qualifier in doc and inline comment is inaccurate.

resolve_model is general-purpose — it resolves model names for all usages (chat, vision, voice, etc.), not just voice. The "for voice models" qualifier in both the doc comment (Line 292) and the inline comment (Line 298) will mislead developers maintaining non-voice code paths that hit this fallback.

📝 Proposed fix
-    /// Format: "provider/model-name" or just "model-name" (defaults to openai for voice models).
+    /// Format: "provider/model-name" or just "model-name" (defaults to openai when no provider prefix is given).
     pub fn resolve_model(&self, model_name: &str) -> Result<(String, String)> {
         if let Some((provider, model)) = model_name.split_once('/') {
             tracing::debug!(provider = %provider, model = %model, "resolved model with explicit provider");
             Ok((provider.to_string(), model.to_string()))
         } else {
-            // Default to openai for voice models (most common for Whisper/vision)
-            // rather than anthropic, since anthropic doesn't support input_audio
+            // Default to openai for all unprefixed model names.
+            // Previously defaulted to anthropic, which lacks input_audio support for voice.
             tracing::debug!(
