When AI post-processing is enabled, if the underlying LLM API call fails, the error message is returned as the "final text" and typed directly into the user's active application.
For example, a user dictating into a document could see `Error: HTTP 503: This model is currently experiencing high demand. Spikes in demand are usually temporary. Please try again later.` inserted instead of their spoken words.
Steps to reproduce:
- Enable AI post-processing in settings
- Trigger any LLM error (model overloaded, network issue, invalid API key, etc.)
- Start dictation, speak a sentence, and complete dictation
- Observe the error string typed into the active app instead of the transcribed text
Affected code paths in `ContentView.processTextWithAI()`:
- Empty API response → returns `"<no content>"`
- Any LLM error → returns `"Error: {description}"`
Expected behavior: When AI processing fails, the app should gracefully degrade and return the raw transcription to the user. The error should be logged for debugging but never surfaced as typed text.
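A minimal sketch of the proposed fallback, with `callLLM` as a hypothetical stand-in for the real LLM request inside `processTextWithAI()` (the actual call site and signatures may differ):

```swift
import Foundation

enum LLMError: Error {
    case http(Int, String)
}

// Hypothetical stand-in for the real LLM API call; here it always
// fails, simulating the 503 "high demand" case from the report.
func callLLM(_ text: String) throws -> String {
    throw LLMError.http(503, "model currently experiencing high demand")
}

// Sketch of the expected behavior: on a thrown error or an empty
// response, log the failure and return the raw transcription, so the
// error string is never typed into the user's active application.
func processTextWithAI(_ rawTranscription: String) -> String {
    do {
        let processed = try callLLM(rawTranscription)
        guard !processed.isEmpty else {
            NSLog("AI post-processing returned empty content; falling back to raw transcription")
            return rawTranscription
        }
        return processed
    } catch {
        NSLog("AI post-processing failed (\(error)); falling back to raw transcription")
        return rawTranscription
    }
}
```

With this shape, both failure paths listed above degrade to the raw transcription, and the error detail survives only in the log.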