Summary
OpenSecretClient currently supports chat completions, embeddings, and model listing, but not /v1/audio/transcriptions. The Maple backend exposes whisper-large-v3 as an available model, but there's no way to reach it through the SDK.
Problem
encrypted_openai_call only supports JSON-serialized request bodies, but audio transcription requires a multipart/form-data upload of the audio file. Because both session_manager and encrypted_openai_call are private, downstream consumers (such as maple-proxy) cannot implement transcription support themselves.
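To illustrate what the SDK would have to produce before encryption, here is a std-only sketch of assembling the raw multipart/form-data body that /v1/audio/transcriptions expects. The function name and boundary value are hypothetical; the field names (model, file) follow the OpenAI transcription API convention:

```rust
// Sketch only, not the SDK's implementation: build the raw
// multipart/form-data bytes for a transcription request.
fn build_multipart_body(boundary: &str, model: &str, filename: &str, audio: &[u8]) -> Vec<u8> {
    let mut body = Vec::new();
    // "model" text field
    body.extend_from_slice(format!("--{boundary}\r\n").as_bytes());
    body.extend_from_slice(b"Content-Disposition: form-data; name=\"model\"\r\n\r\n");
    body.extend_from_slice(model.as_bytes());
    body.extend_from_slice(b"\r\n");
    // "file" part: filename plus the binary audio payload
    body.extend_from_slice(format!("--{boundary}\r\n").as_bytes());
    body.extend_from_slice(
        format!("Content-Disposition: form-data; name=\"file\"; filename=\"{filename}\"\r\n")
            .as_bytes(),
    );
    body.extend_from_slice(b"Content-Type: application/octet-stream\r\n\r\n");
    body.extend_from_slice(audio);
    body.extend_from_slice(b"\r\n");
    // closing boundary terminates the form
    body.extend_from_slice(format!("--{boundary}--\r\n").as_bytes());
    body
}

fn main() {
    let body = build_multipart_body("sdk-boundary", "whisper-large-v3", "clip.wav", &[0u8; 4]);
    let text = String::from_utf8_lossy(&body);
    assert!(text.contains("name=\"model\""));
    assert!(text.contains("filename=\"clip.wav\""));
}
```

In practice the SDK would likely delegate this to a multipart library (e.g. reqwest's multipart support); the point is that this whole byte buffer, not a JSON object, is what needs to be encrypted with the session key.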
Requested API
```rust
pub async fn create_transcription(
    &self,
    file: Vec<u8>,
    filename: String,
    model: String,
    language: Option<String>,
    response_format: Option<String>,
) -> Result<TranscriptionResponse>
```

This would need to encrypt the audio file using the session key, send it to /v1/audio/transcriptions, and decrypt the response.
Context
Building a local proxy (maple-proxy) that exposes an OpenAI-compatible API on localhost. Chat completions and embeddings work, but audio transcription is blocked by this missing SDK method.