Transcribe input from your microphone and turn it into key presses on a virtual keyboard. This lets you use speech-to-text with any application or window system on Linux; in fact, you can use it on the system console.

VoxInput is meant to be used with LocalAI, but it will work with any OpenAI-compatible API that provides the transcription endpoint or realtime API.
- Speech-to-Text Daemon: Runs as a background process to listen for signals to start or stop recording audio.
- Audio Capture and Playback: Records audio from the microphone and plays it back for verification.
- Transcription: Converts recorded audio into text using a local or remote transcription service.
- Text Automation: Simulates typing the transcribed text into an application using `dotool`.
- Voice Activity Detection: In realtime mode, VoxInput uses VAD to detect speech segments and automatically transcribe them.
- `dotool` (for simulating keyboard input)
- `OPENAI_API_KEY` or `VOXINPUT_API_KEY`: Your OpenAI API key for Whisper transcription. If you have a local instance with no key, just leave it unset.
- `OPENAI_BASE_URL` or `VOXINPUT_BASE_URL`: The base URL of the OpenAI-compatible API server; defaults to `http://localhost:8080/v1`.
- `OPENAI_WS_BASE_URL` or `VOXINPUT_WS_BASE_URL`: The base URL of the realtime websocket API; defaults to `ws://localhost:8080/v1/realtime`.
- OpenAI Realtime API support: VoxInput's realtime mode with VAD requires a websocket endpoint that supports OpenAI's realtime API in transcription-only mode. You can disable realtime mode with `--no-realtime`.

Note that the VoxInput env vars take precedence over the OpenAI ones.
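The precedence rule can be illustrated with a short shell sketch (the lookup below is an assumption about VoxInput's internals, shown only for clarity):

```shell
# Both variables set: VoxInput reads the VOXINPUT_* one.
export OPENAI_BASE_URL="http://localhost:8080/v1"
export VOXINPUT_BASE_URL="http://ai.local:8081/v1"

# Sketch of the lookup: prefer VOXINPUT_BASE_URL, fall back to OPENAI_BASE_URL.
base_url="${VOXINPUT_BASE_URL:-$OPENAI_BASE_URL}"
echo "$base_url"   # http://ai.local:8081/v1
```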
Unless you don't mind running VoxInput as root, you also need to ensure the following is set up for `dotool`:

- Your user is in the `input` group
- You have the following udev rule:

```
KERNEL=="uinput", GROUP="input", MODE="0620", OPTIONS+="static_node=uinput"
```
This can be set in your NixOS config as follows:

```nix
services.udev.extraRules = ''
  KERNEL=="uinput", GROUP="input", MODE="0620", OPTIONS+="static_node=uinput"
'';
```
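On distributions other than NixOS, the equivalent setup can usually be done by hand (the rule file name below is arbitrary; adjust paths for your distribution):

```shell
# Install the udev rule and add your user to the input group.
# Log out and back in for the group change to take effect.
echo 'KERNEL=="uinput", GROUP="input", MODE="0620", OPTIONS+="static_node=uinput"' \
  | sudo tee /etc/udev/rules.d/99-uinput.rules
sudo udevadm control --reload-rules
sudo usermod -aG input "$USER"
```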
- Clone the repository:

  ```shell
  git clone https://github.com/yourusername/VoxInput.git
  cd VoxInput
  ```

- Build the project:

  ```shell
  go build -mod=vendor -o voxinput
  ```
- Ensure `dotool` is installed on your system and can make key presses.

- It makes sense to bind the `record` and `write` commands to keys using your window manager. For instance, in my Sway config I have the following:

  ```
  bindsym $mod+Shift+t exec voxinput record
  bindsym $mod+t exec voxinput write
  ```
Alternatively you can use the Nix flake.
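If you prefer a single toggle key instead of two bindings, a small wrapper can alternate between `record` and `write`. This is a hypothetical helper, not part of VoxInput; the function name and state-file path are made up for the example:

```shell
# Toggle recording with one hotkey. Assumes `voxinput listen` is already
# running and `voxinput` is on your PATH; tracks state in a runtime file.
toggle_voxinput() {
  state="${XDG_RUNTIME_DIR:-/tmp}/voxinput.recording"
  if [ -e "$state" ]; then
    voxinput write    # stop recording and transcribe
    rm -f "$state"
  else
    voxinput record   # start recording
    touch "$state"
  fi
}
```

Saved as a script and bound to one key in your window manager, this starts or stops recording depending on the current state.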
The `LANG` and `VOXINPUT_LANG` environment variables tell the transcription service which language to use. For multilingual use, set `VOXINPUT_LANG` to an empty string.
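The distinction between "unset" and "empty" matters here. A sketch of the selection logic (an assumption about VoxInput's internals, not its actual code):

```shell
LANG=en_US.UTF-8
VOXINPUT_LANG=""   # empty string requests multilingual auto-detection

# "${var-default}" (no colon) falls back only when the variable is unset,
# so an explicitly empty VOXINPUT_LANG is preserved.
lang="${VOXINPUT_LANG-$LANG}"

if [ -z "$lang" ]; then
  echo "multilingual auto-detect"
else
  echo "language hint: $lang"
fi
```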
- `listen`: Starts the speech-to-text daemon.

  ```shell
  ./voxinput listen
  ```

- `record`: Sends a signal to the daemon to start recording audio, then exits. In realtime mode this starts transcription.

  ```shell
  ./voxinput record
  ```

- `write` or `stop`: Sends a signal to the daemon to stop recording. When not in realtime mode this triggers transcription.

  ```shell
  ./voxinput write
  ```

- `help`: Displays help information.

  ```shell
  ./voxinput help
  ```
- Start the daemon in a terminal window:

  ```shell
  OPENAI_BASE_URL=http://ai.local:8081/v1 OPENAI_WS_BASE_URL=ws://ai.local:8081/v1/realtime ./voxinput listen
  ```

- Select a text box you want to speak into and use a global shortcut to run the following:

  ```shell
  ./voxinput record
  ```

- Begin speaking. When you pause for a second or two, your speech will be transcribed and typed into the active application.

- Send a signal to stop recording:

  ```shell
  ./voxinput stop
  ```
- Start the daemon in a terminal window:

  ```shell
  OPENAI_BASE_URL=http://ai.local:8081/v1 ./voxinput listen --no-realtime
  ```

- Select a text box you want to speak into and use a global shortcut to run the following:

  ```shell
  ./voxinput record
  ```

- After speaking, send a signal to stop recording and transcribe:

  ```shell
  ./voxinput write
  ```

- The transcribed text will be typed into the active application.
- Put playback behind a debug switch
- Create a release
- Realtime Transcription
- GUI and system tray
- Voice detection and activation (partial, see below)
- Code words to start and stop transcription
- Allow user to describe a button they want to press (requires submitting screen shot and transcription to LocalAGI)
- `SIGUSR1`: Start recording audio.
- `SIGUSR2`: Stop recording and transcribe audio.
- `SIGTERM`: Stop the daemon.
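The same control flow can be exercised with plain `kill` against any process. The toy "daemon" below sketches the protocol with shell traps; it is a stand-in for illustration, not VoxInput's implementation:

```shell
# A stand-in daemon that handles the same signals VoxInput listens for.
(
  trap 'echo "start recording"' USR1
  trap 'echo "stop recording and transcribe"; exit 0' USR2
  while :; do sleep 0.1; done
) &
pid=$!

sleep 0.2             # let the trap handlers install
kill -USR1 "$pid"     # what `voxinput record` triggers
sleep 0.2
kill -USR2 "$pid"     # what `voxinput write` / `voxinput stop` triggers
wait "$pid"
```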
- Uses the default audio input; make sure the device you want to use is set as the default on your system.
This project is licensed under the MIT License. See the LICENSE file for details.
- malgo for audio handling.
- go-openai for OpenAI API integration.
- numen and dotool. I did consider modifying numen to use LocalAI, but decided to go with a new tool for now.
Feel free to contribute or report issues! 😊