0m364/radio-assistant

An assistant ... for the radio

Offline Assistance with RFML and Local LLMs

This project is configured out of the box as a natural-language "Radio Worker" built on Radio Frequency Machine Learning (RFML) concepts. By default, the application looks for a model named rfml served through a local OpenAI-compatible API such as Ollama or LM Studio.

How to use offline:

  1. Install Ollama (or another local provider like LM Studio).
  2. Create or run your preferred RFML-capable model (or tag an existing one as rfml):
    ollama run rfml
    (Alternatively, you can just change the model name in settings to gemma3:8b or qwen3.5:9b depending on what you have installed.)
  3. Open the application Settings and ensure the API Base URL is set to http://localhost:11434/v1 and the Model Name is set to rfml (or your model of choice).
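With the server running, the app's traffic amounts to standard OpenAI-style chat completions against the local endpoint. The sketch below shows that request shape using only the Python standard library; the endpoint and model name match the defaults above, while the helper names themselves are illustrative, not part of this project's code:

```python
import json
import urllib.request

# Defaults matching the Settings described above.
API_BASE = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint
MODEL = "rfml"                          # or e.g. "gemma3:8b" / "qwen3.5:9b"

def build_payload(prompt, model=MODEL):
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt):
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires Ollama running locally with the model pulled):
# print(ask("Tune the radio to 14.074 MHz in USB mode"))
```

Swapping providers (e.g. LM Studio) only changes API_BASE, which is why the Settings expose the base URL and model name separately.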

Natural Language Operations

The AI acts as an RFML specialist and conversational assistant. Tell it what you want to do (e.g., "Tune the radio to 14.074 MHz in USB mode"), and it will control the dashboard automatically by emitting internal tags.
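The tag grammar itself isn't documented here, so the format below is purely hypothetical; it only illustrates the general pattern of scanning a model reply for control tags and turning them into dashboard actions:

```python
import re

# Hypothetical tag format -- the app's real internal tags may differ.
TAG_RE = re.compile(r"\[TUNE\s+freq=(?P<freq>[\d.]+)MHz\s+mode=(?P<mode>\w+)\]")

def extract_tune_commands(reply: str):
    """Pull every hypothetical [TUNE ...] tag out of a model reply."""
    return [
        {"freq_mhz": float(m.group("freq")), "mode": m.group("mode")}
        for m in TAG_RE.finditer(reply)
    ]

reply = "Sure -- tuning now. [TUNE freq=14.074MHz mode=USB]"
print(extract_tune_commands(reply))  # [{'freq_mhz': 14.074, 'mode': 'USB'}]
```

The conversational text around the tag is shown to the user, while the parsed commands drive the radio controls.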

Tested Models

02 MAR 2026: tested with several locally available models; Gemma 3 8B performs best for its size. Smaller models can be used but must be fine-tuned.
08 MAR 2026: testing with Qwen 3.5 9B is looking promising.

Credits and Licensing
