RT-Rename standardizes radiotherapy structure names against TG-263 using local or cloud-hosted LLMs. The repository is now organized so the web UI, parsing logic, DICOM helpers, and inference orchestration are separated, which makes it much easier to extend toward future VLM support.
- Core logic now lives in the `rt_rename/` package instead of a single `utils.py` module.
- The Dash UI is isolated in `rt_rename/web.py`; `app.py` and `batch_rename.py` remain as thin entry points.
- Parsing, prompt rendering, guideline loading, inference, exports, and DICOM updates are split into focused modules.
- Basic automated tests were added under `tests/`.
```
rt_rename/
    config.py           Model registry loading
    constants.py        Shared paths and defaults
    dicom_utils.py      RTStruct read/write helpers
    exports.py          CSV export helpers
    guidelines.py       TG-263 workbook loading
    inference.py        Local/cloud inference adapters
    parsers.py          CSV, DICOM, and filename parsing
    prompts.py          Prompt template rendering
    rename_service.py   End-to-end rename orchestration
    web.py              Dash app layout and callbacks
app.py                  Web entry point
batch_rename.py         CLI batch entry point
utils.py                Backward-compatible shim
tests/                  Unit tests for core logic
```
- Clone the repository.
- Start the stack:

  ```
  cd docker
  docker-compose up -d
  ```

- Open the app at http://localhost:8055.
If you want cloud inference, create a root `.env` file with:

```
OPEN_AI_URL=your_api_url_here
OPEN_AI_API_KEY=your_api_key_here
```

- Create a virtual environment.
- Install dependencies:

  ```
  pip install -r docker/requirements.txt
  ```

- Run the web app:

  ```
  python app.py
  ```

Run the tests with:

```
python -m unittest discover -s tests
```

The batch entry point is now a CLI instead of a hard-coded script.
```
python batch_rename.py input.csv output.csv \
  --model "Llama 3.1 | 70B | local" \
  --prompt prompt_latest.txt \
  --guideline TG263 \
  --region Thorax
```

For DICOM RTStruct input you can also export a renamed RTStruct:
```
python batch_rename.py input.dcm output.csv \
  --model "Llama 3.1 | 70B | local" \
  --prompt prompt_latest.txt \
  --output-dicom renamed_rtstruct.dcm
```

`config/models.json` defines the available models shown in the UI. The loader now supports provider-aware metadata, so we can extend the registry later for multimodal or VLM-capable models without rewriting the app architecture.
Current fields: `name`, `parameters`, `model_str`, `cloud`.

Optional future-facing fields already supported by the loader: `provider`, `modalities`.
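For illustration, a single registry entry might look like the following. Only the field names come from this README; the values and the surrounding JSON structure are hypothetical:

```json
{
  "name": "Llama 3.1 | 70B | local",
  "parameters": "70B",
  "model_str": "llama3.1:70b",
  "cloud": false,
  "provider": "ollama",
  "modalities": ["text"]
}
```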
Prompt templates live in `config/prompt_*.txt` and are rendered by `rt_rename/prompts.py`.
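Template rendering can be sketched as simple placeholder substitution. This uses the standard-library `string.Template` and made-up placeholder names; the real templates and the rendering code in `rt_rename/prompts.py` may differ:

```python
from string import Template


def render_prompt(template_text: str, **fields) -> str:
    """Fill $-style placeholders; unknown placeholders are left untouched."""
    return Template(template_text).safe_substitute(fields)


# Hypothetical template text and field names, for illustration only:
example = render_prompt(
    "Rename the structure '$structure' per $guideline for region $region.",
    structure="lt lung", guideline="TG263", region="Thorax",
)
```

`safe_substitute` is forgiving about missing fields, which is convenient when different guidelines or regions use different template variables.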
The TG-263 workbook is stored at `config/TG263_nomenclature.xlsx`.
The codebase is in a better position for future VLM work because:
- inference requests are funneled through a dedicated adapter layer
- cloud requests already support structured content parts
- UI state is less dependent on module-level globals
- pure rename logic is testable without importing the Dash app
The next natural step for VLM support would be extending `rt_rename/inference.py` and the model registry so image-bearing requests can flow through the same orchestration path.
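Since the cloud adapter already supports structured content parts, an image-bearing request could reuse the same message shape. A purely illustrative sketch in the OpenAI-style "content parts" format; the helper name and exact payload layout are assumptions, not existing `rt_rename` code:

```python
import base64


def build_vlm_message(prompt: str, image_bytes: bytes) -> dict:
    """Assemble a chat message carrying both text and an inline PNG image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{b64}"},
            },
        ],
    }
```

Because the message is just a list of typed parts, text-only requests and image-bearing requests could travel through the same adapter with no change to the orchestration layer.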