Deep learning-based spatiotemporal fusion for high-fidelity ultra-high-speed x-ray radiography
A model that reconstructs high-quality x-ray images by combining the high spatial resolution of high-speed camera image sequences with the high temporal resolution of ultra-high-speed camera image sequences.
This implementation is based on the BasicSR toolbox. Data for model pre-training are collected from the REDS dataset.
Currently, xfusion supports two model families for high-quality x-ray image sequence reconstruction: EDVR and the Swin vision transformer.
Navigate to the project root directory and then run
pip install .
to install the package into the selected virtual environment.
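To confirm that the package is available in the active environment, a quick check like the following can be used. This is only a sanity check and assumes the installed distribution is named xfusion; adjust if the project metadata differs.

# Sanity check that xfusion installed into the active environment.
# Assumes the distribution name is "xfusion".
from importlib.metadata import version, PackageNotFoundError

try:
    print("xfusion version:", version("xfusion"))
except PackageNotFoundError:
    print("xfusion is not installed in this environment")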
To initialize the package for the chosen model family, run
xfusion init --model_type [EDVRModel or SwinIRModel]
After initialization, a configuration file "xfusion.conf" will be generated in the home directory. This configuration file will be updated automatically within the workflow of the xfusion package.
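The exact schema of "xfusion.conf" is defined by the package. As an illustration only, if the file follows a standard INI-style layout it can be inspected with Python's configparser; the section and key names printed below are whatever the file actually contains, nothing here is assumed beyond the INI format.

# Illustrative only: inspect the generated configuration file.
# Assumes xfusion.conf uses an INI-style layout readable by configparser.
import configparser
from pathlib import Path

config = configparser.ConfigParser()
config.read(Path.home() / "xfusion.conf")

# Print every section and its key/value pairs, e.g. the "convert" section
# referenced below.
for section in config.sections():
    print(f"[{section}]")
    for key, value in config[section].items():
        print(f"  {key} = {value}")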
Download the sharp dataset (train_sharp) and the low-resolution dataset (train_sharp_bicubic) from the REDS dataset to the directories specified in the "convert" section of the configuration file.
Fine-tuning data are not available at the moment.
There are two sets of sample data to be downloaded from the Tomobank.
To convert the REDS data to gray-scale, run
xfusion convert --dir-lo-convert [directory/to/low resolution/RGB/training image] --dir-hi-convert [directory/to/high resolution/RGB/training image] --out-dir-lo [directory/to/low resolution/gray-scale/training image] --out-dir-hi [directory/to/high resolution/gray-scale/training image]
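Conceptually, this step turns the RGB REDS frames into single-channel gray-scale images. A minimal sketch of such a conversion using Pillow is shown below; it illustrates what the convert step accomplishes and is not xfusion's actual implementation, and the directory names are hypothetical.

# Minimal sketch of RGB-to-grayscale conversion for one clip of frames.
# Illustrative only; not xfusion's implementation. Paths are hypothetical.
from pathlib import Path
from PIL import Image

src_dir = Path("train_sharp/000")        # hypothetical directory of RGB frames
dst_dir = Path("train_sharp_gray/000")   # hypothetical output directory
dst_dir.mkdir(parents=True, exist_ok=True)

for frame in sorted(src_dir.glob("*.png")):
    # "L" converts to 8-bit single-channel grayscale
    Image.open(frame).convert("L").save(dst_dir / frame.name)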
To train the model, run
xfusion train --dir-lo-train [directory/to/low resolution/gray-scale/training image] --dir-hi-train [directory/to/high resolution/gray-scale/training image] --dir-lo-val [directory/to/low resolution/gray-scale/validation image] --dir-hi-val [directory/to/high resolution/gray-scale/validation image] --opt [directory/to/training setting/yaml file] --path-train-meta-info-file [directory/to/training image/meta data] --path-val-meta-info-file [directory/to/validation image/meta data] --pretrain_network_g [directory/to/model weight/file/for/model initialization]
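The meta-info files list the training and validation clips. In BasicSR, on which this implementation is based, a REDS meta-info file typically has one line per clip with the clip name, frame count, and frame shape; the sketch below writes such a file under that assumption. The paths are hypothetical and the exact format xfusion expects should be checked against the provided sample data.

# Sketch of writing a BasicSR-style meta-info file for REDS-like clips.
# Assumed line format: "<clip name> <number of frames> (<h>,<w>,<c>)".
from pathlib import Path
from PIL import Image

root = Path("train_sharp_gray")               # hypothetical gray-scale training root
with open("meta_info_train.txt", "w") as f:
    for clip in sorted(p for p in root.iterdir() if p.is_dir()):
        frames = sorted(clip.glob("*.png"))
        if not frames:
            continue
        w, h = Image.open(frames[0]).size     # Pillow returns (width, height)
        f.write(f"{clip.name} {len(frames)} ({h},{w},1)\n")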
To download test data, run
xfusion download --dir-inf [tomobank/link/address/of/test/dataset] --out-dir-inf [directory/to/testing image]
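The download step simply fetches the dataset file from the given TomoBank address into the output directory. If needed, the same can be done directly with standard Python tooling; the URL and file name below are placeholders, not actual TomoBank addresses.

# Illustrative direct download of a test dataset file.
# The URL and output name are placeholders, not real TomoBank addresses.
import urllib.request
from pathlib import Path

url = "https://example.org/path/to/test_dataset.h5"   # placeholder URL
out_dir = Path("test_data")
out_dir.mkdir(parents=True, exist_ok=True)
urllib.request.urlretrieve(url, out_dir / "test_dataset.h5")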
To run inference on the test data, run
xfusion inference --opt [directory/to/testing dataset/setting/yaml file] --arch-opt [directory/to/training setting/yaml file] --model_file [path/to/model file] --machine [tomo or polaris]
Inference currently works with EDVRModel in single-process mode and with SwinIRModel in multi-process mode.