XFusion

Deep learning-based spatiotemporal fusion for high-fidelity ultra-high-speed full-field x-ray radiography.
A model that reconstructs high-quality x-ray images by combining the high spatial resolution of high-speed camera image sequences with the high temporal resolution of ultra-high-speed camera image sequences.

Prerequisites

This implementation is based on the BasicSR toolbox. Data for model pre-training are collected from the REDS dataset.

Usage

Package description

Currently, xfusion supports two model families for high-quality x-ray image sequence reconstruction: EDVR and the Swin vision transformer (SwinIR).

Package installation

Navigate to the project root directory and then run

pip install .

to install the package into the active virtual environment.
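
For example, a minimal setup in a fresh virtual environment (the environment name xfusion-env is only illustrative):

python -m venv xfusion-env
source xfusion-env/bin/activate
pip install .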

Initialization

Run

xfusion init --model_type [EDVRModel or SwinIRModel]

After initialization, a configuration file "xfusion.conf" is generated in the home directory. This configuration file is updated automatically as you work through the xfusion workflow.
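
For example, to initialize for the EDVR model family:

xfusion init --model_type EDVRModel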

Data preparation

Data for model pretraining

Download the sharp dataset (train_sharp) and the low-resolution dataset (train_sharp_bicubic) from the REDS dataset into the directories specified in the "convert" section of the configuration file.
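
As an illustration only (the actual locations come from your "xfusion.conf"), the downloaded data could be arranged as:

/data/REDS/train_sharp/
/data/REDS/train_sharp_bicubic/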

Data for model fine tuning

Fine-tuning data are not available at this time.

Data for testing

Two sets of sample data can be downloaded from Tomobank (see "Test data" below).

Data conversion

To convert the REDS data to gray-scale, run

xfusion convert --dir-lo-convert [directory/to/low resolution/RGB/training image] --dir-hi-convert [directory/to/high resolution/RGB/training image] --out-dir-lo [directory/to/low resolution/gray-scale/training image] --out-dir-hi [directory/to/high resolution/gray-scale/training image]
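
A hypothetical invocation, assuming the RGB REDS data sit under /data/REDS and the gray-scale output should go under /data/REDS_gray (all paths are placeholders):

xfusion convert --dir-lo-convert /data/REDS/train_sharp_bicubic --dir-hi-convert /data/REDS/train_sharp --out-dir-lo /data/REDS_gray/train_sharp_bicubic --out-dir-hi /data/REDS_gray/train_sharp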

Training

Run

xfusion train --dir-lo-train [directory/to/low resolution/gray-scale/training image] --dir-hi-train [directory/to/high resolution/gray-scale/training image] --dir-lo-val [directory/to/low resolution/gray-scale/validation image] --dir-hi-val [directory/to/high resolution/gray-scale/validation image] --opt [directory/to/training setting/yaml file] --path-train-meta-info-file [directory/to/training image/meta data] --path-val-meta-info-file [directory/to/validation image/meta data] --pretrain_network_g [directory/to/model weight/file/for/model initialization]
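
A hypothetical invocation, continuing the placeholder paths from the conversion step (the YAML, meta-info, and weight file names are assumptions, not files shipped with the package):

xfusion train --dir-lo-train /data/REDS_gray/train_sharp_bicubic --dir-hi-train /data/REDS_gray/train_sharp --dir-lo-val /data/REDS_gray/val_sharp_bicubic --dir-hi-val /data/REDS_gray/val_sharp --opt options/train_edvr.yml --path-train-meta-info-file /data/REDS_gray/meta_info_train.txt --path-val-meta-info-file /data/REDS_gray/meta_info_val.txt --pretrain_network_g weights/edvr_pretrained.pth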

Test data

To download test data, run

xfusion download --dir-inf [tomobank/link/address/of/test/dataset] --out-dir-inf [directory/to/testing image]
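
For example (the URL is a placeholder, not a real Tomobank address):

xfusion download --dir-inf https://tomobank.example/path/to/test/dataset --out-dir-inf /data/xfusion_test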

Inference

Run

xfusion inference --opt [directory/to/testing dataset/setting/yaml file] --arch-opt [directory/to/training setting/yaml file] --model_file [path/to/model file] --machine [tomo or polaris]

Currently, inference works for EDVRModel in single-process mode and for SwinIRModel in multi-process mode.
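
A hypothetical single-process EDVR invocation (the YAML and model file names are assumptions):

xfusion inference --opt options/test_dataset.yml --arch-opt options/train_edvr.yml --model_file weights/edvr_trained.pth --machine tomo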
