
Instantiating Pylinac analysis classes from Pydicom data sets. #550


Open
alanphys opened this issue Feb 26, 2025 · 2 comments

@alanphys
Contributor

Is your feature request related to a problem? Please describe.
Following on from this post, I am experiencing an increasing need to instantiate a Pylinac analysis class, such as CatPhan604, from a Pydicom data set. The solution I described there only works for 2D analyses like PicketFence and is not terribly efficient.
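
For reference, a rough sketch of the kind of 2D workaround that is possible today, round-tripping the dataset through an in-memory buffer (the class and file name below are only illustrative, and the linked post may differ in detail):

import io
import pydicom
from pylinac import PicketFence

ds = pydicom.dcmread("epid_image.dcm")  # stand-in for a dataset obtained elsewhere
buffer = io.BytesIO()
ds.save_as(buffer)  # write the dataset back out, to memory rather than disk
buffer.seek(0)
pf = PicketFence(buffer)  # pylinac image loading accepts file-like streams
pf.analyze()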

Describe the solution you'd like
I would like to do something along the lines of:

datasets = [list of pydicom FileDataset objects]
ct = CatPhan604(datasets)
ct.analyze()

Describe alternatives you've considered
I've considered a few possible options:

  1. Allow creating an empty class, i.e. with a folder path of None or an empty string, and then manually assign the datasets.
  2. Create the class from an existing DicomImageStack. DicomImageStack and DicomImage will need .from_dataset methods.
  3. Create the class from a Pydicom data set by implementing a .from_dataset function. By implication this will require the first option.

I think 2 is the most viable and opens up the possibility of opening an image or series of images, performing some operations on them, and then passing them to an analysis. It will, though, require changes to DicomImageStack, LazyDicomImageStack and DicomImage.
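
As a purely hypothetical sketch of option 2 (DicomImageStack.from_datasets does not exist yet, and passing a stack to CatPhan604 is the proposed change, not current behaviour):

import pydicom
from pathlib import Path
from pylinac import CatPhan604
from pylinac.core.image import DicomImageStack

# datasets already held in memory, e.g. previously read or modified elsewhere
datasets = [pydicom.dcmread(f) for f in Path("ct_scan_folder").iterdir()]

stack = DicomImageStack.from_datasets(datasets)  # proposed classmethod
ct = CatPhan604(stack)                           # proposed: accept a stack directly
ct.analyze()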

Additional context
The restriction to store image files on disk was understandable ten years ago when computer memory was at a premium, but these days even modest laptops have enough memory. The performance hit, even writing to an SSD, to my mind outweighs the requirement to conserve memory.

Regards
Alan

@jrkerns
Owner

jrkerns commented Feb 26, 2025

I definitely like options 2 and 3, since single images can already be constructed via .from_dataset, so the consistency makes sense.

Loading from disk was indeed the initial assumption in 2014. You can read from a stream (not a dataset) currently by doing something like:

# read from streams rather than file paths
import io
from pathlib import Path
from pylinac import QuartDVT

streams = [io.BytesIO(f.read_bytes()) for f in Path(r"path/to/files").iterdir()]
cbct = QuartDVT(streams)
cbct.analyze()
cbct.plot_analyzed_image()

but that is relatively kludgy. Definitely on board with loading straight from a dataset. In fact, at one point we internally considered making loading from a dataset the default and loading from disk the classmethod (since the dataset is loaded from disk and then passed to init, rather than the other way around), but that would have been a breaking change.
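
A toy illustration of that inverted, dataset-first layout (purely hypothetical, not pylinac's actual API):

import pydicom
from pathlib import Path

class DatasetFirstAnalysis:
    """Hypothetical: __init__ takes in-memory datasets; disk loading is the classmethod."""

    def __init__(self, datasets):
        # datasets: a list of pydicom Dataset objects already in memory
        self.datasets = datasets

    @classmethod
    def from_folder(cls, folder):
        # read from disk, then delegate to the in-memory constructor
        datasets = [pydicom.dcmread(p) for p in sorted(Path(folder).iterdir())]
        return cls(datasets)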

> but these days even modest laptops have enough memory. The performance hit, even writing to an SSD, to my mind outweighs the requirement to conserve memory.

This is obviously clinic- and system-dependent. In RadMachine, memory is at a premium (in the cloud app, disk == memory); we load zipped datasets straight from a stream.
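
For example, something along these lines (assuming CatPhan604.from_zip accepts a file-like object as well as a path; the file name is just illustrative):

import io
from pathlib import Path
from pylinac import CatPhan604

# e.g. zip bytes fetched from cloud storage rather than written to disk
zip_stream = io.BytesIO(Path("cbct_scan.zip").read_bytes())
ct = CatPhan604.from_zip(zip_stream)
ct.analyze()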

@alanphys
Contributor Author

Thanks for the insight, James. I hadn't considered cloud-based apps. I'll see what I can put together.
