Catphan 600 support?

Would it be possible to use pylinac to analyse the Catphan 600? See: https://www.phantomlab.com/catphan-600

Like the Catphan 504, it has the CTP404 module, so I believe it should be possible. The images I have acquired are from a GE CT scanner, but I can’t seem to get them to work in pylinac as it can’t find the HU slice. The HU slice is at z slice 0, although it is not in the centre of the scan in the z direction (the scan extent is asymmetric around the z=0 slice). Could this be the issue? I’ve seen in a previous discussion that you mentioned a z-offset can be applied — how is this applied? I can’t find it in the docs, but I’m sure I’m just being blind!

I can send a sample dataset if that would help?

Many thanks,

Ben

Ben,
It would be possible with some relatively small modifications. I would have added support already, but I don’t have any 600 images. If you have some, please send them my way; I’d love to support the 600. Given that most of the modules are the same, it’s pretty easy to modify the code so that the relative slice locations can be changed.

Re: the z slice — at this time there is no z-offset available. There was back in the day, before automatic registration, but now everything should be automated. If it’s not detecting the HU module correctly, there could be a few causes. One is that the whole phantom wasn’t scanned; this would prevent analysis from finishing. I’m working on support for partial scans, but I haven’t seen much interest in this so it’s been on the back burner. I also don’t have many scans from CT scanners (vs. CBCTs); most of the time it doesn’t make a difference, but sometimes the table can be caught in the registration and mess up the analysis. If you have full scans of the phantom, I’ll take as many as you can get your hands on and be happy to modify pylinac to support the 600. You can upload them here privately.
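As a quick sanity check that the whole phantom was captured, one can compare the scan’s z extent to the phantom length. A minimal sketch (the 200 mm housing length is an assumption; in practice the z positions would come from each slice’s `ImagePositionPatient[2]` DICOM tag):

```python
def covers_phantom(z_positions_mm, phantom_length_mm=200.0):
    """Return True if the scan's z extent spans the assumed phantom length.

    z_positions_mm: z coordinate of each slice, e.g. read from the
    ImagePositionPatient[2] DICOM tag of every image in the folder.
    """
    zs = sorted(z_positions_mm)
    return (zs[-1] - zs[0]) >= phantom_length_mm

# A 100-slice scan at 1 mm spacing only covers 99 mm:
covers_phantom(range(100))        # False
covers_phantom(range(-110, 111))  # True (220 mm extent)
```

Note this only checks extent, not whether the phantom itself sits inside that extent — but it rules out the most common failure mode of a truncated scan.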

Hi James,

I’m uploading some data for you now for the 600. The scans were taken with a variety of settings, so hopefully that will give the model some idea of the range of possible images. (One set is scanned prone rather than HFS; it would be interesting to see if it can handle that scenario!)

Let me know if there’s anything else I can supply to help. I do have some more scans if needed, just let me know how you get on with the ones uploaded so far.

Many thanks,

Ben

Excellent! Will do.

I’m progressing on the catphan 600 support; I decided to rework some things while I was under the hood so it’s taking a bit longer while I finish out the tests and so forth.

I’m also looking for anyone who can provide catphan 604 images. If you can, email me and let’s talk!

Thanks for persevering with the 600 support! I have more images but I presume you would like some from different scanners/institutions? Otherwise, I’ll upload some more.

Images from different institutions are best because a given institution usually does things the same way, but institutions may differ in their process. That being said, I’ll happily test out any further images you may have to make sure all is well.

pylinac v1.8 will be out soon. The code is written and tested; it just needs documentation updates. Because more catphan models are now supported, loading will be specific to the phantom model. I.e. instead of

```
from pylinac import CBCT
cbct = CBCT('folder')
```
it will look like this:

```
from pylinac import CatPhan504, CatPhan600
cp600 = CatPhan600('folder1')
cp504 = CatPhan504('folder2')
```

Morning,
I can provide CatPhan604 images. How can I send them to you?
I’m also interested in modifying the CatPhan source code — can we discuss it?

On Friday, 16 December 2016 at 09:09:21 UTC+8, James Kerns wrote:

Awesome! You can upload them privately here. You are also welcome to submit a pull request. What changes are you thinking about doing?

I tried to add a CatPhan604 class, but I still can’t import CatPhan604. I’d like each phantom to have an independent class so that any value can be obtained for layout and analysis.

About CTP404: the new specification is no longer a fixed 40 HU difference; it needs to change to an HU range.
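A range-based check could replace the fixed ±40 HU tolerance by comparing each measured ROI value against a nominal (min, max) pair. A hypothetical sketch — the numbers below are made up for illustration, not the official CTP404 specification:

```python
# Illustrative (min, max) HU ranges per insert material — NOT the real spec.
HU_RANGES = {
    "Air":    (-1046, -986),
    "PMP":    (-220, -172),
    "LDPE":   (-121, -87),
    "Poly":   (-65, -29),
    "Acrylic": (92, 137),
    "Delrin": (315, 345),
    "Teflon": (915, 955),
}

def hu_passes(material: str, measured_hu: float) -> bool:
    """Pass if the measured HU falls inside the material's nominal range."""
    lo, hi = HU_RANGES[material]
    return lo <= measured_hu <= hi

hu_passes("Air", -1000)   # True: within the illustrative range
hu_passes("Teflon", 100)  # False: far below the illustrative range
```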

On Thursday, 10 August 2017 at 21:35:39 UTC+8, James Kerns wrote:

ct.py (78.1 KB)

What version of pylinac are you using? Can you attach a stacktrace?

You could also try importing from the module directly:
```
from pylinac.ct import CatPhan604
```

I’m using 2.0.0. I think it was because I imported from pylinac rather than pylinac.ct, so it works now, but I have another problem. When I tested CatPhan604 on CT images it worked well, but when I tested a CBCT image an error occurred at:

```
if np.max(edges) < 0.1:
    raise ValueError("Unable to locate Catphan 1")
```

What’s np?

On Wednesday, 16 August 2017 at 22:53:21 UTC+8, James Kerns wrote:

image.rar (3 MB)

image.rar (2.47 MB)

About the MTF: the lp/mm settings look doubled. For example, per the Catphan user manual, when the gap size is 0.5 cm or 0.25 cm the lp/mm should be 1 or 2, but in pylinac.ct they are set to 2 and 4. What’s the intention?

On Monday, 21 August 2017 at 08:51:44 UTC+8, Tatsuo Go wrote:

np is the ubiquitous numpy library.

Yes, these values are doubled. I’ll clarify this in the documentation. Thanks!
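For reference, the usual convention is that one line pair spans one bar plus one gap of equal width, so the spatial frequency follows from the gap size; a doubled value corresponds to counting the gap alone. A hypothetical sketch of both conventions (interpreting the gap sizes in mm is an assumption here):

```python
def lp_per_mm(gap_size_mm: float) -> float:
    """Manual convention: one line pair = one bar + one gap of equal width."""
    return 1.0 / (2.0 * gap_size_mm)

def lp_per_mm_doubled(gap_size_mm: float) -> float:
    """Counting the gap alone doubles the reported frequency."""
    return 1.0 / gap_size_mm

lp_per_mm(0.5)           # 1.0 lp/mm for a 0.5 mm gap
lp_per_mm_doubled(0.5)   # 2.0 lp/mm — the doubled value
```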

Reviving this old thread on Catphan 600…

One feature I think would be useful is extending the geometry analysis into the couch-travel direction, as currently it is only assessed in the imaging plane. We have found issues on a few scanners now where image distortion can occur in the couch direction. I think an assessment of the anterior marker positions could be used for this purpose:
Catphan600.png
However, I’m not sure how to go about using the existing tooling to do this; the markers themselves aren’t in a module but in the main phantom housing. I also couldn’t see a suggestion raised in https://github.com/jrkerns/pylinac/issues, so I don’t think this is already in the works? Would this need to be a new feature, or can an analysis be implemented using existing pylinac features?

Many thanks,

Ben

Hi Ben,
There’s no tooling out of the box for this type of work as all the modules are assumed to be slices. However, I can give you a starting point. In the upcoming 3.14 release there’s a new “side view” of the phantom to show where the module slices were.
I’d start there by copying this: pylinac/pylinac/ct.py at master · jrkerns/pylinac · GitHub (preview of the image here: Changelog - pylinac 3.13.0 documentation). That would give you a 2D array which you can then process. Without having put too much thought into it, I’d:

1. Apply a threshold of some sort to the array, since the markers have relatively high pixel values; this gives a binary result (e.g. using pylinac/pylinac/core/image.py at release-v3.13 · jrkerns/pylinac · GitHub; see also the scikit-image guide: Thresholding — skimage 0.21.0 documentation).
2. Use scikit-image to detect the ROIs via regionprops (by far the most powerful image-processing function I’ve ever used!): Label image regions — skimage 0.21.0 documentation.
3. Find the marker ROIs, probably by filtering on ROI size and/or finding the 4 markers at the same y-height.

I’m not sure exactly what you want to assess, but this would get you a good way there.
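A minimal sketch of that threshold-label-filter pipeline, assuming `side_view` is the 2D side-view array; the threshold and area limits are placeholders to tune against real data:

```python
import numpy as np
from skimage import measure

def find_markers(side_view: np.ndarray, threshold: float,
                 min_area: int = 5, max_area: int = 200):
    """Threshold the side view, label connected regions, and return the
    (row, col) centroids of bright blobs plausibly sized like a marker."""
    binary = side_view > threshold              # step 1: binarize
    labeled = measure.label(binary)             # step 2: connected components
    return [r.centroid for r in measure.regionprops(labeled)
            if min_area <= r.area <= max_area]  # step 3: filter by size

# Synthetic example: four 3x3 bright blobs on a dark background,
# all at the same row (y-height), like the anterior markers.
view = np.zeros((50, 100))
for x in (10, 30, 50, 70):
    view[20:23, x:x + 3] = 1000.0
markers = find_markers(view, threshold=500)  # four (row, col) centroids
```

From the centroids, the marker spacing along the couch direction (the column axis here) can be compared against the known physical spacing to quantify distortion.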

Thanks for those pointers, we will give it a go.