Import Gamma Analysis Function to Analyze two Different Fluences?

This is a noob question.

I am trying to learn how to hack my own module. Presently I have two planar images loaded with `Image.load(filename)`. I have resized the images to have the same dimensions. I'd like to import the gamma function from the log_analyzer module and use it to compare the two arrays. Is it possible to use the gamma function to compare two fluences that weren't generated by the log file module?

Hi Eric,
That’s not a noob question at all.

If you’re using a newer version of pylinac (v1.5+) you can do the following:

1 - Use the load function from the image module rather than the Image class, since the latter has been deprecated (your way will work until v2.0, though).
2 - Use the built-in gamma method that each image object has.
3 - Optionally, use the equate_images function to get your images to the same size & DPI.

I.e.:

`
from pylinac import image

myimg = image.load(r'path/to/img')
myotherimg = image.load(r'path/to/other/image')

# optionally, equate the images (same shape & DPI)
myimg, myotherimg = image.equate_images(myimg, myotherimg)

# perform gamma
gammamap = myimg.gamma(myotherimg)

# plot it or whatever you need...
`

Let me know if you have trouble!

James

Thank you so much James!

Follow-up question:
I'm getting a warning from the equate_images function, and the outputs don't match up.

`
…/Continuum\Anaconda3\envs\35Dicom\lib\site-packages\scipy\ndimage\interpolation.py:600: UserWarning: From scipy 0.13.0, the output shape of zoom() is calculated with round() instead of int() - for these inputs the size of the returned array has changed.
  "the returned array has changed.", UserWarning)
`

myimage:
shape: (512, 512) → (512, 512) || dpi: 51.9176… → 51.9176… || and a blank SID → blank SID
myotherimage:
shape: (768, 1024) → (1707, 512) || dpi: 64.7974… → 172.793… || and a blank SID.

James,
I've attached my image files if that helps. Basically one is from Eclipse and the other is from an EPID. I think the expected dose may have been exported for portal dosimetry, but for some reason portal dosimetry couldn't analyze them. The images are for an EDW delivery. I'd like to make a profile or planar comparison for routine QA.

gamma.py (780 Bytes)

RD.1.2.246.352.71.7.799271414713.1424202.20170412110436.dcm (1 MB)

RI.1.2.246.352.62.1.4625589395255983245.9103590929697145013.dcm (1.5 MB)

Sorry it’s not working out as expected. Let me analyze the images this weekend and I’ll see what I can dig up!

James,

No worries. I'm still trying to wrap my head around what you've done here. I can't believe you were able to make everything so elegant. My lack of experience with Python is holding me back from being able to infer how to go about things.

In any case, I did manage to pull the SID out of the DICOM header for one of the files: 1000.02421908406. I'm not sure why the variable is coming up empty for the SID, but I do see that the equate feature says it requires the SID to be populated.

Hope you have a great weekend! Don't work on this stuff over Easter! :)

James,

I think I found the bug. In image.py, under def remove_edges, the ":" was on the wrong side for both the left and right statements. Basically I grabbed the guts of the equate function and commented it out step by step. I've attached my current script (which works) for your reference.

`
if 'left' in edges:
    self.array = self.array[:, :pixels]
if 'right' in edges:
    self.array = self.array[:, -pixels:]
`

I feel like the way I'm handling the gamma reporting is pretty clunky, since the image gamma method only kicks out the array with the NaNs in it. Is there a simpler way to do it?

Thank you for all your help!

gamma.py (5.21 KB)

Okay, I took a look and the problem is with the equate_images function. If one image is bigger, it will remove the equivalent physical extra from the larger image. Unfortunately, 50% of the time the second image is the larger one, and then the amount to remove is negative. Instead of dropping, say, 10 pixels, it will only keep the last 10 pixels due to the way Python indexes negative values. Your hack will work until you have images in the opposite order. The solution is to ensure that the amount to crop is always positive. Of course I'll add this fix to the next version (see issue #88), but in the meantime you can edit the equate_images function like so: wrap all the lines with int(round(... in that section so they are also absolute, i.e. abs(int(round(....
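To make the indexing pitfall concrete, here is a minimal numpy sketch; it is not pylinac's actual remove_edges/equate_images code, just an illustration of the negative-slice behavior and the abs(int(round(...))) idea described above:

`
import numpy as np

arr = np.arange(100).reshape(10, 10)

# Intended behavior: drop 'pixels' columns from the left edge.
pixels = 3
cropped = arr[:, pixels:]       # shape (10, 7): 3 columns removed, as expected

# If the crop amount comes from a size difference that can be negative
# (the *other* image happened to be larger), the same slice silently keeps
# only the last |pixels| columns instead of dropping any:
pixels = -3
oops = arr[:, pixels:]          # same as arr[:, -3:] -> shape (10, 3)

# Forcing the crop amount to be non-negative avoids the surprise:
pixels = abs(int(round(-3.2)))  # abs(int(round(...))) as suggested above
safe = arr[:, pixels:]          # shape (10, 7) again
`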

The reason for the NaNs in the gamma map is so that statistics on the map aren't influenced by pixels under the dose threshold; i.e. the NaNs are the pixels that weren't analyzed in the gamma calc. To get statistics, use the family of numpy.nan... functions, e.g. numpy.nanmean(gammamap).
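For instance, a short sketch using the gammamap array from the earlier example (the pass-rate line is just one common way to summarize the map, not a pylinac built-in):

`
import numpy as np

# NaN-aware reductions ignore the un-analyzed (below-threshold) pixels.
mean_gamma = np.nanmean(gammamap)   # mean gamma over analyzed pixels
max_gamma = np.nanmax(gammamap)     # worst-case gamma

# Percent of analyzed pixels passing (gamma < 1); NaNs are excluded.
analyzed = ~np.isnan(gammamap)
pass_rate = 100 * np.sum(gammamap[analyzed] < 1) / np.sum(analyzed)
`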

As an aside, there are several small helper tools in pylinac that I don't advertise but that are nice to have. E.g. your gamma.py script could be rewritten like so using pylinac built-ins. Of course you don't have to; just a tip! Let me know if you run into any other issues.
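A rough sketch of such a rewrite, using only the pieces shown earlier in this thread (image.load, equate_images, the gamma method, and numpy's nan-aware functions) and placeholder file paths, might look something like this:

`
import numpy as np
import matplotlib.pyplot as plt
from pylinac import image

# Load the two dose planes (paths are placeholders).
ref = image.load(r'path/to/eclipse_dose.dcm')
meas = image.load(r'path/to/epid_image.dcm')

# Bring both images to the same shape & DPI, then compute the gamma map.
ref, meas = image.equate_images(ref, meas)
gammamap = ref.gamma(meas)

# Summarize, ignoring the un-analyzed (NaN) pixels.
print("Mean gamma:", np.nanmean(gammamap))
print("Max gamma:", np.nanmax(gammamap))

# Show the gamma map.
plt.imshow(gammamap)
plt.colorbar()
plt.show()
`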

James

You are amazing, thank you!
As soon as you pointed out that either image could be the bigger one, it all clicked into place. Thank you so much for communicating so clearly and quickly! Also, I continue to be amazed at the under-the-hood tools and work you have in this thing. Thank you so much for the example script as well.

Follow-up questions: Should I focus on the "core" stuff to find the small helper tools you mentioned, or are they spread around in each file/class? Also, does each helper function depend on a specific data type (object structure)? If so, how can I tell which one (image object vs. dynalog analysis object)?
I would love to explore the under-the-hood tools you mentioned. In particular, I'd like to figure out how to generate templated-style reports. Can you point me in the right direction?

Glad it worked out!

The under-the-hood tools were really for me to easily add modules and encapsulate functions that could be reused (e.g. loading data, URL handling, filtering, making PDFs, etc.). The best bang for your buck would be to cruise the core modules documentation; that way you're not randomly plowing through code. Each major module (ct, picketfence, etc.) should only contain things directly related to that module/test. There are a few helper functions in some modules, but they are usually directly related to that task. E.g. is_tlog() in the log analyzer module could be used by any user to pick out just the trajectory logs from a directory. Incidentally, that's what pylinac does internally for certain things, but it's a general enough function that it may be useful to a regular user. The documentation should tell you the types for all function parameters, so you know what you can pass to which function.
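For example, a quick sketch of that directory-filtering idea (assuming is_tlog takes a file path and returns True for trajectory logs; the directory path is a placeholder):

`
import os
from pylinac.log_analyzer import is_tlog

log_dir = r'path/to/logs'  # placeholder directory

# Keep only the trajectory log files from the directory.
tlogs = [os.path.join(log_dir, name)
         for name in os.listdir(log_dir)
         if is_tlog(os.path.join(log_dir, name))]
print(tlogs)
`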

Most of the core modules aren't really useful by themselves. The helpful ones that could easily be used outside of pylinac are the image, io, utilities, and profile modules, with image being the most helpful (it has lots of methods for manipulating images). Anything worthwhile for use outside pylinac has been documented (although it might not be advertised in the readme).

Little of the core-module functionality has been discussed, mostly because physicists just need their answers. There are so few medphys Python libraries (let alone active ones) that it hasn't proved worthwhile to make all kinds of random tools physicists might need. As for templated reports, that's not currently available. I had debated it, but there's not a whole lot extra that a typical physicist would be interested in that isn't in there already; most often they just want the results and want something comparable to the commercial software that's out there. As well, it would mean a lot more code and tests to manage all the possible preferences, for what I see as not much gain. However, for open-ended modules like the log analyzer, a template might be nice since there are lots of possibilities one could plot…