Starshot: Pylinac vs. PipsPro

Dear all,

As part of our monthly QA, we routinely perform the three starshot tests (collimator, couch, gantry) for each of the treatment machines installed at our clinic. The mechanics of the process are as follows: the slits are imaged with a CR plate, which is subsequently scanned with a Kodak ACR 2000 system that produces DICOM images. The DICOM files are then sent to the PC running Standard Imaging PipsPro (until recently, v. 4.4). PipsPro, however, is unable to read the DICOM files produced by the Kodak ACR 2000, so these files are first converted to BMP (yes, BMP) and only then imported into PipsPro and analyzed. The whole process is cumbersome and error-prone. We want to replace it with a web application based on the pylinac starshot module, which would shorten the time spent and reduce the possibility of introducing errors.

However, before introducing a new method, we wanted to check whether the two methods agree. I assigned the task of comparing the old results with those obtained by pylinac to an MSc student who wanted to learn some Python; but, being impatient, I analyzed 2.5 years' worth of data for a single treatment machine myself (we have 8 in total). The results surprised me. As you can see in the attached figure, the agreement is very poor. How to read the figures: each image corresponds to a dot in the scatterplot; its x-coordinate is the diameter of the wobble sphere reported by PipsPro, and its y-coordinate is the value of the same parameter reported by the pylinac starshot module. Ideally, the points should lie on a straight line.

We now need to figure out our next steps. I am thinking about producing a set of synthetic images with a known “isocenter wobble” to test both programs. I know that the result obtained by PipsPro depends on the choice of the start point, but this variability is too small to explain the deviation. I also know that the result obtained by the pylinac starshot module depends on the choice of the “radius” parameter, which I intend to study. Does anybody have any other suggestions?
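For what it's worth, the synthetic-image idea can be prototyped without rendering any image at all, by working directly with the spoke geometry. Below is a minimal, pure-Python sketch (my own stand-in for illustration, not PipsPro's or pylinac's actual algorithm; the line construction and the grid-search refinement are assumptions) that models each spoke as an infinite line with a known offset from the isocenter and finds the smallest “wobble” circle touching all of them:

```python
import math

def line(normal_angle_deg, offset_mm):
    """A spoke modeled as the infinite line n . x = offset, where n is the
    unit normal at `normal_angle_deg` (a spoke drawn at angle t has its
    normal at t + 90 degrees)."""
    a = math.radians(normal_angle_deg)
    return (math.cos(a), math.sin(a)), offset_mm

def max_line_distance(center, lines):
    """Largest perpendicular distance from `center` to any of the lines."""
    return max(abs(n[0] * center[0] + n[1] * center[1] - d) for n, d in lines)

def wobble_diameter(lines, half_width=2.0, steps=20, refinements=6):
    """Diameter of the smallest circle touching every line, found by a
    refining grid search on the minimax distance (crude but adequate for
    checking synthetic cases with a known answer)."""
    cx = cy = 0.0
    for _ in range(refinements):
        best = (float("inf"), (cx, cy))
        for i in range(steps + 1):
            for j in range(steps + 1):
                c = (cx - half_width + 2 * half_width * i / steps,
                     cy - half_width + 2 * half_width * j / steps)
                best = min(best, (max_line_distance(c, lines), c))
        cx, cy = best[1]
        half_width /= 10  # shrink the search window around the best center
    return 2 * max_line_distance((cx, cy), lines)

# Six exposures tangent to a 0.5 mm circle around the isocenter, with the
# tangency points spread evenly over the full circle:
tangent = [line(k * 60, 0.5) for k in range(6)]
# The same six spoke directions, all passing exactly through the isocenter:
centered = [line(k * 60, 0.0) for k in range(6)]
```

With the tangent configuration the sketch returns a wobble diameter of about 1.0 mm, and essentially zero for the centered one, so any analysis program fed an image rendered from these lines has a known ground truth to hit.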

Thanks in advance,

Best wishes,

ap8-starshot-comp.pdf (8.19 KB)

This is a great study.

I’m sorry the results are unsatisfactory. It would indeed be great if the points were on a line. The gantry plot actually seems somewhat correlated, except at very small values. I’m not sure what kind of agreement you’re expecting when you’re looking at less than two or three tenths of a millimeter. In my own comparisons of pylinac vs. commercial software there are usually small differences in various results, but almost always they come from slight differences in, e.g., ROI sizes, starting points, or the choice of ROIs. I would actually be pretty worried if the results matched perfectly, because I would interpret that as one of us having copied the other’s algorithm.

I agree that an isocenter test would be helpful; I have intended to do one for a while now but never got the chance. My plan was to irradiate an MLC collimator starshot with the MLCs centered, then with a 1 mm offset, then 2, on up to 4 or 5 mm. The real starshot will have some diameter, say 0.5 mm. The actual test is whether the 1, 2, …, n mm offset shots yield results of 1.5, 2.5, …, n + 0.5 mm. That way you’re testing known differences between MLC positions (if you trust your MLC, anyway).
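That expected-offset relation is easy to turn into a quick pass/fail check once the measurements exist. A minimal sketch (the function name, tolerance, and any sample numbers are made up for illustration, and whether wobble really grows linearly with MLC offset is exactly the assumption this series would test):

```python
def offset_series_ok(base_mm, measured_mm, tol_mm=0.2):
    """Check that the n-mm MLC-offset starshots read roughly base + n mm.

    base_mm: wobble diameter of the centered (0 mm offset) shot.
    measured_mm: results for the 1, 2, ..., n mm offset shots, in order.
    tol_mm: acceptable deviation from the expected base + n value.
    """
    return all(abs(meas - (base_mm + n)) <= tol_mm
               for n, meas in enumerate(measured_mm, start=1))
```

For example, with a 0.5 mm baseline, `offset_series_ok(0.5, [1.5, 2.45, 3.6])` passes (every shot is within 0.2 mm of 1.5, 2.5, 3.5), while a series that reads 3.0 mm at the 2 mm offset would fail.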

The dependence on radius is indeed real, and I’ve been chasing it for a while now. In theory, of course, it shouldn’t change the result. If you haven’t already, you may want to test the FWHM parameter as well, to see whether it changes your results significantly.

Let me know what you find!