A while ago I posted my results comparing the isocenter wobble in the star shot test as calculated by Pylinac and PipsPro. Since then I have discovered that the results obtained by PipsPro vary significantly with the choice of the start point (which is chosen with a mouse click) and are thus not suitable as a reference.
Instead, I have now created a set of synthetic DICOM images which roughly resemble the ones obtained with our old Kodak ACR-2000i scanner and can be read by both Pylinac and PipsPro (although with PipsPro, I haven't yet figured out which DICOM tag it reads to obtain the image resolution). Both seem to do fairly well (see the attached benchmark-plot.pdf); neither is perfect, and neither seems to be significantly better than the other.
Next, I tried to simulate the manual start-point placement in PipsPro by varying its position slightly. It appears, however, that the optimization process is quite sensitive to the start_point value and arrives at different centers, and thus different wobble values. The file starshot-bench-03-1-03.dcm is an example exhibiting this behaviour. With other files I have found even stranger behavior: with starshot-bench-06-0-05.dcm, NOT changing start_point leads to alternation between two different results when the test is run several times in a row. Please find attached the script I ran (pylinac-starshot.consistency.py) and the results it produced (synth-star-consistency.csv).
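For what it's worth, this kind of start-point sensitivity is easy to reproduce with any purely local search: if the objective has several nearby local minima, a descent that only inspects its immediate neighbourhood settles into whichever basin the start point happens to land in. A minimal toy sketch of the effect (this is an illustration only, not Pylinac's actual optimizer):

```python
def local_descent(values, start):
    """Walk downhill one step at a time until no neighbour is lower.

    Toy stand-in for a purely local optimizer; the minimum it finds
    depends entirely on which basin the start index lies in.
    """
    i = start
    while True:
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(values)]
        best = min(neighbours, key=lambda j: values[j])
        if values[best] >= values[i]:
            return i  # local minimum reached
        i = best

# A bumpy 1-D "objective" with two local minima (indices 2 and 6):
bumpy = [9, 7, 3, 8, 6, 4, 1, 5, 9]

local_descent(bumpy, 3)  # -> 2
local_descent(bumpy, 4)  # -> 6: an adjacent start lands in the other basin
```

Two start points one index apart end up at different minima, i.e. at "different centers and thus different wobble values".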
I am trying to figure out what causes this behavior: is there a problem with my DICOM files, am I misusing Pylinac, or is this an issue with the starshot module in Pylinac? The alternating behavior is particularly interesting, as if some internal variable kept its value between two successive calls.
I forgot to mention that I am still using v2.0.1, but browsing the code repository, I believe there have been no significant changes to the Starshot module in later releases.
Best wishes, Primož
On Friday, 17 August 2018 at 12:01:20 UTC+2, Primoz Peterlin wrote:
Primoz,
Thanks for looking at this. The results of your benchmark plot look pretty good IMHO. I doubt accuracy could get much better given the various uncertainties. It also seems to me that more and more physicists are converting to Winston-Lutz to get this value.
The results of the bench-05 file do worry me. I will investigate this.
Hi Primoz,
I know it's been a while, but I've been wanting to take a look at this. I spent a very large chunk of this weekend looking into it. It seems to come down to one line of code (!), mostly.
The issue in general was the inconsistency of the iso size. This, it turns out, was caused by inconsistency in the peak-finding for the star lines. After much analysis, the profile used to find the peaks was found to be a bit rough: no interpolation or smoothing was performed, and the step-like data was causing the peak locations to jostle around. This can all be eliminated with some smoothing. The starshot module did originally have smoothing, but for unknown reasons I removed it (o.O) a long time ago. Reinstating the smoothing of the profile made the results very consistent regardless of radius and starting point.
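A toy illustration of the failure mode (hypothetical data, not the module's actual profile code): with coarse, step-like data the maximum sits on a plateau, so a plain argmax snaps to the plateau edge and jitters as the sampling shifts, while a simple moving average breaks the tie and puts the peak back at the symmetric center.

```python
def smooth(values, window=3):
    """Simple moving average with truncated edges."""
    half = window // 2
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def argmax(values):
    """Index of the maximum (first occurrence on a tie)."""
    return max(range(len(values)), key=lambda i: values[i])

# Step-like profile: the true peak center is index 5, but quantization
# has flattened the top into a plateau over indices 4-6.
profile = [0, 0, 3, 3, 10, 10, 10, 3, 3, 0, 0]

argmax(profile)          # -> 4: snaps to the plateau edge, off-center
argmax(smooth(profile))  # -> 5: smoothing restores the symmetric peak
```

On real scans the plateau edges also move with noise and sub-pixel shifts, which is exactly the "jostling around" of the peak locations.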
While the result can be seen in your images as well, there is another issue regarding the inversion of the images. In the second image analysis (starshot-bench-06-0-05.dcm), the image keeps inverting from correct to incorrect. This can be eliminated by moving the line mystart = Starshot(DicomPath) to inside the for loop.
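The alternating results are consistent with state being carried between calls: if an analyzer mutates the image stored on the instance (e.g. inverting it in place), repeated calls on one object flip the data back and forth, while constructing a fresh object inside the loop is stable every time. A toy sketch of that pattern (hypothetical ToyAnalyzer, not the actual Starshot class):

```python
class ToyAnalyzer:
    """Hypothetical analyzer whose analyze() mutates its stored image.

    Stand-in for any class that "corrects" its data in place (here, an
    unconditional invert), so reusing one instance alternates results.
    """
    def __init__(self, image):
        self.image = list(image)

    def analyze(self):
        # In-place inversion of the stored image: the state change
        # survives into the next call on the same instance.
        top = max(self.image)
        self.image = [top - v for v in self.image]
        return self.image[0]

# Reusing one instance: results alternate between two values.
shared = ToyAnalyzer([0, 10])
reused = [shared.analyze() for _ in range(4)]               # [10, 0, 10, 0]

# A fresh instance each iteration, as in the suggested fix: stable.
fresh = [ToyAnalyzer([0, 10]).analyze() for _ in range(4)]  # [10, 10, 10, 10]
```

This is why moving the construction inside the for loop makes the runs consistent: each iteration starts from the unmodified image on disk.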
I have attached the analysis of your images, as well as an Excel sheet showing the analysis of some of my test images, which shows the reduction in the range of diameters when smoothing is applied. In all but 3 of 26 cases, the range of results when using different radii gets better, if not much better.
These changes will be implemented in v2.2.6, which I will put out sometime in the next few days. I need to revalidate the tests and results for my bank of images. Thanks for your investigation.