I’m having difficulty with the MTF analysis of our linac CatPhan CBCTs, and I was hoping someone who knows the code better could give me some hints. I’m trying to report the MTF at 80%, 50%, and 30%, and I want to baseline each unit as a consistency check.
On our TrueBeams, regardless of scan mode (Head, Pelvis), we get only 3 points plotted on the MTF curve.
For head scan mode, the lowest plotted MTF point is above 80%, so all three of my reported data points (80%, 50%, 30%) give exactly the same (wrong) answer. See upload CBCT RT5 Head.png for an example.
For pelvis scan mode we still get only 3 points plotted, but at least the MTF drops low enough that the reported numbers (80%, 50%, 30%) aren’t all identical. See upload CBCT RT5 Pelivis.png for an example.
On our Clinacs we get results that are closer to what I would want.
For the head scans, we get 8 points plotted with a good range of values. See upload CBCT RT1 Head.png for an example.
For pelvis scans we still only get 3 points plotted.
Why are we only seeing three points plotted in some cases? Is there a setting I can change to make the code more likely to pick up more bar patterns? Or do I just need to report much higher MTF levels (say, 90%, 75%, 60%)?
w/r/t the 3 points vs 8 points: the algorithm visits each line pair region and finds its peaks and valleys, starting with the lowest lp/mm region. For each region, the number of measured peaks and valleys is compared to the known number. As long as they match, the algorithm continues. Once it hits 8 regions, or a region where the peaks/valleys don’t match, it stops. This was done to prevent wacky results where the MTF would increase and/or oscillate: previously the algorithm simply found the 3, 4, or 5 peaks/valleys and then calculated the MTF, regardless of whether those peaks/valleys were accurate. So if <8 points show up, it’s because pylinac is trying to keep you from seeing false results beyond what it can measure.
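The stopping rule above can be sketched roughly as follows. This is an illustration only, not pylinac's actual internals; the function name and the `(peaks, valleys)` tuple format are made up for the example:

```python
# Sketch of the stop-at-mismatch rule described above. The data
# structures and names here are hypothetical, not pylinac's real API.

def usable_regions(measured_counts, expected_counts, max_regions=8):
    """Walk the line pair regions from lowest lp/mm upward.

    measured_counts / expected_counts are lists of (n_peaks, n_valleys)
    per region. Stop at the first region where the measured counts
    disagree with the known counts, or after max_regions regions.
    Returns the indices of the regions whose MTF can be trusted.
    """
    usable = []
    for i, (measured, expected) in enumerate(zip(measured_counts, expected_counts)):
        if i >= max_regions or measured != expected:
            break  # beyond this point the MTF values would be unreliable
        usable.append(i)
    return usable


# If peak-finding fails in the 4th region, only 3 MTF points get plotted:
measured = [(2, 2), (3, 3), (4, 4), (3, 4)]
expected = [(2, 2), (3, 3), (4, 4), (4, 4)]
print(usable_regions(measured, expected))  # -> [0, 1, 2]
```

In other words, 3 plotted points means the peak/valley count disagreed with the known count as early as the 4th region.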
There are two possibilities that I think have a reasonable chance of explaining this. The first is that the CTP528 module is rotated slightly relative to the other modules. I’ve seen this before, and while the rotation is often small, the line pair regions are also small, so it doesn’t take much to throw them off. The second is that pylinac is somehow calculating the phantom center incorrectly, causing the line pair circular profile not to be centered perfectly on the line pairs.
Plot the following:
mycbct.ctp528.circle_profile.plot()
It should look something like the attached. Count the peaks and valleys for each region and compare them to the settings here. If they match, then something is fishy w/ pylinac. If they don’t match, at least we know why you have <8 points, though not yet why the mismatch occurs. You can upload your CBCTs privately here if it’s still not clear what’s going on and I’ll take a look.
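If you’d rather count the peaks and valleys programmatically than by eye, something like this works on a 1D array of profile values. The profile here is synthetic stand-in data (a sine wave), but `scipy.signal.find_peaks` is the real SciPy function:

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic stand-in for one line pair region of the circular profile:
# 4 full periods of a sine, so we expect 4 peaks and 4 valleys.
x = np.linspace(0, 4 * 2 * np.pi, 400)
region = np.sin(x)

peaks, _ = find_peaks(region)
valleys, _ = find_peaks(-region)  # valleys are peaks of the inverted profile

print(len(peaks), len(valleys))  # -> 4 4
```

You would then compare those counts against the known peak/valley numbers for each region, just as the algorithm does.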
When I look at the plot you suggested, it has some strange green markings on it that make it hard to discern what’s happening with the peaks.
Those are the detected peaks for that line pair region. There are only 3 detected vs the 4 it should find. It looks like there is a rotational issue, but let me confirm with the file you uploaded.
I have some good news. The start/stop points for parsing the line pair regions were slightly off compared to the average of many phantoms. I had initially derived the start/stop boundaries from a set of ~3 phantom scans, but those were clearly atypical when compared against the attached plot, which shows the circular profile for many CatPhans against the boundaries (vertical lines). As you can see, the boundaries aren’t ideal: they are close enough that some scans work while others fall just beyond the boundary. I fixed a few other bugs as well and will include this in a 2.2.7 release. Thanks for reporting.
So after some further digging, it seems that while each CatPhan is consistent for a given model, the different models are consistently offset from each other, and the original numbers effectively split the difference between models. Having now plotted many scans to see the range of data, I think the best approach will be model-specific boundaries. This can still be done in a bug-fix release; I’ll just need some more time to evaluate all the models and implement the dynamic boundary conditions.
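For what it’s worth, the model-specific boundaries could be as simple as a per-model lookup with a generic fallback. Everything below is a placeholder sketch: the model keys, the boundary values, and the function name are all hypothetical, not the values that will ship in 2.2.7:

```python
# Hypothetical sketch of per-model start/stop boundaries for the
# line pair regions. Keys and values are placeholders, not the
# real pylinac settings.
REGION_BOUNDARIES = {
    # model name -> (start, stop) per line pair region
    "CatPhan 504": [(0, 30), (32, 58), (60, 82)],
    "CatPhan 600": [(2, 32), (34, 60), (62, 84)],
}
# The old one-size-fits-all "split the difference" values:
DEFAULT_BOUNDARIES = [(1, 31), (33, 59), (61, 83)]


def boundaries_for(model):
    """Return the region boundaries for a given phantom model,
    falling back to the generic values for unknown models."""
    return REGION_BOUNDARIES.get(model, DEFAULT_BOUNDARIES)
```

The point is just that a per-model table removes the compromise values without changing the rest of the peak/valley matching logic.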