Hopefully the last in my recent string of planar imaging questions!
The posted documentation on changes to “Pixel Data & Inversion” after pylinac v3.0 states that when the image has the rescale slope, rescale intercept, and pixel intensity relationship sign attributes, pixel values are set using P_corrected = Sign * Slope * P_raw + Intercept.
However, the image.py `__init__` method seems to be using P_corrected = Sign * **(Slope * P_raw + Intercept)**, i.e.:
```python
if has_all_rescale_tags:
    self.array = ((self.metadata.RescaleSlope * self.array) + self.metadata.RescaleIntercept) * self.metadata.PixelIntensityRelationshipSign
elif is_ct_storage or has_some_rescale_tags:
    self.array = (self.metadata.RescaleSlope * self.array) + self.metadata.RescaleIntercept
else:
    # invert it
    orig_array = self.array
    self.array = -orig_array + orig_array.max() + orig_array.min()
```
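To make the discrepancy concrete, here is a small numeric sketch of the two formulas. The tag values (slope, intercept, sign) are made up for illustration; the point is that the documented form and the implemented form only agree when the intercept is zero:

```python
import numpy as np

# Hypothetical tag values, for illustration only.
slope, intercept, sign = 1.0, -1000.0, -1

p_raw = np.array([0, 1000, 4000], dtype=float)

documented = sign * slope * p_raw + intercept     # Sign*Slope*P_raw + Intercept
implemented = sign * (slope * p_raw + intercept)  # Sign*(Slope*P_raw + Intercept)

print(documented)   # [-1000. -2000. -5000.]
print(implemented)  # [ 1000.     0. -3000.]
```

With a nonzero intercept the sign of the intercept term flips between the two forms, so the documented and implemented corrections are genuinely different transforms, not just a notational difference.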
Our TrueBeam linacs started including the “Sign” attribute after we upgraded to ARIA 15.6. For our new images the value is always -1. Images acquired prior to this upgrade look identical in ARIA but do not include this sign attribute. Per the documentation (and the code above), they will be interpreted as though the sign were +1, i.e. P_corrected = Slope * P_raw + Intercept. For these images my pylinac PDF output files reverse the expected “higher pixel values == more radiation == lighter/whiter display” convention stated in the docs.
I wonder if I’m doing something wrong here. If not, I would be tempted to hard-code a default “Sign” of -1 for cases where the image has_some_rescale_tags, to avoid contrast discontinuities and preserve the expected convention when analyzing older images.
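For what it’s worth, the workaround I have in mind would look something like the sketch below. This is not pylinac’s actual API; the function name `rescale` and the `DEFAULT_SIGN` constant are my own, and it just mirrors the has_all_rescale_tags branch while assuming a sign of -1 when the tag is absent:

```python
import numpy as np

# Assumed default for images that carry slope/intercept but no sign tag
# (e.g. pre-ARIA-15.6 TrueBeam images, per the observation above).
DEFAULT_SIGN = -1

def rescale(array, slope, intercept, sign=None):
    """Apply the rescale transform, defaulting the sign when the tag is absent.

    Mirrors the has_all_rescale_tags branch of image.py, but assumes a sign
    of -1 so older images keep the "more radiation == whiter" convention.
    """
    if sign is None:
        sign = DEFAULT_SIGN
    return sign * (slope * array + intercept)
```

Whether -1 is a safe default for all vendors and modalities is exactly the question, of course; it matches what our newer TrueBeam images report, but I have no data on other equipment.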