Picket fence: missing leaf pair measurements

Hi everybody,

I have a picket fence dataset from one of my colleagues, and I am trying to apply the picket fence module. We use the 120M MLC from Varian and hence have 60 leaf pairs in total.

When I run my script it does not register all the leaf pairs. I have tried to document the problem to the best of my ability.

My code reports measuring mlc_num=56.2 leaf pairs per picket. From the subplot in the PDF I see that I am missing 4 leaf pairs (I count 56 in total), though some measurements could be zero, or at least close to zero, since I can't see them on the plot.

Am I missing anything? Any ideas as to why it misses some leaf pairs?


from pylinac import PicketFence

pf_img = r"…\RI."
pf = PicketFence(pf_img)
pf.analyze(tolerance=0.15, action_tolerance=0.03)  # tight tolerance to demo fail & warning overlay

# print results to the console
print(pf.results())

# access the error array
# pf.pickets is an iterable with one entry per picket
# mlc_meas is the leaf pair measurement at the given picket
err1 = pf.pickets[0].mlc_meas[10].error

# create a dataframe that holds all the errors for all leaves
# first, create a dict of lists, one picket per key
errors = {}
# iterate over each picket
for index, picket in enumerate(pf.pickets):
    picket_name = f'Picket {index}'
    picket_error_list = []
    # iterate over each measurement of the picket and add to the list
    for mlc in picket.mlc_meas:
        picket_error_list.append(mlc.error)
    # save the error list to the dict
    errors[picket_name] = picket_error_list

# view analyzed image
pf.plot_analyzed_image()
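To narrow down which pickets come up short, the errors dict built above can be checked with plain Python. The numbers below are toy stand-ins for the real measurements, not actual data:

```python
# errors maps picket name -> list of per-leaf-pair errors, as built above;
# the values here are made up for illustration
errors = {
    "Picket 0": [0.01, -0.02, 0.03],
    "Picket 1": [0.00, 0.05],  # one leaf pair missing in this toy data
}
expected = 3  # would be 60 for a full 120M MLC picket
short = {name: len(errs) for name, errs in errors.items() if len(errs) != expected}
print(short)  # {'Picket 1': 2}
```

This immediately shows which pickets are missing measurements and how many.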


RI. (116 KB)

Error_array_meas.txt (12.1 KB)

This could be an issue with the pixel range. Sadly, sometimes the pixel range of the images is ridiculously narrow. Try normalizing your image so that the value range is 0-1:

mypf = PicketFence(…)
mypf.image.ground()     # brings lowest value to 0
mypf.image.normalize()  # brings max value to 1
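For reference, the two calls amount to simple arithmetic on the pixel array. Here is a standalone NumPy sketch with stand-in functions (not pylinac's implementation):

```python
import numpy as np

def ground(arr):
    # shift so the lowest pixel value becomes 0
    return arr - arr.min()

def normalize(arr):
    # scale so the maximum pixel value becomes 1
    return arr / arr.max()

img = np.array([200.0, 210.0, 250.0])  # toy "narrow range" pixel values
img = normalize(ground(img))
print(img)  # values are now 0.0, 0.2, 1.0
```

After this, the full 0-1 range is available to the detection algorithm regardless of how narrow the original pixel range was.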

Hi again,

The problem seems to persist even when trying different combinations of filters, cropping the image, and so on. Could it be due to our calibration of the EPID?

I have tried to plot an image like the one shown for checking streak artifacts in the EPID image, though I am having trouble fetching the right information. Can you help with that?

Thanks for everything.

On Friday, 22 January 2021 at 03:02:50 UTC+1, jker...@gmail.com wrote:

I have discussed the matter with one of my colleagues. The issue seems to be mostly situated at the first and last leaf pairs. Since we collimate the field with the jaws, there is a penumbra effect at the outermost leaves, so they are harder to detect than those outside the penumbra region. Because of this we are satisfied with detecting only 58 leaf pairs, which was achievable thanks to your advice.

We were not satisfied with the MyVarian PF because it collimates down to a 10 cm by 10 cm field with the jaws. Thus, we miss the large MLC leaves in the PF test with our Varian 120M MLC. This has been fixed in our department, though I had the urge to inform people about it.

I have a separate question: is there a way to extend the ROIs in the VMAT module for both DRGS and DRMLC in order to include all MLC leaves, and to change the analysis orientation to be transverse to the gantry?

On Wednesday, 3 February 2021 at 10:13:35 UTC+1, Christian Storgaard wrote:

Re: leaf pairs. The picket fence module is being worked on in this upcoming release to handle arbitrary MLCs. As a side effect, handling those leaves at the very end was something I had to deal with. The upcoming release will have a parameter to threshold these edge leaves so you can include or exclude them from the calculation. Stay tuned.

Re: VMAT. Currently, the values are buried. You can override them with the following. However, this is a good feature request and I’ll add it to the list so the user can pass in a size to the analyze method.

import pylinac
from pylinac.vmat import Segment

Segment._nominal_width_mm = 15
Segment._nominal_height_mm = 150

# use as normal
vm = pylinac.DRGS.from_demo_images()
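For anyone wondering why this works: assigning to an attribute on the class itself changes the value that every instance reads afterwards. A toy sketch with a dummy Segment class (not pylinac's real one) illustrates the pattern:

```python
# dummy class, for illustration only; pylinac's Segment has more to it
class Segment:
    _nominal_width_mm = 5
    _nominal_height_mm = 100

# overriding on the class changes what all (future) instances see
Segment._nominal_width_mm = 15
Segment._nominal_height_mm = 150

s = Segment()
print(s._nominal_width_mm, s._nominal_height_mm)  # 15 150
```

This is why the override must happen before the analysis constructs its segments.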

That sounds excellent! In conjunction with extending the ROIs, it is only really helpful if there is also an easy way to change the DRGS and DRMLC analysis orientation to be rotated 90 degrees relative to the current orientation (easy/intermediate for me at least; my Python skills are only rudimentary). This is because an EPID image that includes all MLC leaves is best acquired with the segments transverse to the gantry in order to get the best image. Currently the analysis only supports image segments parallel to the gantry.

I have a follow-up on this request and my other question in the forum. I can't seem to get the import of Image from pylinac.core.image to work. A user suggested using Pillow, but I can't get that to work either. I want to rotate the DRGS open and DMLC images by 90 degrees in order to get the analysis to work. My current .py is

from pylinac import DRGS, DRMLC

from urllib.request import urlretrieve

import matplotlib.pyplot as plt
import numpy as np

from pylinac.core.image import Image

open_img = r"K:.…\Pylinac\MLC QA\test\GantrySpeedDoseRate\21-02-04_11-38-05/RI."
dmlc_img = r"K:.…\Pylinac\MLC QA\test\GantrySpeedDoseRate\RI."


mydrgs = DRGS(image_paths=(open_img, dmlc_img))

On Tuesday, 9 February 2021 at 02:57:15 UTC+1, jker...@gmail.com wrote:

Your snippet above is performing operations on the string, not the image. This also wouldn't change the analysis ROIs, just your images. This isn't something pylinac supports currently, but you could manually override the segment center position calculation method (pylinac/vmat.py at master · jrkerns/pylinac · GitHub). It would look something like this:

from pylinac.core.geometry import Point
from pylinac.vmat import DRGS

class MyDRGS(DRGS):

    def _calculate_segment_centers(self):
        # either manually or algorithmically determine the segment center positions
        # you must return a list of Points
        return [Point(x=200, y=300), Point(300, 300), Point(400, 300)]

# use the overridden class like vmat
myvmat = MyDRGS(…)

Unrelated, but if you want to use generic images it’s easiest to use the pylinac.core.image.load function which takes a file path and loads the image. Pillow is for “normal” images like TIFF, JPG. Pylinac uses PyDicom under the hood for most kinds of images.
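For the rotation itself, the operation belongs on the pixel array, not the path string. A plain NumPy sketch (np.rot90 is generic, not pylinac-specific):

```python
import numpy as np

img = np.arange(6).reshape(2, 3)  # stand-in for a 2x3 EPID pixel array
rotated = np.rot90(img)           # rotate 90 degrees counter-clockwise
print(img.shape, rotated.shape)   # (2, 3) (3, 2)
```

Whatever loads the image, the array it yields can be rotated this way before analysis.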

This worked great! I tried to change the orientation of the median profiles by changing axis=0 in

def _median_profiles(images) -> Tuple[SingleProfile, SingleProfile]:
    """Return two median profiles from the open and dmlc image. For visual comparison."""
    profile1 = SingleProfile(np.mean(images[0], axis=0))
    profile2 = SingleProfile(np.mean(images[1], axis=0))

to axis=1; however, it did not do what I expected.
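The axis argument is a common stumbling block: for a 2-D array indexed arr[row, col], axis=0 collapses the rows (one value per column, i.e. a profile across the columns) and axis=1 collapses the columns (one value per row). A tiny NumPy check:

```python
import numpy as np

img = np.arange(6).reshape(2, 3)     # [[0, 1, 2], [3, 4, 5]]
print(np.mean(img, axis=0))          # [1.5 2.5 3.5] -> length 3 (one per column)
print(np.mean(img, axis=1))          # [1. 4.]       -> length 2 (one per row)
```

So switching axis=0 to axis=1 changes both the direction and the length of the resulting profile, which may explain the unexpected plot.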

I can live with that, the data from the analysis seems to be consistent.

Thanks for the support, it is much appreciated.

On Tuesday, 9 February 2021 at 14:49:18 UTC+1, jker...@gmail.com wrote: