V2.3.0 Release

Hi all,
I just released pylinac v2.3.0 on PyPI. There are significant changes to the planar imaging and Winston-Lutz modules, as well as several bug fixes. There are still some outstanding bugs I'm working on, but I wanted to get a new version out with the major updates. Thanks to all of you who submitted files and bug fixes; it really does help! You can read all the changes in the changelog as usual: https://pylinac.readthedocs.io/en/latest/changelog.html#v-2-3-0

Hi James,

Thanks for your outstanding work!

I updated/tuned your scripts for some tests in proton therapy, and I would like to validate them using synthetic images with a known diameter (starshot) or a known distance between the field and BB centers (WL).

Do you have any suggestions on how to generate those images?

Thanks in advance!

Andrea


Hi Andrea,
Awesome, keep me updated!

Generating good fake data has been a goal of mine for a while, but I had enough real data to work with that I didn't give it high priority. For "good enough" work, i.e. <1 mm accuracy, real data is just fine. To get tighter accuracy, or to test something without good knowledge of the underlying accuracy, generating fake data is definitely a good idea.

So, to answer your question directly, I don't have any suggestions that I've actually implemented. However, if I were to do it, I would approach it with a few ideas in mind (a minimal sketch of the first idea follows the list):

1) Start with binary data if possible. E.g. fill a 2D numpy array with a square of ones to simulate an exposed field. From there you can convert to float and apply a gaussian filter to simulate penumbra.
2) Keep the algorithm simple. You could build an extensive algorithm that adds structures of various sizes, but if it's too complicated you could introduce an error of your own. Don't let the solution become the problem; the more code you write, the more bugs you introduce.
3) Evaluate whether generating the data ad hoc is easier than writing an algorithm. E.g. while you could write a general function/algorithm to add a square of "radiation", it may simply be easier to write a one-off script and be done with it. Often this is the correct solution; it's just not as appealing.
4) Make sure the output data can be easily validated. You definitely don't want to get into a situation where the image has numerous intricacies and/or filters and cannot be easily interpreted. There's usually a tradeoff between interpretability and realism; where possible, err on the side of interpretability.
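Here's a minimal sketch of idea 1, assuming scipy is available; the image and field sizes are arbitrary placeholders I picked for illustration, not anything pylinac requires:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Binary "ground truth": zeros everywhere, ones inside the exposed field.
image = np.zeros((1000, 1000))
image[350:650, 350:650] = 1  # a 300x300 px field, centered exactly at (500, 500)

# Convert to float and blur the edges to simulate penumbra.
image = gaussian_filter(image.astype(float), sigma=3)
```

Because the field is placed analytically, its center, edges, and size are known exactly, which is what makes the validation trustworthy.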

There are some specific algorithm details I'd probably think about, like numpy masks, but I'll keep my mouth shut until I have some experience. I do very much like your idea though; it would be useful as a different validation step. If you end up creating something, let us know!
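For what it's worth, here's how the mask idea might look for a WL-style image; this is my own illustrative sketch, not anything in pylinac. The trick is to stamp a BB at a known offset from the field center so the field-to-BB distance is known by construction:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.zeros((1000, 1000))
image[400:600, 400:600] = 1.0  # open 200x200 px field centered at (500, 500)

# Boolean mask for a BB of radius 8 px, offset +5 px in x from the field center.
yy, xx = np.ogrid[:1000, :1000]
bb = (yy - 500) ** 2 + (xx - 505) ** 2 <= 8 ** 2
image[bb] = 0.3  # attenuated signal behind the BB

image = gaussian_filter(image, sigma=2)  # soften edges for mild realism
# Ground truth: the field-to-BB distance is exactly 5 px by construction.
```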

Of course I will; I'll start working on it next week!

I have an additional question about the starshot script: I think the parameter that affects the result the most is the radius; can you confirm that?

Is there any value you suggest, such as 0.5?

Thanks again!

Andrea

Unfortunately, there's some uncertainty there. For some reason, the result doesn't completely converge as you adjust the parameters; e.g. you'd think the largest radius would be best, but it doesn't appear that way. It's been 1) hard to debug the minimization algorithm and 2) good enough for most use cases, so I left it alone. Yes, it seems the radius changes the result the most. I don't have a recommendation beyond not super close in and not super far out, so 0.5 sounds good =)
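If it helps, an easy way to get a feel for the sensitivity is to sweep the radius on the same image and compare; a rough sketch, assuming the Starshot.analyze(radius=...) signature from the pylinac docs and a placeholder filename:

```python
from pylinac import Starshot

# Sweep the radius on one image to see how much the result moves.
for radius in (0.3, 0.5, 0.7, 0.9):
    star = Starshot("mystar.tif")  # placeholder path to your starshot image
    star.analyze(radius=radius)
    print(f"radius={radius}: {star.results()}")
```

On a synthetic starshot with a known intersection, the spread across radii would directly measure how much the parameter matters.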