I am a biomedical engineer working in a radiotherapy department in Sweden and I am very interested in machine learning.
I noticed that Pylinac has some kind of machine learning implementation, for example for planar images.
As I have just started to read about machine learning, I'm curious about what the training images should look like. Do the images contained in the training set have to be very different?
If I, for example, want the classifier to recognize a diamond-shaped MLC field image, how would I prepare a training set? Should I vary certain parameters (MU?) when taking the image, should I take the same image on different machines, or
should I slightly change the shape of the diamond between the images by, for example, moving one leaf a few mm? And how many images do I need for the classifier to work?
Hi Michael,
I’m no machine learning expert so my advice is more pragmatic than academic. All your questions are good ones.
Machine learning is great, and if you're wanting to get some experience with it then absolutely go for it, but it's not always the best solution, especially for smaller datasets and very similar images. E.g. many experiments in the book Machine Learning in Radiation Oncology are an absolute joke. I would try making a procedural algorithm first to see how it fares. But let's say you really want to go with machine learning as a learning opportunity.
Generally, you want as many images as possible, but the horribly vague answer is: as many as it takes to achieve the accuracy you want. If you have fewer than ~30 images I'd definitely not use machine learning. 100+ is the minimum I'd use and consider anything close to trustworthy.
Features to vary will be whatever you think may be different between images. E.g. if the MU will always be the same then perhaps investigate other features. Possible other issues: Will this be used for multiple machines? Will the field size vary? Will different MLC sets be used? Are the imager panels the same size and do they have similar responses? Anything you said yes to will need some training images with those variations. If you're low on images, consider augmenting the training data by artificially adding variation. E.g. you could add salt & pepper noise and make 10 training images from 1 image, but you have to be sure that salt & pepper noise is a real possibility for your images.
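A minimal sketch of that salt & pepper idea, assuming your images are 2D numpy arrays (the `add_salt_and_pepper` helper and the 1% noise fraction are just illustrative, not anything from pylinac):

```python
import numpy as np

def add_salt_and_pepper(image, fraction=0.01, rng=None):
    """Return a copy of a 2D image with ~`fraction` of pixels set to the
    minimum (pepper) or maximum (salt) value."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.copy()
    n = int(fraction * image.size)
    # pick random flat indices for salt and pepper separately
    noisy.flat[rng.integers(0, image.size, n)] = image.max()
    noisy.flat[rng.integers(0, image.size, n)] = image.min()
    return noisy

# turn one image into 10 noisy training images
original = np.random.random((768, 1024))  # stand-in for a real EPID/planar image
augmented = [add_salt_and_pepper(original) for _ in range(10)]
```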
The other piece of the puzzle is the negative images. What other images will the model be looking at? You’ll want to include lots of those so the classifier can know what isn’t a positive image.
Start with a simple classifier and if you aren’t getting the results you need then go more complex. Lots of data can be classified with a random forest or SVM. If your negative images are all over the place in terms of shapes and sizes and your positive images are very similar you might also consider using an outlier detection classifier instead.
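As a rough sketch of that route with scikit-learn, assuming flattened pixels as features and 1/0 labels for positive/negative images (the random arrays only stand in for real data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import OneClassSVM

# stand-in data: replace with your real images, each flattened to a feature vector
rng = np.random.default_rng(0)
positive_images = [rng.random((64, 64)) for _ in range(50)]   # diamond-field images
negative_images = [rng.random((64, 64)) for _ in range(200)]  # everything else

X = np.vstack([img.ravel() for img in positive_images + negative_images])
y = np.array([1] * len(positive_images) + [0] * len(negative_images))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# if the negatives vary wildly and the positives are very similar, a one-class
# (outlier/novelty) model trained on positives only is another option
novelty = OneClassSVM(gamma="scale").fit(X_train[y_train == 1])
```

The one-class model never sees the negatives; anything that doesn't look like the positives gets flagged as an outlier.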
Finally, I learned after many failures that properly preparing, scaling, and normalizing your training data is critical. Become familiar with this module if you’ll be using Python.
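A minimal sketch of the scaling idea, assuming `sklearn.preprocessing` and placeholder array shapes:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

X_train = np.random.random((100, 4096))  # stand-in for flattened training images
X_test = np.random.random((25, 4096))

# fit the scaler on the training data only, then apply the same transform to new data
scaler = StandardScaler()                 # zero mean, unit variance per feature
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# MinMaxScaler is an alternative if you'd rather keep features in [0, 1]
X_train_01 = MinMaxScaler().fit_transform(X_train)
```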
Thanks for the reply and the great discussion. I agree that creating and choosing the training data is very tricky and crucial for the classification result.
I thought of beginning with a set of augmented data, for example rotation, contrast enhancement and salt and pepper noise. I noticed that you have an array of labels and images for your single image classifier. I guess that one is used for all the planar images? How did you prepare your training data? Did you use a similar approach?
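A rough sketch of what I have in mind for the rotation and contrast part, with parallel image/label arrays (the function, the angles and the "diamond" label are just illustrative):

```python
import numpy as np
from scipy import ndimage

def rotations_and_contrast(image, angles=(90, 180, 270)):
    """Yield rotated and contrast-stretched variants of a single 2D image."""
    for angle in angles:
        yield ndimage.rotate(image, angle, reshape=False)
    # simple contrast enhancement: stretch the 2nd-98th percentile to [0, 1]
    lo, hi = np.percentile(image, (2, 98))
    yield np.clip((image - lo) / (hi - lo), 0.0, 1.0)

original = np.random.random((256, 256))     # stand-in for one planar image
images = list(rotations_and_contrast(original))
labels = ["diamond"] * len(images)          # parallel array of labels
```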
I had the advantage of having images from several machines. The biggest difference I noticed came from different clinics: each clinic will usually do things roughly the same way internally, but different clinics do them differently. So I ended up with essentially several batches of data, with each batch looking similar within itself. That actually proved difficult for the classifier: it could classify one clinic well, but my project is supposed to be generic, so that's a problem. If you don't have multiple clinics you're in good shape. I also did not have the ability to use a NN at that point in time. If no one else is going to use your package/data, then NNs are the way to go for image data if your accuracy with a simpler classifier isn't good enough.
I have a couple of classifiers, the main reason being that they are tuned for different things, but yes the single image classifier is used for identifying any planar images.
Yes, I can imagine that it is easier to build a classifier when the images have been taken in a similar way.
I fed my training data into the Inception NN and retrained only the last layer. I managed to differentiate all of the trained planar images and I was amazed at how powerful this tool is.
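A rough Keras-style sketch of that kind of last-layer retraining could look like the following; the class count, directory layout and number of epochs are placeholders rather than my exact setup:

```python
import tensorflow as tf

# InceptionV3 pretrained on ImageNet, without its classification head
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(299, 299, 3))
base.trainable = False                                   # freeze everything except the new head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # Inception expects inputs in [-1, 1]
    base,
    tf.keras.layers.Dense(4, activation="softmax"),      # 4 classes is a placeholder
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# images arranged one folder per class, e.g. train_dir/diamond/, train_dir/open/, ...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "train_dir", image_size=(299, 299), batch_size=16)
model.fit(train_ds, epochs=5)
```

Only the new dense layer is trained; the rest of the network just acts as a fixed feature extractor.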
Thanks for all your advice! I will keep you updated on the progress of my project!