I have continued to play with this and have improved the quality of my results. I have made some changes to the initial processing, and there are still lots of possibilities I need to experiment with.
- Color space. I stuck with HSV. I ran an experiment with YCbCr and was not impressed by the results. While I am currently using all three HSV dimensions, I still think I can do without one. I also read a paper yesterday that suggested combining dimensions from different color spaces. This could be something that increases the accuracy (I have to look into it).
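The "mixed color space" idea could be sketched as a per-pixel feature vector that takes some channels from HSV and some chroma terms from YCbCr. This is only an illustration of the concept, assuming BT.601 chroma coefficients; the specific channel combination here is my own guess, not the one from the paper:

```python
import colorsys

def mixed_features(r, g, b):
    """Build a per-pixel feature vector mixing two color spaces:
    H and S from HSV, plus Cb/Cr chroma from YCbCr.
    The channel choice is illustrative, not from the paper."""
    rn, gn, bn = r / 255.0, g / 255.0, b / 255.0
    h, s, v = colorsys.rgb_to_hsv(rn, gn, bn)
    # ITU-R BT.601 chroma components, offset so they sit in [0, 1]
    cb = 0.5 - 0.168736 * rn - 0.331264 * gn + 0.5 * bn
    cr = 0.5 + 0.5 * rn - 0.418688 * gn - 0.081312 * bn
    return [h, s, cb, cr]

# A saturated yellow pixel (a typical flower color)
print(mixed_features(250, 220, 40))
```

Dropping V while keeping the chroma channels might also give some robustness to lighting changes, which is worth testing on the same annotated set.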
- Negative annotations. I re-annotated the pictures to add negative annotations, marking sections with dry leaves, dung, dirt patches and artificial plot markers. As seen in the picture, the re-annotated version (left) is less noisy than the original (right). For the re-annotated picture I made sure that the flowers still appear as black blobs. This means that many non-flower pixels that were previously classified as flower pixels are now correctly classified as non-flower pixels.
- Neural Network. I am training the neural network with two classes of inputs: flower and non-flower. I observed that giving the neural network all the flower sub-types as one class gives good results.
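The two-class setup can be sketched as a single logistic unit trained on per-pixel HSV features. This is a minimal stand-in, assuming made-up synthetic pixel clusters in place of the real annotated data, and one unit instead of the actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: flower pixels cluster at high saturation/value,
# non-flower pixels at low saturation. Real data would be annotated HSV pixels.
flowers = rng.normal([0.15, 0.8, 0.9], 0.05, size=(500, 3))
background = rng.normal([0.30, 0.2, 0.4], 0.10, size=(500, 3))
X = np.vstack([flowers, background])
y = np.concatenate([np.ones(500), np.zeros(500)])

# One logistic unit as a sketch; the real network would have hidden layers.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid activation
    grad = p - y                            # gradient of the log-loss
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

acc = (((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Merging all flower sub-types into one positive class keeps the output layer simple; splitting them out later would only require widening the output, not changing the features.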
I have a lot of non-flower pixels: approximately 12×10⁶, compared to 0.9×10⁶ flower pixels (including all the sub-types). To keep the training time to a minimum I chose approximately 0.9×10⁶ non-flower pixels. I did not do a good job of choosing them, though, as I don't really know what type of non-flower they are. They could be any combination of the negative annotations and are probably biased towards the most common non-flower annotation (artificial markers).
I want to solve this by doing a better job of gathering the non-flower pixels. Instead of choosing my initial batch of non-flower pixels only from images that have no flowers at all, I will now choose them from all the pictures. The reasoning is that the background in situations where there are no flowers differs from situations where there are.
I'm also slightly modifying the negative annotations. This time around I want to distinguish between the types of non-flowers and make sure that I include all of them in the additional training set.
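One way to guarantee every negative type is represented is to sample an equal share from each annotation type instead of drawing from the pooled negatives. A minimal sketch, with hypothetical pixel pools and made-up counts that mimic the bias toward artificial markers:

```python
import random

random.seed(42)

# Hypothetical pools of negative pixel indices keyed by annotation type;
# the counts are invented to mimic the marker-heavy imbalance.
negatives = {
    "dry_leaves": list(range(3_000)),
    "dung": list(range(1_000)),
    "dirt": list(range(2_000)),
    "markers": list(range(8_000)),
}

def stratified_sample(pools, total):
    """Draw an equal share from every negative type so no single
    type (e.g. artificial markers) dominates the training set."""
    per_type = total // len(pools)
    sample = []
    for pixels in pools.values():
        sample.extend(random.sample(pixels, min(per_type, len(pixels))))
    return sample

subset = stratified_sample(negatives, 4_000)
print(len(subset))
```

If one type has fewer pixels than its share (as `dung` nearly does here), the shortfall could be redistributed across the remaining types; the sketch simply takes what is available.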