A lot of the assumptions in this paper make me feel that this is not the direction where we should be headed.
- They assume that every picture they analyse contains a flower; for them, someone has already classified the pictures as flower-containing. This makes me wary of the technique, since I don’t know how it behaves when that classification has not been done.
- When describing their flower shape model they are careful to specify that it is a good approximation given “reasonable viewpoints” and “provided the deformation of the flower is not excessive”. I am not sure this applies to our data; our flowers change position drastically.
- The paper describes the types of flowers to which the method is applicable, and is explicit that the flowers need rotational symmetry. It does mention instances where the segmentation worked on images of flowers without rotational symmetry, but the authors characterised this as surprising.
- They took their data from the Oxford Flower Dataset but omitted flowers which were very small or too sub-sampled. This is one of the primary reasons why I think this methodology does not fit our problem.
- I believe their flower shape model does not fit our flowers (not the Salix Arctica, anyway). It models the shape of each petal relative to a flower centre, at which all petal vertices should meet. This works for flowers which are rotationally symmetric, but it might not be a good model for elongated flowers like the Salix Arctica. When imaged from above, the Salix Arctica might be segmented correctly; but when imaged from the side, it presents an elongated shape that is not rotationally symmetric and might “confuse” the shape model.
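To make the side-view concern concrete, one quick sanity check we could run on our own segmented masks is to measure how elongated each blob is: a roughly rotationally symmetric flower has a principal-axis ratio near 1, while a side-view Salix Arctica catkin would score much higher. This is just an illustrative sketch (not from the paper), using PCA of the mask pixel coordinates:

```python
import numpy as np

def elongation(mask):
    """Ratio of the principal-axis spreads of a binary mask.
    ~1 means roughly rotationally symmetric; >>1 means elongated."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    # Eigenvalues of the 2x2 covariance give the variance along
    # the two principal axes (ascending order from eigvalsh).
    evals = np.linalg.eigvalsh(np.cov(pts.T))
    return float(np.sqrt(evals[-1] / evals[0]))

# Toy masks: a disc (top-view-like) vs. a thin bar (side-view-like).
yy, xx = np.mgrid[0:64, 0:64]
disc = (xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2
bar = np.zeros((64, 64), dtype=bool)
bar[28:36, 4:60] = True

print(elongation(disc))  # close to 1
print(elongation(bar))   # much larger than 1
```

If our side-view images consistently score well above 1 on a measure like this, that would support the argument that a rotationally symmetric shape prior is the wrong fit for our data.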
I’ve read a couple of papers on flower detection, and it seems that the general assumptions these methods make about flower shape may render them unusable for us.