Just wanted to jot this down before I forget it…
I have been working under the assumption that to make a “good” vision algorithm, I should first find an object that is “easy” to detect and then build additional elements on top of it. Following this approach, I have been using the chessboard as my initial “easy object” and adding things on top of it. The additions were straightforward because I had something I could trust: the positions of the chessboard corners. From those corner positions, I was able to locate other important elements, like a square that contained a color sample.
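The core of the idea can be sketched in a few lines. This is a toy illustration, not my actual pipeline: a solid dark square stands in for the chessboard as the “easy object”, and all sizes and offsets are invented. The point is just that once the trusted anchor is found, the color-sample region comes for free from a known offset instead of its own detection step.

```python
import numpy as np

def find_dark_square(img, thresh=64):
    """Return (row, col) of the top-left corner of the dark anchor blob."""
    ys, xs = np.nonzero(img < thresh)
    return ys.min(), xs.min()

# Synthetic 100x100 grayscale image: white background, a 20x20 dark
# anchor square at (10, 10), and a mid-gray sample patch 40 px to its right.
img = np.full((100, 100), 255, dtype=np.uint8)
img[10:30, 10:30] = 0          # the easy-to-find anchor
img[10:30, 50:70] = 128        # the patch we actually care about

r, c = find_dark_square(img)
# The sample patch sits at a known offset from the trusted anchor,
# so we never have to search for it directly.
sample = img[r:r + 20, c + 40:c + 60]
print(r, c, int(sample.mean()))   # 10 10 128
```

In the real setup the anchor detection would be something like OpenCV’s `cv2.findChessboardCorners`, but the hand-off is the same: detect the robust thing, then derive everything else from its geometry.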
What if we extrapolated this way of thinking directly to the detection of elements inside a plot? What if we took an “easy” object (like the chessboard) and used it to locate and sample the organisms of interest in the plot? Wouldn’t that increase the success rate of the detection algorithm?
The algorithm would first find the easy constructs (like a square). These would be very robust against things like lighting differences and perspective warps. The initial construct would contain an element of interest (like a flower). This element of interest would be “easily” separated from the construct because we fully know and understand the construct itself. That would let the algorithm sample things like color and shape. The sampling would be done per image instead of trying to generalize across all images. Remember that images are highly variable, not only in terms of what is being imaged, but in the imaging process itself.
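The per-image sampling step is the part that buys robustness, and a small synthetic experiment shows why. Everything below is made up for the illustration: two “images” of the same scene under different lighting, a reference flower at a known position inside the construct, and a second flower elsewhere in the plot. A reference color sampled per image keeps finding the flowers; a fixed global reference breaks as soon as the lighting shifts.

```python
import numpy as np

base = np.full((40, 40), 200.0)       # background
base[2:6, 2:6] = 100.0                # reference flower inside the construct
base[20:30, 20:30] = 100.0            # another flower elsewhere in the plot

for shift in (0.0, -60.0):            # same scene, two lighting conditions
    img = base + shift
    ref = img[2:6, 2:6].mean()        # color sampled per image, from the construct
    per_image = np.abs(img - ref) <= 30
    fixed = np.abs(img - 100.0) <= 30  # one global reference for all images
    print(int(per_image.sum()), int(fixed.sum()))
# prints:
# 116 116
# 116 0
```

The per-image reference absorbs the 60-level brightness shift because it moves with the image; the global reference was only ever valid for the lighting it was tuned on.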