General setup: They used eCognition for image analysis; a Kodak DC4800 camera (3.1 megapixels); 2-square-meter plots; the camera was mounted 1.5 meters above the ground at a 90-degree angle (as opposed to the 45-degree angle in the digital repeat photography paper); automatic settings were used for lighting and shutter speed; the analysis took 14 minutes per picture (but they did not specify the size of the images); they used JPEG as the image format.
- The image processing part is explained at a high level. Image analysis software called eCognition was used for the image processing tasks. The process involved two steps: 1. image segmentation and 2. image classification. The segmentation was based on scale (the level of heterogeneity of the resulting segments), shape (smoothness, compactness), and colour, and they tweaked the relevant parameters for the segmentation. The classification was done by “fuzzy logic”. The lack of detail is probably because they used third-party software.
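Since eCognition is proprietary and the paper gives no internals, here is only a rough stand-in for the two-step pipeline: SLIC superpixel segmentation from scikit-image (whose `compactness` and `n_segments` parameters loosely mirror eCognition's shape and scale settings), followed by a crude per-segment colour rule in place of the fuzzy-logic classification. The synthetic image, thresholds, and class names are all invented for illustration:

```python
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2hsv

# Synthetic 64x64 RGB image: greenish "grass" background with a yellow patch.
rng = np.random.default_rng(0)
img = np.zeros((64, 64, 3))
img[..., 1] = 0.5 + 0.1 * rng.random((64, 64))   # green channel only
img[16:48, 16:48] = [0.9, 0.9, 0.2]              # yellow "flower" patch

# Step 1: segmentation. `n_segments` controls scale, `compactness` the
# shape regularity of the resulting segments.
segments = slic(img, n_segments=50, compactness=10, start_label=0)

# Step 2: per-segment classification by mean colour (a crude stand-in
# for rule-based fuzzy classification).
hsv = rgb2hsv(img)
labels = {}
for seg_id in np.unique(segments):
    mask = segments == seg_id
    mean_hue = hsv[..., 0][mask].mean()
    mean_sat = hsv[..., 1][mask].mean()
    # hue near 1/6 (yellow) with high saturation -> "flower", else "grass"
    is_flower = abs(mean_hue - 1 / 6) < 0.05 and mean_sat > 0.5
    labels[seg_id] = "flower" if is_flower else "grass"

print(sum(v == "flower" for v in labels.values()), "flower segments")
```

The two steps stay decoupled, so the segmentation scale can be tuned independently of the classification rules, which seems to be what the paper's parameter tweaking amounted to.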
- Though the specific image recognition tasks are not thoroughly described, the paper sets a precedent for segmenting an image to find the relative types of ground cover. I think we can use a similar technique to reduce the search space in our solution: run a segmentation algorithm that separates out the regions of the image where flowers have a higher probability of being found.
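A minimal sketch of the search-space-reduction idea, using a simple colour threshold as the segmentation step; the synthetic image and the threshold values are made up:

```python
import numpy as np

# Keep only pixels whose colour is "flower-like" (here: red channel
# dominating); an expensive detector would then only need to run inside
# the surviving regions.
rng = np.random.default_rng(1)
img = rng.random((100, 100, 3)) * [0.3, 0.6, 0.3]  # mostly green scene
img[40:50, 40:50] = [0.9, 0.2, 0.2]                # red "flower" cluster

candidate = (img[..., 0] > 0.6) & (img[..., 0] > 2 * img[..., 1])
frac = candidate.mean()
print(f"search space reduced to {frac:.1%} of the image")
# -> search space reduced to 1.0% of the image
```

Even a coarse mask like this would let us skip the vast majority of pixels before any finer-grained flower detection runs.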
- They were able to classify specific objects (blades of grass), but no information is given as to how this was achieved.
- They use the Kappa Index of Agreement (Cohen's kappa) to assess classification error. I have not seen this type of error estimation in machine learning (have to look into this).
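For reference, the Kappa Index of Agreement is Cohen's kappa computed over a classification confusion matrix: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e the agreement expected by chance from the row/column marginals. A minimal sketch (the confusion matrix values are invented):

```python
import numpy as np

def kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows: truth, cols: predicted)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n                               # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Example: classifier vs. ground truth over two cover classes.
cm = [[45, 5],
      [10, 40]]
print(round(kappa(cm), 3))  # -> 0.7
```

Unlike raw accuracy, kappa discounts agreement that would occur by chance given the class frequencies, which is why it is standard in remote-sensing accuracy assessment.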