Just came back from a wonderful couple of weeks in Zackenberg, where I deployed two experimental plots that used multiple chessboard markers. A different marker was placed at each of the four plot corners, and only one of them carried the plot id. I took a picture of each plot every day for a bit more than a week. Every picture was taken from a slightly different perspective and under different light, depending on the weather. Toward the end of my stay at Zackenberg I was left with a directory of pictures of the same place taken from different angles.
With this list of images, I ran the normalizing algorithm. The result was a set of images that all seem to be taken from the same place, one per day. A coordinate in a normalized image corresponds to a physical location in the plot, and the same coordinate points to the same location in every image (within a certain margin of error). These “normalized” images allowed me to build an annotation workflow that leverages the equivalent coordinates to ease annotation, identification and plant tracking.
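The alignment step can be sketched roughly as follows. This is a minimal illustration, not the actual algorithm I ran: it assumes the four marker centers have already been located in each image (with real chessboard markers, OpenCV's `cv2.findChessboardCorners` and `cv2.findHomography` would handle detection and a more robust fit), and it estimates the homography that maps those centers onto a fixed reference layout, after which the whole image could be warped (e.g. with `cv2.warpPerspective`).

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography H mapping the four src points onto
    the four dst points, via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, found via SVD.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_point(H, p):
    """Apply H to a 2D point (homogeneous coordinates)."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return (x / w, y / w)

# Hypothetical marker centers in one day's photo, and the fixed
# reference positions they should map to in the normalized image.
detected  = [(102.0, 95.0), (880.0, 120.0), (860.0, 790.0), (90.0, 770.0)]
reference = [(0.0, 0.0), (800.0, 0.0), (800.0, 700.0), (0.0, 700.0)]
H = homography(detected, reference)
```

Once every day's image is warped by its own `H`, a pixel coordinate means the same physical spot in all of them, which is exactly what the annotation workflow below relies on.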
I made a video showing an example of the workflow. The initial detection of the flowers is slow and tedious, because one has to zoom in and out to spot each element. But once the initial detection is done, subsequent detection is trivial, as the flower does not move. Moreover, one can begin detecting at any point in time; there is no rule that says one has to begin with the first day. It may be easier to begin with some other day, where the light was more favorable and the flowers are easier to spot, and then move forward or backward in time.
The video also shows the imperfections of the system. In some instances a flower moves a bit too much and ends up outside its enclosing rectangle. There the user must adjust the annotation so the flower is correctly marked.
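One way to organize such annotations, sketched here purely as an assumption about how a tool like this could store its data (not my actual implementation): each flower gets a single bounding box in normalized plot coordinates, which applies to every day by default, plus per-day overrides for the days where the flower drifted outside the box.

```python
from dataclasses import dataclass, field

@dataclass
class FlowerAnnotation:
    """One flower tracked across all days of a plot time series."""
    label: str
    box: tuple                       # (x, y, w, h) in normalized coordinates
    overrides: dict = field(default_factory=dict)  # day -> adjusted box

    def box_for(self, day):
        # The default box is valid on any day; a manual override wins.
        return self.overrides.get(day, self.box)

# Annotate once, on whichever day was easiest (hypothetical values) ...
flower = FlowerAnnotation("salix-1", (120.0, 80.0, 16.0, 16.0))
# ... and adjust only the day where it moved out of the rectangle.
flower.overrides["day-6"] = (123.0, 84.0, 16.0, 16.0)
```

Because detection on one day propagates to all the others, the per-flower cost is a single box plus the occasional correction, rather than a fresh annotation per image.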