I was surprised when it actually did what I thought it was going to do :). Surprised and filled with excitement. I’m talking about my image adjust command. Today was the first time it took a video feed (an AVI file) and adjusted the image according to the orientation of the calibration image (a chessboard).
I coded the command so that two windows appear when you execute it. One displays the normal feed and the other displays the rotated image. When I rotate the chessboard on the axis perpendicular to the camera image plane, the algorithm rotates the whole frame to compensate. The resulting effect is that the chessboard appears to stand still, and the effect is heightened when one compares the two windows side by side.
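The compensation idea can be sketched without the full pipeline. Assuming the pattern's in-plane rotation angle has already been detected (in the real command this would come from finding the chessboard corners, e.g. with OpenCV), the fix is just an affine rotation by the opposite angle about the image center. This is a minimal numpy-only sketch; the function name and interface are hypothetical, not the actual src:

```python
import numpy as np

def compensation_matrix(angle_deg, center):
    """Build a 2x3 affine matrix that rotates by -angle_deg about `center`,
    undoing a detected in-plane rotation of the calibration pattern.
    Analogous in spirit to what cv2.getRotationMatrix2D would produce."""
    a = np.deg2rad(-angle_deg)  # rotate the opposite way to compensate
    c, s = np.cos(a), np.sin(a)
    cx, cy = center
    # rotation about the origin, translated so that `center` stays fixed
    return np.array([[c, -s, cx - c * cx + s * cy],
                     [s,  c, cy - s * cx - c * cy]])

# Example: the pattern is detected rotated 30 degrees about the center.
# A point sitting at 30 degrees from the center should land back on the
# horizontal axis after compensation.
M = compensation_matrix(30.0, center=(0.0, 0.0))
p = np.array([np.cos(np.deg2rad(30.0)), np.sin(np.deg2rad(30.0)), 1.0])
q = M @ p  # approximately (1.0, 0.0)
```

In the actual command the matrix would be handed to a warp routine that remaps the whole frame, which is why the second window appears to hold the chessboard still.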
Watching the video feed do its thing is very exciting. But my objective is not to create a freaky video effect: the point is to apply all this to normalize a set of pictures that contain a calibration image.
My immediate objective, however, is to extend the command so that it also compensates for the distance to the camera image plane. This means that if one gets closer to or moves away from the object, the algorithm should compensate and scale the image. I hope this will be a bit easier now that I have more knowledge about how the whole thing works.
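One simple way to get that scale factor (a sketch under my own assumptions, not necessarily how the command will end up doing it): since the chessboard's true size is fixed, the ratio between how large the pattern appears in a reference frame and how large it appears now tells you how much to rescale. The corner lists below stand in for detected chessboard corners:

```python
import numpy as np

def scale_factor(ref_corners, cur_corners):
    """Estimate how much the pattern has grown or shrunk relative to a
    reference frame, using the mean distance of the detected corners
    from their centroid as a size measure (robust to translation)."""
    def spread(pts):
        pts = np.asarray(pts, dtype=float)
        return np.mean(np.linalg.norm(pts - pts.mean(axis=0), axis=1))
    return spread(ref_corners) / spread(cur_corners)

# Hypothetical corner positions: in the current frame the pattern
# appears half as large as in the reference frame, so the image
# should be scaled up by a factor of 2 to compensate.
ref = [(0, 0), (2, 0), (0, 2), (2, 2)]
cur = [(0, 0), (1, 0), (0, 1), (1, 1)]
s = scale_factor(ref, cur)  # 2.0
```

The resulting factor could then be folded into the same affine matrix used for the rotation compensation, so one warp handles both.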
I also managed to create a github account (easy as pie) and upload the src as it is. You can check it out at http://github.com/Joelgranados/imageadjust/. Comments and patches are welcome :)