Packaging shore-storage-manager 6.0.1-Beta

This is a list of issues that I found while creating the shore-storage-manager-devel package.  Everything I talk about can be found in the srpm at [1].  That srpm was built on my system and it contains all the patches and build calls necessary to build on f13.  I have tested the build process on RHEL and only minor modifications are needed.

Installation directory specification

Generally, to create an rpm package, one tells the build process to install the package into a dummy directory.  After having all the files “installed” in the dummy dir, one can put them where one wants in the system with %files.

When rpmbuild got to the %install stage, it exploded, telling me that it could not access /usr/include because it did not have permissions.  I wondered for some time why passing the DESTDIR argument to `make install` did not specify the destination directory correctly.  I followed it to the Makefile.am in the src directory.  There I had to add the DESTDIR variable to each line that contained the header directory in order for it to work properly.  This way, if one specifies DESTDIR, /usr/include ends up in DESTDIR/usr/include.  The file named shore-storage-manager-6.0.1-Beta-builddir.patch contains what I did to solve this issue.
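The patch boils down to prefixing the hard-coded header paths with $(DESTDIR).  A sketch of the kind of change involved (the actual rule and variable names in shore's Makefile.am differ):

```make
# Before: installs straight into the live system, which rpmbuild cannot write to.
#	$(INSTALL) -m 644 $(HEADERS) $(includedir)
# After: respects the staging root that rpmbuild passes via `make install DESTDIR=...`.
#	$(INSTALL) -m 644 $(HEADERS) $(DESTDIR)$(includedir)
```

When DESTDIR is unset the two lines behave identically, so a plain `make install` still works for people building by hand.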

Do not distribute Makefile nor configure

I noticed that the src tarball [2] contained the Makefiles and the configure file.  These files are system dependent and should not be distributed with the source; they should be created with the autotools commands.  I discovered this while trying to solve the problem above.  When I looked for places where the string “includedir” appeared, ack spat out a bunch of Makefiles, which seemed very strange to me since I had not run ./bootstrap nor ./configure.

The fix for this issue is simple: erase all the Makefiles from all the subdirectories in the sources.  The diff itself looks very big, but the only thing I did was `rm -rfv \`find . | grep Makefile\``.  In addition, we need to make sure to call ./bootstrap and ./configure in the spec file.
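For reference, the same removal can be done with find alone, which avoids accidentally matching paths that merely contain the string Makefile.  A sketch on a scratch tree (the file names here are illustrative, not shore's real layout):

```shell
# Build a throwaway tree that mimics generated Makefiles mixed with real sources.
tmp=$(mktemp -d)
mkdir -p "$tmp/src/fc"
touch "$tmp/Makefile" "$tmp/src/Makefile" "$tmp/src/fc/Makefile" "$tmp/src/fc/atomic.cpp"

# Delete every Makefile / Makefile.in style file under the tree.
find "$tmp" -name 'Makefile*' -delete

ls "$tmp/src/fc"   # only the real source files remain
```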

I’ve talked with the person in charge of the tarball, and she told me that the Makefiles and configure file were in the src because they wanted to avoid problems with the use of autotools.  I think making people’s lives easier is always good, but in this case the tools are being misused.  I have suggested the creation of packages for RHEL and Solaris.  Unfortunately I do not know how to create a Solaris package, so I can’t be much help there.  I’m pretty confident that the spec file contained in [1] can be used to create a package for RHEL.

A casting issue

After getting everything packaged I moved on to installing the package on my system (Lenovo W500, 2 cores), and it began to scream again when I compiled a file that included one of shore’s headers.  I looked at the error message, and it seems that there is an issue when casting from unsigned long long int* to uint64_t*.  I googled it a bit and found nothing similar to my situation.  I decided to do an explicit cast, and that seemed to put my compiler at ease (gcc-c++-4.4.4-10.fc13.x86_64).

Whether the compiler is right or wrong to error out here is one question.  But regardless, shore should be consistent in its use of uint64_t.  IMO, it should replace all the unsigned long long int with uint64_t.  This fix is contained in the file named shore-storage-manager-6.0.1-Beta-uint64.patch.

Clash in the /usr/include directory

On my system the package that contains the glibc headers (glibc-headers-2.12-3.x86_64) includes a file named regex.h.  When I went to install the shore rpm on my system there was a header clash (a file conflict) and the package was not installed.  I chose to solve it in the short and ugly way: I renamed shore’s regex.h file to shore_regex.h.  There were not a lot of changes to be made, but this exposes another issue that should be addressed.  Shore is basically a library, and its contents are header files plus the generated libraries.  There are 185 header files related to shore, and they are all placed in the /usr/include directory by default.  IMO it would be much cleaner to put everything in a directory called shore and change the include lines from “#include <something>” to “#include <shore/something>” (but that’s just me).

I’m guessing that something else could be done in the build that would prevent the clash and, at the same time, be a “cleaner” solution to the problem.  But, as I said, I chose the ugly easy way out :).

Multiple library files

After building shore one ends up with 5 libraries (libatomic_ops.a, libcommon.a, libfc.a, libsm.a, libsthread.a).  These libraries depend on each other; for example, libsthread depends on libfc.  I find it better to have just one generated library called shore (or something more specific).  After talking with the person in charge, I was told that a way to create a single library through a configure or make argument will be included in the next version.

For now I’m building everything with 3 or 4 -l options, one for every library that is needed in a given situation.

[1] http://www.itu.dk/people/jogr/shore/
[2] http://www.cs.wisc.edu/~nhall/shore-mt/releases/shore-storage-manager-6.0.1-Beta.tar.bz2
Posted in Uncategorized

Image adjust with opencv

I was surprised when it actually did what I thought it was going to do :).  Surprised and filled with excitement.  I’m talking about my image adjust command.  Today was the first time it took a video feed (AVI file) and adjusted the image according to the direction of the calibration image (a chessboard).

I coded the command in such a way that two windows appear when you execute it.  One displays the normal feed and the second one displays the rotated image. When I rotate the chessboard image on the axis that is perpendicular to the camera image plane, the algorithm rotates the whole window to compensate.  The resulting effect is that it seems that the chessboard is actually standing still.  The effect is heightened when one compares the two windows.

Watching the video feed doing its thing is very exciting, but my objective is not to create a freaky video effect.  The point is to apply all this to the normalization of a set of pictures that contain a calibration image.

My immediate objective, however, is to code the command in such a way that it will compensate for the distance to the camera image plane.  This means that if one gets closer to or moves away from the object, the algorithm should compensate and scale the image.  I hope that this will be a bit easier now that I have more knowledge about how the whole thing works.

I also managed to create a github account (easy as pie) and upload the src as it is.  You can check it out at [1].  Comments and patches are welcome :)

[1] http://github.com/Joelgranados/imageadjust/
Posted in opencv

Opencv 2.1.0 for fedora 13

After some frustrating hours of debugging, I found out that my code could not read the AVI file because I was using an opencv built without the gstreamer-devel package (among others).  It was a bit of a pain to track down because when I executed the command it just stopped; it did not return any error or warning message.  I had to uninstall the opencv that I had built, install the one offered by fedora 13, get the error code, and then correct my opencv package.  I have now created a new package containing opencv 2.1.0 for fedora 13 [1].

The difference from the original fedora 13 package is that mine forces the use of the following devel packages:

ffmpeg-devel >= 0.4.9
gstreamer-devel
xine-lib-devel
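In spec-file terms, that amounts to something like the following BuildRequires lines (a sketch; the exact lines live in the spec inside the srpm at [1]):

```spec
BuildRequires: ffmpeg-devel >= 0.4.9
BuildRequires: gstreamer-devel
BuildRequires: xine-lib-devel
```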

Any changes to my spec file or patches are greatly appreciated.  I have not seen this pop up in the development branch for fedora; I will keep my eyes open to see what they come up with.

[1] http://www.itu.dk/~jogr/opencv/opencv-2.1.0-2.fc13.src.rpm
Posted in opencv

Convert 3gp to AVI

I took some video with my phone (HTC Hero) and it ended up being encoded in 3gp.  When I went to use it with my opencv code: BUM!!!! It exploded.  So I chose to convert from 3gp to AVI, which I believe opencv can handle.  I found this nifty command:

ffmpeg -i clip.3gp -f avi -vcodec xvid -acodec mp3 -ar 22050  file.avi

Since I didn’t really care for sound I shortened it to:

ffmpeg -i clip.3gp -f avi -ar 22050  file.avi

It gave me an AVI file that mplayer could read; I’ll see if opencv can handle it.  Many thanks to [1], whose post showed me how to do it.

[1] http://goinggnu.wordpress.com/2007/02/13/convert-avi-to-3gp-using-ffmpeg/
Posted in commands

Fiddling with shore-storage-manager 6.0.1-Beta

Among the things I’m doing ATM, I have been meddling with shore-storage-manager [1].  I took the tar.bz2 file [2] and ran `./bootstrap ; ./configure ; make ; make check`.

Some test script strangeness

After running the last command in that list I encountered an error related to the calls to some test binaries in src/fc/test.  After some debugging I discovered that the script was calling the test binaries without giving a relative or full path.  I also noticed that the script was being run by ksh (caveat: I have never used ksh).  I don’t know whether ksh automatically looks for executables in the local directory, but that way of calling the test binaries seemed strange to me.  When I added “./” to force ksh to look in the local dir, everything kept working.
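The underlying behavior is easy to demonstrate: a bare command name is resolved through $PATH, not the current directory.  A quick sketch (the script name is made up, and PATH is pinned so the demo is deterministic):

```shell
tmp=$(mktemp -d)
cd "$tmp"
printf '#!/bin/sh\necho ok\n' > runtest
chmod +x runtest

# Bare name: looked up only in $PATH, so it is not found here.
(PATH=/usr/bin:/bin runtest) 2>/dev/null || echo "not found without ./"

# Explicit path: runs regardless of $PATH.
./runtest
```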

Stopped at a thread test

The `make check` process continued to execute but stopped at another point down the line.  This time it reported that it was stuck on a test called `thread1`.  It did not error out; it just stayed there.  I opened the file at src/sthread/tests and looked at the source to see if I could find something fishy.  To no avail, I placed wait functions all over the place.  At this point I was quite frustrated, but I had another trick up my sleeve.  We have 2 weeks on the Intel Manycore Testing Lab, which is basically a computer with 63 cores (that’s what `cat /proc/cpuinfo` spit out).  I connected and ran the same `make check` command.  Surprisingly (for me), this got to the `thread1` spot, where my machine had stopped, and continued after a small pause.

Need the tcl-devel package installed

The tests continued until they hit the smsh directory.  Then it started screaming about the lack of headers for tcl.  I did an `rpm -qa | grep tcl-devel` and found out that the Intel machine did not have it installed (FYI, it’s RHEL 5.4).  I quickly found out that I don’t have permission to install anything, so I wrote to Intel.  After some hours (surprisingly fast), Intel installed the tcl-devel package and I could test once more.

Need the right version of tcl-devel

ATM shore requires tcl-devel 8.5.  Unfortunately the machine only had 8.4.  Wanting to avoid writing Intel again, I tried to change the tcl-devel version in the sources.  Once I had changed every tcl8.5 string to tcl8.4, I ran `make check` once more and this time it finished quietly.
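The swap itself is a one-liner.  A sketch of what I did, demonstrated on a scratch file (the real change touched shore’s build files wherever tcl8.5 appeared):

```shell
tmp=$(mktemp -d)
echo 'TCL_LIB = tcl8.5' > "$tmp/config.mk"

# Find every file mentioning tcl8.5 and rewrite the version in place.
grep -rl 'tcl8\.5' "$tmp" | xargs sed -i 's/tcl8\.5/tcl8\.4/g'

cat "$tmp/config.mk"   # → TCL_LIB = tcl8.4
```

Note that `sed -i` here is the GNU form; that is fine on fedora and RHEL.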

[1] http://pages.cs.wisc.edu/~nhall/shore-mt/html/shore-mt.home.nhall.html
[2] http://www.cs.wisc.edu/~nhall/shore-mt/releases/shore-storage-manager-6.0.1-Beta.tar.bz2
Posted in Uncategorized

opencv camera extrinsics

I’m interested in knowing the position of the camera with respect to a known object (calibration object).  This is of interest because I can normalize lots of pictures taken of the same place.  I can normalize them in such a way that the camera position would be similar (to a certain degree) in all of them.  Also, fiddling with opencv is, up until now, cool :)

At the moment I have finished the first phase of this mini-project.  I have managed to calculate the extrinsic camera values from my laptop webcam, and I can calculate the extrinsic values of a known object, specifically a chessboard printout, moving around in front of the webcam.  After calculating the intrinsic values, which I don’t care much about, the algorithm outputs the extrinsic values to stdout.  I can see that movements in pitch, roll and yaw are consistently output to my shell, and I can also see movements along the three directional axes.

I used opencv’s chessboard detection algorithms and its solvePnP and calibrateCamera functions.  The command accepts a list of images or a stream from a camera.  I prefer to use a camera stream for testing, but the final objective is to use it with a list of images.  The gist of the process goes something like this:

  1. Calibrate camera (get intrinsic values): The algorithm detects some points in the chessboard image and relates them to the “real” object points.  By using these two sets of information, the algorithm can calculate the camera distortion information and the camera matrix information [1][2].  The calibration takes 20 images/frames.
  2. Even though I get intrinsic values after the camera calibration, I am only interested in the extrinsic values.  So I use the found intrinsics and pass them to solvePnP to get only the extrinsic values [3].
  3. I output each extrinsic I find.

My next move is to use the re-projection error to improve the intrinsic values calculation.  Hopefully that will increase the accuracy of the calculated extrinsic values.  I also want to put my code in some kind of git repository so I can keep track of it better.

The following links helped me find my way through the math and the coding:

http://www.vision.caltech.edu/bouguetj/calib_doc/  (It has matlab code and an extensive explanation of what is happening under the covers.)
http://www.youtube.com/watch?v=DrXIQfQHFv0 (Cool video that shows what can be done with the extrinsic parameters)
http://www.amazon.co.uk/Learning-OpenCV-Computer-Vision-Library/dp/0596516134/ (chapter 11, on camera models and calibration.  Very good explanation and more code goodies )

I’ll put my code on my ITU page for now (until I get something better on the research group server, or until I put it on github).  Comments and patches are greatly appreciated:

http://www.itu.dk/people/jogr/opencv/imageadjust.tar.gz
[1] http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/EPSRC_SSAZ/node3.html
[2] http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html#calibratecamera2
[3] http://opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html#cv-solvepnp
Posted in opencv

Preparing for git repository publication

I have a bunch of git repositories that I started on my box.  Now I want to put them on a server and access them through ssh.  Here are the commands I used to make that happen:

  1. Make a special clone: `git clone --bare PROJECT project.git`
  2. Copy the bare clone to a directory on the server: `scp -r project.git uname@server:path`
  3. Make sure you have correct permissions.  Use chmod.
  4. Test your setup: `git clone ssh://uname@server/path/project.git`

The first command makes a bare copy containing only the repository data, with no checked-out working tree.  That means you will not house unnecessary files on the server.
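The round trip can be demonstrated locally without a server (all paths here are temporary scratch directories, not my real setup):

```shell
set -e
work=$(mktemp -d)
cd "$work"

# A throwaway repository standing in for PROJECT.
git init -q project
cd project
git config user.email you@example.com
git config user.name "You"
echo hello > README
git add README
git commit -qm "initial commit"
cd ..

# The bare clone holds only the repository data, no working tree.
git clone -q --bare project project.git

# Cloning from the bare copy works just like the ssh:// clone would.
git clone -q project.git checkout
cat checkout/README
```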

Posted in git

GIT, you have done it again. (history filtering)

My Friday post talked about my reasons for splitting my PhD repository into many sub-repositories.  At that moment I did not really know what I was supposed to do, or if git even had a command that could help me.  After walking into my office today and doing a little web searching, I stumbled upon the filter-branch command.  This command can filter stuff out of the git repository history.  That can mean a lot of things, but in my case it allows me to (for each sub-project I want to take out) filter out the rest of the git repository.

My PhD repository was organized so that each directory in the root directory contained a sub-project.  This was very convenient and allowed for a very simple command.  Let’s say that in my root directory I had A, B and C, each representing a sub-project, and I want to separate A.  The command to do this is:

git filter-branch --subdirectory-filter A/ -- --all

This command appeared in the man page for git’s filter-branch.  It filters all the history of the project and keeps whatever you pass to the --subdirectory-filter argument.  The --all argument specifies that all the branches and tags are to be rewritten.

At the end of the command I get all the contents of directory A in my root directory, and I get only the history that pertains to the A directory.  This is awesome!!!  This is just one of the reasons why I stay faithful to git :)  Notice that all my history was already separated by project; that is, I did not have any commits that changed stuff in both directory A and directory B.  I am not sure how git would act in those cases (it would probably do the sensible thing and ask you to edit the commit), but I won’t find out, as I don’t really have that problem.
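The whole separation can be reproduced on a throwaway repository (scratch paths and made-up sub-projects; the FILTER_BRANCH_SQUELCH_WARNING variable silences the deprecation notice that newer git versions print):

```shell
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1

repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email you@example.com
git config user.name "You"

# Two sub-projects side by side, as in my PhD repository.
mkdir A B
echo "sub-project a" > A/notes.txt
git add A && git commit -qm "A: add notes"
echo "sub-project b" > B/notes.txt
git add B && git commit -qm "B: add notes"

# Keep only A's history and rewrite every branch and tag.
git filter-branch --subdirectory-filter A -- --all

ls   # A's contents now sit at the root; B and its commits are gone
```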

Now it’s just a matter of organizing what is left in the root directory and, voila, I have successfully separated my git repository.

Note that I had to do some additional work to make a clean separation.  The git configuration file was left untouched and I had to modify it.  I used the following command to get rid of the remote references:

git remote rm origin

Then I “cleaned” the repository with the following command:

git gc --aggressive --prune=0

After these two additional commands I could continue using the “new” repository without any warnings.

For this post I followed some information posted at http://blog.fealdia.org/2010/02/20/separating-history-of-a-git-repository-subtree/.  Thanks to that rambler for all the good info :)

Posted in git

Dissecting my PhD git repo

I soon realized that the decision to take everything I do on my PhD and put it in one git repository was no good.  Putting unrelated stuff in a single git repository is, in my opinion, a no-no.  In the end it is all related to what I do on my PhD, but it is not all the same project; I have a Matlab annotation application sitting next to documents that specify how to make camera housings.

What convinced me to do something about this was the resulting disparity in the log.  The log ends up being a collection of groups of commits, each group belonging to an unrelated sub-project.  In the end the commits for a given sub-project are scattered all over the log, separated by commits from other sub-projects.  I know I could still work out what I did and when I did it, but the resulting cacophony can be a setback for me and for the people I might be collaborating with.

Another issue is that I might want to keep some things private for a while before I release them.  Though this is possible with my current setup, I think it would be much easier with several git repositories.

Luckily for me, git has something called a “submodule”.  This allows one repository to automatically pull from another repository.  Or, what I will probably end up doing: have a dummy PhD repository that points to the rest of my PhD sub-projects.  The only thing I need to do now is find a way to make the process fully or semi-automatic while not losing the history I have.  I trust git will have a pleasant surprise waiting for me when I do this over the weekend.

Posted in git