The auto exposure adjustment in ufraw

The pictures I receive from the biologists are in raw format.  This is good, as it allows for tweaking before converting the picture to a friendlier format like JPEG or PNG.  I’ve been using a really cool tool to adjust parameters in raw images.  ufraw is a GPL-licensed program that can run standalone or as a GIMP plugin.  It has an extensive list of features that lets you modify pretty much everything in a raw image.

I initially applied a chromatic aberration correction.  I was very happy with the results and was toying with the idea of also doing an automatic exposure adjustment.  But when I did, the number of overexposed pixels increased.  After the adjustment the pictures had better contrast and in general looked sharper, but I was worried that I would be losing some pixel information.

To confirm that the overexposed pixels were increasing, I saved one image with the adjustment and one without.  I then went to Matlab and counted the pixels equal to 255 in each.  The unmodified image had no overexposed pixels, while 4% of the adjusted image was overexposed.

My code in Matlab looked like this:

>> corrected = imread('correctedLHH20110601-0148.png');
>> normal = imread('normalLHH20110601-0148.png');
>> % fraction of saturated values in the adjusted image
>> ( sum(sum(corrected(:,:,1)==255)) + ...
     sum(sum(corrected(:,:,2)==255)) + ...
     sum(sum(corrected(:,:,3)==255)) ) / numel(corrected)
>> % fraction of saturated values in the unmodified image
>> ( sum(sum(normal(:,:,1)==255)) + ...
     sum(sum(normal(:,:,2)==255)) + ...
     sum(sum(normal(:,:,3)==255)) ) / numel(normal)

The results vary from image to image.  But the fact remains that the overexposed pixels increase in some pictures, and that should bring the performance of the classification down (depending on where the overexposed pixels are located).  So I’m going to avoid using this feature for now.
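For anyone without Matlab, the same saturation check can be sketched in Python/NumPy.  The arrays below are synthetic stand-ins for the two PNGs (the real files, and any loading code, are not shown here):

```python
import numpy as np

def saturation_fraction(img, limit=255):
    """Fraction of channel values clipped at the encoding maximum."""
    img = np.asarray(img)
    return np.count_nonzero(img == limit) / img.size

# Synthetic stand-ins for the "normal" and "corrected" images.
rng = np.random.default_rng(0)
normal = rng.integers(0, 250, size=(4, 4, 3))   # nothing reaches 255
corrected = normal.copy()
corrected[0, 0, :] = 255                        # pretend the adjustment clipped one pixel

print(saturation_fraction(normal))     # 0.0
print(saturation_fraction(corrected))  # 3/48 = 0.0625
```

Counting per channel (as the Matlab snippet does) and counting flattened values are equivalent here, since `img.size` already includes all three channels.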

Update 19-07-2011:  I kept thinking about this at home and realized that the pixels ending up at 255 were probably pixels that were at the far end of the histogram to begin with.  The algorithm is probably stretching the histogram in such a way that the outermost pixels get put in the 255 bin.  There might still be loss of data, but it’s not the kind where a pixel of value 1 ends up at 255 (though it depends on the image :).  In any case, the error calculation is not as simple as my procedure in this post.
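The histogram-stretching intuition can be illustrated with a minimal linear contrast stretch.  This is not ufraw’s actual algorithm, and the `lo`/`hi` cut points below are made-up numbers; the sketch only shows how values already near the top of the histogram get clipped into the 255 bin:

```python
import numpy as np

def stretch(img, lo, hi):
    """Linear contrast stretch: map [lo, hi] onto [0, 255], clipping outside."""
    out = (img.astype(float) - lo) * 255.0 / (hi - lo)
    return np.clip(out, 0, 255).astype(np.uint8)

pixels = np.array([2, 10, 128, 245, 252], dtype=np.uint8)
print(stretch(pixels, 5, 248))
# 252 (not saturated before) lands in the 255 bin; 245 stays just below it.
```

A pixel of value 1 stays near the bottom of the range, consistent with the point above: the data loss happens at the histogram’s outer edges, not across the whole range.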


This entry was posted in wireless image sensor networks.  Bookmark the permalink.
