Image Deblurring via sensor data

dcuartielles, August 2nd, 2010

Neel Joshi, Sing Bing Kang, C. Lawrence Zitnick, and Richard Szeliski from Microsoft Research presented a paper at the SIGGRAPH 2010 conference in Los Angeles introducing image-processing algorithms that use sensor data gathered by an Arduino board (three gyroscopes and a three-axis accelerometer) to compensate for the blur introduced by the movement of the camera while shooting. Their article can be downloaded directly from Microsoft Research’s website, where you will also find the slideshow they used to present their work and some examples of their correction algorithms applied to pictures.

 

Image teaser from Microsoft Research (© 2010 Microsoft Research)

 

As mentioned in this article at PC Magazine, which was our source:

The four researchers named in the study managed to construct their hardware sensor package completely off-the-shelf, using a combination of a three-axis accelerometer, three gyroscopes, and a Bluetooth radio, all wired to an open-source Arduino controller.

“Our method is completely automatic, handles per-pixel, spatially-varying blur, and out-performs the current leading image-based methods,” reads the accompanying paper.

“To the best of our knowledge, this is the first work that uses 6 [degrees-of-freedom] inertial sensors for dense, per-pixel spatially-varying image deblurring and the first work to gather dense ground-truth measurements for camera-shake blur.”
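To give a rough feel for how 6-DOF inertial data relates to camera-shake blur, here is a minimal, hypothetical sketch (not the authors’ code): gyroscope samples taken during the exposure are integrated into rotation angles and projected onto the image plane. It assumes a pinhole camera, rotation-only shake, and small angles, and the focal length and sample rates below are made-up values.

```python
import numpy as np

def blur_trajectory(gyro_rates, dt, focal_px):
    """Integrate angular rates (rad/s, shape (N, 3): x, y, z axes) over the
    exposure and project the rotations to image-plane displacements in
    pixels at the principal point, using the small-angle approximation."""
    angles = np.cumsum(gyro_rates * dt, axis=0)  # accumulated rotation per sample (rad)
    dx = focal_px * angles[:, 1]                 # yaw shifts the image horizontally
    dy = focal_px * angles[:, 0]                 # pitch shifts the image vertically
    return np.stack([dx, dy], axis=1)

# Fabricated example: 10 gyro samples at 1 kHz during a 10 ms exposure,
# constant yaw rate of 0.02 rad/s, focal length of 3000 pixels.
rates = np.zeros((10, 3))
rates[:, 1] = 0.02
traj = blur_trajectory(rates, dt=0.001, focal_px=3000.0)
print(traj[-1])  # total image drift over the exposure, in pixels
```

In the paper’s full method the blur varies per pixel (translation and depth matter away from the principal point), and the recovered trajectory is used as the kernel for non-blind deconvolution; this sketch only shows the integration step at the image center.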

 

One Response to “Image Deblurring via sensor data”

  1. J Says:

    SLR camera manufacturers have some sort of image stabilization built into their camera/sensor mechanism (Sony SLRs) or their lenses (Canon and Nikon systems). These systems stabilize the lens so there isn’t a need for post-processing; the light hitting the sensor is already “stabilized”.

    That being said, I get the idea behind their research. It’s probably cheaper to implement the 6 DOF sensor than the mechanical correction systems currently used. Plus, it’s good to see Arduinos and other “off the shelf” parts being used in real research. It would have been interesting to see a comparison of their correction with the correction of the IS lens that is mounted on their camera.