The one-pixel camera built by Richard Baraniuk and his group at Rice is the one that has received the most press. To build this new type of camera, one needs a DMD-controlled board (about $6K) and a single pixel, and you're set. That one-pixel detector could really be a photodiode, a radiation detector, or even a terahertz receiver.
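To make the measurement model concrete, here is a minimal sketch of the acquisition step, assuming random 0/1 mirror patterns on the DMD; the sizes and names are my own illustration, not the Rice design:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64 * 64, 400                    # scene pixels, DMD patterns used (illustrative)
scene = rng.random(n)                  # stand-in for the light field hitting the DMD
patterns = (rng.random((m, n)) < 0.5).astype(float)  # random 0/1 mirror states
readings = patterns @ scene            # one photodiode value per pattern
print(readings.shape)                  # m numbers instead of an n-pixel exposure
```

Knowing the patterns, those m readings are enough to recover a sparse scene; a reconstruction sketch follows the next paragraph.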
However, the concept of compressed sensing does not necessarily require a hardware implementation as impressive as this one. At one end of the spectrum, one can simply change the way sampling is done, as in the MRI work (Sparse MRI: The application of compressed sensing for rapid MR imaging by Michael Lustig, David Donoho, and John M. Pauly), where k-space is sampled below the Nyquist criterion. Current technology already allows for the gathering of signals with less than pure frequency content. The ability to put several frequencies together in one spike allows one to retrieve compressed samples directly, thereby enabling substantial savings in acquisition time.
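Here is a minimal sketch of that idea, assuming a 1-D sparse signal standing in for an image and plain iterative soft thresholding (ISTA) for the l1 reconstruction; it illustrates sub-Nyquist Fourier sampling and is not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                    # signal length, k-space samples, nonzeros

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

F = np.fft.fft(np.eye(n)) / np.sqrt(n)  # unitary DFT matrix (our "k-space" operator)
rows = rng.choice(n, m, replace=False)  # random sub-Nyquist sample locations
A = F[rows, :]
y = A @ x_true                          # the undersampled k-space data

# ISTA for min ||A x - y||^2 + lam ||x||_1; step size 1 is safe since A has unit norm
x, lam = np.zeros(n), 0.01
for _ in range(500):
    r = x + (A.conj().T @ (y - A @ x)).real  # gradient step; the signal is real
    x = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

With only a quarter of the Nyquist-rate samples, the sparse signal comes back almost exactly, which is the whole point of sampling k-space below the criterion.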
In between the ends of that spectrum are several set-ups that try to use current hardware, with slight modifications, to acquire smaller sampling sets. I have come across two: random lens imagers and compressive sampling spectrometers.
* The random lens imaging technique developed at MIT by Rob Fergus, Antonio Torralba, and William T. Freeman uses a normal DSLR camera whose lens has been replaced by a transparent material in which small mirrors are embedded at random. The point they are making is the following: the images obtained from this system are the compressed sensing measurements. In order to reconstruct the original image, they have to calibrate the new camera set-up. The brute-force way to do this is to shine a laser at the camera and see how the ray is being imaged. With a 10 Mp camera, this means shining that laser from ten million locations in the field of view in order to record the unit response to each one. When that is done, you solve a linear algebra problem (a matrix inversion) to obtain the calibration/response matrix. Then, every time you take a picture, you multiply the result by that matrix and obtain the picture you were looking for. It is pretty obvious that the calibration step could be improved by removing the need to shine a laser ten million times. In other words, you have too many unknowns for too few equations (being lazy, you will not shine that laser ten million times). Compressed sensing theory says precisely that you can solve this underdetermined problem: the calibration matrix can be found from far fewer calibration images (see the sketch after this list). The advantage is the potential ability to extract more information from current CMOS/CCDs. In effect, by using many calibration images, one could potentially obtain superresolution (detail smaller than the pixel size of the DSLR) or depth information. The ability to use current very large CMOS arrays (instead of one pixel) has very real potential. Consider this: if a 1 Mp camera can produce a 30 KB JPEG, then about 30,000 bytes give a good representation of a 2-D scene, or about 170 per dimension; a 3-D scene would therefore require about 170^3, or 4.9 MB, for a good description. Clearly, normal cameras are already gathering that much raw information.
* Compressive sampling imager: with most hyperspectral cameras, the amount of information needs to be reduced while it is being gathered, not afterward. For instance, the Hyperion camera on EO-1 was designed with the downlink bandwidth in mind (the TDRSS system cannot handle more than 6 Mbit/s), which is why the data is compressed after it has been gathered in order to allow efficient transmission. But a spacecraft like EO-1 does not have that much computational power to spare, so what one really wants is a way to acquire the minimum amount of information in the first place. The most interesting current undertaking in this field seems to be the compressed sensing (or compressive sampling) spectrometers of the DISP group at Duke (David J. Brady, Mike Gehm, Scott McCain, Zhaochun Xu, Prasant Potuluri, Mike Sullivan, Nikos Pitsianis, Ben Hamza, Ali Adibi).
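Before getting to the spectrometer work, here is the hypothetical calibration shortcut promised above: instead of probing all n scene positions one laser spot at a time, project m << n random illumination patterns and recover one pixel's response row by l1 minimization. The sparsity assumption on the response row and all the sizes are my simplification, not the MIT paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 1000, 120, 10          # scene positions, calibration shots, nonzeros

a_true = np.zeros(n)             # unknown response row of one sensor pixel
a_true[rng.choice(n, k, replace=False)] = rng.random(k)

C = rng.standard_normal((m, n))  # random illumination patterns, one per shot
b = C @ a_true                   # that pixel's reading for each pattern

# ISTA for the underdetermined system: m equations, n unknowns
t = 1.0 / np.linalg.norm(C, 2) ** 2
a, lam = np.zeros(n), 1e-3
for _ in range(3000):
    r = a + t * (C.T @ (b - C @ a))
    a = np.sign(r) * np.maximum(np.abs(r) - t * lam, 0.0)

print("shots used:", m, "of", n, "- error:", np.linalg.norm(a - a_true))
```

The same recovery runs independently for every pixel, so 120 calibration shots stand in for the 1000 (or, at full scale, ten million) laser positions.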
A good presentation of the DISP group's effort can be found here. The idea is that if you assume that:
• Measurements are expensive
• Photons are scarce and
• Spectra are sparse
then you can modify your current set-up by introducing a mask between two gratings. Instead of systems that respond to a single spectral channel at a time, group testing is used so that several spectral bands together provide one compressed measurement.
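Here is a rough sketch of that group-testing idea, with assumptions that are mine and not DISP's actual mask design: each detector reading pools many spectral bands through a coded mask, and a sparse nonnegative spectrum is recovered from far fewer readings than channels:

```python
import numpy as np

rng = np.random.default_rng(2)
channels, meas, lines = 512, 96, 6   # spectral bins, detector readings, emission lines

s_true = np.zeros(channels)
s_true[rng.choice(channels, lines, replace=False)] = rng.random(lines) + 0.5

# +/-1 coding, physically realizable by differencing two complementary 0/1 masks
H = rng.choice([-1.0, 1.0], size=(meas, channels))
y = H @ s_true                       # each reading pools many spectral bands

# Projected ISTA: gradient step, soft threshold, keep the spectrum nonnegative
t = 1.0 / np.linalg.norm(H, 2) ** 2
s, lam = np.zeros(channels), 1e-2
for _ in range(500):
    r = s + t * (H.T @ (y - H @ s))
    s = np.maximum(r - t * lam, 0.0)

print("relative error:", np.linalg.norm(s - s_true) / np.linalg.norm(s_true))
```

Because the photons from many bands land on each detector element, none are thrown away at a slit, which is exactly what you want when photons are scarce.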
More advances on this type of hardware can be found in this fascinating article, which shows how the Nyquist and Golay sampling theorems have been overtaken by current advances; it is a must read.
It is pretty obvious that these approaches to compressed sensing fit well with radiation measurements, where one can direct radiation beams. I am also thinking of implementing some of these systems for the HASP 2007 flight.
Check this out too,
M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, "Single-shot compressive spectral imaging with a dual-disperser architecture," Opt. Express 15, 14013-14027 (2007)
http://www.opticsexpress.org/DirectPDFAccess/AF92E787-BDB9-137E-C54AD87A8042C49E_143080.pdf?da=1&id=143080&seq=0&CFID=3644003&CFTOKEN=14936008
Thank you Scott,
I had mentioned it in another post:
http://nuit-blanche.blogspot.com/2007/08/cs-is-not-just-compressed-sampling-nor.html
but it was not available for perusal then (see the comment section). Thanks for the pointer, I'll dig it up.
Igor.
Seeking partner from academic or non-profit research institute for a multi-spectral Compressive Sensing project (paper study). Combine your knowledge of Compressive Sensing algorithms with our skills in electro-optical sensor design.
Contact: Brien Housand
Email: bhousand@dsci.com