## Friday, December 02, 2011

### Calibration and Compressive Sensing Sensor Implementations

Today, we have a very interesting entry and four papers: one on calibration and three on different hardware implementations. There is also a call for papers at the end.

Before we start, I received this question the other day:
" I came across your blog during my research on state of the art compression methods for biosignals and I am following it with much interest since. From what I read, reconstruction algorithms for sparse signals are relatively complex and only run on desktop PCs with considerable RAM. I was wondering if you knew of any attempts to implement a reconstruction algorithm for sparse signals on a Microcontroller or DSP".
My response:
"....On top of my head, there two instances using hardware that are not PC desktop: here is a CMOS one: :Analog Sparse Approximation with Applications to Compressed Sensing, Adam S. Charles, Pierre Garrigues, and Christopher J. Rozell
and another one using an Iphone to perform ECG reconstruction at EPFL

If you any of  know of any other I'll be glad to add it to the list.
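For context, the Charles, Garrigues, and Rozell paper above implements sparse approximation in analog circuitry using a Locally Competitive Algorithm (LCA), whose simple local dynamics are exactly what make non-desktop implementations plausible. Here is a minimal discrete-time NumPy sketch of LCA; all sizes and parameters are illustrative, not those of the paper:

```python
import numpy as np

def lca(Phi, y, lam=0.05, tau=10.0, n_iter=300):
    """Discrete-time Locally Competitive Algorithm for
    min_a 0.5*||y - Phi a||^2 + lam*||a||_1."""
    b = Phi.T @ y                             # feedforward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])    # lateral inhibition
    u = np.zeros(Phi.shape[1])                # internal node states
    for _ in range(n_iter):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
        u += (b - u - G @ a) / tau            # leaky-integrator dynamics
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)            # unit-norm dictionary columns
a_true = np.zeros(n)
a_true[rng.choice(n, k, replace=False)] = 1.0
y = Phi @ a_true
a_hat = lca(Phi, y)                           # sparse code recovered by LCA
```

The appeal for embedded hardware is that each node only leaks, thresholds, and receives weighted inputs from its neighbors, which maps naturally onto analog circuits.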

First, you probably recall the Planar Fourier capture arrays (PFCAs). Patrick Gill sent me the following:

".....I'm emailing to let you know we have finally "officially" answered some of the questions you posted on your blog at http://nuit-blanche.blogspot.com/2011/07/facing-mona-lisa.html last summer. We used compressed sensing to extend what our PFCA could do in three ways. First, we took a random subsample of the sensor outputs to green light (the design wavelength) and reconstructed the test image using compressed sensing. Even 10% of the sensors gives a reasonable reconstruction. Second, for red light, our first prototype has a hole in Fourier space for spatial frequencies from b = 13 - 21. This kind of systematic hole usually means compressed sensing won't work, but we found compressed sensing was able to fill in the missing information nonetheless (see Figure 11, row R, col D1 of the arXiv paper linked below). Third, we used newly-discovered wavelength sensitivity of the PFCA to determine image colour blindly, also using compressed sensing since allowing multiple colours leads to an underdetermined problem. With a big enough PFCA, compressed sensing would not be necessary since we could make the number of observations equal to the number of unknowns, but CS is more fun, isn't it?
The flavour of CS we used was L1, which I still use as my go-to method when the problem is highly coherent. I'm probably also biased since we have a fast BPDN algorithm - for our problems it gives an exact BPDN solution in about a second where GPSR takes more than an hour.......We've submitted this paper to JINST, and a preprint is available at http://arxiv.org/abs/1111.4524.  ....."

The abstract of the preprint reads:
Planar Fourier capture arrays (PFCAs) are optical sensors built entirely in standard microchip manufacturing flows. PFCAs are composed of ensembles of angle sensitive pixels (ASPs) that each report a single coefficient of the Fourier transform of the far-away scene. Here we characterize the performance of PFCAs under the following three non-optimal conditions. First, we show that PFCAs can operate while sensing light of a wavelength other than the design point. Second, if only a randomly-selected subset of 10% of the ASPs are functional, we can nonetheless reconstruct the entire far-away scene using compressed sensing. Third, if the wavelength of the imaged light is unknown, it can be inferred by demanding self-consistency of the outputs.
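The 10%-of-sensors result is a textbook compressed sensing recovery setup: few measurements, sparse scene, L1 reconstruction. Here is a toy sketch using ISTA, a basic L1 solver; a generic Gaussian matrix stands in for the PFCA's Fourier-coefficient measurements, and all sizes and parameters are invented for illustration:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=1000):
    """Iterative shrinkage-thresholding for
    min_x 0.5*||y - A x||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)      # gradient step on the data fit
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(1)
n, m, k = 128, 60, 5                          # m plays the role of "surviving sensors"
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                # measurements from the reduced sensor set
x_hat = ista(A, y)                            # sparse scene recovered from few measurements
```

Patrick's note mentions a fast exact BPDN solver for their problem; ISTA is just the simplest stand-in to show the mechanics.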

Next, here is an attack on the calibration problem:

We consider the problem of calibrating a compressed sensing measurement system under the assumption that the decalibration consists in unknown gains on each measure. We focus on {\em blind} calibration, using measures performed on a few unknown (but sparse) signals. A naive formulation of this blind calibration problem, using $\ell_{1}$ minimization, is reminiscent of blind source separation and dictionary learning, which are known to be highly non-convex and riddled with local minima. In the considered context, we show that in fact this formulation can be exactly expressed as a convex optimization problem, and can be solved using off-the-shelf algorithms. Numerical simulations demonstrate the effectiveness of the approach even for highly uncalibrated measures, when a sufficient number of (unknown, but sparse) calibrating signals is provided. We observe that the success/failure of the approach seems to obey sharp phase transitions.
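To make the decalibration model concrete: each measure is scaled by an unknown gain, i.e. y = diag(d) A x. The paper tackles the hard blind case, where the calibrating signals x are themselves unknown (but sparse), via a convex reformulation. As a much simpler warm-up, here is the supervised case with known calibration signals, where each gain falls out of a per-row least squares; this toy is for intuition only and is not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, L = 20, 50, 8                      # measures, signal dimension, calibration signals
A = rng.standard_normal((m, n))          # nominal measurement matrix
d_true = 0.5 + rng.random(m)             # unknown positive per-measure gains
X = rng.standard_normal((n, L))          # known calibration signals (columns)
Y = d_true[:, None] * (A @ X)            # what the decalibrated system records

Z = A @ X                                # what a perfectly calibrated system would record
# per-row least squares: d_j = <Y_j, Z_j> / <Z_j, Z_j>
d_hat = np.sum(Y * Z, axis=1) / np.sum(Z * Z, axis=1)
```

In the blind setting of the paper, Z is unavailable because X is unknown, which is what makes the convex reformulation interesting.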

The next paper is about coded aperture and compressive sensing:

Spatio-temporal Compressed Sensing with Coded Apertures and Keyed Exposures by Zachary T. Harmany, Roummel F. Marcia, Rebecca M. Willett. The abstract reads:
Optical systems which measure independent random projections of a scene according to compressed sensing (CS) theory face a myriad of practical challenges related to the size of the physical platform, photon efficiency, the need for high temporal resolution, and fast reconstruction in video settings. This paper describes a coded aperture and keyed exposure approach to compressive measurement in optical systems. The proposed projections satisfy the Restricted Isometry Property for sufficiently sparse scenes, and hence are compatible with theoretical guarantees on the video reconstruction quality. These concepts can be implemented in both space and time via either amplitude modulation or phase shifting, and this paper describes the relative merits of the two approaches in terms of theoretical performance, noise and hardware considerations, and experimental results. Fast numerical algorithms which account for the nonnegativity of the projections and temporal correlations in a video sequence are developed and applied to microscopy and short-wave infrared data.
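A coded aperture measurement is, in essence, a convolution of the scene with the aperture mask followed by integration onto a lower-resolution detector. A toy 1-D version of that forward model (the binary mask and all sizes are invented for illustration):

```python
import numpy as np

def coded_aperture_measure(scene, mask, downsample):
    """Circularly convolve the scene with a binary aperture mask,
    then sum over blocks to mimic a low-resolution focal plane array."""
    blurred = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(mask, len(scene))))
    return blurred.reshape(-1, downsample).sum(axis=1)

rng = np.random.default_rng(3)
n, downsample = 64, 4
mask = rng.integers(0, 2, size=16).astype(float)   # open/closed aperture code
scene_a = rng.random(n)
scene_b = rng.random(n)
y_a = coded_aperture_measure(scene_a, mask, downsample)
y_b = coded_aperture_measure(scene_b, mask, downsample)
y_sum = coded_aperture_measure(scene_a + scene_b, mask, downsample)
```

The measurement is linear in the scene, which is the property that lets CS-style mask designs and nonlinear sparse reconstruction replace the classical linear MURA decoding discussed in the excerpt below.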

from the text:

"....Clearly, the estimates from MURA reconstruction are limited by the spatial resolution of the photo-detector. Thus, high resolution reconstructions cannot generally be obtained from low-resolution MURA-coded observations. It can be shown that this mask design and reconstruction result in minimal reconstruction errors at the FPA resolution and subject to the constraint that linear, convolution-based reconstruction methods would be used. However, when the scene of interest is sparse or compressible, and nonlinear sparse reconstruction methods may be employed, then CS ideas can be used to design coded aperture which yield higher resolution images..."
which leads me to think there is work to do for all the other modulations used in coded aperture work in space and nuclear medicine.

The next hardware is the CS ADC version 2.0: The Polyphase Random Demodulator for Wideband Compressive Sensing by Jason N. Laska, J. P. Slavinsky, Richard G. Baraniuk. The abstract reads:
Compressive sensing (CS) provides a mathematical platform for designing analog-to-digital converters (ADCs) that sample signals at sub-Nyquist rates. In particular, the framework espouses a linear sensing system coupled with a non-linear, iterative computational recovery algorithm. A central problem within this platform is the design of practical hardware systems that can be easily calibrated and coupled with computational recovery algorithms. In this paper, we propose a new CS-ADC that resolves some of the practical issues present in prior work. We dub this new system the polyphase random demodulator.
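For readers new to this line of work, the original random demodulator (which this paper refines into a polyphase version) multiplies the input by a pseudo-random ±1 chipping sequence and then integrates and dumps, so the back-end ADC runs far below the Nyquist rate. A schematic NumPy version of that measurement chain, with made-up sizes and rates:

```python
import numpy as np

def random_demodulator(x, chips, R):
    """Mix x with a ±1 chipping sequence, then integrate-and-dump
    over windows of R samples (output rate = len(x) / R)."""
    mixed = x * chips                          # spreads narrowband content across frequency
    return mixed.reshape(-1, R).sum(axis=1)    # low-rate integrate-and-dump samples

rng = np.random.default_rng(4)
W, R = 256, 8                                  # Nyquist-rate samples, decimation factor
chips = rng.choice([-1.0, 1.0], size=W)        # pseudo-random chipping sequence
t = np.arange(W)
x = np.cos(2 * np.pi * 13 * t / W)             # a sparse-in-frequency input tone
y = random_demodulator(x, chips, R)            # W/R = 32 sub-Nyquist measurements
```

Recovering the tone from y then falls to a sparse solver, with the mixing/integration chain playing the role of the CS measurement matrix.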

And finally there is the call for papers:

Smartphone Internet Applications of Compressive Sensing, Supervised or Unsupervised
Call for Papers
Compressive sensing (CS) has been with us since Candès, Romberg, and Tao, and Donoho, published in the IEEE Transactions on Information Theory in 2006 and received an IEEE Best Paper Award in 2008. It operates at the image acquisition level, not as postprocessing compression. Subsequently, about 300 refereed papers have been published worldwide. This mathematics of sparse orthogonal linear combinations could spare patients unnecessary X-ray radiation exposure if a machine existed that could block X-ray transmission at the level of randomly chosen pixels. Such systems can, in principle, reproduce the original resolution by linear programming under a minimum city-block distance (L1-norm) constraint.
We are, furthermore, interested in unsupervised compressive sensing based on unsupervised artificial neural network learning, such as eye-ear adaptive wavelet transforms and brain independent component analysis. Thus, we wish to organize a special online publication on novel compressive sensing, supervised or unsupervised, and on how to overcome digital pollution in video imaging surveillance by publishing automatic video image Cliff notes.
We are also interested in sparse CS constraints on EOIR hyperspectral multimedia pattern recognition, especially smartphone video facial recognition, smartphone music indexing and retrieval, and smartphone home-alone surveillance video that knows the subject of interest and provides early detection of novel intrusion, falls, illness symptoms, and so forth. Potential topics include, but are not limited to:
• EOIR hyperspectral compressive sampling for pattern recognition
• Smartphone compressive sensing video facial recognition
• Smartphone compressive sensing music index and retrieval
• Smartphone graphic 6W index for compressive sensing, storage, and retrieval
• Smartphone home-alone compressive sensing applications (intrusion, falls, or wellness symptoms)
Before submission authors should carefully read over the journal's Author Guidelines, which are located at http://www.hindawi.com/journals/acisc/guidelines/. Prospective authors should submit an electronic copy of their complete manuscript through the journal Manuscript Tracking System at http://mts.hindawi.com/ according to the following timetable:
• Manuscript Due: Friday, 4 May 2012
• First Round of Reviews: Friday, 27 July 2012
• Publication Date: Friday, 21 September 2012