Thursday, June 12, 2008

CS: Video of Rice Illumination Based Single Pixel Cameras, Sparsity and l_p

Richard Baraniuk and Kevin Kelly present both single-pixel camera concepts at the TI Developer's Conference 08 that Marco Duarte mentioned earlier. In particular, they show off the illumination-based single-pixel camera working in real time.


I think this is a great video to show people what Compressed Sensing can do. Richard mentions that his group's current research focuses on speeding up the back-end that performs these random measurements, and on the ability to work directly on the manifold of the compressed data.
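For readers new to the concept, here is a minimal numerical sketch of the measurement model behind both cameras, assuming random binary patterns; the sizes and the 0/1 mask choice are illustrative, not the exact settings of the Rice hardware. Each reading is a single inner product between the scene and a displayed pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative sizes: a 64x64 scene observed with M sequential readings
N = 64 * 64                    # number of pixels in the scene
M = 800                        # number of single-pixel measurements (M << N)
scene = rng.random(N)          # stand-in for the flattened image

# one measurement per displayed pattern: the DMD (or, in the
# illumination-based variant, the projector) shows a random binary
# mask and the photodiode records a single inner product
patterns = rng.integers(0, 2, size=(M, N)).astype(float)  # dense 0/1 masks
y = patterns @ scene           # M scalar photodiode readings
```

Note that the masks here are dense, which echoes the remark below that sparse measurement matrices are not needed in this setting.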

One can notice that there is no need for sparse measurement matrices in this particular application. Also, the ability to perform measurements more rapidly is clearly related to how many measurements need to be taken in the first place. This brings me to a set of papers I had overlooked on reconstruction techniques using the l_p semi-norm. I have covered the results of Rick Chartrand and colleagues and those of Simon Foucart and Ming-Jun Lai (here), which showed that by performing the minimization with the l_p norm for p less than 1, one can obtain reconstructions from fewer CS measurements (a small numerical sketch follows the abstracts below). The new set of papers [1][2][3], as Remi Gribonval pointed out to me, cover:
  • the equivalence of all l_p minimization problems when the data is sufficiently sparse to guarantee that l_1 will be successful
  • the provably superior performance of l_p minimization over l_1, for moderately sparse data and sufficiently small p
  • the stability to noise (see paper [3])
They are:
[1] Remi Gribonval and Morten Nielsen, On the strong uniqueness of highly sparse expansions from redundant dictionaries. The abstract reads:
A series of recent results shows that if a signal admits a sufficiently sparse representation (in terms of the number of nonzero coefficients) in an “incoherent” dictionary, this solution is unique and can be recovered as the unique solution of a linear programming problem. We generalize these results to a large class of sparsity measures which includes the l_p-sparsity measures for 0 ≤ p ≤ 1. We give sufficient conditions on a signal such that the simple solution of a linear programming problem simultaneously solves all the non-convex (and generally hard combinatorial) problems of sparsest representation w.r.t. arbitrary admissible sparsity measures. Our results should have a practical impact on source separation methods based on sparse decompositions, since they indicate that a large class of sparse priors can be efficiently replaced with a Laplacian prior without changing the resulting solution.

[2] Remi Gribonval and Morten Nielsen, Highly sparse representations from dictionaries are unique and independent of the sparseness measure. The abstract reads:
The purpose of this paper is to study sparse representations of signals from a general dictionary in a Banach space. For so-called localized frames in Hilbert spaces, the canonical frame coefficients are shown to provide a near sparsest expansion for several sparseness measures. However, for frames which are not localized, this no longer holds true and sparse representations may depend strongly on the choice of the sparseness measure. A large class of admissible sparseness measures is introduced, and we give sufficient conditions for having a unique sparse representation of a signal from the dictionary w.r.t such a sparseness measure. Moreover, we give sufficient conditions on a signal such that the simple solution of a linear programming problem simultaneously solves all the non-convex (and generally hard combinatorial) problems of sparsest representation of the signal w.r.t. arbitrary admissible sparseness measures.
[3] Rosa Figueras, Pierre Vandergheynst and Remi Gribonval, A simple test to check the optimality of a sparse signal approximation. The abstract reads:
Approximating a signal or an image with a sparse linear expansion from an overcomplete dictionary of atoms is an extremely useful tool to solve many signal processing problems. Finding the sparsest approximation of a signal from an arbitrary dictionary is an NP-hard problem. Despite this, several algorithms have been proposed that provide sub-optimal solutions. However, it is generally difficult to know how close the computed solution is to being "optimal", and whether another algorithm could provide a better result. In this paper we provide a simple test to check whether the output of a sparse approximation algorithm is nearly optimal, in the sense that no significantly different linear expansion from the dictionary can provide both a smaller approximation error and a better sparsity. As a by-product of our theorems, we obtain results on the identifiability of sparse overcomplete models in the presence of noise, for a fairly large class of sparse priors.
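To make the l_p story concrete, here is a small sketch of reconstruction by iteratively reweighted least squares in the spirit of Chartrand's approach to l_p minimization with p less than 1. The epsilon-smoothing schedule, the problem sizes, and the choice p = 0.5 are all illustrative assumptions, not the exact algorithms of the papers above:

```python
import numpy as np

rng = np.random.default_rng(0)

def irls_lp(A, y, p=0.5, n_iter=50):
    """IRLS sketch for min ||x||_p^p subject to A x = y (p < 1).

    Each step solves a weighted least-norm problem with weights
    w_i = (x_i^2 + eps)^(p/2 - 1); eps smooths the non-convexity
    and is annealed toward zero as the iterations proceed.
    """
    x = np.linalg.pinv(A) @ y          # minimum-l_2 starting point
    eps = 1.0
    for _ in range(n_iter):
        w_inv = (x**2 + eps) ** (1.0 - p / 2.0)   # entries of W^-1
        # closed form: x = W^-1 A^T (A W^-1 A^T)^-1 y
        x = w_inv * (A.T @ np.linalg.solve((A * w_inv) @ A.T, y))
        eps = max(eps * 0.5, 1e-9)     # anneal the smoothing parameter
    return x

# toy CS problem: 30 Gaussian measurements of a 100-dim, 8-sparse signal
n, m, k = 100, 30, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x_hat = irls_lp(A, y, p=0.5)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With m/k around 3 or 4, a run like this typically recovers the signal, whereas l_1 minimization generally needs more measurements; that gap is exactly the point of the l_p results discussed above.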
