
Thursday, January 29, 2009

CS: Testing the Nullspace Property using SDP, Compressive Coded Aperture Imaging, Informative Sensing, NB

Alexandre d'Aspremont and Laurent El Ghaoui recently updated their preprint entitled: Testing the Nullspace Property using Semidefinite Programming. The abstract reads
Recent results in compressed sensing show that, under certain conditions, the sparsest solution to an underdetermined set of linear equations can be recovered by solving a linear program. These results rely on nullspace properties of the system matrix. So far, no tractable algorithm is known to test these conditions and most current results rely on asymptotic properties of sparse eigenvalues of random matrices. Given a matrix A, we use semidefinite relaxation techniques to test the nullspace property on A and show on some numerical examples that these relaxation bounds can prove perfect recovery of sparse solutions with relatively high cardinality.
A beautiful summary of the current state of our knowledge on this issue can be found in the paper:
.....Universal conditions for strong recovery based on sparse extremal eigenvalues were derived in Candes and Tao (2005) and Candes and Tao (2006) who also proved that certain (mostly random) matrix classes satisfied these conditions with an exponentially small probability of failure. More recently, Cohen et al. (2006) derived sparse recovery conditions based on properties of the nullspace of A. In particular, if we define:

α_k = max_{Ax=0, ||x||_1=1}  max_{||y||_∞≤1, ||y||_1≤k}  y^T x


Cohen et al. (2006) show that α_k < 1/2 guarantees strong recovery.
One key issue with the current sparse recovery conditions in Candes and Tao (2005) or Cohen et al. (2006) is that, except for explicit recovery thresholds available for certain types of random matrices, testing these conditions on generic matrices is potentially harder than solving the combinatorial ℓ0 problem in (1), as it implies either solving a combinatorial problem to compute α_k, or computing sparse eigenvalues. Semidefinite relaxation bounds on sparse eigenvalues were used in d’Aspremont et al. (2008) to test the conditions in Candes and Tao (2005) on arbitrary matrices. In recent independent results, Juditsky and Nemirovski (2008) provide an alternative proof of some of the results in Cohen et al. (2006), extend them to the noisy case and produce a linear programming relaxation bound on α_k with explicit performance bounds. In this paper, we derive a semidefinite relaxation bound on α_k, study its tightness and compare its numerical performance with that of the relaxation in Juditsky and Nemirovski (2008). Because it involves solving a semidefinite program, the complexity of the semidefinite relaxation bound derived here is significantly higher than that of the linear programming based relaxation in Juditsky and Nemirovski (2008), and no explicit performance bounds are available here on matrices satisfying sparse recovery conditions. However, we show on small scale examples that the semidefinite bounds on α_k are often, but not always, tighter than those produced by the linear programming relaxation in Juditsky and Nemirovski (2008)....
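To get a feel for why testing this condition is combinatorial, note that for integer k the inner maximum over y is attained at ±1 sign patterns supported on k coordinates, so α_k can be computed exactly, for tiny matrices only, by solving one small linear program per pattern. Here is a brute-force sketch of that observation (my own illustration, not the semidefinite relaxation of the paper):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def alpha_k(A, k):
    """Exact brute-force alpha_k (tiny matrices only): enumerate every
    size-k support and +/-1 sign pattern for y, and solve the LP
        maximize  y^T x   s.t.  Ax = 0, ||x||_1 <= 1
    (the optimum sits on ||x||_1 = 1 whenever it is positive)."""
    m, n = A.shape
    best = 0.0
    for support in itertools.combinations(range(n), k):
        for signs in itertools.product([-1.0, 1.0], repeat=k):
            y = np.zeros(n)
            y[list(support)] = signs
            # split x = u - v with u, v >= 0 to linearize the l1 ball
            c_lp = np.concatenate([-y, y])       # linprog minimizes
            A_eq = np.hstack([A, -A])            # A(u - v) = 0
            res = linprog(c_lp,
                          A_ub=np.ones((1, 2 * n)), b_ub=[1.0],
                          A_eq=A_eq, b_eq=np.zeros(m),
                          bounds=(0, None))
            if res.status == 0:
                best = max(best, -res.fun)
    return best
```

For instance, for A = [[1, -1]] the nullspace is spanned by (1, 1), so α_1 = 1/2, exactly the borderline case, consistent with the fact that a single measurement cannot distinguish all 1-sparse signals on two coordinates.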

This new document is a substantial revision of the previous draft and it now includes some discussion of the work of Anatoli Juditsky and Arkadi Nemirovski in On Verifiable Sufficient Conditions for Sparse Signal Recovery via l_1 Minimization. As you recall, this type of work is extremely valuable as it has the potential to produce guidelines for hardware implementers on how to design their Point Spread Functions and so forth. For instance, one may recall that designing a coded aperture mask for the purpose of providing superresolution hinges on the ability to map a mask to a specific type of Toeplitz matrix [1]. But superresolution may be just the beginning of producing designs beyond what we have been able to do for the past 40 years (at least in coded aperture). For this to happen, hardware designers need the freedom to tinker with new designs and then be able to ask the question: Can I do Compressive Sensing with this design? Currently, only the implementation of Anatoli Juditsky and Arkadi Nemirovski is available; it is located at:


(oops, looks like the code is not available today; I'll report when it is.) Alexandre d'Aspremont told me that an implementation of their code will eventually come out. [Update: the numerical part of this draft might change in an upcoming version 3. I'll report shortly on the new version of this paper if there are changes, and on the availability of the code of Anatoli Juditsky and Arkadi Nemirovski. This is a really challenging and important problem and I am glad some of the best minds are looking into it.]
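As a toy illustration of the mask-to-matrix mapping mentioned above, here is a minimal sketch under simplified 1-D, cyclic assumptions (the actual construction in [1] is more involved):

```python
import numpy as np

def mask_to_circulant(mask):
    """Rows are cyclic shifts of the mask pattern, so y = C @ x models
    observing a 1-D scene x through the coded aperture.  A circulant
    matrix is a special case of a Toeplitz matrix."""
    return np.array([np.roll(mask, i) for i in range(len(mask))])

mask = np.array([1.0, 0.0, 1.0, 1.0, 0.0])   # toy binary mask (made up)
C = mask_to_circulant(mask)
```

The point of such a mapping is that any candidate mask pattern immediately yields a sensing matrix whose recovery properties one would then want to test.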

In a somewhat related matter, here is the abstract of the compressive coded aperture imaging paper mentioned in the title:
Nonlinear image reconstruction based upon sparse representations of images has recently received widespread attention with the emerging framework of compressed sensing (CS). This theory indicates that, when feasible, judicious selection of the type of distortion induced by measurement systems may dramatically improve our ability to perform image reconstruction. However, applying compressed sensing theory to practical imaging systems poses a key challenge: physical constraints typically make it infeasible to actually measure many of the random projections described in the literature, and therefore, innovative and sophisticated imaging systems must be carefully designed to effectively exploit CS theory. In video settings, the performance of an imaging system is characterized by both pixel resolution and field of view. In this work, we propose compressive imaging techniques for improving the performance of video imaging systems in the presence of constraints on the focal plane array size. In particular, we describe a novel yet practical approach that combines coded aperture imaging to enhance pixel resolution with superimposing subframes of a scene onto a single focal plane array to increase field of view. Specifically, the proposed method superimposes coded observations and uses wavelet-based sparsity recovery algorithms to reconstruct the original subframes. We demonstrate the effectiveness of this approach by reconstructing with high resolution the constituent images of a video sequence.
Some of this work was already covered in [1], but the hardware design, the wavelet-based reconstruction and the attendant results are new and very interesting. I am sure I'll come back to these later.
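The superposition step described in the abstract can be sketched as follows (a toy version with made-up dimensions; the paper's actual optical model and wavelet-based recovery are of course more involved):

```python
import numpy as np

rng = np.random.default_rng(1)

# four 16x16 subframes of a scene, each coded by its own random binary mask
subframes = rng.random((4, 16, 16))
masks = rng.integers(0, 2, size=(4, 16, 16)).astype(float)

# all coded subframes are summed onto a single focal plane array;
# recovery would then solve a wavelet-sparsity regularized inverse problem
fpa = np.sum(masks * subframes, axis=0)
```

The field of view grows with the number of superimposed subframes while the focal plane array stays the same size, which is exactly the trade-off the paper exploits.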

Also, I mentioned their work last Friday; here is the preprint that introduces us to the concept of "bandwise random projections". I am loving this, as it certainly has major ramifications for the understanding of the primary visual cortex, especially when trying to compare current modeling efforts with the signal processing aspect of it. The preprint is Informative Sensing by Hyun Sung Chang, Yair Weiss and William T. Freeman. The abstract reads:

Compressed sensing is a recent set of mathematical results showing that sparse signals can be exactly reconstructed from a small number of linear measurements. Interestingly, for ideal sparse signals with no measurement noise, random measurements allow perfect reconstruction while measurements based on principal component analysis (PCA) or independent component analysis (ICA) do not. At the same time, for other signal and noise distributions, PCA and ICA can significantly outperform random projections in terms of enabling reconstruction from a small number of measurements. In this paper we ask: given the distribution of signals we wish to measure, what are the optimal set of linear projections for compressed sensing? We consider the problem of finding a small number of linear projections that are maximally informative about the signal. Formally, we use the InfoMax criterion and seek to maximize the mutual information between the signal, x, and the (possibly noisy) projection y=Wx. We show that in general the optimal projections are not the principal components of the data nor random projections, but rather a seemingly novel set of projections that capture what is still uncertain about the signal, given the knowledge of distribution. We present analytic solutions for certain special cases including natural images. In particular, for natural images, the near-optimal projections are bandwise random, i.e., incoherent to the sparse bases at a particular frequency band but with more weights on the low-frequencies, which has a physical relation to the multi-resolution representation of images.

An attendant presentation of some of this approach can be found in this entry. Let us thank The Google for bringing this one up.
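For Gaussian signals, the mutual information the authors maximize has a closed form, I(x; y) = ½ log det(I + W Σ Wᵀ / σ²), which makes the PCA-versus-random comparison mentioned in the abstract easy to reproduce numerically. A minimal sketch (the dimensions, covariance and noise level are my own toy choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_mi(W, Sigma, noise_var=0.1):
    """I(x; Wx + n) = 0.5 * logdet(I + W Sigma W^T / noise_var)
    for x ~ N(0, Sigma) and white Gaussian noise n of variance noise_var."""
    M = np.eye(W.shape[0]) + W @ Sigma @ W.T / noise_var
    return 0.5 * np.linalg.slogdet(M)[1]

# toy signal covariance with a fast-decaying spectrum (my assumption,
# loosely mimicking the power spectrum of natural images)
n, m = 8, 2
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
Sigma = Q @ np.diag(2.0 ** -np.arange(n)) @ Q.T

W_pca = Q[:, :m].T                                  # top principal directions
W_rand = rng.standard_normal((m, n)) / np.sqrt(n)   # random rows, unit norm on average

mi_pca = gaussian_mi(W_pca, Sigma)
mi_rand = gaussian_mi(W_rand, Sigma)
```

In this Gaussian toy model the PCA projections carry more information than random ones, which is one instance of the abstract's point that the best projections depend on the signal and noise distributions.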

On a different note, the Compressive Sensing Hardware page now has some videos included. If you know of any others, I'll be glad to include them.

Finally, Emmanuel Candes and Terry Tao wrote a piece in the latest IEEE Information Theory Society Newsletter in which they make a reference to Nuit Blanche. Thanks Laurent for the heads-up.


Reference:
[1] Roummel Marcia and Rebecca Willett, Compressive Coded Aperture Superresolution Image Reconstruction. Additional material can be found here, while the slides are here.

Credit: NASA/JPL-Caltech/University of Arizona/Texas A&M, sol 151 of Phoenix.
