Tuesday, September 06, 2011

Compressive Sensing Literature This Week

We have several papers today and some attendant code. But first, here is a postdoc position at Drexel.

Distributed Basis Pursuit by João F. C. Mota, João M. F. Xavier, Pedro M. Q. Aguiar, and Markus Püschel. The abstract reads:
We propose a distributed algorithm for solving the optimization problem Basis Pursuit (BP). BP finds the least ℓ1-norm solution of the underdetermined linear system Ax = b and is used, for example, in compressed sensing for reconstruction. Our algorithm solves BP on a distributed platform such as a sensor network, and is designed to minimize the communication between nodes. The algorithm only requires the network to be connected, has no notion of a central processing node, and no node has access to the entire matrix A at any time. We consider two scenarios in which either the columns or the rows of A are distributed among the compute nodes. Our algorithm, named D-ADMM, is a decentralized implementation of the alternating direction method of multipliers. We show through numerical simulation that our algorithm requires considerably less communication between the nodes than the state-of-the-art algorithms.
The code for distributed Basis Pursuit (BPinSN) can be found here and is listed on the compressive sensing big picture page.
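For readers new to the underlying optimization, here is a minimal centralized ADMM sketch for Basis Pursuit (the paper's contribution is the distributed version; the variable names and parameter choices below are mine, not taken from the D-ADMM code):

```python
import numpy as np

def soft(v, t):
    """Entrywise soft thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bp_admm(A, b, rho=1.0, iters=1000):
    """Centralized ADMM for Basis Pursuit: min ||x||_1 subject to Ax = b."""
    m, n = A.shape
    pinv = A.T @ np.linalg.inv(A @ A.T)   # used to project onto {x : Ax = b}
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        v = z - u
        x = v - pinv @ (A @ v - b)        # projection onto the constraint set
        z = soft(x + u, 1.0 / rho)        # l1 proximal step
        u += x - z                        # dual update
    return z

# toy check: recover a 2-sparse vector from 12 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((12, 20))
x0 = np.zeros(20)
x0[[3, 11]] = [1.5, -2.0]
xhat = bp_admm(A, A @ x0)
```

In D-ADMM the same splitting is carried out with each node holding only its own rows or columns of A and communicating with neighbors; the sketch above is only the single-machine reference point.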

Compressive Matched-Field Processing by William Mantzel, Justin Romberg, and Karim Sabra. The abstract reads:
Source localization by matched-field processing (MFP) generally involves solving a number of computationally intensive partial differential equations. This paper introduces a technique that mitigates this computational workload by "compressing" these computations. Drawing on key concepts from the recently developed field of compressed sensing, it shows how a low-dimensional proxy for the Green's function can be constructed by backpropagating a small set of random receiver vectors. Then, the source can be located by performing a number of "short" correlations between this proxy and the projection of the recorded acoustic data in the compressed space. Numerical experiments in a Pekeris ocean waveguide are presented which demonstrate that this compressed version of MFP is as effective as traditional MFP even when the compression is significant. The results are particularly promising in the broadband regime where using as few as two random backpropagations per frequency performs almost as well as the traditional broadband MFP, but with the added benefit of generic applicability. That is, the computationally intensive backpropagations may be computed offline independently from the received signals, and may be reused to locate any source within the search grid area.
The code for this paper is here.
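To see the idea behind the "short" correlations, here is a toy sketch with random stand-in replica vectors instead of actual PDE-computed Green's functions (all sizes and names are my own invention, not the authors' setup):

```python
import numpy as np

rng = np.random.default_rng(1)
n_rx, n_grid, m = 64, 200, 24    # receivers, candidate source locations, sketch size

# Toy replica vectors standing in for the PDE-computed Green's functions.
G = rng.standard_normal((n_rx, n_grid)) + 1j * rng.standard_normal((n_rx, n_grid))
G /= np.linalg.norm(G, axis=0)

true_loc = 57
d = G[:, true_loc]                          # noiseless received field

Phi = rng.standard_normal((m, n_rx))        # random projections ("backpropagations")
R = Phi @ G                                 # compressed replicas, computed offline
surface = np.abs(R.conj().T @ (Phi @ d))    # short correlations in compressed space
```

The full matched-field ambiguity surface would be `np.abs(G.conj().T @ d)`, which peaks at the true location; the compressed surface approximates it with correlations of length m instead of n_rx, and the compressed replicas R can be reused for any received signal.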

As I was reading Singular Regularization of Inverse Problems: Bregman Distances and their Applications to Variational Frameworks with Singular Regularization Energies by Martin Benning, it occurred to me that structured norms already have a long history. And this is not about to change, as exemplified by the following paper: Gini Index as Sparsity Measure for Signal Reconstruction from Compressive Samples by Dornoosh Zonoobi, Ashraf A. Kassim, and Yedatore V. Venkatesh. The abstract reads:
Sparsity is a fundamental concept in compressive sampling of signals/images, which is commonly measured using the ℓ0 norm, even though, in practice, the ℓ1 or the ℓp (0 < p < 1) (pseudo-) norm is preferred. In this paper, we explore the use of the Gini index (GI), of a discrete signal, as a more effective measure of its sparsity for a significantly improved performance in its reconstruction from compressive samples. We also successfully incorporate the GI into a stochastic optimization algorithm for signal reconstruction from compressive samples and illustrate our approach with both synthetic and real signals/images.
Although the argument is not really about sparsity versus compressibility (much theoretical work shows error bounds for compressible signals), I for one welcome a new structured measure that captures some flavor of sparsity. Dornoosh Zonoobi tells me that the code implementing these examples should be available within a month or so.
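In the meantime, the Gini index itself is easy to compute. A minimal sketch, assuming the usual Hurley–Rickard definition (my own implementation, not the authors' code): sort the magnitudes, then take a weighted sum of the normalized coefficients. The index is 0 for a perfectly flat signal and approaches 1 as the signal becomes sparser.

```python
import numpy as np

def gini_index(x):
    """Gini index of |x|: 0 for flat signals, approaching 1 for sparse ones."""
    c = np.sort(np.abs(np.asarray(x, dtype=float)))   # ascending magnitudes
    n = c.size
    if c.sum() == 0:
        return 0.0
    k = np.arange(1, n + 1)
    return 1.0 - 2.0 * np.sum((c / c.sum()) * (n - k + 0.5) / n)
```

For example, a constant length-10 vector scores 0, while a 1-sparse length-10 vector scores 1 − 1/10 = 0.9. Unlike the ℓ0 count, the index is scale-invariant and varies smoothly with the coefficients, which is what makes it usable inside a stochastic optimizer.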

Here is the rest that I need to read:

A new reconstruction method for the Wigner function is reported for quantum tomography based on compressed sensing. By analogy with computed tomography, Wigner functions for some quantum states can be reconstructed with fewer measurements using this compressed sensing based method.

Kronecker Compressive Sensing by Marco F. Duarte and Richard G. Baraniuk. The abstract reads:
Compressive sensing (CS) is an emerging approach for the acquisition of signals having a sparse or compressible representation in some basis. While the CS literature has mostly focused on problems involving 1-D signals and 2-D images, many important applications involve multidimensional signals; the construction of sparsifying bases and measurement systems for such signals is complicated by their higher dimensionality. In this paper, we propose the use of Kronecker product matrices in CS for two purposes. First, such matrices can act as sparsifying bases that jointly model the structure present in all of the signal dimensions. Second, such matrices can represent the measurement protocols used in distributed settings. Our formulation enables the derivation of analytical bounds for sparse approximation of multidimensional signals and CS recovery performance as well as a means to evaluate novel distributed measurement schemes.
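The computational payoff of the Kronecker structure rests on the identity (A ⊗ B) vec(X) = vec(B X Aᵀ), which lets one apply a Kronecker measurement dimension by dimension without ever forming the large matrix. A quick numerical check (toy sizes of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = 6, 5
A = rng.standard_normal((4, n1))   # measurement operator along dimension 1
B = rng.standard_normal((3, n2))   # measurement operator along dimension 2
X = rng.standard_normal((n2, n1))  # a small 2-D "signal"

# (A ⊗ B) vec(X) = vec(B X Aᵀ), with column-major (Fortran-order) vectorization
lhs = np.kron(A, B) @ X.flatten(order="F")
rhs = (B @ X @ A.T).flatten(order="F")
```

Here the explicit Kronecker matrix is 12×30, but the right-hand side only ever touches the small factors; for realistic multidimensional signals this is the difference between feasible and infeasible storage.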

Discrete-to-discrete imaging models for computed tomography (CT) are becoming increasingly ubiquitous as the interest in iterative image reconstruction algorithms has heightened. Despite this trend, all the intuition for algorithm and system design derives from analysis of continuous-to-continuous models such as the X-ray and Radon transform. While the similarity between these models justifies some crossover, questions such as what sampling conditions are sufficient can be quite different for the two models. This sampling issue is addressed extensively in the first half of the article using singular value decomposition analysis for determining the sufficient number of views and detector bins. The question of full sampling for CT is particularly relevant to current attempts to adapt compressive sensing (CS) motivated methods to application in CT image reconstruction. The second half goes in depth on this subject and discusses the link between object sparsity and sufficient sampling for accurate reconstruction. In particular, it is pointed out that, because CS-motivated image reconstruction is object dependent, there is a need to consider the imaging task, so that test phantoms with a sparsity level similar to what might be encountered in practice are employed.
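The rank-counting idea behind the SVD analysis can be illustrated on a toy discrete model (a random Gaussian stand-in of my own choosing, not an actual CT projector): the number of independently determined coefficients is the number of non-negligible singular values, and it saturates at the number of unknowns no matter how many extra measurements are added.

```python
import numpy as np

rng = np.random.default_rng(5)
n_pix = 64                                     # unknowns in the discrete image model

def effective_rank(n_meas, tol=1e-10):
    """Count singular values above a relative tolerance for a toy system matrix."""
    A = rng.standard_normal((n_meas, n_pix))   # stand-in for a CT system matrix
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tol * s[0]))
```

With 32 rows the rank is 32, so a nontrivial null space remains and reconstruction is not unique without a prior such as sparsity; with 128 rows the rank saturates at 64. A real CT matrix is far from Gaussian, so its singular value spectrum decays gradually, which is exactly what the article's analysis quantifies.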
Mismatch and resolution in compressive imaging by Albert Fannjiang, Wenjing Liao. The abstract reads:
Highly coherent sensing matrices arise in discretization of continuum problems such as radar and medical imaging when the grid spacing is below the Rayleigh threshold, as well as in using highly coherent, redundant dictionaries as sparsifying operators. Algorithms (BOMP, BLOOMP) based on techniques of band exclusion and local optimization are proposed to enhance Orthogonal Matching Pursuit (OMP) and deal with such coherent sensing matrices. BOMP and BLOOMP have a provable performance guarantee of reconstructing sparse, widely separated objects independent of the redundancy, and have a sparsity constraint and computational cost similar to OMP's. A numerical study demonstrates the effectiveness of BLOOMP for compressed sensing with highly coherent, redundant sensing matrices.
This looks interesting! I wish there were code somewhere to try it out.
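Pending a release, here is my own toy sketch of the band-exclusion idea grafted onto OMP, tried on a deliberately coherent dictionary of shifted Gaussian pulses (this is my reading of the abstract, not the authors' code): once an atom is selected, its immediate neighbors are barred from selection, which stops the greedy step from repeatedly picking near-duplicates of the same object.

```python
import numpy as np

def bomp(A, b, k, band=2):
    """OMP with band exclusion: never pick a column within `band` of one
    already selected, countering the coherence of neighboring columns."""
    n = A.shape[1]
    support, r = [], b.copy()
    for _ in range(k):
        corr = np.abs(A.T @ r)
        for s in support:
            corr[max(0, s - band):min(n, s + band + 1)] = 0.0  # band exclusion
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ coef                           # new residual
    x = np.zeros(n)
    x[support] = coef
    return x, sorted(support)

# Highly coherent dictionary: unit-norm Gaussian pulses on a fine grid
# (adjacent columns have inner products near 0.9).
n = 40
t = np.arange(n)
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 1.5 ** 2))
A /= np.linalg.norm(A, axis=0)

x0 = np.zeros(n)
x0[5], x0[25] = 1.0, -0.8          # widely separated objects
xhat, support = bomp(A, A @ x0, 2)
```

This only captures the band-exclusion part; BLOOMP adds a local optimization pass on the selected support that the sketch omits.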

A Remark on the Lasso and the Dantzig Selector by Yohann de Castro. The abstract reads:
Numerous authors have established a connection between the Compressed Sensing problem without noise and the estimation of the Gelfand widths. This article shows that this connection is still true in the noisy case. Indeed, we investigate the lasso and the Dantzig selector in terms of the distortion of the design. The latter measures how far the intersection between the kernel of the design matrix and the unit ℓ1-ball is from an ℓ2-ball. In particular, we exhibit the weakest condition needed to get oracle inequalities in terms of the s-best term approximation.
Compressive Imaging of Subwavelength Structures II. Periodic Rough Surfaces by Albert Fannjiang, Hsiao-Chieh Tseng. The abstract reads:
A compressed sensing scheme for near-field imaging of corrugations with relatively sparse Fourier components is proposed. The scheme employs random sparse measurement of the near field to recover the angular spectrum of the scattered field. It is shown heuristically and numerically that under the Rayleigh hypothesis the angular spectrum is compressible and amenable to compressed sensing techniques. Iteration schemes are developed for recovering the surface profile from the angular spectrum. The proposed nonlinear least squares in the Fourier basis produces accurate reconstructions even when the Rayleigh hypothesis is known to be false.

We study the inversion of a random field from pointwise measurements collected by a sensor network. We assume that the field has a sparse representation in a known basis. To illustrate the approach, consider the inversion of an acoustic field created by the superposition of a discrete number of propagating noisy acoustic sources. Our method combines compressed sensing (sparse reconstruction by ℓ1-constrained optimization) with distributed average consensus (mixing the pointwise sensor measurements by local communication among the sensors). The paper describes the approach and demonstrates its good performance with synthetic data for several scenarios of practical interest.
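The distributed average consensus building block is simple to sketch: each node repeatedly replaces its value with a mix of its own value and its neighbors', and the whole network converges to the global average using only local communication. A minimal version on a toy ring topology (the graph, step size, and values are my own choices):

```python
import numpy as np

def consensus(values, neighbors, eps=0.2, iters=300):
    """Distributed average consensus: each node mixes with its neighbors only."""
    x = np.asarray(values, dtype=float).copy()
    for _ in range(iters):
        x_new = x.copy()
        for i, nbrs in enumerate(neighbors):
            x_new[i] += eps * sum(x[j] - x[i] for j in nbrs)  # local exchange
        x = x_new
    return x

# ring of 8 nodes, each talking only to its two immediate neighbors
n = 8
ring = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
vals = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
out = consensus(vals, ring)   # every entry converges to the average, 3.875
```

Running the same iteration on inner products of the local measurements is what lets every sensor assemble the quantities needed for the ℓ1 inversion without a fusion center.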

A Distributed Sensor Fusion Algorithm for the Inversion of Sparse Fields by Aurora Schmidt and Jose M. F. Moura. The abstract reads:
The objective of the Field Inversion by Consensus and Compressed Sensing (FICCS) algorithm is to use local communications for the distributed estimation of a field, observed by a network of sensors. We use a variation of distributed average consensus algorithms to create tailored linear projections that lead to accurate ℓ1-inversions of the sensed field. By spreading information throughout the network, we eliminate the need for a fusion center. To demonstrate the algorithm, we use the example of localizing multiple discrete acoustic sources with knowledge of the propagation medium. We show noiseless and noisy inversion performance in simulation as a function of the number of observation projections computed and discuss the scalability of the approach with network size.

This paper addresses the problem of video anomaly recovery from a sequence of spectrally compressed video frames. Analysis of anomalies occurring in both time and spectrum is important in video surveillance applications. We present a methodology for the recovery of anomalies such as moving objects and their spectral signatures from spectrally compressed video. The spectrally compressed video frames are obtained by using a Coded Aperture Snapshot Spectral Imaging (CASSI) system. The CASSI system encodes a 3-D data cube containing both 2-D spatial information and spectral information in a single 2-D measurement. In the proposed methodology, we use the spectrally compressed video frames as columns of a large data matrix. Principal Component Pursuit (PCP) is then used to decompose this matrix into the stationary background and a sparse matrix capturing the anomalies in the foreground. The sparse matrix is then used to recover the spectral information of the objects of interest. An example for the recovery of video anomalies in a 3-channel spectral video system (RGB) is presented.
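Principal Component Pursuit itself is easy to prototype. A minimal ADMM sketch on a toy low-rank-plus-sparse matrix (my own implementation with the standard default parameters from the robust PCA literature, not the authors' code):

```python
import numpy as np

def pcp(M, lam=None, mu=None, iters=800):
    """Principal Component Pursuit via ADMM: split M into low-rank L and sparse S."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt   # singular-value thresholding
        S = shrink(M - L + Y / mu, lam / mu)          # entrywise shrinkage
        Y += mu * (M - L - S)                         # dual ascent on M = L + S
    return L, S

# toy "video" data matrix: a rank-1 static background plus a few large anomalies
rng = np.random.default_rng(6)
L0 = np.outer(rng.standard_normal(30), rng.standard_normal(30)) / np.sqrt(30)
S0 = np.zeros((30, 30))
S0.flat[rng.choice(900, size=15, replace=False)] = rng.choice([-5.0, 5.0], size=15)
L, S = pcp(L0 + S0)
```

In the paper's setting each column of M is a compressed CASSI frame, so L captures the static background across frames and S isolates the moving anomalies.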

Spectral imaging is of interest in many applications, including wide-area airborne surveillance, remote sensing, and tissue spectroscopy. Coded aperture spectral snapshot imaging (CASSI) provides an efficient mechanism to capture a 3D spectral cube with a single shot 2D measurement. CASSI uses a focal plane array (FPA) measurement of a spectrally dispersed, aperture-coded source. The spectral cube is then attained using a compressive sensing reconstruction algorithm. In this paper, we explore a new approach referred to as random convolution snapshot spectral imaging (RCSSI). It is based on FPA measurements of spectrally dispersed coherent sources that have been randomly convolved by a spatial light modulator. The new method, based on the theory of compressive sensing via random convolutions, is shown to outperform traditional CASSI systems in terms of the PSNR of the reconstructed spectral image cubes.
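Random convolution measurements are cheap to simulate with FFTs: circularly convolve the signal with a random pulse, then keep a random subset of samples. A toy sketch (sizes and names are my own; a real RCSSI system does the convolution optically with a spatial light modulator):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 128, 40

h = rng.choice([-1.0, 1.0], size=n)                   # random pulse (±1 sequence)
keep = np.sort(rng.choice(n, size=m, replace=False))  # random subsampling pattern

def sense(x):
    """Circularly convolve x with the random pulse, then keep m samples."""
    return np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real[keep]

# The equivalent dense sensing matrix: m rows of a randomly signed circulant.
A = np.stack([sense(np.eye(n)[:, i]) for i in range(n)], axis=1)
```

The operator costs O(n log n) per application via the FFT, yet behaves like a random sensing matrix for sparse recovery, which is the appeal of random convolutions over unstructured Gaussian measurements.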

Variable Selection in High Dimensions with Random Designs and Orthogonal Matching Pursuit by Antony Joseph. The abstract reads:
The performance of Orthogonal Matching Pursuit (OMP) for variable selection is analyzed for random designs. When contrasted with the deterministic case, since the performance is here measured after averaging over the distribution of the design matrix, one can have far less stringent sparsity constraints on the coefficient vector. We demonstrate that for exactly sparse vectors, the performance of OMP is similar to known results on the Lasso algorithm [IEEE Trans. Inform. Theory 55 (2009) 2183–2202]. Moreover, variable selection under a more relaxed sparsity assumption on the coefficient vector, whereby one has only control on the ℓ1 norm of the smaller coefficients, is also analyzed. As a consequence of these results, we also show that the coefficient estimate satisfies strong oracle type inequalities.
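A bare-bones OMP is only a few lines; here is a sketch on a random Gaussian design of the kind analyzed in the paper (the dimensions and test signal are my own choices): greedily pick the column most correlated with the residual, then re-fit by least squares on the selected support.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k columns of A to explain y."""
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))            # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef                   # orthogonalized residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# random Gaussian design with unit-norm columns, 3-sparse target
rng = np.random.default_rng(4)
A = rng.standard_normal((50, 100))
A /= np.linalg.norm(A, axis=0)
x0 = np.zeros(100)
x0[[7, 33, 70]] = [2.0, -1.5, 1.0]
xhat = omp(A, A @ x0, 3)
```

With the randomness averaged over the design, as in the paper's analysis, this greedy selection succeeds under far milder sparsity conditions than worst-case coherence arguments would suggest.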

Liked this entry? Subscribe to the Nuit Blanche feed, there's more where that came from.

Credit: This image of Earth (on the left) and the moon (on the right) was taken by NASA's Juno spacecraft on Aug. 26, 2011, when the spacecraft was about 6 million miles (9.66 million kilometers) away. It was taken by the spacecraft's onboard camera, JunoCam. Image credit: NASA/JPL-Caltech/SwRI
