
Sunday, March 02, 2008

Compressed Sensing: RIP Implications for CS, Compressed Channel Sensing, Sparse Covariance Selection, and Practical Image Reconstruction Using the L_p Norm


Here is a new batch of interesting papers:

A technical report by Emmanuel Candès on The restricted isometry property and its implications for compressed sensing. The abstract reads:
It is now well-known that one can reconstruct sparse or compressible signals accurately from a very limited number of measurements, possibly contaminated with noise. This technique known as "compressed sensing" or "compressive sampling" relies on properties of the sensing matrix such as the restricted isometry property. In this note, we establish new results about the accuracy of the reconstruction from undersampled measurements which improve on earlier estimates, and have the advantage of being more elegant.
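To make this concrete, here is a minimal sketch (my own illustration, not Candès's code) of the recovery problem the RIP underwrites: l_1 minimization, also known as basis pursuit, recast as a linear program and solved with SciPy over a random Gaussian sensing matrix, a standard example of a matrix satisfying the RIP with high probability. The problem sizes are made-up assumptions.

```python
# A minimal basis-pursuit sketch: recover a sparse vector from m << n
# random Gaussian measurements by l1 minimization, written as a linear
# program. All sizes are illustrative.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 200, 60, 8                     # ambient dim, measurements, sparsity

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian matrices satisfy RIP w.h.p.
b = A @ x_true

# min ||x||_1  s.t.  Ax = b, as an LP with the split x = u - v, u, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```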

Compressed Channel Sensing by Waheed Bajwa, Jarvis Haupt, Gil Raz, Stephen Wright, and Robert Nowak. The abstract reads:
Reliable wireless communications often requires accurate knowledge of the underlying multipath channel. This typically involves probing of the channel with a known training waveform and linear processing of the input probe and channel output to estimate the impulse response. Many real-world channels of practical interest tend to exhibit impulse responses characterized by a relatively small number of nonzero channel coefficients. Conventional linear channel estimation strategies, such as the least squares, are ill-suited to fully exploiting the inherent low-dimensionality of these sparse channels. In contrast, this paper proposes sparse channel estimation methods based on convex/linear programming. Quantitative error bounds for the proposed schemes are derived by adapting recent advances from the theory of compressed sensing. The bounds come within a logarithmic factor of the performance of an ideal channel estimator and reveal significant advantages of the proposed methods over the conventional channel estimation schemes.
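As a back-of-the-envelope illustration of the paper's point (this is not the authors' code), one can compare plain least squares against an l_1-penalized fit on a synthetic sparse channel. The l_1 subproblem is solved below with a hand-rolled iterative soft-thresholding (ISTA) loop; all sizes, the noise level, and the penalty weight are made-up assumptions.

```python
# Sparse channel estimation sketch: least squares vs. an l1-penalized fit
# solved by iterative soft thresholding (ISTA).
import numpy as np

rng = np.random.default_rng(1)
n_taps, n_probe, k = 64, 48, 4           # channel length, probe length, active taps

h_true = np.zeros(n_taps)
h_true[rng.choice(n_taps, k, replace=False)] = rng.standard_normal(k)

# Linear model y = X h + noise, with X built from a known training probe
X = rng.standard_normal((n_probe, n_taps)) / np.sqrt(n_probe)
y = X @ h_true + 0.01 * rng.standard_normal(n_probe)

# Conventional least squares ignores sparsity (underdetermined here)
h_ls = np.linalg.lstsq(X, y, rcond=None)[0]

# ISTA for  min 0.5 * ||y - Xh||^2 + lam * ||h||_1
lam = 0.01
step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
h = np.zeros(n_taps)
for _ in range(500):
    g = X.T @ (X @ h - y)                # gradient of the quadratic term
    h = h - step * g
    h = np.sign(h) * np.maximum(np.abs(h) - step * lam, 0.0)  # soft threshold

print("LS error:", np.linalg.norm(h_ls - h_true))
print("l1 error:", np.linalg.norm(h - h_true))
```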
Of related interest in the area of subsampling and the use of the l_1 norm:

Onureena Banerjee, Laurent El Ghaoui, and Alexandre d'Aspremont wrote Model Selection Through Sparse Maximum Likelihood Estimation. The abstract reads:
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added ℓ1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive ℓ1-norm penalized regression. Our second algorithm, based on Nesterov’s first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright and Jordan [2006]), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for the binary case. We test our algorithms on synthetic data, as well as on gene expression and senate voting records data.
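For the curious, this ℓ1-penalized maximum likelihood problem can nowadays be solved off the shelf: the sketch below uses scikit-learn's GraphicalLasso, which tackles the same penalized Gaussian likelihood (a descendant of this line of work, not the authors' implementation). The chain-graph data are a made-up illustration.

```python
# Sparse Gaussian graphical model sketch using scikit-learn's GraphicalLasso.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(2)

# Sparse ground-truth precision matrix (tridiagonal, hence a chain graph)
p = 20
prec = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(p), cov, size=500)

model = GraphicalLasso(alpha=0.05).fit(X)
est_prec = model.precision_

# Nonzeros of the estimated precision matrix encode conditional dependence
print("estimated edges:", np.count_nonzero(np.triu(est_prec, k=1)))
```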
They also wrote First-Order Methods for Sparse Covariance Selection. The abstract reads:

Given a sample covariance matrix, we solve a maximum likelihood problem penalized by the number of nonzero coefficients in the inverse covariance matrix. Our objective is to find a sparse representation of the sample data and to highlight conditional independence relationships between the sample variables. We first formulate a convex relaxation of this combinatorial problem, we then detail two efficient first-order algorithms with low memory requirements to solve large-scale, dense problem instances.

The related COVSEL source code can be found here.
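To give a feel for what a first-order method for this problem looks like, here is a rough proximal-gradient sketch on the penalized likelihood, in the spirit of the paper but not COVSEL itself; the step size, eigenvalue safeguard, and iteration count are all ad hoc choices of mine.

```python
# Rough first-order sketch: proximal gradient on
#   -log det(X) + tr(S X) + lam * ||X||_1   over the precision matrix X.
import numpy as np

def soft(M, t):
    """Entrywise soft thresholding, the prox of t * ||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def sparse_covsel(S, lam=0.1, step=0.1, iters=500, eig_floor=1e-4):
    p = S.shape[0]
    X = np.eye(p)                         # start from the identity (PD)
    for _ in range(iters):
        grad = S - np.linalg.inv(X)       # gradient of the smooth part
        X = soft(X - step * grad, step * lam)
        X = (X + X.T) / 2                 # keep the iterate symmetric
        # crude safeguard: push eigenvalues back above a small floor
        w, V = np.linalg.eigh(X)
        X = (V * np.maximum(w, eig_floor)) @ V.T
    return X

rng = np.random.default_rng(3)
data = rng.standard_normal((200, 10))
S = np.cov(data, rowvar=False)
X_hat = sparse_covsel(S)
print("nonzero off-diagonals:", np.count_nonzero(np.triu(X_hat, 1)))
```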

In the area of subsampling and the use of the L_p norm, Emil Sidky, Ingrid Reiser, Robert Nishikawa, Xiaochuan Pan, Rick Chartrand, Daniel Kopans, and Richard H. Moore produced Practical iterative image reconstruction in digital breast tomosynthesis by non-convex TpV optimization. The abstract reads:
Digital breast tomosynthesis (DBT) is a rapidly developing imaging modality that gives some tomographic information for breast cancer screening. The effectiveness of standard mammography can be limited by the presence of overlapping structures in the breast. A DBT scan, consisting of a limited number of views covering a limited arc projecting the breast onto a fixed flat-panel detector, involves only a small modification of digital mammography, yet DBT yields breast image slices with reduced interference from overlapping breast tissues. We have recently developed an iterative image reconstruction algorithm for DBT based on image total variation (TV) minimization that improves on EM in that the resulting images have fewer artifacts and there is no need for additional regularization. In this abstract, we present the total p-norm variation (TpV) image reconstruction algorithm. TpV has the advantages of our previous TV algorithm, while improving substantially on the efficiency. Results for the TpV on clinical data are shown and compared with EM.
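To give a flavor of the TpV idea without the tomography machinery, here is a toy sketch of mine: gradient descent on a smoothed total p-norm variation with p < 1, applied to denoising a synthetic phantom. This illustrates the penalty only, not the authors' DBT reconstruction algorithm, and every parameter is a made-up assumption.

```python
# Toy TpV sketch: denoise a 2-D image by gradient descent on
#   0.5 * ||u - f||^2 + lam * sum (|grad u|^2 + eps)^(p/2)   with p < 1.
import numpy as np

def tpv_denoise(f, lam=0.1, p=0.8, eps=1e-2, step=0.05, iters=300):
    u = f.copy()
    for _ in range(iters):
        # forward differences with periodic boundaries
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        w = (ux**2 + uy**2 + eps) ** (p / 2 - 1)   # edge-preserving weight
        # flux = weight * gradient; its divergence drives the smoothing
        px, py = w * ux, w * uy
        div = px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)
        u -= step * ((u - f) - lam * p * div)
    return u

rng = np.random.default_rng(4)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                          # piecewise-constant phantom
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
restored = tpv_denoise(noisy)
print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
print("restored MSE:", np.mean((restored - clean) ** 2))
```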

Photo credit: NASA, Atlantis landing in Florida on Feb. 20, 2008 at 9:07 a.m. EST.
