Thursday, May 07, 2009

CS: CS of sparse objects, TV first order, RIP, Robust Principal Component Analysis, Sparse Representation, CS with Poisson noise

So it looks like Herschel has been postponed to May 14th (from May 6th). If you come to the blog, you'll see a countdown to the launch date on the top right-hand side.

For the first time, I have not read all these papers yet (I'll explain why later), but here they are for your own consumption. Enjoy:

First, we have a coherence result for the Helmholtz equation in Compressed Remote Sensing of Sparse Objects by Albert Fannjiang, Pengchong Yan and Thomas Strohmer. The abstract reads:

The linear inverse source and scattering problems are studied from the perspective of compressed sensing, in particular the idea that sufficient incoherence and sparsity guarantee uniqueness of the solution. By introducing the sensor as well as target ensembles, the maximum number of recoverable targets is proved to be at least proportional to the number of measurement data modulo a log-square factor with overwhelming probability.
Important contributions include the discoveries of the threshold aperture, consistent with the classical Rayleigh criterion, and the decoherence effect induced by random antenna locations. The predictions of the theorems are confirmed by numerical simulations.
I note the following, which appears several times in the numerical examples:

"The stability of BP with linear model mismatch has been analyzed in [17] for the case when the matrix satisfies the Restricted Isometry Property (RIP), although it is not known if RIP holds in our case....However, it is not known if any of our sensing matrices here satisfies RIP..."

I realize this is a problem because the RIP is, after all, a simple and elegant result, but when mentioning it in papers, should we all also mention that the RIP is only a sufficient condition, that it has been shown to be too restrictive, and that there are ways to numerically check other sufficient conditions (which are thought to be less restrictive)? I am just asking :-)
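To see why checking the RIP directly is rarely an option, note that computing the restricted isometry constant means examining every sparse support, which is combinatorial. Here is a minimal, purely illustrative sketch (a toy Gaussian matrix of my own choosing, nothing from the paper above) that brute-forces delta_k for a tiny problem:

```python
import itertools
import math
import numpy as np

def rip_constant(A, k):
    """Brute-force the restricted isometry constant delta_k of A, i.e. the
    smallest delta with (1 - delta)||x||^2 <= ||Ax||^2 <= (1 + delta)||x||^2
    for every k-sparse x. Exponential in n, hence only feasible for toy sizes."""
    n = A.shape[1]
    delta = 0.0
    for support in itertools.combinations(range(n), k):
        # Extreme squared singular values of the column submatrix.
        s = np.linalg.svd(A[:, list(support)], compute_uv=False)
        delta = max(delta, abs(s[0] ** 2 - 1.0), abs(s[-1] ** 2 - 1.0))
    return delta

rng = np.random.default_rng(0)
m, n, k = 10, 20, 2
A = rng.standard_normal((m, n)) / np.sqrt(m)   # columns roughly unit norm
print(f"delta_{k} ~ {rip_constant(A, k):.3f} over {math.comb(n, k)} supports")
```

Even at this toy size there are 190 supports to check; at realistic sizes the count is astronomical, which is exactly why numerically checkable alternative conditions are attractive.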

Total Variation Projection with First Order Schemes by Jalal Fadili and Gabriel Peyré. The abstract reads:
This article proposes a new algorithm to compute the projection on the set of images whose total variation is bounded by a constant. The projection is computed through a dual formulation that is solved by first order non-smooth optimization methods. This yields an iterative algorithm that computes iterative soft thresholding of the dual vector fields. This projection algorithm can then be used as a building block in a variety of applications such as solving inverse problems under a total variation constraint, or for texture synthesis. Numerical results show that our algorithm competes favorably with state-of-the-art TV projection methods to solve denoising, texture synthesis, inpainting and deconvolution problems.
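For those who want to play with the ingredients, the two building blocks the abstract alludes to are the discrete total variation and the soft-thresholding operator. Here is a minimal sketch of both (the generic pieces only, not the authors' algorithm):

```python
import numpy as np

def total_variation(u):
    """Anisotropic discrete TV of a 2-D image: l1 norm of forward differences."""
    return np.abs(np.diff(u, axis=1)).sum() + np.abs(np.diff(u, axis=0)).sum()

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrink every entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
img = rng.standard_normal((8, 8))
print("TV(img) =", total_variation(img))
v = np.array([-2.0, -0.3, 0.0, 0.7, 1.5])
print("soft_threshold(v, 0.5) =", soft_threshold(v, 0.5))
```

In the paper, soft thresholding of this kind is applied iteratively to the dual vector fields rather than to the image itself.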
Restricted isometry property of matrices with independent columns and neighborly polytopes by random sampling by Radosław Adamczak, Alexander E. Litvak, Alain Pajor, Nicole Tomczak-Jaegermann. The abstract reads:
This paper considers compressed sensing matrices and neighborliness of a centrally symmetric convex polytope generated by vectors $\pm X_1,...,\pm X_N\in\mathbb{R}^n$, ($N\ge n$). We introduce a class of random sampling matrices and show that they satisfy a restricted isometry property (RIP) with overwhelming probability. In particular, we prove that matrices with i.i.d. centered and variance 1 entries that satisfy uniformly a sub-exponential tail inequality possess this property RIP with overwhelming probability. We show that such "sensing" matrices are valid for the exact reconstruction process of $m$-sparse vectors via $\ell_1$ minimization with $m\le Cn/\log^2 (cN/n)$. The class of sampling matrices we study includes the case of matrices with columns that are independent isotropic vectors with log-concave densities. We deduce that if $K\subset \mathbb{R}^n$ is a convex body and $X_1,..., X_N\in K$ are i.i.d. random vectors uniformly distributed on $K$, then, with overwhelming probability, the symmetric convex hull of these points is an $m$-centrally-neighborly polytope with $m\sim n/\log^2 (cN/n)$.
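To make this concrete, here is a small sketch of the kind of experiment the abstract licenses: a sensing matrix with i.i.d. centered, variance-1 Laplace entries (a sub-exponential distribution), and exact recovery of a sparse vector by $\ell_1$ minimization written as a linear program. The $1/\sqrt{m}$ normalization and all the problem sizes are my own choices, not the paper's:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5

# i.i.d. centered Laplace entries: a sub-exponential distribution.
# Laplace(scale=b) has variance 2*b^2, so b = 1/sqrt(2) gives variance 1.
A = rng.laplace(scale=1 / np.sqrt(2), size=(m, n)) / np.sqrt(m)

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

# Basis pursuit (min ||x||_1 s.t. Ax = y) as an LP in the variables (x, t):
# minimize sum(t) subject to -t <= x <= t and Ax = y.
I = np.eye(n)
res = linprog(
    c=np.concatenate([np.zeros(n), np.ones(n)]),
    A_ub=np.block([[I, -I], [-I, -I]]), b_ub=np.zeros(2 * n),
    A_eq=np.hstack([A, np.zeros((m, n))]), b_eq=y,
    bounds=[(None, None)] * n + [(0, None)] * n,
)
x_hat = res.x[:n]
print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```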
Performance bounds on compressed sensing with Poisson noise by Rebecca Willett and Maxim Raginsky. The abstract reads:
This paper describes performance bounds for compressed sensing in the presence of Poisson noise when the underlying signal, a vector of Poisson intensities, is sparse or compressible (admits a sparse approximation). The signal-independent and bounded noise models used in the literature to analyze the performance of compressed sensing do not accurately model the effects of Poisson noise. However, Poisson noise is an appropriate noise model for a variety of applications, including low-light imaging, where sensing hardware is large or expensive, and limiting the number of measurements collected is important. In this paper, we describe how a feasible positivity-preserving sensing matrix can be constructed, and then analyze the performance of a compressed sensing reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log likelihood term and a penalty term which could be used as a measure of signal sparsity.
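As a rough illustration of the reconstruction criterion, here is a sketch that evaluates a negative Poisson log-likelihood plus a sparsity penalty for a candidate intensity vector. I use an $\ell_1$ penalty and a generic nonnegative random matrix for concreteness; the paper's actual penalty and its positivity-preserving matrix construction are more specific than this:

```python
import numpy as np

def poisson_cs_objective(f, A, y, lam, eps=1e-12):
    """Negative Poisson log-likelihood of counts y under intensities A @ f,
    plus an l1 sparsity penalty (constant log(y!) terms dropped)."""
    mu = A @ f
    if np.any(mu < 0):
        return np.inf                      # intensities must stay nonnegative
    nll = np.sum(mu - y * np.log(mu + eps))
    return nll + lam * np.sum(np.abs(f))

rng = np.random.default_rng(2)
m, n = 30, 60
A = rng.uniform(0.0, 1.0, size=(m, n)) / n  # nonnegative, so A @ f >= 0 for f >= 0
f_true = np.zeros(n)
f_true[rng.choice(n, 4, replace=False)] = 50.0
y = rng.poisson(A @ f_true)                  # Poisson counts, not additive noise
print("objective at truth:", poisson_cs_objective(f_true, A, y, lam=0.1))
print("objective at zero :", poisson_cs_objective(np.zeros(n), A, y, lam=0.1))
```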
Finally, found on the Rice page:


Compressed Sensing of time-varying signals by Daniele Angelosante, E. Grossi, and Georgios Giannakis. The abstract reads:

Compressed sensing (CS) lowers the number of measurements required for reconstruction and estimation of signals that are sparse when expanded over a proper basis. Traditional CS approaches deal with time-invariant sparse signals, meaning that, during the measurement process, the signal of interest does not exhibit variations. However, many signals encountered in practice are varying with time as the observation window increases (e.g., video imaging, where the signal is sparse and varies between different frames). The present paper develops CS algorithms for time-varying signals, based on the least-absolute shrinkage and selection operator (Lasso) that has been popular for sparse regression problems. The Lasso here is tailored for smoothing time-varying signals, which are modeled as vector valued discrete time series. Two algorithms are proposed: the Group-Fused Lasso, when the unknown signal support is time-invariant but signal samples are allowed to vary with time; and the Dynamic Lasso, for the general class of signals with time-varying amplitudes and support. Performance of these algorithms is compared with a sparsity-unaware Kalman smoother, a support-aware Kalman smoother, and the standard Lasso which does not account for time variations. The numerical results amply demonstrate the practical merits of the novel CS algorithms.
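To fix ideas, here is a sketch of the kind of objective such algorithms trade off: a per-time-step Lasso fit plus a fused $\ell_1$ penalty tying consecutive estimates together. The exact formulation and weighting in the paper may well differ; this is just the generic shape:

```python
import numpy as np

def dynamic_lasso_objective(X, A, Y, lam1, lam2):
    """Least-squares fit per time step, an l1 sparsity penalty, and a fused
    l1 penalty on consecutive estimates. X and Y hold one column per time step."""
    fit = 0.5 * np.sum((Y - A @ X) ** 2)
    sparsity = lam1 * np.sum(np.abs(X))
    smoothness = lam2 * np.sum(np.abs(np.diff(X, axis=1)))
    return fit + sparsity + smoothness

rng = np.random.default_rng(3)
m, n, T = 20, 50, 6
A = rng.standard_normal((m, n)) / np.sqrt(m)
X_true = np.zeros((n, T))
X_true[rng.choice(n, 3, replace=False), :] = 1.0   # time-invariant support here
Y = A @ X_true + 0.01 * rng.standard_normal((m, T))
print("objective at truth:", dynamic_lasso_objective(X_true, A, Y, 0.1, 0.1))
```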

Also, while not exactly CS, here is a continuation and expansion of the work on low-rank matrix recovery, building on the low-rank matrix completion results we have mentioned here before:

Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Matrices via Convex Optimization by John Wright, Yi Ma, Arvind Ganesh, and Shankar Rao. The abstract reads:
Principal component analysis is a fundamental operation in computational data analysis, with myriad applications ranging from web search to bioinformatics to computer vision and image analysis. However, its performance and applicability in real scenarios are limited by a lack of robustness to outlying or corrupted observations. This paper considers the idealized “robust principal component analysis” problem of recovering a low rank matrix A from corrupted observations D = A + E. Here, the error entries E can be arbitrarily large (modeling grossly corrupted observations common in visual and bioinformatic data), but are assumed to be sparse. We prove that most matrices A can be efficiently and exactly recovered from most error sign-and-support patterns, by solving a simple convex program. Our result holds even when the rank of A grows nearly proportionally (up to a logarithmic factor) to the dimensionality of the observation space and the number of errors E grows in proportion to the total number of entries in the matrix. A by-product of our analysis is the first proportional growth results for the related but somewhat easier problem of completing a low-rank matrix from a small fraction of its entries. We propose an algorithm based on iterative thresholding that, for large matrices, is significantly faster and more scalable than general-purpose solvers. We give simulations and real-data examples corroborating the theoretical results.
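Since the paper proposes an iterative-thresholding algorithm, here is a minimal sketch of one scheme in that spirit: alternate singular value thresholding (for the low-rank part) with entrywise soft thresholding (for the sparse part), driven by a dual variable. The parameters below are ad hoc and the scheme converges slowly; this is not the authors' tuned implementation:

```python
import numpy as np

def shrink(M, t):
    """Entrywise soft thresholding."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def rpca_iterative_thresholding(D, lam=None, tau=1e4, delta=0.9, n_iter=3000):
    """Split D ~ A + E into low-rank A and sparse E by alternating singular
    value thresholding (A) and soft thresholding (E), with a dual variable Y.
    A simple scheme in the spirit of the paper's algorithm, slow but short."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))
    Y = np.zeros_like(D)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        A = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # singular value thresholding
        E = shrink(Y, lam * tau)                         # entrywise soft thresholding
        Y = Y + delta * (D - A - E)                      # dual update
    return A, E

rng = np.random.default_rng(4)
n, r = 50, 2
A_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
E_true = np.zeros((n, n))
mask = rng.random((n, n)) < 0.05                 # ~5% grossly corrupted entries
E_true[mask] = 10.0 * rng.standard_normal(mask.sum())
A_hat, E_hat = rpca_iterative_thresholding(A_true + E_true)
print("relative error in A:", np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true))
```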


Sparse Representation for Computer Vision and Pattern Recognition by John Wright, Yi Ma, Julien Mairal, Guillermo Sapiro, Thomas Huang, and Shuicheng Yan. The abstract reads:
Techniques from sparse signal representation are beginning to see significant impact in computer vision, often on non-traditional applications where the goal is not just to obtain a compact high-fidelity representation of the observed signal, but also to extract semantic information. The choice of dictionary plays a key role in bridging this gap: unconventional dictionaries consisting of, or learned from, the training samples themselves provide the key to obtaining state-of-the art results and to attaching semantic meaning to sparse signal representations. Understanding the good performance of such unconventional dictionaries in turn demands new algorithmic and analytical techniques. This review paper highlights a few representative examples of how the interaction between sparse signal representation and computer vision can enrich both fields, and raises a number of open questions for further study.

On the Role of Sparse and Redundant Representation in Image Processing by Michael Elad, Mario Figueiredo, and Yi Ma. The abstract reads:
Much of the progress made in image processing in the past decades can be attributed to better modeling of image content, and a wise deployment of these models in relevant applications. This path of models spans from the simple l2-norm smoothness, through robust, thus edge preserving, measures of smoothness (e.g. total variation), and till the very recent models that employ sparse and redundant representations. In this paper, we review the role of this recent model in image processing, its rationale, and models related to it. As it turns out, the field of image processing is one of the main beneficiaries from the recent progress made in the theory and practice of sparse and redundant representations. We discuss ways to employ these tools for various image processing tasks, and present several applications in which state-of-the-art results are obtained.


Image credit: NASA/JPL-Caltech. Opportunity Rover, portion of Navcam mosaic (Sol 1825; PIA 1185).
