Friday, May 15, 2009

CS: RTIT, Dictionary Learning, Variable Splitting and Constrained Optimization solver.

Mark Neifeld, Jun Ke, and Pawan Baheti at the University of Arizona, and Peter Ilinykh and Pavel Shekhtmeyster at UCSD, are looking into reducing the size of the current single-pixel camera. Here are the presentation on the subject and the lab presentation on Compressive Imaging.



This is interesting: a reconstruction solver that can handle signals that are not so sparse: Turbo-like Iterative Thresholding for SAR Image Recovery from Compressed Measurements by Lei Yu, Yi Yang and Hong Sun. The abstract reads:

Compressive sensing (CS) has attracted many researchers since it offers a novel paradigm in which one can acquire signals at a sub-Nyquist rate, and it provides an implicit way of imaging and compressing simultaneously for SAR. This paper focuses on recovery algorithms for compressed measurements of SAR data. Considering the characteristics of SAR images, which are not very sparse in common representation spaces, a Turbo-like Iterative Thresholding algorithm based on a Residual shrinkage operator (RTIT) is proposed. This algorithm aims at recovering signals that cannot be very sparsely represented in some representation spaces. Some numerical results are presented to illustrate the performance of the proposed RTIT algorithm.
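The turbo/residual-shrinkage machinery is specific to the paper, but the iterative-thresholding backbone it builds on is easy to sketch. Below is a minimal plain iterative soft-thresholding (IST) loop for recovering a sparse x from y = Ax; the function name and all parameters are illustrative, and the RTIT residual operator itself is not reproduced.

```python
import numpy as np

def ist_recover(y, A, lam=0.05, n_iter=200):
    """Plain iterative soft-thresholding (IST) for y = A x with sparse x.
    This is the classical baseline that turbo-like schemes refine;
    the paper's residual shrinkage operator is not reproduced here."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step below 1/||A||^2 ensures convergence
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)     # gradient step on the data-fidelity term
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold
    return x

# Small compressed-sensing demo: 10-sparse signal, 80 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = rng.standard_normal(10)
y = A @ x_true
x_hat = ist_recover(y, A, lam=0.05, n_iter=500)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative error: {err:.3f}")
```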

In a different direction, the identification of a dictionary from training samples does not seem to need many samples, which is intriguing: Dictionary Identification - Sparse Matrix-Factorisation via ℓ1-Minimisation by Rémi Gribonval, Karin Schnass. The abstract reads:

This article treats the problem of learning a dictionary providing sparse representations for a given signal class, via ℓ1-minimisation. The problem can also be seen as factorising a d×N matrix Y = (y1 . . . yN), yn ∈ Rd, of training signals into a d×K dictionary matrix Φ and a K×N coefficient matrix X = (x1 . . . xN), xn ∈ RK, which is sparse. The exact question studied here is when a dictionary coefficient pair (Φ, X) can be recovered as a local minimum of a (nonconvex) ℓ1-criterion with input Y = ΦX. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialised to the case when the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples N grows, up to a logarithmic factor, only linearly with the signal dimension, i.e. N = C K log K, in contrast to previous approaches requiring combinatorially many samples.
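To make the setting concrete, here is a small numerical check of the local-minimum claim in the basis case (d = K): generate Bernoulli-Gaussian coefficients, form Y = ΦX, and verify that small random perturbations of the true basis (renormalized to unit columns) do not decrease the ℓ1 cost of the induced coefficients. All constants below are illustrative; the sample count is chosen comfortably above the N = C K log K scaling from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
d = K = 16          # basis case: square dictionary
N = 1000            # well above the C*K*log(K) sample threshold from the paper
s = 3               # expected nonzeros per training signal (Bernoulli-Gaussian)

# Ground-truth basis (random orthonormal, hence maximally incoherent)
Phi, _ = np.linalg.qr(rng.standard_normal((d, K)))
X = rng.standard_normal((K, N)) * (rng.random((K, N)) < s / K)
Y = Phi @ X

def l1_cost(D):
    """l1-criterion of a candidate basis D: for a basis, the coefficients
    reproducing Y are just D^{-1} Y, so the cost is their entrywise l1 norm.
    Columns are renormalized, as the criterion is defined on unit-norm atoms."""
    D = D / np.linalg.norm(D, axis=0)
    return np.abs(np.linalg.solve(D, Y)).sum()

# Local-minimum check: random small perturbations should not lower the cost
base = l1_cost(Phi)
worse = 0
for _ in range(20):
    P = 1e-3 * rng.standard_normal((d, K))
    if l1_cost(Phi + P) >= base:
        worse += 1
print(f"{worse}/20 random perturbations increased the l1 cost")
```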

Finally, Mario Figueiredo just let me know that he and his co-authors have developed algorithms close to the split-Bregman method that will eventually be used to solve CS problems. Now the question is: will they be faster than the Nesterov schemes? In the meantime, here they are:


Although much research has been devoted to the problem of restoring Poissonian images, namely in the fields of medical and astronomical imaging, applying the state of the art regularizers (such as those based on wavelets or total variation) to this class of images is still an open research front. This paper proposes a new image deconvolution approach for images with Poisson statistical models, with the following building blocks: (a) a standard regularization/MAP criterion, combining the Poisson log-likelihood with a regularizer (log-prior) is adopted; (b) the resulting optimization problem (which is difficult, since it involves a non-quadratic and non-separable term plus a non-smooth term) is transformed into an equivalent constrained problem, via a variable splitting procedure; (c) this constrained problem is addressed using an augmented Lagrangian framework. The effectiveness of the resulting algorithm is illustrated in comparison with current state-of-the-art methods.
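Step (b), the variable splitting, is what makes the Poisson term tractable: after introducing v = Ax, the augmented-Lagrangian subproblem in v decouples per pixel and has a closed-form solution. The derivation below is a standard consequence of the splitting described in the abstract, written here as an illustrative sketch, not the authors' code.

```python
import numpy as np

def poisson_prox(u, y, mu):
    """Closed-form solution of the per-pixel subproblem created by the
    split v = A x:   min_v  (v - y*log v) + (mu/2)*(v - u)^2.
    Setting the derivative 1 - y/v + mu*(v - u) to zero gives a quadratic
    in v, whose positive root is
        v = ((mu*u - 1) + sqrt((mu*u - 1)^2 + 4*mu*y)) / (2*mu)."""
    b = mu * u - 1.0
    return (b + np.sqrt(b * b + 4.0 * mu * y)) / (2.0 * mu)

# Sanity check: the returned v is a stationary point of the objective
u = np.array([0.5, 2.0, 10.0])
y = np.array([1.0, 3.0, 8.0])
mu = 2.0
v = poisson_prox(u, y, mu)
grad = 1.0 - y / v + mu * (v - u)
print(np.max(np.abs(grad)))
```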

Fast Frame-Based Image Deconvolution Using Variable Splitting and Constrained Optimization by Mario Figueiredo, Jose M. Bioucas-Dias, Manya V. Afonso. The abstract reads:

We propose a new fast algorithm for solving one of the standard formulations of frame-based image deconvolution: an unconstrained optimization problem, involving an $\ell_2$ data-fidelity term and a non-smooth regularizer. Our approach is based on using variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The resulting algorithm efficiently uses a regularized version of the Hessian of the data fidelity term, thus exploiting second-order information. Experiments on a set of image deblurring benchmark problems show that our algorithm is clearly faster than previous state-of-the-art methods.
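The recipe in the abstract (splitting, augmented Lagrangian, regularized Hessian) can be sketched on the generic problem min 0.5||y − Ax||² + λ||x||₁. The ADMM-style loop below is a toy illustration under those assumptions, not the authors' implementation; note how the x-update solves against the regularized Hessian AᵀA + μI, the second-order information the abstract refers to.

```python
import numpy as np

def admm_l1(y, A, lam=0.1, mu=1.0, n_iter=100):
    """Variable splitting + augmented Lagrangian (ADMM-style) sketch for
       min_x 0.5*||y - A x||^2 + lam*||x||_1.
    The split x = v turns this into a constrained problem; each x-update
    inverts the regularized Hessian (A^T A + mu*I)."""
    n = A.shape[1]
    H = A.T @ A + mu * np.eye(n)          # regularized Hessian, fixed across iterations
    Aty = A.T @ y
    x = np.zeros(n); v = np.zeros(n); d = np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(H, Aty + mu * (v - d))                    # quadratic subproblem
        v = np.sign(x + d) * np.maximum(np.abs(x + d) - lam / mu, 0)  # shrinkage step
        d = d + x - v                                                 # Lagrange-multiplier update
    return v

# Toy demo: 5-sparse signal, 60 noisy random measurements
rng = np.random.default_rng(2)
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = admm_l1(y, A, lam=0.02, mu=1.0, n_iter=200)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative error: {err:.3f}")
```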
Credit: Arianespace, Herschel launch; the end shows the fairing removal.
