
Thursday, May 21, 2009

CS: A long post.




Today, we have a series of papers found through Google and at the Rice repository. I'll come back to yesterday's entry tomorrow. In the meantime, we have:

Convergence Analysis of Iterative Thresholding Algorithms by Arian Maleki. The abstract reads:

There is a recent surge of interest in developing algorithms for finding sparse solutions of underdetermined systems of linear equations y = Φx. In many applications, extremely large problem sizes are envisioned, with at least tens of thousands of equations and hundreds of thousands of unknowns. For such problem sizes, low computational complexity is paramount. The best studied ℓ1 minimization algorithm is not fast enough to fulfill this need. Iterative thresholding algorithms have been proposed to address this problem. In this paper we analyze two of these algorithms theoretically, and give sufficient conditions under which they recover the sparsest solution.
and the attendant Tech Report.
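
Since the paper is about iterative thresholding, a minimal numpy sketch of plain iterative soft thresholding (ISTA) for y = Φx may help fix ideas; the problem size, sparsity level, and regularization parameter below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Component-wise soft thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Phi, y, lam=0.05, n_iter=500):
    """Iterative soft thresholding for min 0.5||y - Phi x||^2 + lam ||x||_1."""
    n = Phi.shape[1]
    x = np.zeros(n)
    tau = 1.0 / np.linalg.norm(Phi, 2) ** 2   # step size bounded by 1/||Phi||^2
    for _ in range(n_iter):
        x = soft_threshold(x + tau * Phi.T @ (y - Phi @ x), tau * lam)
    return x

# Toy underdetermined system: 20-sparse x0 in R^1000, 250 Gaussian measurements.
rng = np.random.default_rng(0)
m, n, k = 250, 1000, 20
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x0
x_hat = ista(Phi, y)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```

Maleki's analysis concerns when iterations of this kind actually recover the sparsest solution; the sketch only shows the mechanics of the update.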

When reading the next paper I could not help recalling Stéphane Mallat's presentation at ECCV'08 (Sparse Super-Resolution with Space Matching Pursuits by Guoshen Yu and Stéphane Mallat). The paper is Sparse Linear Representation by Halyun Jeong and Young-Han Kim. The abstract reads:
This paper studies the question of how well a signal can be represented by a sparse linear combination of reference signals from an overcomplete dictionary. When the dictionary size is exponential in the dimension of the signal, the exact characterization of the optimal distortion is given as a function of the dictionary size exponent and the number of reference signals for the linear representation. Roughly speaking, every signal is sparse if the dictionary size is exponentially large, no matter how small the exponent is. Furthermore, an iterative method similar to matching pursuit that successively finds the best reference signal at each stage gives asymptotically optimal representations. This method is essentially equivalent to successive refinement for multiple descriptions and provides a simple alternative proof of the successive refinability of white Gaussian sources.
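
The "iterative method similar to matching pursuit" can be pictured with a plain matching pursuit sketch over a large random dictionary; the dictionary size and number of stages below are arbitrary illustrative choices, not the regime analyzed in the paper.

```python
import numpy as np

def matching_pursuit(D, x, n_stages):
    """Greedy matching pursuit: at each stage pick the reference signal
    (column of D) most correlated with the current residual."""
    residual = x.copy()
    approx = np.zeros_like(x)
    for _ in range(n_stages):
        corr = D.T @ residual
        j = int(np.argmax(np.abs(corr)))
        approx += corr[j] * D[:, j]
        residual -= corr[j] * D[:, j]
    return approx, residual

# Toy overcomplete dictionary of unit-norm Gaussian reference signals.
rng = np.random.default_rng(6)
dim, dict_size = 32, 4096
D = rng.standard_normal((dim, dict_size))
D /= np.linalg.norm(D, axis=0)
x = rng.standard_normal(dim)
approx, residual = matching_pursuit(D, x / np.linalg.norm(x), n_stages=4)
print("distortion:", np.linalg.norm(residual) ** 2)
```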
The Residual Method for Regularizing Ill-Posed Problems by Markus Grasmair, Markus Haltmeier, Otmar Scherzer. The abstract reads:
Although the residual method, or constrained regularization, is frequently used in applications, a detailed study of its properties is still missing. In particular, the questions of stability and convergence rates have hardly been treated in the literature. This sharply contrasts the progress of the theory of Tikhonov regularization, where for instance the notion of the Bregman distance has recently led to a series of new results for regularization in Banach spaces. The present paper intends to bridge the gap between the existing theories as far as possible. We develop a stability and convergence theory for the residual method in general topological spaces. In addition, we prove convergence rates in terms of (generalized) Bregman distances. Exemplarily, we apply our theory to compressed sensing. Here, we show the well-posedness of the method and derive convergence rates both for convex and non-convex regularization under rather weak conditions. It is for instance shown that in the non-convex setting the linear convergence of the regularized solutions already follows from the sparsity of the true solution and the injectivity of the (linear) operator in the equation to be solved.
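
For the compressed sensing instance of the residual method (constrained regularization), one minimizes the regularizer subject to a bound on the data misfit. A minimal sketch using cvxpy, assuming an ℓ1 regularizer and a noise level delta chosen purely for illustration:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
m, n, k = 80, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
delta = 0.01                                  # assumed bound on the data misfit
y = A @ x0 + (0.5 * delta / np.sqrt(m)) * rng.standard_normal(m)

# Residual method: minimize the regularizer subject to ||Ax - y|| <= delta.
x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm1(x)), [cp.norm(A @ x - y, 2) <= delta])
problem.solve()
print("relative error:", np.linalg.norm(x.value - x0) / np.linalg.norm(x0))
```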

Analyzing Least Squares and Kalman Filtered Compressed Sensing by Namrata Vaswani. The abstract reads:
In recent work, we studied the problem of causally reconstructing time sequences of spatially sparse signals, with unknown and slow time-varying sparsity patterns, from a limited number of linear “incoherent” measurements. We proposed a solution called Kalman Filtered Compressed Sensing (KF-CS). The key idea is to run a reduced order KF only for the current signal’s estimated nonzero coefficients’ set, while performing CS on the Kalman filtering error to estimate new additions, if any, to the set. KF may be replaced by Least Squares (LS) estimation and we call the resulting algorithm LS-CS. In this work, (a) we bound the error in performing CS on the LS error and (b) we obtain the conditions under which the KF-CS (or LS-CS) estimate converges to that of a genie-aided KF (or LS), i.e. the KF (or LS) which knows the true nonzero sets.
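
A toy sketch of the LS-CS idea: at each time step, run least squares on the currently estimated nonzero set, then look in the LS residual for new additions to the set. Here a simple correlate-and-threshold step stands in for the CS step on the residual, and all sizes and thresholds are made-up illustrative values.

```python
import numpy as np

def ls_on_support(Phi, y, support):
    """Least-squares estimate restricted to the estimated nonzero set."""
    x = np.zeros(Phi.shape[1])
    if support:
        cols = sorted(support)
        x[cols], *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
    return x

def ls_cs_step(Phi, y, support, add_thresh=0.6, del_thresh=1e-3):
    """One LS-CS time step: LS on the support, detect additions from the LS residual."""
    x = ls_on_support(Phi, y, support)
    residual = y - Phi @ x
    # Stand-in for CS on the residual: add columns strongly correlated with it.
    corr = np.abs(Phi.T @ residual)
    support = support | set(np.flatnonzero(corr > add_thresh))
    x = ls_on_support(Phi, y, support)
    # Prune entries whose coefficients have become negligible.
    support = {i for i in support if abs(x[i]) > del_thresh}
    return ls_on_support(Phi, y, support), support

# Toy time sequence with a slowly growing support.
rng = np.random.default_rng(2)
m, n = 60, 200
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
support_true = list(rng.choice(n, 5, replace=False))
support_est = set()
for t in range(5):
    if t > 0:                                 # one new nonzero appears per step
        support_true.append(int(rng.integers(n)))
    x_true = np.zeros(n); x_true[support_true] = 1.0
    y = Phi @ x_true
    x_hat, support_est = ls_cs_step(Phi, y, support_est)
    print(t, np.linalg.norm(x_hat - x_true))
```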

A Convergent Overlapping Domain Decomposition Method for Total Variation Minimization by Massimo Fornasier, Andreas Langer, Carola-Bibiane Schönlieb. The abstract reads:
This paper is concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles.
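
To picture the sequential overlapping decomposition, here is a toy sketch that alternates a few gradient steps of a smoothed TV denoising functional over two overlapping image strips. The smoothing, step size, and strip layout are assumptions for illustration only; the authors' algorithm handles the nonsmooth TV term directly and comes with a convergence proof.

```python
import numpy as np

def smoothed_tv_gradient(u, f, alpha, eps=1e-2):
    """Gradient of 0.5||u - f||^2 + alpha * sum sqrt(|grad u|^2 + eps), periodic BCs."""
    ux = np.roll(u, -1, axis=1) - u
    uy = np.roll(u, -1, axis=0) - u
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
    px, py = ux / mag, uy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return (u - f) - alpha * div

# Noisy piecewise-constant test image.
rng = np.random.default_rng(3)
f = np.zeros((64, 64)); f[16:48, 16:48] = 1.0
f += 0.1 * rng.standard_normal(f.shape)

# Two overlapping vertical strips covering the image domain.
masks = [np.s_[:, :40], np.s_[:, 24:]]

u, alpha, step = f.copy(), 0.1, 0.1
for outer in range(30):
    for sub in masks:                          # sequential sweep over subdomains
        for _ in range(5):                     # a few gradient steps per subdomain
            g = smoothed_tv_gradient(u, f, alpha)
            u[sub] -= step * g[sub]
print("data misfit after denoising:", np.linalg.norm(u - f))
```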
Found in the Rice Compressive Sensing repository:

Image Fusion by Compressive Sensing by Atul Divekar, Okan Ersoy. The abstract reads:
We propose a new method of image fusion that utilizes the recently developed theory of compressive sensing. Compressive sensing indicates that a signal that is sparse in an appropriate set of basis vectors may be recovered almost exactly from a few samples via l1-minimization if the system matrix satisfies some conditions. These conditions are satisfied with high probability for Gaussian-like vectors. Since zero-mean image patches satisfy Gaussian statistics, they are suitable for compressive sensing. We create a dictionary that relates high resolution image patches from a panchromatic image to the corresponding filtered low resolution versions. We first propose two algorithms that directly use this dictionary and its low resolution version to construct the fused image. To reduce the computational cost of l1-minimization, we use Principal Component Analysis to identify the orthogonal “modes” of co-occurrence of the low and high resolution patches. Any pair of co-occurring high and low resolution patches with similar statistical properties to the patches in the dictionary is sparse with respect to the principal component bases. Given a patch from a low resolution multispectral band image, we use l1-minimization to find the sparse representation of the low resolution patch with respect to the sample-domain principal components. Compressive sensing suggests that this is the same sparse representation that a high resolution image would have with respect to the principal components. Hence the sparse representation is used to combine the high resolution principal components to produce the high resolution fused image. This method adds high-resolution detail to a low-resolution multispectral band image keeping the same relationship that exists between the high and low resolution versions of the panchromatic image. This reduces the spectral distortion of the fused images and produces results superior to standard fusion methods such as the Brovey transform and principal component analysis.
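
A rough sketch of the pipeline described in the abstract, on synthetic patches: joint PCA of paired low/high resolution patches, ℓ1 sparse coding of a low resolution patch against the low resolution components, and reuse of the same coefficients on the high resolution components. Everything below (the blur operator, patch sizes, number of components) is a made-up stand-in, not the authors' setup.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(D, p, lam=0.01, n_iter=300):
    """ISTA sparse coding: min 0.5||p - D c||^2 + lam ||c||_1."""
    c = np.zeros(D.shape[1])
    tau = 1.0 / np.linalg.norm(D, 2) ** 2
    for _ in range(n_iter):
        c = soft_threshold(c + tau * D.T @ (p - D @ c), tau * lam)
    return c

# Toy paired dictionary: high-res patches (columns of H) and their filtered,
# reduced counterparts (columns of L); the blur operator is a random stand-in.
rng = np.random.default_rng(4)
n_samples, d_hi, d_lo = 500, 64, 16            # e.g. 8x8 patches reduced to 4x4
H = rng.standard_normal((d_hi, n_samples))
blur = rng.standard_normal((d_lo, d_hi)) / np.sqrt(d_hi)
L = blur @ H

# Joint PCA over stacked (low, high) patches gives paired "modes" of co-occurrence.
U, _, _ = np.linalg.svd(np.vstack([L, H]), full_matrices=False)
n_comp = 40
P_lo, P_hi = U[:d_lo, :n_comp], U[d_lo:, :n_comp]

# Sparse-code a new low-res patch against the low-res components, then reuse
# the same coefficients on the high-res components to synthesize detail.
h_new = rng.standard_normal(d_hi)
l_new = blur @ h_new
c = sparse_code(P_lo, l_new)
h_fused = P_hi @ c
print("nonzero coefficients:", np.count_nonzero(c), "of", n_comp)
```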

This paper is interesting in that it gives some insight into the ability of certain reconstruction solvers when the basis is not canonical: Improved Iterative Curvelet Thresholding for Compressed Sensing by Jianwei Ma. The abstract reads:
A new theory named compressed sensing, for simultaneous sampling and compression of signals, has become popular in the communities of signal processing, imaging and applied mathematics. In this paper, we present improved/accelerated iterative curvelet thresholding methods for compressed sensing reconstruction in the field of remote sensing. Some recent strategies, including Bioucas-Dias and Figueiredo's two-step iteration, Beck and Teboulle's fast method, and Osher et al.'s linearized Bregman iteration, are applied to iterative curvelet thresholding in order to accelerate convergence. Advantages and disadvantages of the proposed methods are studied using the so-called pseudo-Pareto curve in numerical experiments on single-pixel remote sensing and Fourier-domain random imaging.
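
The accelerations mentioned in the abstract can be sketched generically. Below is a FISTA-style (Beck and Teboulle) iterative thresholding in a transform domain, with an orthonormal DCT standing in for the curvelet frame used in the paper; sizes and the regularization parameter are illustrative assumptions.

```python
import numpy as np
from scipy.fftpack import dct, idct

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_transform_cs(Phi, y, lam=0.02, n_iter=200):
    """FISTA for min 0.5||y - Phi idct(c)||^2 + lam ||c||_1 (DCT stands in for curvelets)."""
    n = Phi.shape[1]
    tau = 1.0 / np.linalg.norm(Phi, 2) ** 2    # orthonormal transform keeps this Lipschitz bound
    c = z = np.zeros(n)
    t = 1.0
    for _ in range(n_iter):
        grad = dct(Phi.T @ (Phi @ idct(z, norm='ortho') - y), norm='ortho')
        c_next = soft_threshold(z - tau * grad, tau * lam)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = c_next + ((t - 1.0) / t_next) * (c_next - c)
        c, t = c_next, t_next
    return idct(c, norm='ortho')

# Toy signal that is sparse in the DCT domain, sampled with a Gaussian matrix.
rng = np.random.default_rng(5)
m, n = 100, 400
coeffs = np.zeros(n); coeffs[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)
x0 = idct(coeffs, norm='ortho')
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x0
x_hat = fista_transform_cs(Phi, y)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```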

And finally, here is a presentation given at the Caltech CS department by Steve Flammia of the Perimeter Institute. The abstract of the talk is:
Everybody hates tomography. And with good reason! Experimentalists hate it because it is inefficient and difficult. Theorists hate it because it isn't very "quantum", and hence boring. But in part because of our current lack of meso-scale quantum computers capable of convincingly performing non-classical calculations, and the need to certify quantum state preparation, tomography seems like a necessary evil. In this talk, I will attempt to banish quantum state tomography to the Hell of Lost Paradigms where it belongs. I hope to achieve this by introducing several heuristics for learning quantum states more efficiently, in some cases exponentially so. One such heuristic runs in polynomial time and outputs a polynomial-sized classical approximation of the state (in matrix product state form.) Another takes advantage of the fact that most interesting states are close to pure states to get an (essentially) quadratic speedup using ideas from compressed sensing. Both algorithms come with rigorous error bounds.


Credit: NASA, Atlantis releases Hubble.
