
Monday, November 01, 2010

CS: Graphical Models for CS, The Effect of Spatial Coupling on Compressive Sensing, Takens' Embedding, Super Greedy Algo, Chaotic Sensing Matrix for Compressive Sensing and abstracts of a past meeting

Today, we have a fresh arrival of new papers/preprints and the abstracts of a conference that took place in Germany back in October. The first paper shows us how 'fast' these new AMP solvers are (I'd like to get my hands on one of those); then we have a paper that seems to cross the Donoho-Tanner phase transition. Then we have some results on Takens' embedding, a super greedy algorithm (that might take a stab at overcomplete dictionaries) and finally some papers that use chaotic systems to produce measurement matrices that seem well suited for compressive sensing.

This paper surveys recent work in applying ideas from graphical models and message passing algorithms to solve large scale regularized regression problems. In particular, the focus is on compressed sensing reconstruction via ℓ1 penalized least-squares (known as LASSO or BPDN). We discuss how to derive fast approximate message passing algorithms to solve this problem. Surprisingly, the analysis of such algorithms allows one to prove exact high-dimensional limit results for the LASSO risk.
This paper will appear as a chapter in a book on ‘Compressed Sensing’ edited by Yonina Eldar and Gitta Kutyniok.
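
For readers who want a feel for how lightweight these AMP iterations are, here is a minimal sketch in Python of an AMP solver for the LASSO, with soft thresholding and the Onsager correction term. The fixed threshold level, the toy problem sizes and the iteration count are simplifications of mine, not the tuned algorithm from the chapter.

```python
import numpy as np

def soft_threshold(u, theta):
    """Entrywise soft-thresholding eta(u; theta)."""
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

def amp_lasso(A, y, theta=0.1, iters=50):
    """Minimal AMP iteration for the LASSO / BPDN problem.

    A : (m, n) measurement matrix (columns roughly unit-norm),
    y : (m,) measurements, theta : soft-threshold level (kept fixed
    here for simplicity; practical AMP adapts it at every iteration).
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        # pseudo-data and thresholding step
        x_new = soft_threshold(x + A.T @ z, theta)
        # residual with the Onsager correction term
        onsager = (np.count_nonzero(x_new) / m) * z
        z = y - A @ x_new + onsager
        x = x_new
    return x

# toy usage (all sizes and the sparsity level are arbitrary choices)
rng = np.random.default_rng(0)
n, m, k = 400, 200, 15
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0
x_hat = amp_lasso(A, y)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```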

Recently, it was observed that spatially-coupled LDPC code ensembles approach the Shannon capacity for a class of binary-input memoryless symmetric (BMS) channels. The fundamental reason for this was attributed to a "threshold saturation" phenomenon derived by Kudekar, Richardson and Urbanke. In particular, it was shown that the belief propagation (BP) threshold of the spatially coupled codes is equal to the maximum a posteriori (MAP) decoding threshold of the underlying constituent codes. In this sense, the BP threshold is saturated to its maximum value. Moreover, it has been empirically observed that the same phenomenon also occurs when transmitting over more general classes of BMS channels. In this paper, we show that the effect of spatial coupling is not restricted to the realm of channel coding. The effect of coupling also manifests itself in compressed sensing. Specifically, we show that spatially-coupled measurement matrices have an improved sparsity-to-sampling threshold for reconstruction algorithms based on verification decoding. For BP-based reconstruction algorithms, this phenomenon is also tested empirically via simulation. At the block lengths accessible via simulation, the effect is quite small and it seems that spatial coupling is not providing the gains one might expect. Based on the threshold analysis, however, we believe this warrants further study.
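
The paper's analysis is for verification and BP decoding over coupled ensembles; just to visualize what a spatially coupled measurement matrix looks like, here is a band-diagonal Gaussian construction. The number of sections, the coupling width and the block sizes below are my own illustrative choices, not the ensembles studied in the paper.

```python
import numpy as np

def spatially_coupled_matrix(L=8, w=3, m_block=20, n_block=50, seed=0):
    """Band-diagonal ('spatially coupled') Gaussian measurement matrix.

    The n_block columns of section c are only measured by the w row
    blocks c, c+1, ..., c+w-1, so the nonzero entries form a diagonal
    band. Entries are scaled so each column has roughly unit norm.
    """
    rng = np.random.default_rng(seed)
    rows, cols = (L + w - 1) * m_block, L * n_block
    A = np.zeros((rows, cols))
    for c in range(L):
        for r in range(c, c + w):
            block = rng.standard_normal((m_block, n_block)) / np.sqrt(w * m_block)
            A[r * m_block:(r + 1) * m_block, c * n_block:(c + 1) * n_block] = block
    return A

A = spatially_coupled_matrix()
print(A.shape)  # ((L + w - 1) * m_block, L * n_block) = (200, 400)
```
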
Takens' Embedding Theorem remarkably established that concatenating M previous outputs of a dynamical system into a vector (called a delay coordinate map) can be a one-to-one mapping of a low-dimensional attractor from the system state-space. However, Takens' theorem is fragile because even small imperfections can induce arbitrarily large errors in the attractor representation. We extend Takens' result to establish explicit, non-asymptotic sufficient conditions for a delay coordinate map to form a stable embedding in the restricted case of linear dynamical systems and observation functions. Our work is inspired by the field of Compressive Sensing (CS), where results guarantee that low-dimensional signal families can be robustly reconstructed if they are stably embedded by a measurement operator. However, in contrast to typical CS results, i) our sufficient conditions are independent of the size of the ambient state space (N), and ii) some system and measurement pairs have fundamental limits on the conditioning of the embedding (i.e., how close it is to an isometry), meaning that further measurements beyond some point add no further significant value. We use several simple simulations to explore the conditions of the main results, including the tightness of the bounds and the convergence speed of the stable embedding.
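
As a reminder of what a delay coordinate map is, here is a small sketch that builds delay vectors from the scalar output of a toy linear dynamical system; the system, the observation function and the number of delays are all made up for illustration.

```python
import numpy as np

def delay_coordinate_map(obs, M):
    """Stack M consecutive outputs into delay vectors.

    obs : 1-D array of scalar observations of the system,
    M   : number of delays; row t is (obs[t], obs[t+1], ..., obs[t+M-1]).
    """
    T = len(obs) - M + 1
    return np.stack([obs[t:t + M] for t in range(T)])

# toy linear dynamical system x_{k+1} = F x_k observed through h^T x_k
# (F, h and the dimensions are arbitrary illustrative choices)
rng = np.random.default_rng(1)
N = 20
F = np.linalg.qr(rng.standard_normal((N, N)))[0]   # orthogonal -> bounded orbit
h = rng.standard_normal(N)
x = rng.standard_normal(N)
obs = []
for _ in range(200):
    obs.append(h @ x)
    x = F @ x
V = delay_coordinate_map(np.array(obs), M=2 * N + 1)
print(V.shape)   # each row is one delay-coordinate vector
```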

We study greedy-type algorithms such that at a greedy step we pick several dictionary elements instead of a single dictionary element as in standard greedy-type algorithms. We call such greedy algorithms {\it super greedy algorithms}. The idea of picking several elements at a greedy step of the algorithm is not new. Recently, we observed the following new phenomenon. For incoherent dictionaries these new types of algorithms (super greedy algorithms) provide the same (in the sense of order) upper bound for the error as their analogues from the standard greedy algorithms. The super greedy algorithms are computationally simpler than their analogues from the standard greedy algorithms. We continue to study this phenomenon.
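
To make the idea concrete, here is a hedged sketch of an orthogonal-greedy pursuit that selects several atoms per step and then refits by least squares on the selected support. It is in the spirit of the super greedy algorithms described above, but not necessarily the exact variant analyzed in the paper.

```python
import numpy as np

def super_greedy(D, y, s=3, steps=10):
    """Greedy pursuit that selects s dictionary atoms per step.

    D : (m, n) dictionary with (roughly) unit-norm columns,
    y : (m,) signal. At each step the s atoms most correlated with the
    current residual are added, then the coefficients are refit by
    least squares on the selected support (an OMP-style update).
    """
    support = []
    residual = y.copy()
    for _ in range(steps):
        corr = np.abs(D.T @ residual)
        corr[support] = -np.inf                # do not reselect atoms
        new = np.argsort(corr)[-s:]            # s best atoms at once
        support.extend(new.tolist())
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# toy usage with a random, fairly incoherent dictionary (sizes arbitrary)
rng = np.random.default_rng(0)
D = rng.standard_normal((100, 300))
D /= np.linalg.norm(D, axis=0)
y = D[:, [5, 17, 42]] @ np.array([1.0, -2.0, 0.5])
x_hat = super_greedy(D, y, s=3, steps=2)
```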

Compressive Sensing (CS) is a new sampling theory which allows signals to be sampled at a sub-Nyquist rate without loss of information. Fundamentally, its procedure can be modeled as a linear projection onto one specific sensing matrix, which, in order to guarantee the conservation of information, satisfies the Restricted Isometry Property (RIP). Ordinarily, this matrix is constructed from a Gaussian random matrix or a Bernoulli random matrix. In previous work, we have proved that a typical chaotic sequence - the logistic map - can be adopted to generate the sensing matrix for CS. In this paper, we show that a Toeplitz-structured matrix constructed from a chaotic sequence is sufficient to satisfy the RIP with high probability. With the Toeplitz-structured Chaotic Sensing Matrix (TsCSM), we can easily build a filter with a small number of taps. Meanwhile, we implement the TsCSM in compressive sensing of images.
Compressive sensing is a new methodology to capture signals at a sub-Nyquist rate. To guarantee exact recovery from compressed measurements, one should choose a specific matrix, which satisfies the Restricted Isometry Property (RIP), to implement the sensing procedure. In this letter, we propose to construct the sensing matrix with a chaotic sequence following a trivial method and prove that, with overwhelming probability, the RIP of this kind of matrix is guaranteed. Meanwhile, experimental comparisons with the Gaussian random matrix, Bernoulli random matrix and sparse matrix are carried out and show that the performance of these sensing matrices is almost equal.
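
To see what these constructions look like in practice, here is a small sketch that generates a logistic-map sequence and uses it to fill both a dense chaotic sensing matrix and a Toeplitz-structured one (TsCSM-style). The initial condition, the downsampling and the normalization below are my own choices, not the papers' exact recipes.

```python
import numpy as np
from scipy.linalg import toeplitz

def logistic_sequence(length, x0=0.3, mu=4.0, skip=1000, downsample=5):
    """Chaotic sequence from the logistic map x_{k+1} = mu*x_k*(1-x_k).

    Transient samples are discarded and the orbit is downsampled to
    reduce correlations; values are shifted to be roughly zero mean.
    (x0, the downsampling factor and the normalization are my choices.)
    """
    n_iter = skip + length * downsample
    x = x0
    orbit = np.empty(n_iter)
    for k in range(n_iter):
        x = mu * x * (1.0 - x)
        orbit[k] = x
    seq = orbit[skip::downsample][:length]
    return (seq - 0.5) * 2.0          # roughly centered in [-1, 1]

m, n = 64, 256
# dense chaotic sensing matrix: reshape a long orbit into an m x n array
Phi_dense = logistic_sequence(m * n).reshape(m, n) / np.sqrt(m)
# Toeplitz-structured chaotic sensing matrix (acts like an FIR filter)
seq = logistic_sequence(m + n - 1)
Phi_toep = toeplitz(seq[:m], seq[m - 1:]) / np.sqrt(m)
print(Phi_dense.shape, Phi_toep.shape)
```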


# "Variational Inequalities and Morozov\'s Discrepancy Principle"
Anzengruber, Stephan
Variational Inequalities can be viewed as generalizations of classical source conditions and structural assumptions. In this talk we will show how, for non-linear ill-posed problems, Variational Inequalities can be combined with Morozov's discrepancy principle to obtain convergence rates for Tikhonov Regularization with $\ell_p$ penalties, $p \geq 1$, which are well-known to promote sparsity.
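
The talk combines variational inequalities with the discrepancy principle for $\ell_p$ penalties; as a much simpler reminder of what Morozov's discrepancy principle does in practice, here is a sketch that picks the regularization parameter of a classical (quadratic) Tikhonov problem so that the residual matches the noise level. The bisection bounds and the safety factor tau are arbitrary choices of mine.

```python
import numpy as np

def tikhonov_solve(A, y, alpha):
    """Classical Tikhonov solution argmin ||Ax - y||^2 + alpha*||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def morozov_alpha(A, y_delta, delta, tau=1.1, lo=1e-12, hi=1e4, iters=60):
    """Bisection in log(alpha) so that ||A x_alpha - y_delta|| ~ tau*delta.

    The residual is monotone increasing in alpha, which is what makes
    this simple search work; tau slightly above 1 is customary.
    """
    target = tau * delta
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        res = np.linalg.norm(A @ tikhonov_solve(A, y_delta, mid) - y_delta)
        if res > target:
            hi = mid          # residual too large -> decrease alpha
        else:
            lo = mid          # residual too small -> increase alpha
    return np.sqrt(lo * hi)
```
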
# "Non-uniform sparse recovery with Gaussian matrices"
Ayaz, Ulas
Compressive sensing predicts that sufficiently sparse vectors can be recovered from highly incomplete information. Efficient recovery methods such as L1-minimization find the sparsest solution to certain systems of equations. Random matrices have become a popular choice for the measurement matrix. Indeed, near-optimal uniform recovery results have been shown for such matrices. In this note we focus on nonuniform recovery using Gaussian random matrices and L1-minimization. We provide a condition on the number of samples in terms of the sparsity and the signal length which guarantees that a fixed sparse signal can be recovered with a random draw of the matrix using L1-minimization. The constant 2 in the condition is optimal, and the proof is rather short compared to a similar result due to Donoho and Tanner. This is joint work with Holger Rauhut.
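
Here is a small numerical illustration of the nonuniform setting: one fixed sparse signal, one random Gaussian draw of the matrix, and L1-minimization (basis pursuit, solved below as a linear program). All problem sizes are arbitrary; this is just a sanity check, not the theorem.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 s.t. Ax = b via the standard LP split x = u - v."""
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n),
                  method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
n, m, s = 200, 80, 10                      # ambient dim, samples, sparsity
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n))            # one Gaussian draw, fixed signal
x_hat = basis_pursuit(A, A @ x0)
print("recovered:", np.allclose(x_hat, x0, atol=1e-6))
```
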
# "Regularization of linear integral equations with noisy data and noisy operator"
Bleyer, Ismael Rodrigo
Regularization methods for linear ill-posed problems $ Kf = g $ have been extensively investigated when there exist noisy measurements $g^{\delta}$ for the true data $g$. However, often also the operator is not known exactly. A common way to solve this problem is to use the regularized total least squares method. In our approach, we consider linear integral equations where we assume that both the kernel and the data are contaminated with noise, and use Tikhonov regularization for a stable reconstruction of both the kernel and the solution. Finally, we discuss computational aspects and develop an iterative shrinkage-thresholding algorithm for the reconstruction, and provide first numerical results.
# "Multilevel Preconditioning and Adaptive Sparse Solution of Inverse Problems"
Dahlke, Stephan
We are concerned with the efficient numerical solution of minimization problems in Hilbert spaces involving sparsity constraints. These optimization problems arise, e.g., in the context of inverse problems. We analyze a very efficient variant of the well-known iterative soft shrinkage algorithm for large or even infinite dimensional problems. This algorithm is modified in the following way. Instead of prescribing a fixed threshold parameter, we use a decreasing thresholding strategy. Moreover, we use suitable adaptive schemes for the approximation of the infinite matrix-vector products. We also derive a block multiscale preconditioning technique which allows for local well-conditioning of the underlying matrices and for extending the concept of the restricted isometry property to infinitely labelled matrices. The careful combination of these ingredients gives rise to numerical schemes that are guaranteed to converge with exponential rate, and which allow for the controlled inflation of the support size of the iterations. We also present numerical experiments that confirm the applicability of our approach. (Joint work with M. Fornasier and T. Raasch)
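
The scheme in the talk couples several ingredients (adaptive application of infinite matrices, multilevel preconditioning); the sketch below only shows the first one mentioned, iterative soft shrinkage with a decreasing threshold schedule, in plain finite dimensions. The geometric decay schedule is an assumption of mine.

```python
import numpy as np

def soft(u, t):
    """Entrywise soft-thresholding."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def ista_decreasing_threshold(A, y, t0=1.0, decay=0.9, t_min=1e-4, iters=200):
    """Iterative soft shrinkage with a decreasing threshold schedule.

    A : (m, n) operator with ||A|| <= 1 (rescale otherwise), y : data.
    Instead of a fixed threshold, iteration k uses t_k = max(t0 * decay**k, t_min).
    """
    x = np.zeros(A.shape[1])
    for k in range(iters):
        t = max(t0 * decay ** k, t_min)
        x = soft(x + A.T @ (y - A @ x), t)
    return x
```
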
# "Linear Convergence Rates of l1-Regularization"
Haltmeier, Markus
In this talk we study the stable solution of ill-posed equations
$$Ax = y,$$
where $A : X \to Y$ is a linear operator between Hilbert spaces $X$ and $Y$. If this equation is ill-posed, then the solution does not depend continuously on the data $y$. In order to stably solve the equation, stabilization methods have to be applied. The most widely used stabilization technique is Tikhonov regularization
$$T_{\alpha,y}(x) := \|Ax - y\|^2 + \alpha R(x) \to \min.$$
Here $R$ is a penalty functional and $\alpha > 0$ is the regularization parameter.
In recent years - motivated by the success of compressed sensing - the use of the $\ell^1$ penalty
$$R(x) = \|x\|_{\ell^1} := \sum_{\lambda \in \Lambda} |\langle \varphi_\lambda, x \rangle|$$
has become very popular. (Here $(\varphi_\lambda)_{\lambda \in \Lambda}$ is some basis of $X$.) In this talk we study Tikhonov regularization with the $\ell^1$ penalty term. In particular, we show that the range condition $\operatorname{ran}(A^*) \cap \partial R(x^+) \neq \emptyset$, together with a certain injectivity condition, allows for linear error estimates (convergence rates) between the solution $x^+$ and minimizers of $T_{\alpha,y}$. Moreover, we show that the range condition is even necessary for this linear convergence rate. This talk is based on joint work with M. Grasmair and O. Scherzer.

* M. Grasmair, M. Haltmeier, and O. Scherzer. Necessary and sufficient conditions for linear convergence of l1-regularization. Reports of FSP S105 - "Photoacoustic Imaging" 18, University of Innsbruck, Austria, 2009.


# "Compressed Sensing in Infinite Dimensions"
Hansen, Anders C.
Compressed sensing is a great tool for solving inverse problems and it has a wide range of applications in signal processing and medical imaging. However, the current theory covers only problems in finite dimensions, and thus there are plenty of infinite dimensional inverse problems that are not covered by the existing results. In this talk I will show how the finite dimensional theory can be extended to include problems in infinite dimensions. This allows for recovery of much more general objects such as analog signals and infinite resolution images. The tools required come from probability, operator theory, optimization and geometry of Banach spaces. I'll give an introduction to what is already known (accompanied by numerical examples) and discuss some of the open questions.
# "Compressed sensing for ill-posed problems: recovery principles and accuracy estimates"
Herrholz, Evelyn
This talk is concerned with compressive sampling strategies and sparse recovery principles for linear and ill-posed problems. We provide compressed measurement models and accuracy estimates for sparse approximations of the solution of the underlying inverse problem. Thereby, we rely on Tikhonov variational and constraint optimization formulations. One essential difference to the classical compressed sensing framework is the incorporation of a joint sparsity measure allowing the treatment of infinite dimensional reconstruction spaces. The theoretical results are furnished with a numerical experiment.
# "New and improved Johnson-Lindenstrauss embeddings via the Restricted Isometry Property"
Krahmer, Felix
The Johnson-Lindenstrauss (JL) Lemma states that any set of p points in high dimensional Euclidean space can be embedded into O(δ^{-2} log(p)) dimensions, without distorting the distance between any two points by more than a factor between 1 - δ and 1 + δ. We establish a new connection between the JL Lemma and the Restricted Isometry Property (RIP), a well-known concept in the theory of sparse recovery often used for showing the success of l_1-minimization. Consider an m x N matrix satisfying the (k, δ_k)-RIP and an arbitrary set E of O(e^k) points in R^N. We show that with high probability, such a matrix with randomized column signs maps E into R^m without distorting the distance between any two points by more than a factor of 1 +/- 4δ_k. Consequently, matrices satisfying the Restricted Isometry Property of optimal order provide optimal Johnson-Lindenstrauss embeddings up to a logarithmic factor in N. Moreover, our results yield the best known bounds on the necessary embedding dimension m for a wide class of structured random matrices. In particular, for partial Fourier and partial Hadamard matrices, our method optimizes the dependence of m on the distortion δ: We improve the recent bound m = O(δ^{-4} log(p) log^4(N)) of Ailon and Liberty (2010) to m = O(δ^{-2} log(p) log^4(N)), which is optimal up to the logarithmic factors in N. Our results also yield new Johnson-Lindenstrauss bounds for several deterministic matrices with randomized column signs. This is joint work with Rachel Ward.
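
The recipe in the abstract is easy to try numerically: take a matrix that (one hopes) satisfies the RIP, randomize its column signs, and use it to embed a point cloud. In the sketch below a normalized Gaussian matrix stands in for the RIP matrix, and all sizes are arbitrary.

```python
import numpy as np

def jl_embed_via_rip(Phi, points, seed=0):
    """Johnson-Lindenstrauss-style embedding built from an RIP matrix.

    Phi    : (m, N) matrix assumed to satisfy an RIP of suitable order,
    points : (p, N) array of points to embed. Following the recipe in
    the abstract, the columns of Phi get independent random +/-1 signs
    before the points are mapped to R^m.
    """
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=Phi.shape[1])
    return points @ (Phi * signs).T          # each row -> embedded point

# toy check that pairwise distances are roughly preserved
rng = np.random.default_rng(1)
N, m, p = 1000, 100, 20
Phi = rng.standard_normal((m, N)) / np.sqrt(m)   # Gaussian stand-in for an RIP matrix
X = rng.standard_normal((p, N))
Y = jl_embed_via_rip(Phi, X)
i, j = 3, 7
print(np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j]))
```
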
# "Efficient Calculation of Sparse Solutions for Inverse Problems"
Lakhal, Aref
We present an efficient method to regularize ill-posed problems with sparsity as prior knowledge. We combine mollification methods with standard techniques of optimization to derive an efficient algorithm for solving inverse problems with L1-constraints. Numerical tests illustrate the robustness of the algorithm.
# "Exact recovery for linear ill-posed problems with l^1 regularization"
Lorenz, Dirk
Ill-posed problems are problems in which the solution does not depend continuously on the data. In this talk we consider Tikhonov regularization of such problems, and especially focus on sparse regularization. By combining techniques from ill-posed problems and sparse recovery we show that it is possible to reconstruct the exact support of the unknown solution even for ill-posed problems. We illustrate our results on the ill-posed problem of digital particle holography.
# "Generalization of the Neural Gas for Learning Sparse Codes"
Martinetz, Thomas
The Neural Gas is a very simple but efficient algorithm for vector quantization which has found widespread applications. We show how it can be extended to learning linear subspaces and be cast into the framework of sparse coding. It compares favorably to other approaches like k-SVD and MOD.
# "Exact Sparse Solutions of Underdetermined Linear Equations"
Pfetsch, Marc
In this talk I will review combinatorial optimization techniques to find a sparsest solution of underdetermined linear equations. The corresponding exact methods are based on branch-and-cut. Not surprisingly, these techniques are computationally much more involved than heuristics such as basis pursuit. The solutions of this and other heuristics will be compared to the exact values. I will try to show the limits and possibilities of the currently available exact approaches.
# "Compressive Sensing and Function Recovery in High Dimensions"
Rauhut, Holger
The talk reviews some basics of compressive sensing, in particular the recovery of functions that are sparse in bounded orthonormal systems (e.g. the trigonometric system) from randomly chosen samples. We then apply compressive sensing techniques to the recovery of functions in high dimensions.
# "Iterative sparse recovery and regularization theory"
Teschke, Gerd
We shall be concerned with sparse recovery algorithms for inverse and ill-posed problems. In particular, we consider a projected steepest descent method, its stabilization and corresponding regularization results.
# "An Infeasible-Point Subgradient Method and Computational Comparison for l¹-Minimization"
Tillmann, Andreas
The l¹-minimization problem min{ ||x||_1 : Ax=b } , also known as Basis Pursuit (BP), has become important in Compressed Sensing due to its ability to sometimes yield the sparsest solution of the underdetermined system Ax=b. In the past few years, a lot of new algorithms solving (BP) or some (possibly regularized) variant of it have been developed. We introduce a subgradient scheme which -- unlike typical projected subgradient methods -- is not required to maintain feasibility of the iterates throughout the algorithmic progress. Convergence of this method (ISML1) to an optimal feasible point for (BP) can be proven under certain reasonable assumptions. In the second part of the talk we will present results of a computational comparison of various state-of-the-art l¹-solvers and our method. We also show how a new optimality check can speed up these solvers and at the same time dramatically improve the solution quality. (Joint work with D. Lorenz and M. Pfetsch)
# "Quality assessment of the l1 penalization method for sparse recovery"
Verhoeven, Caroline
The l1 penalization method is often used for sparse recovery in (linear) inverse problems. Most research on the effectiveness of this method focuses on measurement matrices with mutually incoherent columns or which satisfy the restricted isometry property. We will study matrices which do not satisfy these properties. Some typical recovery errors are computed in order to show how they are influenced by the sparsity of the desired solution, the noise level in the data, the number of data and the singular value spectrum of the matrix. These results are compared to known theoretical bounds. We also illustrate the ability of this method to predict relative errors on a magnetic tomography example. (joint work with I. Loris)
# "Regularization results for inverse problems with sparsity functional"
Wachsmuth, Gerd
We consider an optimization problem of the type \begin{equation} \left\{ \begin{aligned} \text{Minimize}\quad & F(u) = \frac12 \| \mathcal{S} u - z_\delta \|_{H}^2 + \alpha \, \| u \|_{L^2(\Omega)}^2 + \beta \, \| u \|_{L^1(\Omega)} \\ \text{such that}\quad & u \in U_\textup{ad} \subset L^2(\Omega). \end{aligned} \right.\tag{$\textbf{P}_{\alpha,\delta}$} \label{eq:P} \end{equation} Here, $\Omega \subset \mathbb{R}^n$ is a bounded domain, $H$ is some Hilbert space, $\mathcal{S} \in \mathcal{L}(L^2(\Omega), H)$ is compact (e.g.\ the solution operator of an elliptic partial differential equation), $\alpha > 0$, and $\delta, \beta \ge 0$. The problem~\eqref{eq:P} can be interpreted as an inverse problem as well as an optimal control problem. Let us denote the solution by $u_{\alpha,\delta}$. The estimate $\| u_{\alpha,0} - u_{\alpha,\delta}\|_{L^2(\Omega)} \le \delta \, \alpha^{-1/2}$ for the error due to the noise level $\delta$ is well known for $\beta = 0$ and the proof can be extended to the case $\beta > 0$. A typical way to estimate the regularization error as $\alpha \searrow 0$ is via a source condition, e.g.\ $u_{0,0} = \mathcal{S}^\star w$ with some $w \in H$. This yields $\| u_{\alpha,0} - u_{0,0} \|_{L^2(\Omega)} \le C \, \alpha^{1/2}$. But if pointwise constraints are present ($U_\textup{ad} = \{ u \in L^2(\Omega) : u_a \le u \le u_b \}$), $u_{0,0}$ is often bang-bang, i.e.\ $u_{0,0}(x) \in \{u_a, 0, u_b\}$ a.e.\ in $\Omega$. Hence, $u_{0,0} \notin H^1(\Omega)$ and by $\operatorname{range}(\mathcal{S}^\star) \subset H^1(\Omega)$ a source condition with $\mathcal{S}^\star$ cannot hold. In this talk we present a new technique for deriving rates of the regularization error using a combination of a source condition and a regularity assumption on the adjoint variable $p_{0,0} = \mathcal{S}^\star(z_0 - \mathcal{S} u_{0,0})$. If the measure of the set $\big\{ \big| |p_{0,0}| - \beta \big| \le \varepsilon \big\}$ is bounded by $C \, \varepsilon$ for all $\varepsilon \ge 0$, it is possible to show $\| u_{\alpha,0} - u_{0,0} \|_{L^2(\Omega)} \le C \, \alpha^{1/2}$ without a source condition. We present examples showing that the error rates are sharp.
# "Generalized Tikhonov regularization with Bregman discrepany"
Worliczek, Nadja
This talk will deal with generalized Tikhonov regularization with Bregman distances as the discrepancy term. We give a short motivation for using them by means of statistical inversion theory, where maximum a posteriori estimators of the solution of inverse problems are often obtained by minimizing such functionals. Furthermore, we will examine some basic properties of this sort of Tikhonov functional.
# "Truncated Soft Shrinkage Iteration with Discrepancy Based Stopping Rule"
Zhariy, Mariya
