Friday, January 14, 2011

CS: SMALL workshop presentations, NIPS videos and 8 papers....

Maria Jafari sent me the following:

Dear Igor,

The slides for the presentations are already available on the website. We are planning to have the posters and videos available as well by the end of the week or the beginning of next week.

Thanks Maria. I look forward to the videos. Talking about videos, the NIPS videos are here. In a different direction, I just stumbled on Thomas Blumensath's page explaining Compressed Sensing: What, Why and How, with his own examples using MRI.

We also have another eight papers today, enough for some good reading over the weekend: Augmented Lagrangian Alternating Direction Method for Matrix Separation Based on Low-Rank Factorization by Yuan Shen, Zaiwen Wen and Yin Zhang. The abstract reads:
The matrix separation problem aims to separate a low-rank matrix and a sparse matrix from their sum. This problem has recently attracted considerable research attention due to its wide range of potential applications. Nuclear-norm minimization models have been proposed for matrix separation and proved to yield exact separations under suitable conditions. These models, however, typically require the calculation of a full or partial singular value decomposition (SVD) at every iteration that can become increasingly costly as matrix dimensions and rank grow. To improve scalability, in this paper we propose and investigate an alternative approach based on solving a non-convex, low-rank factorization model by an augmented Lagrangian alternating direction method. Numerical studies indicate that the effectiveness of the proposed model is limited to problems where the sparse matrix does not dominate the low-rank one in magnitude, though this limitation can be alleviated by certain data pre-processing techniques. On the other hand, extensive numerical results show that, within its applicability range, the proposed method in general has a much faster solution speed than nuclear-norm minimization algorithms, and often provides better recoverability.
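As a toy illustration of the factorization idea (not the authors' algorithm; the plain alternating scheme and all parameter values below are my own assumptions), one can alternate least-squares updates of the low-rank factors with soft-thresholding of the sparse part:

```python
import numpy as np

def matrix_separation(D, r, lam=0.1, iters=100):
    """Toy sketch: split D into a rank-r product U @ V plus a sparse S by
    alternating least squares on the factors and soft-thresholding on S."""
    m, n = D.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((r, n))
    S = np.zeros((m, n))
    for _ in range(iters):
        R = D - S                                  # fit the low-rank part to D - S
        U = R @ V.T @ np.linalg.pinv(V @ V.T)      # least-squares update of U
        V = np.linalg.pinv(U.T @ U) @ U.T @ R      # least-squares update of V
        M = D - U @ V
        S = np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)  # soft-threshold the residual
    return U @ V, S
```

By construction the final residual D - UV - S is entrywise no larger than lam, and the returned low-rank part has rank at most r; this avoids any SVD, which is the scalability point the abstract makes.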

Next is Two Proposals for Robust PCA using Semidefinite Programming by Michael McCoy and Joel Tropp. The abstract reads:
The performance of principal component analysis (PCA) suffers badly in the presence of outliers. This paper proposes two novel approaches for robust PCA based on semidefinite programming. The first method, maximum mean absolute deviation rounding (MDR), seeks directions of large spread in the data while damping the effect of outliers. The second method produces a low-leverage decomposition (LLD) of the data that attempts to form a low-rank model for the data by separating out corrupted observations. This paper also presents efficient computational methods for solving these SDPs. Numerical experiments confirm the value of these new techniques.
The attendant code is here.
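Not the paper's SDP machinery, but a small numpy experiment showing the failure mode it targets: a single gross outlier can rotate the leading principal direction by nearly ninety degrees (all sizes and magnitudes here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# clean data: spread 5 along the x-axis, 0.5 along the y-axis
X = rng.standard_normal((200, 2)) * np.array([5.0, 0.5])

def top_direction(A):
    """Leading principal direction of the centered data."""
    A = A - A.mean(axis=0)
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[0]

v_clean = top_direction(X)
# append one gross outlier far along the y-axis
X_out = np.vstack([X, [0.0, 500.0]])
v_out = top_direction(X_out)
# the two directions are now nearly orthogonal
print(abs(v_clean @ v_out))
```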

Improved analysis of the subsampled randomized Hadamard transform by Joel Tropp. The abstract reads:
This paper presents an improved analysis of a structured dimension-reduction map called the subsampled randomized Hadamard transform. This argument demonstrates that the map preserves the Euclidean geometry of an entire subspace of vectors. The new proof is much simpler than previous approaches, and, for the first time, it yields optimal constants in the estimate on the number of dimensions required for the embedding.
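For readers who want to play with the map itself, here is a minimal numpy sketch (random sign flips, a fast Walsh-Hadamard transform, then uniform subsampling, with the standard sqrt(n/l) rescaling; the sizes are illustrative, and n must be a power of two):

```python
import numpy as np

def fwht(x):
    """Unnormalized fast Walsh-Hadamard transform (length must be a power of two)."""
    x = x.copy()
    n = x.shape[0]
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

def srht(x, l, rng):
    """Subsampled randomized Hadamard transform: map a length-n vector to
    l coordinates while roughly preserving its Euclidean norm."""
    n = x.shape[0]
    d = rng.choice([-1.0, 1.0], size=n)          # random sign flips (the D matrix)
    y = fwht(d * x) / np.sqrt(n)                 # orthonormal Hadamard transform (H)
    idx = rng.choice(n, size=l, replace=False)   # uniform subsampling (R)
    return np.sqrt(n / l) * y[idx]
```

The sign flips spread the energy of any fixed vector evenly across the Hadamard coordinates, which is what makes the uniform subsampling step safe.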

Next is Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions by Nathan Halko, Per-Gunnar Martinsson and Joel Tropp. The abstract reads:
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast with O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multi-processor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
The attendant presentation is here.
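The basic proto-algorithm from this line of work fits in a few lines of numpy; the sketch below uses a Gaussian test matrix with a small oversampling parameter p (values chosen purely for illustration):

```python
import numpy as np

def randomized_svd(A, k, p=5, rng=None):
    """Randomized partial SVD: sample the range of A with a Gaussian test
    matrix, orthonormalize, then decompose the small projected matrix."""
    if rng is None:
        rng = np.random.default_rng(0)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal basis for the sampled range
    B = Q.T @ A                               # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]
```

On a matrix of exact rank k this recovers the factorization to machine precision; for slowly decaying spectra, a power iteration (sampling with A @ A.T @ A @ Omega instead of A @ Omega) sharpens the basis at the cost of extra passes over the data.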

In recent years, proximal splitting algorithms have been applied to various monocomponent signal and image recovery problems. In this paper, we address the case of multicomponent problems. We first provide closed form expressions for several important multicomponent proximity operators and then derive extensions of existing proximal algorithms to the multicomponent setting. These results are applied to stereoscopic image recovery, multispectral image denoising, and image decomposition into texture and geometry components.
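For readers new to proximity operators, the two simplest building blocks in this setting are the l1 prox (componentwise soft-thresholding) and its multicomponent analogue, which shrinks whole blocks jointly. A generic numpy sketch, not code from the paper:

```python
import numpy as np

def prox_l1(v, t):
    """Proximity operator of t*||.||_1, i.e. argmin_x 0.5||x - v||^2 + t||x||_1:
    componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_l21(V, t):
    """Proximity operator of t * sum_i ||row_i||_2: shrink each row (component)
    jointly, the basic multicomponent generalization of soft-thresholding."""
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return V * scale
```

The group prox either rescales a whole row toward zero or kills it entirely, which is what couples the components in a multicomponent recovery problem.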
In this paper, we investigate the theoretical guarantees of penalized $\ell_1$ minimization (also called Basis Pursuit Denoising or Lasso) in terms of sparsity pattern recovery (support and sign consistency) from noisy measurements with noise that is not necessarily random, when the sensing operator belongs to the Gaussian ensemble (i.e. a random design matrix with i.i.d. Gaussian entries). More precisely, we derive sharp non-asymptotic bounds on the sparsity level and (minimal) signal-to-noise ratio that ensure support identification for most signals and most Gaussian sensing matrices by solving the Lasso problem with an appropriately chosen regularization parameter. Our first purpose is to establish conditions allowing exact sparsity pattern recovery when the signal is strictly sparse. These conditions are then extended to cover the compressible or nearly sparse case. In these two results, the role of the minimal signal-to-noise ratio is crucial. Our third main result gets rid of this assumption in the strictly sparse case, but this time the Lasso allows only partial recovery of the support. We also provide in this case a sharp $\ell_2$-consistency result on the coefficient vector. The results of the present work have several distinctive features compared to previous ones. One of them is that the leading constants involved in all the bounds are sharp and explicit. This is illustrated by numerical experiments showing that the sharp sparsity-level threshold identified by our theoretical results, below which sparsistency of the Lasso is guaranteed, indeed matches the empirically observed threshold.
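A quick empirical companion to the support-recovery question (a generic ISTA solver on a synthetic Gaussian design; the sizes, amplitudes, and regularization value below are illustrative assumptions, not the paper's constants):

```python
import numpy as np

def ista(A, y, lam, iters=2000):
    """Iterative soft-thresholding for min_x 0.5||Ax - y||^2 + lam||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L        # gradient step on the quadratic
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage step
    return x

rng = np.random.default_rng(0)
m, n, k = 48, 64, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix
support = rng.choice(n, size=k, replace=False)
x0 = np.zeros(n)
x0[support] = 5.0 * rng.choice([-1.0, 1.0], size=k)  # well-separated amplitudes
y = A @ x0                                     # noiseless measurements for simplicity
x_hat = ista(A, y, lam=0.01)
```

With a high signal-to-noise ratio and a sparsity level below the threshold the paper discusses, the largest entries of x_hat sit exactly on the true support.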

This Letter reports a demonstration of off-axis compressed holography in low-light level imaging conditions. An acquisition protocol relying on a single exposure of a randomly undersampled diffraction map of the optical field, recorded in the high heterodyne gain regime, is proposed. The image acquisition scheme is based on compressed sensing, a theory establishing that near-exact recovery of an unknown sparse signal is possible from a small number of nonstructured measurements. Image reconstruction is further enhanced by introducing an off-axis spatial support constraint into the image estimation algorithm. We report accurate experimental recovery of holographic images of a resolution target in low-light conditions with a frame exposure of 5 μs, scaling the measurements down to 9% of random pixels within the array detector.

I also found behind a paywall: Digital broadcast channel estimation with compressive sensing by Qi Chenhao and Wu Lenan. The abstract reads:
In order to reduce the pilot number and improve spectral efficiency, the recently emerged compressive sensing (CS) framework is applied to digital broadcast channel estimation. According to the six channel profiles of the European Telecommunication Standards Institute (ETSI) digital radio mondiale (DRM) standard, the subspace pursuit (SP) algorithm is employed for delay spread and attenuation estimation of each path in the case where the channel profile is identified and the multipath number is known. The stopping condition for SP is that the sparsity of the estimate equals the multipath number. For the case where the multipath number is unknown, the orthogonal matching pursuit (OMP) algorithm is employed for channel estimation, with the stopping condition that the residual reaches the noise variance. Simulation results show that with the same number of pilots, CS algorithms outperform traditional cubic-spline-interpolation-based least squares (LS) channel estimation. SP is also demonstrated to be better than OMP when the multipath number is known a priori.
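A generic numpy sketch of OMP with that noise-variance stopping rule (synthetic data rather than DRM channel profiles; the exact form of the threshold is my assumption, not the paper's):

```python
import numpy as np

def omp(A, y, noise_var, max_iter=None):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, re-fit by least squares, and stop once the residual
    energy falls to the noise floor."""
    m, n = A.shape
    r = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(max_iter or m):
        j = int(np.argmax(np.abs(A.T @ r)))    # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef           # residual after re-fitting
        if r @ r <= m * noise_var:             # stop at the noise variance
            break
    x[support] = coef
    return x
```

Subspace pursuit differs mainly in that it keeps a fixed-size candidate support (the known multipath number) and refines it each iteration, which is why it needs the sparsity as input.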
Finally, not related to CS per se, here is: Radiative Transfer of Seismic Waves by L. Margerin.

