Wednesday, March 02, 2011

CS: Summary Based Structures with Improved Sublinear Recovery for Compressed Sensing, Support-Predicted Modified-CS for Recursive Robust Principal Components' Pursuit, An Alternating Direction Algorithm for Matrix Completion with Nonnegative Factors

I obviously missed a few papers yesterday; here they are:

Summary Based Structures with Improved Sublinear Recovery for Compressed Sensing by M. Amin Khajehnejad, Juhwan Yoo, Animashree Anandkumar, Babak Hassibi. The abstract reads:
We introduce a new class of measurement matrices for compressed sensing, using low order summaries over binary sequences of a given length. We prove recovery guarantees for three reconstruction algorithms using the proposed measurements, including $\ell_1$ minimization and two combinatorial methods. In particular, one of the algorithms recovers $k$-sparse vectors of length $N$ in sublinear time $\text{poly}(k\log{N})$, and requires at most $O(k\log{N}\log\log{N})$ measurements. The empirical oversampling constant of the algorithm is significantly better than existing sublinear recovery algorithms such as Chaining Pursuit and Sudocodes. In particular, for $10^3\leq N\leq 10^8$ and $k=100$, the oversampling factor is between 3 and 8. We provide preliminary insight into how the proposed constructions and the fast recovery scheme can be used in a number of practical applications such as market basket analysis and real-time compressed sensing implementation.
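To make the $\ell_1$ reconstruction step concrete: the summary-based matrices and the combinatorial decoders are the paper's own contribution, but the basis-pursuit route can be sketched generically. Below is a minimal Python example using the standard LP reformulation in scipy, with a plain random 0/1 matrix standing in for the paper's structured construction; all sizes are made up for illustration, and recovery is only expected, not guaranteed, for this toy matrix.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stand-in for the paper's summary-based matrices: a random 0/1
# measurement matrix (illustrative only; the paper's construction is
# structured and comes with recovery guarantees).
rng = np.random.default_rng(0)
N, m, k = 200, 80, 5                      # signal length, measurements, sparsity
A = rng.integers(0, 2, size=(m, N)).astype(float)

x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

# Basis pursuit, min ||x||_1 s.t. Ax = y, as an LP over (x, u):
# minimize sum(u) subject to -u <= x <= u and Ax = y.
c = np.concatenate([np.zeros(N), np.ones(N)])
A_eq = np.hstack([A, np.zeros((m, N))])
A_ub = np.vstack([np.hstack([ np.eye(N), -np.eye(N)]),
                  np.hstack([-np.eye(N), -np.eye(N)])])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
x_hat = res.x[:N]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```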

Support-Predicted Modified-CS for Recursive Robust Principal Components' Pursuit by Chenlu Qiu, Namrata Vaswani. The abstract reads:

This work proposes a causal and recursive algorithm for solving the "robust" principal components' analysis (PCA) problem. We primarily focus on robustness to correlated outliers. In recent work, we proposed a new way to look at this problem and showed how a key part of its solution strategy involves solving a noisy compressive sensing (CS) problem. However, if the support size of the outliers becomes too large for a given dimension of the current PC space, then the number of "measurements" available for CS may become too small. In this work, we show how to address this issue by utilizing the correlation of the outliers to predict their support at the current time, and by using this as "partial support knowledge" for solving Modified-CS instead of CS.
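For readers unfamiliar with Modified-CS: given a known (here, predicted) part $T$ of the support, it minimizes the $\ell_1$ norm only over the complement $T^c$ subject to the measurement constraint, which is why a good support prediction stretches a small measurement budget. Here is a minimal sketch in Python with cvxpy on a made-up toy instance; the matrix, sizes, and the partially correct prediction are all assumptions for illustration, not the authors' setup.

```python
import numpy as np
import cvxpy as cp

# Toy instance (all sizes made up): x_true is sparse, and T is a partially
# correct prediction of its support, playing the role of the "partial
# support knowledge" obtained from the outlier correlation model.
rng = np.random.default_rng(1)
N, m, k = 120, 50, 12
A = rng.standard_normal((m, N)) / np.sqrt(m)

x_true = np.zeros(N)
support = rng.choice(N, k, replace=False)
x_true[support] = rng.standard_normal(k)
y = A @ x_true

T = support[:9]                            # predicted support (misses 3 entries)
Tc = np.setdiff1d(np.arange(N), T)         # complement of the predicted part

# Modified-CS: minimize ||x_{T^c}||_1 subject to y = A x.
x = cp.Variable(N)
prob = cp.Problem(cp.Minimize(cp.norm1(x[Tc])), [A @ x == y])
prob.solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```

Note how only the entries outside $T$ are penalized; plain CS would instead need enough measurements to handle the full support on its own.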

An Alternating Direction Algorithm for Matrix Completion with Nonnegative Factors by Yangyang Xu, Wotao Yin, Zaiwen Wen, Yin Zhang. The abstract reads:

This paper introduces a novel algorithm for the nonnegative matrix factorization and completion problem, which aims to find nonnegative matrices X and Y from a subset of entries of a nonnegative matrix M so that XY approximates M. This problem is closely related to two existing problems, nonnegative matrix factorization and low-rank matrix completion, in the sense that it kills two birds with one stone. As it takes advantage of both nonnegativity and low rank, its results can be superior to those of the two problems alone. Our algorithm minimizes a non-convex constrained least-squares formulation and is based on the classic alternating direction augmented Lagrangian method. Preliminary convergence properties and numerical simulation results are presented. Compared to a recent algorithm for nonnegative random matrix factorization, the proposed algorithm yields comparable factorizations while accessing only half of the matrix entries. On tasks of recovering incomplete grayscale and hyperspectral images, the results of the proposed algorithm have overall better quality than those of two recent algorithms for matrix completion.
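To give a rough feel for the alternating flavor of such a scheme (this is not the authors' alternating direction augmented Lagrangian method), one can alternate between filling the unobserved entries with the current estimate XY and updating the nonnegative factors. The sketch below uses plain multiplicative NMF updates in Python; the function name, sizes, and update rule are assumptions for illustration.

```python
import numpy as np

def nmf_complete(M, mask, r, iters=200, eps=1e-9):
    """Toy alternating scheme for nonnegative matrix completion.

    Approximately minimizes ||P_Omega(M - X Y)||_F^2 with X, Y >= 0 by
    alternating multiplicative updates on a working matrix whose
    unobserved entries are filled in with the current estimate X Y.
    A simplified stand-in for the paper's alternating direction
    augmented Lagrangian method, not a reimplementation of it.
    """
    m, n = M.shape
    rng = np.random.default_rng(0)
    X, Y = rng.random((m, r)), rng.random((r, n))
    for _ in range(iters):
        Z = np.where(mask, M, X @ Y)              # complete unobserved entries
        X *= (Z @ Y.T) / (X @ (Y @ Y.T) + eps)    # multiplicative updates keep
        Y *= (X.T @ Z) / ((X.T @ X) @ Y + eps)    # both factors nonnegative
    return X, Y

# Toy test: a random rank-3 nonnegative matrix with half the entries observed.
rng = np.random.default_rng(1)
M = rng.random((40, 3)) @ rng.random((3, 30))
mask = rng.random(M.shape) < 0.5
X, Y = nmf_complete(M, mask, r=3)
print("relative error:", np.linalg.norm(X @ Y - M) / np.linalg.norm(M))
```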
