Wednesday, June 22, 2011

This Week in Compressive Sensing

Namrata Vaswani just let me know of her recent paper and code:

This work studies the recursive robust principal components analysis (PCA) problem. Here, "robust" refers to robustness to both independent and correlated sparse outliers, although we focus on the latter. A key application where this problem occurs is video surveillance, where the goal is to separate a slowly changing background from moving foreground objects on the fly. The background sequence is well modeled as lying in a low-dimensional subspace that can gradually change over time, while the moving foreground objects constitute the correlated sparse outliers. In this and many other applications, the foreground is an outlier for PCA but is actually the "signal of interest" for the application, whereas the background is the corruption or noise. Thus our problem can also be interpreted as one of recursively recovering a time sequence of sparse signals in the presence of large but spatially correlated noise.
This work has two key contributions. First, we provide a new way of looking at this problem and show how a key part of our solution strategy involves solving a noisy compressive sensing (CS) problem. Second, we show how we can use the correlation of the outliers to our advantage in order to deal even with outliers of very large support size. The main idea is as follows. The correlation model, applied to the previous support estimate, helps predict the current support. This prediction serves as "partial support knowledge" for solving the modified-CS problem instead of CS. The support estimate of the modified-CS reconstruction is, in turn, used to update the correlation model parameters using a Kalman filter (or any adaptive filter). We call the resulting approach "support-predicted modified-CS".
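The loop is easy to prototype. Below is a minimal single-frame sketch in Python/numpy of the idea as I read it: the predicted support enters a weighted-l1 (Lagrangian) relaxation of modified-CS in which predicted coordinates are simply not penalized. The function names are mine, predict_support is a crude dilation stand-in for the paper's correlation model, and the Kalman-filter parameter update is omitted; for the real thing, use the authors' code below.

import numpy as np

def predict_support(prev_support, n, radius=1):
    # Stand-in correlation model: assume the outlier support moves slowly,
    # so dilate the previous support estimate by a small radius.
    pred = set()
    for i in prev_support:
        for d in range(-radius, radius + 1):
            if 0 <= i + d < n:
                pred.add(i + d)
    return pred

def modified_cs_ista(A, y, predicted_support, lam=0.05, n_iter=300):
    # Weighted-l1 relaxation of modified-CS: coordinates in the predicted
    # support carry no l1 penalty; all others are soft-thresholded (ISTA).
    m, n = A.shape
    w = np.ones(n)
    w[list(predicted_support)] = 0.0
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x

# One time step: predict the support from the previous estimate, reconstruct,
# then re-estimate the support for the next frame.
# T_pred = predict_support(T_prev, n)
# x_hat = modified_cs_ista(A, y_t, T_pred)
# T_new = set(np.flatnonzero(np.abs(x_hat) > 1e-3))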
The Recursive Projected Compressive Sensing code is available and hosted on this website.

From the paper, I note the following excerpt:
In this work we misuse terminology a little and use “compressive sensing” or “CS” to refer to the ℓ1 minimization problem.
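For readers new to the area, the l1 minimization problem in question, in its noiseless basis pursuit form min ||x||_1 subject to Ax = y, can be solved as a plain linear program. A minimal scipy sketch, assuming the equality-constrained variant (the paper also treats noisy versions):

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    # min ||x||_1 s.t. Ax = y, as an LP in the stacked variable [x; u]
    # with the standard trick |x_i| <= u_i and objective sum(u).
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize sum(u)
    A_ub = np.block([[np.eye(n), -np.eye(n)],       #  x - u <= 0
                     [-np.eye(n), -np.eye(n)]])     # -x - u <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])         # Ax = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

# x_hat = basis_pursuit(A, y)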

Thank you for acknowledging this distinction. While we are on the subject of L1 regularization, did you know that the YouTube video editor now offers Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths? Well, you do now. You can learn more on the Google Research blog and test it here.

Focusing back on compressive sensing, we have the following papers:

Efficient Incremental Analysis of On-Chip Power Grid via Sparse Approximation by Pei Sun, Xin Li, and Ming-Yuan Ting. The abstract reads:

In this paper, a new sparse approximation technique is proposed for incremental power grid analysis. Our proposed method is motivated by the observation that when a power grid network is locally updated during circuit design, its response changes locally and, hence, the incremental “change” of the power grid voltage is almost zero at many internal nodes, resulting in a unique sparse pattern. An efficient Orthogonal Matching Pursuit (OMP) algorithm is adopted to solve the proposed sparse approximation problem. In addition, several numerical techniques are proposed to improve the numerical stability of the proposed solver, while simultaneously maintaining its high efficiency. Several industrial circuit examples demonstrate that when applied to incremental power grid analysis, our proposed approach achieves up to a 130× runtime speed-up over the traditional Algebraic Multi-Grid (AMG) method, without surrendering any accuracy.
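For readers who have not met it, OMP itself fits in a dozen lines. This is a generic textbook sketch in numpy, not the authors' numerically stabilized solver:

import numpy as np

def omp(A, y, k, tol=1e-6):
    # Orthogonal Matching Pursuit: greedily build a k-sparse approximation
    # of y over the columns of A.
    m, n = A.shape
    r = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))   # column most correlated with residual
        if j not in support:
            support.append(j)
        # Re-fit the coefficients on the current support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
        if np.linalg.norm(r) < tol:
            break
    x[support] = coef
    return x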

Matrix Co-Factorization on Compressed Sensing by Jiho Yoo and Seungjin Choi. The abstract reads:
In this paper we address the problem of matrix factorization on compressively-sampled measurements which are obtained by random projections. While this approach improves the scalability of matrix factorization, its performance is not satisfactory. We present a matrix co-factorization method where compressed measurements and a small number of uncompressed measurements are jointly decomposed, sharing a factor matrix. We evaluate the performance of three matrix factorization methods in terms of Cramér-Rao bounds, including: (1) matrix factorization on uncompressed data (MF); (2) matrix factorization on compressed data (CS-MF); (3) matrix co-factorization on compressed and uncompressed data (CS-MCF). Numerical experiments demonstrate that CS-MCF improves on the performance of CS-MF, emphasizing the useful behavior of exploiting side information (a small number of uncompressed measurements).
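To make the setup concrete, here is a toy gradient-descent sketch of the co-factorization idea as I understand it: compressed measurements Y_c = P·M and a handful of uncompressed rows Y_u of M are fit jointly through a shared factor V, with M ≈ U·Vᵀ. All names and the plain-gradient solver are my own assumptions; the paper's algorithm may well differ:

import numpy as np

def cs_mcf(Y_c, P, Y_u, rows_u, rank, n_iter=500, lr=1e-3):
    # Jointly fit Y_c ~ P @ U @ V.T (compressed) and Y_u ~ U[rows_u] @ V.T
    # (uncompressed rows), sharing the factor V between the two terms.
    m = P.shape[1]                       # rows of the underlying matrix M
    n = Y_c.shape[1]
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(n_iter):
        R_c = P @ U @ V.T - Y_c          # residual on compressed measurements
        R_u = U[rows_u] @ V.T - Y_u      # residual on uncompressed rows
        gU = P.T @ R_c @ V               # gradient of the squared-error loss
        gU[rows_u] += R_u @ V
        gV = R_c.T @ (P @ U) + R_u.T @ U[rows_u]
        U -= lr * gU
        V -= lr * gV
    return U, V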

Ramsey theory reveals the conditions when sparse coding on subsampled data is unique by Christopher J. Hillar and Friedrich T. Sommer. The abstract reads:

Sparse coding or dictionary learning has been widely used to reveal the sparse underlying structure of many kinds of sensory data. A related advance in signal processing is compressed sensing, a theory explaining how sparse data can be subsampled below the Nyquist-Shannon limit and then efficiently recovered from these subsamples. Here we study whether the conditions for recovery in compressed sensing are sufficient for dictionary learning to discover the original sparse causes of subsampled data. Using combinatorial Ramsey theory, we completely characterize when the learned dictionary matrix and sparse representations of subsampled data are unique (up to the natural equivalences of permutation and scaling). Surprisingly, uniqueness is shown to hold without any assumptions on the learned dictionaries or inferred sparse codes. Our result has implications for the learning of overcomplete dictionaries from subsampled data and has potential applications in data analysis and neuroscience. For instance, it identifies sparse coding as a possible learning mechanism for establishing lossless communication through severe bottlenecks, which might explain how different brain regions communicate through axonal fiber projections.


Orthogonal Matching Pursuit with Replacement by Prateek Jain, Ambuj Tewari, and Inderjit S. Dhillon. The abstract reads:

In this paper, we consider the problem of compressed sensing where the goal is to recover almost all the sparse vectors using a small number of fixed linear measurements. For this problem, we propose a novel partial hard-thresholding operator that leads to a general family of iterative algorithms. While one extreme of the family yields well known hard thresholding algorithms like ITI (Iterative Thresholding with Inversion) and HTP (Hard Thresholding Pursuit), the other end of the spectrum leads to a novel algorithm that we call Orthogonal Matching Pursuit with Replacement (OMPR). OMPR, like the classic greedy algorithm OMP, adds exactly one coordinate to the support at each iteration, based on the correlation with the current residual. However, unlike OMP, OMPR also removes one coordinate from the support. This simple change allows us to prove that OMPR has the best known guarantees for sparse recovery in terms of the Restricted Isometry Property (a condition on the measurement matrix). In contrast, OMP is known to have very weak performance guarantees under RIP. Given its simple structure, we are able to extend OMPR using locality sensitive hashing to get OMPR-Hash, the first provably sub-linear (in dimensionality) algorithm for sparse recovery. Our proof techniques are novel and flexible enough to also permit the tightest known analysis of popular iterative algorithms such as CoSaMP and Subspace Pursuit. We provide experimental results on large problems, recovering vectors with up to a million dimensions. We demonstrate that for large-scale problems our proposed methods are more robust and faster than existing methods.
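The add-one/remove-one mechanic is easy to picture. Below is a loose numpy reading of the abstract (add the coordinate most correlated with the residual, re-fit by least squares, drop the weakest coordinate so the support stays at size k); it is not the paper's exact update rule, which I have not checked against the preprint:

import numpy as np

def ompr(A, y, k, n_iter=100):
    # Start from the k columns most correlated with y, then repeatedly
    # swap one coordinate in and one out.
    m, n = A.shape
    support = list(np.argsort(-np.abs(A.T @ y))[:k])
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
        j = int(np.argmax(np.abs(A.T @ r)))     # coordinate to add
        if j in support:
            break                               # nothing new to swap in
        trial = support + [j]
        coef, *_ = np.linalg.lstsq(A[:, trial], y, rcond=None)
        trial.pop(int(np.argmin(np.abs(coef)))) # drop the weakest coordinate
        if set(trial) == set(support):
            break                               # swap undone itself: converged
        support = trial
    x = np.zeros(n)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x[support] = coef
    return x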

Total Variation Minimization Based Compressive Wideband Spectrum Sensing for Cognitive Radios by Yipeng Liu and Qun Wan. The abstract reads:
Wideband spectrum sensing is a critical component of a functioning cognitive radio system. Its major challenge is the very high sampling rate requirement. Compressive sensing (CS) promises to be able to deal with it. Nearly all current CS-based compressive wideband spectrum sensing methods exploit only frequency sparsity. Motivated by the goal of fast and robust detection of wideband spectrum changes, total variation minimization is incorporated to exploit temporal as well as frequency structure information and enhance the sparsity level. As a sparser vector is obtained, the spectrum sensing period is shortened and sensing accuracy is enhanced. Both theoretical evaluation and numerical experiments demonstrate the performance improvement.
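As a concrete illustration of the objective being described, here is one plausible cvxpy formulation combining a sparsity penalty with a total-variation penalty on the recovered spectrum; the weights and the exact way the paper couples the two terms are assumptions on my part:

import cvxpy as cp

def tv_l1_spectrum(A, y, lam_l1=1.0, lam_tv=1.0):
    # Recover a wideband spectrum x from sub-Nyquist measurements y = A x,
    # penalizing both the l1 norm of x and its total variation
    # (sum of absolute differences between adjacent frequency bins).
    n = A.shape[1]
    x = cp.Variable(n)
    obj = cp.Minimize(cp.sum_squares(A @ x - y)
                      + lam_l1 * cp.norm1(x)
                      + lam_tv * cp.tv(x))
    cp.Problem(obj).solve()
    return x.value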


Efficient Two-Stage Group Testing Algorithms for DNA Screening by Michael Huber. The abstract reads:
Group testing algorithms are very useful tools for DNA library screening. Building on recent work by Levenshtein (2003) and Tonchev (2008), we construct in this paper new infinite classes of combinatorial structures, the existence of which is essential for attaining the minimum number of individual tests at the second stage of a two-stage disjunctive testing algorithm.


Sparse spike train deconvolution is a classical inverse problem which has given rise to many deterministic and stochastic algorithms since the mid-1980s. In the past decade, sparse approximation has been an intensive field of research, leading to the development of a number of algorithms including greedy strategies and convex relaxation methods. Spike train deconvolution can be seen as a specific sparse approximation problem, where the observation matrix contains highly correlated columns and where the focus is on the exact recovery of the spike locations. The objective of this paper is to evaluate the performance of algorithms proposed in both fields in terms of detection statistics, with Monte-Carlo simulations of spike deconvolution problems.


Image Credit: NASA/JPL/Space Science Institute, N00172883.jpg was taken on June 18, 2011 and received on Earth June 20, 2011. The camera was pointing toward HELENE, and the image was taken using the CL1 and CL2 filters.
