Phil Schniter sent me the following as a follow-up to yesterday's entry on intra-block correlation and sparse sensing matrices:
I wanted to let you know that we too have several papers that cover the topic of compressive sensing under "intra-block correlation and sparse sensing matrices". Although we pose our work in the context of "multiple measurement vector" (MMV) compressive sensing, where the goal is to recover the signal matrix X from the noisy linear observations Y = A*X + N with a known matrix A, the equivalent single-measurement-vector problem is y = blkdiag(A,...,A)*x + n with vectors y = vec(Y), x = vec(X), and n = vec(N). Here, x is block-sparse when the columns of X share the same support, and x exhibits intra-block correlation when X exhibits correlation across rows. Moreover, the equivalent sensing matrix blkdiag(A,...,A) is "sparse" even when A is dense, due to its block-diagonal nature.

One of our papers, http://arxiv.org/pdf/1111.5272v3.pdf, treats the above MMV problem where the columns of X share the same support (i.e., x is block-sparse), while another, http://arxiv.org/pdf/1205.4080v1.pdf, treats the more difficult problem where the columns of X exhibit a slowly changing support, sometimes referred to as the "dynamic CS" problem. Next week, at the Statistical Signal Processing Workshop, we will present a general framework to learn and exploit many forms of structure encountered in compressive sensing, implemented as an extension to the SourceForge GAMPmatlab package: http://www2.ece.ohio-state.edu/~schniter/pdf/ssp12_mmv.pdf. The general theme of our work is that, using approximate message passing techniques, we get performance that meets or exceeds that of Bayesian or variational techniques, but with an orders-of-magnitude reduction in complexity.

Cheers,
Phil

Thanks Phil! The papers Phil mentions are:
Efficient High-Dimensional Inference in the Multiple Measurement Vector Problem by Justin Ziniel and Philip Schniter. The abstract reads:
In this work, a Bayesian approximate message passing algorithm is proposed for solving the multiple measurement vector (MMV) problem in compressive sensing, in which a collection of sparse signal vectors that share a common support are recovered from undersampled noisy measurements. The algorithm, AMP-MMV, is capable of exploiting temporal correlations in the amplitudes of non-zero coefficients, and provides soft estimates of the signal vectors as well as the underlying support. Central to the proposed approach is an extension of recently developed approximate message passing techniques to the amplitude-correlated MMV setting. Aided by these techniques, AMP-MMV offers a computational complexity that is linear in all problem dimensions. In order to allow for automatic parameter tuning, an expectation maximization algorithm that complements AMP-MMV is described. Finally, a detailed numerical study demonstrates the power of the proposed approach and its particular suitability for application to high-dimensional problems.
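To make the MMV setup and its single-measurement-vector equivalence concrete, here is a minimal NumPy sketch. The dimensions, sparsity level, and AR(1) correlation coefficient are illustrative choices, not values from the paper: it builds a signal matrix whose columns share a common support and whose non-zero amplitudes are temporally correlated, then checks the vec/blkdiag identity from Phil's note.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, T, K = 20, 50, 10, 5   # measurements, signal dim, timesteps, sparsity (illustrative)
alpha = 0.95                 # temporal amplitude correlation (illustrative)

# Signal matrix X: all columns share one K-sparse support, and the non-zero
# amplitudes evolve as a first-order Gauss-Markov (AR(1)) process over t.
support = rng.choice(N, size=K, replace=False)
X = np.zeros((N, T))
theta = rng.standard_normal(K)
for t in range(T):
    X[support, t] = theta
    theta = alpha * theta + np.sqrt(1 - alpha**2) * rng.standard_normal(K)

A = rng.standard_normal((M, N))
Noise = 0.01 * rng.standard_normal((M, T))
Y = A @ X + Noise            # the MMV observation model Y = A*X + N

# Equivalent single-measurement-vector problem y = blkdiag(A,...,A)*x + n:
A_blk = np.kron(np.eye(T), A)   # block-diagonal with T copies of A, shape (M*T, N*T)
y, x, n = (Z.flatten(order='F') for Z in (Y, X, Noise))  # vec() = column-stacking
assert np.allclose(y, A_blk @ x + n)
```

Note that even though A here is dense, A_blk is mostly zeros, which is the "sparse sensing matrix" structure mentioned above.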
Dynamic Compressive Sensing of Time-Varying Signals via Approximate Message Passing by Justin Ziniel and Philip Schniter. The abstract reads:
In this work the dynamic compressive sensing (CS) problem of recovering sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear measurements is explored from a Bayesian perspective. While there has been a handful of previously proposed Bayesian dynamic CS algorithms in the literature, the ability to perform inference on high-dimensional problems in a computationally efficient manner remains elusive. In response, we propose a probabilistic dynamic CS signal model that captures both amplitude and support correlation structure, and describe an approximate message passing algorithm that performs soft signal estimation and support detection with a computational complexity that is linear in all problem dimensions. The algorithm, DCS-AMP, can perform either causal filtering or non-causal smoothing, and is capable of learning model parameters adaptively from the data through an expectation maximization learning procedure. We provide numerical evidence that DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety of operating conditions. We further describe the result of applying DCS-AMP to two real dynamic CS datasets, as well as a frequency estimation task, to bolster our claim that DCS-AMP is capable of offering state-of-the-art performance and speed on real-world high-dimensional problems.
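As a toy illustration of the "slowly changing support" signal class this abstract describes, the sketch below draws a support that evolves under a per-coefficient two-state Markov chain while the amplitudes follow an AR(1) process. The transition probabilities and correlation coefficient are hypothetical choices for illustration, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 50, 25            # signal dimension, number of timesteps (illustrative)
p01, p10 = 0.01, 0.09    # hypothetical per-step birth/death probabilities
alpha = 0.9              # hypothetical amplitude correlation coefficient

# Initialize the support at its stationary activity rate p01/(p01+p10) = 0.1.
active = rng.random(N) < p01 / (p01 + p10)
theta = rng.standard_normal(N)
X = np.zeros((N, T))
for t in range(T):
    X[:, t] = np.where(active, theta, 0.0)
    # Support changes slowly: each coefficient turns on w.p. p01, off w.p. p10.
    active = np.where(active, rng.random(N) >= p10, rng.random(N) < p01)
    # Amplitudes follow a stationary first-order Gauss-Markov (AR(1)) process.
    theta = alpha * theta + np.sqrt(1 - alpha**2) * rng.standard_normal(N)
```

Recovering each column of X independently ignores this temporal structure; the point of DCS-AMP's filtering and smoothing modes is to exploit it while keeping the per-iteration cost linear in all problem dimensions.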
A Generalized Framework for Learning and Recovery of Structured Sparse Signals by Justin Ziniel, Sundeep Rangan, and Philip Schniter. The abstract reads:
We report on a framework for recovering single- or multi-timestep sparse signals that can learn and exploit a variety of probabilistic forms of structure. Message passing-based inference and empirical Bayesian parameter learning form the backbone of the recovery procedure. We further describe an object-oriented software paradigm for implementing our framework, which consists of assembling modular software components that collectively define a desired statistical signal model. Lastly, numerical results for synthetic and real-world structured sparse signal recovery are provided.
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.