Sunday, April 13, 2008

Compressed Sensing: Bayesian Experimental Design for Compressed Sensing, CS Kalman Filter, Sparse Approximate Solution of PDE and more.

After the landmark paper of Roummel Marcia and Rebecca Willett, here is a new batch of similarly interesting papers. Laurent Duval forwarded me this paper, entitled Bayesian Experimental Design for Compressed Sensing, by Hannes Nickisch and Matthias Seeger. The abstract reads:

Compressed sensing (CS) can be addressed as an instance of Bayesian experimental design, which is based on models similar to those of common estimation (or energy minimisation) methods, but fundamentally differs from the latter, in that estimates of uncertainty and correlations of latent variables are maintained. The Bayesian computations are approximated by the expectation propagation fixed point algorithm, which is scaled up to the application of interest here through a novel scheduling mechanism, exploiting the fact that marginal uncertainty estimates are available at all times. An important application of CS is the optimisation of architectures for measuring natural images. In a large study, we compare various CS reconstruction methods utilising random measurement filters from different ensembles to a number of techniques which sequentially search for these filters, including our own, and Bayesian projection optimisation [1]. We find that a simple heuristic of measuring wavelet coefficients in a fixed, top-down ordering significantly outperforms CS methods using random measurements; the approach of [1] performs even worse. In contrast, our Bayesian design method learns filters that outperform the wavelet heuristic. Our results show that the property of incoherence of a measurement design, which plays a central role in the “unstructured except for random sparsity” theoretical CS setting, bears no significance for measuring real natural images. Our framework is not restricted to sparse signals; other notions of signal or noise structure can easily be accommodated. We give concrete ideas for how our method can be scaled up to large signal representations.
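The sequential design loop at the heart of such methods is easy to convey in a toy setting. Below is a minimal sketch assuming a plain linear-Gaussian model and greedy information-gain selection, not the sparse priors or the expectation propagation machinery of the paper; every name and parameter in it is illustrative:

```python
import numpy as np

def greedy_design(candidates, n_measure, sigma2=0.01, prior_var=1.0):
    """Greedy Bayesian experimental design under a linear-Gaussian
    model: repeatedly pick the candidate measurement filter with the
    largest information gain, then update the posterior covariance."""
    n = candidates.shape[1]
    Sigma = prior_var * np.eye(n)          # prior covariance of the signal
    chosen = []
    for _ in range(n_measure):
        # gain of row a is 0.5*log(1 + a' Sigma a / sigma2), which is
        # monotone in a' Sigma a, so we can rank by the quadratic form
        scores = np.einsum('ij,jk,ik->i', candidates, Sigma, candidates)
        best = int(np.argmax(scores))
        a = candidates[best]
        # rank-one (Sherman-Morrison / Kalman) covariance update
        Sa = Sigma @ a
        Sigma -= np.outer(Sa, Sa) / (sigma2 + a @ Sa)
        chosen.append(best)
    return chosen, Sigma

rng = np.random.default_rng(0)
pool = rng.standard_normal((200, 64))      # made-up pool of candidate filters
idx, post_cov = greedy_design(pool, n_measure=20)
print(idx[:5], np.trace(post_cov))         # residual posterior uncertainty
```

The paper's point is precisely that maintaining such uncertainty estimates (here, Sigma) is what lets one choose the next filter, something a pure point-estimation method cannot do.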

Also found on arxiv.org:

If one were to consider signals living on a manifold, one could imagine following the evolution of such a signal using Particle Filtering on Riemannian Manifolds. Before we get to that stage, Namrata Vaswani looks at Kalman Filtered Compressed Sensing. The abstract reads:
We consider the problem of reconstructing time sequences of spatially sparse signals (with unknown and time-varying sparsity patterns) from a limited number of linear “incoherent” measurements, in real-time (causally). The signals are sparse in some transform domain referred to as the sparsity basis. For a single spatial signal, the solution is provided by Compressed Sensing (CS). The question that we address is, for a sequence of sparse signals, can we do better than CS, if (a) the sparsity pattern of the signal’s transform coefficients’ vector changes slowly over time, and (b) a simple prior model on the temporal dynamics of its current non-zero elements is available. The overall idea of our solution is to use CS to estimate the support set of the initial signal’s transform vector. At future times, run a reduced order Kalman filter with the currently estimated support and estimate new additions to the support set by applying CS to the Kalman innovations or filtering error (whenever it is “large”).
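To make the recursion concrete, here is a minimal sketch of a single time step: a reduced-order Kalman filter restricted to the current support, followed by a test on the residual for new support entries. The plain correlation threshold below is a crude stand-in for the paper's CS step on the innovations, and all parameters are illustrative:

```python
import numpy as np

def kf_cs_step(y, A, T, x, P, q=0.01, r=0.01, tau=0.3):
    """One time step: reduced-order Kalman filter on the support T,
    then add indices whose columns correlate strongly with the
    residual (a stand-in for CS on the filtering error)."""
    idx = sorted(T)
    AT = A[:, idx]
    # predict: random-walk dynamics on the currently active coefficients
    P[np.ix_(idx, idx)] += q * np.eye(len(idx))
    # Kalman update, restricted to the current support
    PT = P[np.ix_(idx, idx)]
    S = AT @ PT @ AT.T + r * np.eye(len(y))
    K = np.linalg.solve(S, AT @ PT).T              # K = PT AT' S^{-1}
    x[idx] += K @ (y - AT @ x[idx])
    P[np.ix_(idx, idx)] = PT - K @ AT @ PT
    # support addition test on the residual ("large" filtering error)
    resid = y - A @ x
    corr = np.abs(A.T @ resid)
    corr[idx] = 0.0
    return T | set(np.flatnonzero(corr > tau).tolist()), x, P

rng = np.random.default_rng(1)
m, n = 40, 128
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[[3, 17, 50]] = [1.0, -0.8, 0.5]
y = A @ x_true + 0.01 * rng.standard_normal(m)
T, x, P = {3, 17}, np.zeros(n), np.eye(n)   # initial support from a CS solve
T, x, P = kf_cs_step(y, A, T, x, P)
print(sorted(T))                            # index 50 is typically added here
```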
It was just a question of time: I talked before about solving an integral equation using CS, but one should first consider solving PDEs. Sadegh Jokar, Volker Mehrmann, Marc Pfetsch, and Harry Yserentant do just that in Sparse Approximate Solution of Partial Differential Equations. The abstract reads:
A new concept is introduced for the adaptive finite element discretization of partial differential equations that have a sparsely representable solution. Motivated by recent work on compressed sensing, a recursive mesh refinement procedure is presented that uses linear programming to find a good approximation to the sparse solution on a given refinement level. Then only those parts of the mesh are refined that belong to nonzero expansion coefficients. Error estimates for this procedure are refined and the behavior of the procedure is demonstrated via some simple elliptic model problems.
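The linear-programming core of such a scheme is just basis pursuit: minimize the l1 norm subject to the discretized equations. Here is a self-contained sketch of that step; the Galerkin discretization and the actual mesh refinement logic are omitted, and the random matrix below stands in for a discretized operator:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1 s.t. Ax = b, as a linear program: split x into
    nonnegative parts u, v with x = u - v and minimize sum(u + v)."""
    m, n = A.shape
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n), method='highs')
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 100))          # stand-in for a discretized PDE
x0 = np.zeros(100); x0[[5, 42, 77]] = [2.0, -1.0, 0.5]
x = basis_pursuit(A, A @ x0)
print(np.flatnonzero(np.abs(x) > 1e-6))     # refine the mesh only here
```

In the paper, only the elements carrying the nonzero coefficients found at one level would then be subdivided for the next.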
One notes the appearance of the tree structure. Some of the authors (Sadegh Jokar and Marc Pfetsch) have also written Exact and Approximate Sparse Solutions of Underdetermined Linear Equations. The abstract reads:
In this paper, we empirically investigate the NP-hard problem of finding sparsest solutions to linear equation systems, i.e., solutions with as few nonzeros as possible. This problem has received considerable interest in the sparse approximation and signal processing literature recently. We use a branch-and-cut approach via the maximum feasible subsystem problem to compute optimal solutions for small instances and investigate the uniqueness of the optimal solutions. We furthermore discuss five (modifications of) heuristics for this problem that appear in different parts of the literature. For small instances, the exact optimal solutions allow us to evaluate the quality of the heuristics, while for larger instances we compare their relative performance. One outcome is that the so-called basis pursuit heuristic performs worse than the other methods. Among the best heuristics are a method due to Mangasarian and a bilinear approach.
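As a point of reference for this kind of comparison, here is one standard greedy heuristic for the sparsest-solution problem, orthogonal matching pursuit. It is representative of the methods compared in such studies, though not necessarily one of the five the authors evaluate:

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal matching pursuit: greedily add the column most
    correlated with the residual, refit by least squares, repeat."""
    resid, support = b.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        resid = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 100))
x0 = np.zeros(100); x0[[5, 42, 77]] = [2.0, -1.0, 0.5]
print(np.flatnonzero(omp(A, A @ x0, 3)))    # typically recovers {5, 42, 77}
```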

This was previously a preprint; here is the final version: Computation and Relaxation of Conditions for Equivalence between l1 and l0 Minimization by Yoav Sharon, John Wright, and Yi Ma. The abstract reads:
In this paper, we investigate the exact conditions under which the l1 and l0 minimizations arising in the context of sparse error correction or sparse signal reconstruction are equivalent. We present a much simplified condition for verifying equivalence, which leads to a provably correct algorithm that computes the exact sparsity of the error or the signal needed to ensure equivalence. Our algorithm is combinatorial in nature, but for moderate-sized matrices it produces the exact result in a reasonably short time. For l1-l0 equivalence problems involving tall encoding matrices (highly robust error correction) and/or wide overcomplete dictionaries (sparse signal reconstruction from few measurements), our algorithm is exponentially faster than the only other algorithm known for this problem. We present an example application that requires such matrices, for which our algorithm can greatly assist with real system design. We also show how, if the encoding matrix is imbalanced, an optimal diagonal rescaling matrix can be computed by linear programming, so that the rescaled system enjoys the widest possible equivalence.
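The flavor of such a combinatorial check can be reproduced with a textbook optimality condition: a vector with support S and sign pattern s is the unique l1 minimizer when a dual certificate y exists with A_S'y = s and |a_j'y| < 1 off the support. The brute-force sketch below enumerates supports and signs and checks each certificate by linear programming; this is not the authors' (much faster) algorithm, only an illustration of why the problem is combinatorial:

```python
import numpy as np
from itertools import combinations, product
from scipy.optimize import linprog

def has_certificate(A, S, signs):
    """LP check for a dual certificate y with A_S' y = signs and
    |a_j' y| < 1 off the support: minimize t subject to those
    constraints and test whether the optimum satisfies t < 1.
    (A_S is assumed to have full column rank, as it generically does.)"""
    m, n = A.shape
    Sc = [j for j in range(n) if j not in S]
    c = np.r_[np.zeros(m), 1.0]                        # minimize t
    A_eq = np.c_[A[:, S].T, np.zeros(len(S))]
    A_ub = np.r_[np.c_[A[:, Sc].T, -np.ones(len(Sc))],
                 np.c_[-A[:, Sc].T, -np.ones(len(Sc))]]
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * len(Sc)),
                  A_eq=A_eq, b_eq=np.asarray(signs, float),
                  bounds=[(None, None)] * m + [(0, None)], method='highs')
    return res.success and res.fun < 1 - 1e-9

def max_equivalent_sparsity(A):
    """Largest k such that every k-sparse vector is the unique l1
    minimizer; the cost grows combinatorially, as in the abstract."""
    n, k = A.shape[1], 0
    while all(has_certificate(A, list(S), s)
              for S in combinations(range(n), k + 1)
              for s in product([-1.0, 1.0], repeat=k + 1)):
        k += 1
    return k

rng = np.random.default_rng(4)
print(max_equivalent_sparsity(rng.standard_normal((6, 10))))
```

Even for this small 6 x 10 example, the enumeration over supports and sign patterns grows quickly, which is exactly the cost the authors' algorithm attacks.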


Credit: NASA/JPL/University of Arizona. Phobos, a moon of Mars, as photographed by HiRISE on March 23, 2008.
