Monday, May 02, 2011

CS: The Long Compressive Sensing Entry of the Week

In this paper, we consider the problem of identifying the exact topology of an interconnected dynamical network from a limited number of measurements of the individual nodes. Within the network graph, we assume that interconnected nodes are coupled by a discrete-time convolution process, and we explain how, given observations of the node outputs, the problem of topology identification can be cast as solving a linear inverse problem. We use the term compressive observations in the case when there is a limited number of measurements available and thus the resulting inverse problem is highly underdetermined. Inspired by the emerging field of Compressive Sensing (CS), we then show that in cases where network interconnections are suitably sparse (i.e., the network contains sufficiently few links), it is possible to perfectly identify the topology from small numbers of node observations, even though this leaves a highly underdetermined set of linear equations. This can dramatically reduce the burden of data acquisition for problems involving network identification. The main technical novelty of our approach is in casting the identification problem as the recovery of a block-sparse signal x ∈ ℝ^N from the measurements b = Ax ∈ ℝ^M with M < N, where the measurement matrix A is a block-concatenation of Toeplitz matrices. We discuss identification guarantees, introduce the notion of network coherence for the analysis of interconnected networks, and support our discussions with illustrative simulations.
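
To make the measurement model concrete, here is a minimal Python sketch (not the authors' code; the sizes P, L, M and the random node inputs are illustrative assumptions) that builds A as a block-concatenation of Toeplitz convolution matrices, draws a block-sparse x with only k active links, and forms the compressive observations b = Ax with M < N:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)

# Illustrative sizes: P candidate links, impulse responses of length L,
# M output samples, so x lives in R^N with N = P*L and M < N.
P, L, M = 20, 8, 60
N = P * L

def conv_toeplitz(u, L):
    # M x L Toeplitz block encoding discrete-time convolution with u.
    row = np.zeros(L)
    row[0] = u[0]
    return toeplitz(u, row)

# A is a block-concatenation of Toeplitz matrices, one block per link.
A = np.hstack([conv_toeplitz(rng.standard_normal(M), L) for _ in range(P)])

# Block-sparse ground truth: only k of the P candidate links are active.
k = 3
x = np.zeros(N)
for j in rng.choice(P, size=k, replace=False):
    x[j * L:(j + 1) * L] = rng.standard_normal(L)

b = A @ x  # compressive observations: M = 60 equations, N = 160 unknowns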

Recent developments of new medical treatment techniques put challenging demands on ultrasound imaging systems in terms of both image quality and raw data size. Traditional sampling methods result in very large amounts of data, thus increasing demands on processing hardware and limiting the flexibility in the post-processing stages. In this paper, we apply Compressed Sensing (CS) techniques to analog ultrasound signals, following the recently developed Xampling framework. The result is a system with significantly reduced sampling rates which, in turn, means significantly reduced data size while maintaining the quality of the resulting images.

We propose a new subgradient method for the constrained minimization of convex functions. Common subgradient algorithms require an exact projection onto the feasible region in every iteration, therefore making such algorithms reasonable only for problems which admit a fast projection. In our method we use inexact projections and only require moving to within certain distances to the exact projections (which decrease in the course of the algorithm). In particular, and in stark contrast to the usual projected subgradient schemes, the iterates in our method can be infeasible throughout the whole procedure and still we are able to provide conditions which ensure convergence to an optimal feasible point under suitable assumptions. Additionally, we briefly sketch an application to finding the minimal ℓ1-norm solution to an underdetermined linear system, an important problem in Compressed Sensing, where it is also known as Basis Pursuit.
You may also check the attendant SPEAR (Sparse Exact and Approximate Recovery) project.
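
For intuition about the inexact-projection idea, here is a minimal Python sketch applied to Basis Pursuit (my illustration under simple assumptions, not the paper's algorithm; the 1/k subgradient step and the fixed number of Landweber steps standing in for the inexact projection are arbitrary choices):

```python
import numpy as np

def inexact_projected_subgradient(A, b, iters=2000, inner=3):
    # Basis Pursuit: min ||x||_1  s.t.  Ax = b.
    # Instead of the exact projection onto {x : Ax = b}, take a few
    # Landweber steps toward it, so iterates may remain infeasible.
    m, n = A.shape
    x = np.zeros(n)
    tau = 1.0 / np.linalg.norm(A, 2) ** 2  # safe Landweber step size
    for k in range(1, iters + 1):
        x = x - (1.0 / k) * np.sign(x)     # subgradient step on ||.||_1
        for _ in range(inner):             # inexact projection
            x = x - tau * A.T @ (A @ x - b)
    return x
```

Because the projection is only approximate, the iterates need not satisfy Ax = b at any finite iteration, which mirrors the infeasibility the abstract highlights.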



On Partially Sparse Recovery by A. S. Bandeira, Katya Scheinberg, Luis Nunes Vicente. The abstract reads:
In this paper we consider the problem of recovering a partially sparse solution of an underdetermined system of linear equations by minimizing the ℓ1-norm of the part of the solution vector which is known to be sparse. Such a problem is closely related to the classical problem in Compressed Sensing where the ℓ1-norm of the whole solution vector is minimized. We introduce analogues of restricted isometry and null space properties for the recovery of partially sparse vectors and show that these new properties are implied by their original counterparts. We show also how to extend recovery under noisy measurements to the partially sparse case.
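
A hedged sketch of the formulation, assuming cvxpy is available (the split A = [A1 A2], the dimensions, and the planted solution are all illustrative): only the part of the vector known to be sparse enters the ℓ1 objective.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
m, n1, n2 = 40, 10, 100        # x1: free part, x2: part known to be sparse
A1 = rng.standard_normal((m, n1))
A2 = rng.standard_normal((m, n2))

x1_true = rng.standard_normal(n1)
x2_true = np.zeros(n2)
x2_true[rng.choice(n2, size=5, replace=False)] = 1.0
b = A1 @ x1_true + A2 @ x2_true

x1, x2 = cp.Variable(n1), cp.Variable(n2)
# Minimize the l1-norm of the sparse part only, subject to the data.
cp.Problem(cp.Minimize(cp.norm1(x2)), [A1 @ x1 + A2 @ x2 == b]).solve()
```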

This paper proposes efficient algorithms for group sparse optimization with mixed ℓ2,1-regularization, which arises from the reconstruction of group sparse signals in compressive sensing, and the group Lasso problem in statistics and machine learning. It is known that encoding the group information in addition to sparsity will lead to better signal recovery/feature selection. The ℓ2,1-regularization promotes group sparsity, but the resulting problem, due to the mixed-norm structure and possible grouping irregularity, is considered more difficult to solve than the conventional ℓ1-regularized problem. Our approach is based on a variable splitting strategy and the classic alternating direction method (ADM). Two algorithms are presented, one derived from the primal and the other from the dual of the ℓ2,1-regularized problem. The convergence of the proposed algorithms is guaranteed by the existing ADM theory. General group configurations such as overlapping groups and incomplete covers can be easily handled by our approach. Computational results show that on random problems the proposed ADM algorithms exhibit good efficiency, and strong stability and robustness.
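
Here is a minimal ADMM-style sketch in Python for the non-overlapping case (my illustration, not the paper's primal or dual ADM algorithms, which also handle overlapping groups and incomplete covers; lam, rho, and the iteration count are arbitrary): variable splitting x = z makes the x-update a linear solve and the z-update a block soft-thresholding.

```python
import numpy as np

def group_admm(A, b, groups, lam=0.1, rho=1.0, iters=200):
    # min 0.5||Ax - b||^2 + lam * sum_g ||x_g||_2, non-overlapping groups.
    # ADM alternates an x-update (linear system), a z-update (block
    # soft-thresholding), and a dual ascent step on u.
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    G = A.T @ A + rho * np.eye(n)      # reused by every x-update
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(G, Atb + rho * (z - u))
        v = x + u
        for g in groups:               # groups: list of index arrays
            nv = np.linalg.norm(v[g])
            z[g] = max(0.0, 1.0 - lam / (rho * nv)) * v[g] if nv > 0 else 0.0
        u = u + x - z
    return z
```

A valid `groups` argument would be, e.g., `[np.arange(i, i + 4) for i in range(0, n, 4)]` for contiguous groups of four coordinates.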

A Sparsity Preserving Stochastic Gradient Method for Composite Optimization by Qihang Lin, Xi Chen, Javier Pena. The abstract reads:
We propose new stochastic gradient algorithms for solving convex composite optimization problems. In each iteration, our algorithms utilize a stochastic oracle of the gradient of the smooth component in the objective function. Our algorithms are based on a stochastic version of the estimate sequence technique introduced by Nesterov (Introductory Lectures on Convex Optimization: A Basic Course, Kluwer, 2003). We establish convergence results for the expectation and variance as well as large deviation properties of the objective value of the iterates generated by our algorithm. When applied to sparse regression problems, our algorithms have the advantage of readily enforcing sparsity at all iterations. We present some numerical experiments on simulated data sets.
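
The sparsity-at-all-iterations property can be illustrated with a plain stochastic proximal gradient sketch (a vanilla method, not the authors' accelerated estimate-sequence scheme; the step-size schedule is an assumption): each iteration queries the stochastic gradient oracle on one random row and then soft-thresholds, so every iterate is sparse.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_stochastic_prox(A, b, lam=0.1, iters=5000, seed=0):
    # min (1/2m)||Ax - b||^2 + lam*||x||_1 via stochastic proximal
    # gradient: one random row per step, then soft-thresholding.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for k in range(1, iters + 1):
        i = rng.integers(m)
        g = (A[i] @ x - b[i]) * A[i]   # stochastic gradient oracle
        eta = 1.0 / np.sqrt(k)         # diminishing step size
        x = soft_threshold(x - eta * g, eta * lam)
    return x
```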



High-quality multidimensional NMR spectra can be obtained from rapidly recorded non-uniformly sampled (NUS) data. The inherent loss of spectrum quality usually associated with NUS data is compensated by compressed sensing (CS); left spectrum: Nyquist–Shannon sampling, 22 h acquisition time, Fourier transform; right: CS non-linear sampling, 8.5 h acquisition time, ℓp-norm minimization.
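
As a toy illustration of this kind of reconstruction (a synthetic 1-D example using the ℓ1 relaxation, i.e. p = 1, not the paper's NMR processing pipeline), one can recover a sparse spectrum from non-uniformly sampled time-domain data by iterative soft-thresholding:

```python
import numpy as np

def soft(v, t):
    mag = np.abs(v)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * v, 0.0)

def cs_nus_recover(y, mask, n, lam=0.02, iters=300):
    # Recover a sparse spectrum s from non-uniformly sampled time-domain
    # data y = ifft(s)[mask] by iterative soft-thresholding (l1, p = 1).
    s = np.zeros(n, dtype=complex)
    for _ in range(iters):
        resid = np.fft.ifft(s, norm="ortho")[mask] - y
        grad = np.zeros(n, dtype=complex)
        grad[mask] = resid                 # adjoint of the sampling mask
        s = soft(s - np.fft.fft(grad, norm="ortho"), lam)
    return s

# Toy usage: an 8-peak spectrum sampled at a quarter of the Nyquist grid.
rng = np.random.default_rng(2)
n = 512
s_true = np.zeros(n, dtype=complex)
peaks = rng.choice(n, size=8, replace=False)
s_true[peaks] = rng.standard_normal(8) + 1j * rng.standard_normal(8)
mask = np.sort(rng.choice(n, size=n // 4, replace=False))
y = np.fft.ifft(s_true, norm="ortho")[mask]
s_hat = cs_nus_recover(y, mask, n)
```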


In this paper, we propose a novel compressive sensing (CS) based approach for sparse target counting and positioning in wireless sensor networks. While this is not the first work on applying CS to count and localize targets, it is the first to rigorously justify the validity of the problem formulation. Moreover, we propose a novel greedy matching pursuit algorithm (GMP) that complements the well-known signal recovery algorithms in CS theory and prove that GMP can accurately recover a sparse signal with a high probability. We also propose a framework for counting and positioning targets from multiple categories, a novel problem that has never been addressed before. Finally, we perform a comprehensive set of simulations whose results demonstrate the superiority of our approach over the existing CS and non-CS based techniques.
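
For readers unfamiliar with greedy recovery, the following is a plain orthogonal matching pursuit loop in Python; GMP itself differs and comes with its own recovery guarantee, but the pick-and-refit structure is the same flavor:

```python
import numpy as np

def omp(A, b, k):
    # Greedy loop: pick the column most correlated with the residual,
    # re-fit a least-squares solution on the chosen support, repeat.
    m, n = A.shape
    support, r = [], b.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        xs, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ xs
    x = np.zeros(n)
    x[support] = xs
    return x
```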
