
Friday, October 07, 2011

Compressive Sensing Literature This Week

This week we have quite a few interesting papers, some posters, and two talks. Enjoy.


PhaseLift: Exact and Stable Signal Recovery from Magnitude Measurements via Convex Programming by Emmanuel J. Candès, Thomas Strohmer, Vladislav Voroninski. The abstract reads:
Suppose we wish to recover a signal $x \in \mathbb{C}^n$ from $m$ intensity measurements of the form $|\langle x, z_i \rangle|^2$, $i = 1, 2, \ldots, m$; that is, from data in which phase information is missing. We prove that if the vectors $z_i$ are sampled independently and uniformly at random on the unit sphere, then the signal $x$ can be recovered exactly (up to a global phase factor) by solving a convenient semidefinite program, a trace-norm minimization problem; this holds with large probability provided that $m$ is on the order of $n \log n$, and without any assumption about the signal whatsoever. This novel result demonstrates that in some instances, the combinatorial phase retrieval problem can be solved by convex programming techniques. Finally, we also prove that our methodology is robust vis-à-vis additive noise.
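
The trace-norm relaxation is straightforward to prototype. Below is a minimal, real-valued sketch using cvxpy; the dimensions, the real (rather than complex) signal, and the eigenvector rounding step are illustrative assumptions, not the authors' code.

import numpy as np
import cvxpy as cp
rng = np.random.default_rng(0)
n, m = 10, 40                       # m a small multiple of n for this toy example
x = rng.standard_normal(n)          # unknown signal (real case for simplicity)
Z = rng.standard_normal((m, n))     # random sensing vectors z_i
b = (Z @ x) ** 2                    # intensity measurements |<x, z_i>|^2
# Lift x to X = x x^T and solve the trace-norm minimization over the PSD cone
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0] + [cp.trace(np.outer(Z[i], Z[i]) @ X) == b[i] for i in range(m)]
cp.Problem(cp.Minimize(cp.trace(X)), constraints).solve()
# The minimizer is (near) rank one; its leading eigenvector recovers x up to sign
w, V = np.linalg.eigh(X.value)
x_hat = np.sqrt(max(w[-1], 0)) * V[:, -1]
print(min(np.linalg.norm(x_hat - x), np.linalg.norm(x_hat + x)))   # ~ 0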


Hamming Compressed Sensing by Tianyi Zhou, Dacheng Tao. The abstract reads:
Compressed sensing (CS) and 1-bit CS cannot directly recover quantized signals and require time-consuming recovery. In this paper, we introduce \textit{Hamming compressed sensing} (HCS) that directly recovers a k-bit quantized signal of dimension $n$ from its 1-bit measurements via invoking $n$ times a Kullback-Leibler divergence based nearest neighbor search. Compared with CS and 1-bit CS, HCS allows the signal to be dense, takes considerably less (linear) recovery time and requires substantially fewer measurements ($\mathcal O(\log n)$). Moreover, HCS recovery can accelerate the subsequent 1-bit CS dequantizer. We study a quantized recovery error bound of HCS for general signals and a "HCS+dequantizer" recovery error bound for sparse signals. Extensive numerical simulations verify the appealing accuracy, robustness, efficiency and consistency of HCS.
The Hamming Compressive Sensing homepage is here. The implementation of the paper will be released soon.


We propose a joint source-channel-network coding scheme, based on compressive sensing principles, for wireless networks with AWGN channels (that may include multiple access and broadcast), with sources exhibiting temporal and spatial dependencies. Our goal is to provide a reconstruction of sources within an allowed distortion level at each receiver. We perform joint source-channel coding at each source by randomly projecting source values to a lower dimensional space. We consider sources that satisfy the restricted eigenvalue (RE) condition as well as more general sources for which the randomness of the network allows a mapping to lower dimensional spaces. Our approach relies on using analog random linear network coding. The receiver uses compressive sensing decoders to reconstruct sources. Our key insight is the fact that compressive sensing and analog network coding both preserve the source characteristics required for compressive sensing decoding.
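
That key insight lends itself to a quick numerical check: a random projection at the source composed with random analog mixing in the network is still a perfectly usable compressive sensing matrix. Here is a toy sketch with a greedy OMP decoder standing in for the paper's reconstruction (all sizes, and OMP itself, are illustrative assumptions):

import numpy as np
rng = np.random.default_rng(1)
n, k = 200, 5                          # source dimension and sparsity
m_src, m_net = 80, 50                  # source-side projection and network mixing sizes
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
P = rng.standard_normal((m_src, n))    # joint source-channel coding: random projection
C = rng.standard_normal((m_net, m_src))  # analog random linear network coding
y = C @ (P @ x)                        # what a receiver observes
A = C @ P                              # effective sensing matrix seen by the decoder
def omp(A, y, k):                      # plain orthogonal matching pursuit
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat
print(np.linalg.norm(omp(A, y, k) - x))   # ~ 0: mixing preserved recoverability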


Consider a wireless sensor network with $N$ sensor nodes measuring data which are correlated temporally or spatially. We consider the problem of reconstructing the original data by transmitting only $M \ll N$ sensor readings while guaranteeing that the reconstruction error is small. Assuming the original signal is “smooth” with respect to the network topology, our approach is to gather measurements from a random subset of nodes and then interpolate with respect to the graph Laplacian eigenbasis, leveraging ideas from compressed sensing. We propose algorithms for both temporally and spatially correlated signals, and the performance of these algorithms is verified using both synthesized data and real-world data. Significant savings are made in terms of energy resources, bandwidth, and query latency.
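
The interpolation step is compact enough to show in full. A sketch on a ring graph, where "smooth" means spanned by the low-frequency Laplacian eigenvectors (the graph, bandwidth, and sample size are illustrative assumptions):

import numpy as np
rng = np.random.default_rng(2)
N, M, B = 100, 30, 8                  # nodes, sampled nodes, signal bandwidth
L = 2 * np.eye(N)                     # Laplacian of a ring graph
for i in range(N):
    L[i, (i + 1) % N] = L[i, (i - 1) % N] = -1
_, U = np.linalg.eigh(L)              # eigenbasis, columns ordered by graph frequency
f = U[:, :B] @ rng.standard_normal(B)  # smooth signal: low frequencies only
S = rng.choice(N, M, replace=False)   # query a random subset of sensors
c_hat, *_ = np.linalg.lstsq(U[S, :B], f[S], rcond=None)
f_hat = U[:, :B] @ c_hat              # interpolate at every node
print(np.linalg.norm(f_hat - f))      # ~ 0 when M >= B and the sample is well spread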

Sharp Support Recovery from Noisy Random Measurements by L1 minimization by Charles Dossal, Marie-Line Chabanol, Gabriel Peyré, Jalal Fadili. The abstract reads:
In this paper, we investigate the theoretical guarantees of penalized $\ell_1$ minimization (also called Basis Pursuit Denoising or Lasso) in terms of sparsity pattern recovery (support and sign consistency) from noisy measurements with not-necessarily-random noise, when the sensing operator belongs to the Gaussian ensemble (i.e., a random design matrix with i.i.d. Gaussian entries). More precisely, we derive sharp non-asymptotic bounds on the sparsity level and (minimal) signal-to-noise ratio that ensure support identification for most signals and most Gaussian sensing matrices by solving the Lasso problem with an appropriately chosen regularization parameter. Our first purpose is to establish conditions allowing exact sparsity pattern recovery when the signal is strictly sparse. Then, these conditions are extended to cover the compressible or nearly sparse case. In these two results, the role of the minimal signal-to-noise ratio is crucial. Our third main result gets rid of this assumption in the strictly sparse case, but this time the Lasso allows only partial recovery of the support. We also provide in this case a sharp $\ell_2$-consistency result on the coefficient vector. The results of the present work have several distinctive features compared to previous ones. One of them is that the leading constants involved in all the bounds are sharp and explicit. This is illustrated by numerical experiments showing that the sharp sparsity level threshold below which our theory guarantees sparsistency of the Lasso matches the one observed empirically.
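
The sparsistency phenomenon is easy to reproduce with off-the-shelf tools. A small experiment with scikit-learn's Lasso on a Gaussian design; the sizes, the noise level, and the regularization scaling $\lambda \sim \sigma\sqrt{2\log p / n}$ are classic illustrative choices, not the paper's sharp constants:

import numpy as np
from sklearn.linear_model import Lasso
rng = np.random.default_rng(3)
n, p, k, sigma = 200, 500, 10, 0.1
A = rng.standard_normal((n, p)) / np.sqrt(n)     # Gaussian ensemble, unit-norm columns
x = np.zeros(p)
supp = rng.choice(p, k, replace=False)
x[supp] = rng.choice([-1.0, 1.0], k)             # strictly sparse, SNR above the noise
y = A @ x + sigma * rng.standard_normal(n)
lam = 2 * sigma * np.sqrt(2 * np.log(p) / n)     # textbook sparsistency scaling
model = Lasso(alpha=lam, fit_intercept=False).fit(A, y)
est = np.flatnonzero(model.coef_)
print(sorted(est) == sorted(supp))               # support recovered (w.h.p.)
print(np.array_equal(np.sign(model.coef_[supp]), np.sign(x[supp])))  # signs too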

Performance of Orthogonal Matching Pursuit for Multiple Measurement Vectors by Jie Ding, Laming Chen, Yuantao Gu. The abstract reads:
In this paper, we consider the orthogonal matching pursuit (OMP) algorithm for the multiple measurement vectors (MMV) problem. The robustness of OMPMMV is studied under general perturbations, i.e., when both the measurement vectors and the sensing matrix are perturbed by additive noise. The main result shows that although exact recovery of the sparse solutions is unrealistic in the noisy scenario, recovery of the support set of the solutions is guaranteed under suitable conditions. Specifically, a sufficient condition is derived that guarantees exact recovery of the sparse solutions in the noiseless scenario.
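
For intuition, the MMV version of OMP (often called simultaneous OMP) changes only the atom selection rule: correlations with the residual are aggregated across all measurement vectors before picking the next atom. A minimal sketch, with sizes and noise level as illustrative assumptions:

import numpy as np
rng = np.random.default_rng(4)
m, n, k, L = 40, 100, 5, 8                     # measurements, dimension, row sparsity, vectors
A = rng.standard_normal((m, n)) / np.sqrt(m)
X = np.zeros((n, L))
rows = rng.choice(n, k, replace=False)
X[rows] = rng.standard_normal((k, L))          # jointly sparse: shared row support
Y = A @ X + 0.01 * rng.standard_normal((m, L))  # noisy MMV observations
support, R = [], Y.copy()
for _ in range(k):
    scores = np.linalg.norm(A.T @ R, axis=1)   # aggregate correlation per atom
    support.append(int(np.argmax(scores)))
    coef, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
    R = Y - A[:, support] @ coef               # joint residual update
print(sorted(support) == sorted(rows))         # support recovered despite the noise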

Exact Dynamic Support Tracking with Multiple Measurement Vectors using Compressive MUSIC by Jong Min Kim, Ok Kyun Lee, Jong Chul Ye. The abstract reads:
Dynamic tracking of sparse targets has been one of the important topics in array signal processing. Recently, compressed sensing (CS) approaches have been extensively investigated as a new tool for this problem, using partial support information obtained by exploiting temporal redundancy. However, most of these approaches are formulated under the single measurement vector compressed sensing (SMV-CS) framework, where the performance guarantees are only probabilistic. The main contribution of this paper is to allow \textit{deterministic} tracking of time-varying supports with multiple measurement vectors (MMV) by exploiting multi-sensor diversity. In particular, we show that a novel compressive MUSIC (CS-MUSIC) algorithm with optimized partial support selection not only allows removal of the inaccurate portion of the previous support estimate, but also enables addition of the newly emerged part of the unknown support. Numerical results confirm the theory.

Coding-Theoretic Methods for Sparse Recovery by Mahdi Cheraghchi. The abstract reads:
We review connections between coding-theoretic objects and sparse learning problems. In particular, we show how seemingly different combinatorial objects such as error-correcting codes, combinatorial designs, spherical codes, compressed sensing matrices and group testing designs can be obtained from one another. The reductions enable one to translate upper and lower bounds on the parameters attainable by one object to another. We survey some of the well-known reductions in a unified presentation, and bring some existing gaps to attention. New reductions are also introduced; in particular, we bring up the notion of minimum "L-wise distance" of codes and show that this notion closely captures the combinatorial structure of RIP-2 matrices. Moreover, we show how this weaker variation of the minimum distance is related to combinatorial list-decoding properties of codes.

Learning to relate images: Mapping units, complex cells and simultaneous eigenspaces by Roland Memisevic. The abstract reads:
A fundamental operation in many vision tasks, including motion understanding, stereopsis, visual odometry, or invariant recognition, is establishing correspondences between images or between images and data from other modalities. We present an analysis of the role that multiplicative interactions play in learning such correspondences, and we show how learning and inferring relationships between images can be viewed as detecting rotations in the eigenspaces shared among a set of orthogonal matrices. We review a variety of recent multiplicative sparse coding methods in light of this observation. We also review how the squaring operation performed by energy models and by models of complex cells can be thought of as a way to implement multiplicative interactions. This suggests that the main utility of including complex cells in computational models of vision may be that they can encode relations, not invariances.
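
The eigenspace picture has a particularly compact special case: a 2-D rotation is diagonal (multiplication by $e^{i\theta}$) in its complex eigenbasis, so a single multiplicative interaction between the two inputs exposes the transformation while discarding the content. A toy illustration of this point (the setup is mine, not the paper's):

import numpy as np
rng = np.random.default_rng(5)
theta = 0.7                          # the unknown relation between the two inputs
# A point and its rotated copy, written as complex numbers: in this
# eigenbasis the rotation acts as multiplication by exp(i * theta).
x = complex(*rng.standard_normal(2))
y = np.exp(1j * theta) * x
# A multiplicative (gating) interaction recovers the relation,
# invariant to what x actually was.
theta_hat = np.angle(y * np.conj(x))
print(theta, theta_hat)              # equal, for any x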

Group Lasso with Overlaps: the Latent Group Lasso approach by Guillaume Obozinski, Laurent Jacob, Jean-Philippe Vert. The abstract reads:
We study a norm for structured sparsity which leads to sparse linear predictors whose supports are unions of predefined overlapping groups of variables. We call the obtained formulation latent group Lasso, since it is based on applying the usual group Lasso penalty on a set of latent variables. A detailed analysis of the norm and its properties is presented and we characterize conditions under which the set of groups associated with latent variables are correctly identified. We motivate and discuss the delicate choice of weights associated to each group, and illustrate this approach on simulated data and on the problem of breast cancer prognosis from gene expression data.
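
The latent construction is mechanical enough to sketch: duplicate each variable once per group that contains it, run the ordinary group Lasso on the duplicated design (here via proximal gradient descent), and sum the latent copies back. Groups, weights (uniform here), step size, and iteration count are illustrative assumptions:

import numpy as np
rng = np.random.default_rng(6)
groups = [[0, 1, 2], [2, 3, 4], [4, 5, 6]]       # overlapping groups
n, p, lam, step, iters = 50, 7, 0.5, 0.01, 2000
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[[0, 1, 2]] = 1.0                          # support = one full group
y = X @ w_true
idx = np.concatenate(groups)                     # latent copy -> original variable
Xd = X[:, idx]                                   # duplicated design matrix
cuts = np.cumsum([len(g) for g in groups])[:-1]
slices = np.split(np.arange(len(idx)), cuts)     # latent index blocks, one per group
v = np.zeros(len(idx))
for _ in range(iters):
    v -= step * Xd.T @ (Xd @ v - y)              # gradient step on the squared loss
    for s in slices:                             # prox: groupwise soft-thresholding
        nrm = np.linalg.norm(v[s])
        if nrm > 0:
            v[s] *= max(0.0, 1 - step * lam / nrm)
w = np.bincount(idx, weights=v, minlength=p)     # each w_j sums its latent copies
print(np.round(w, 2))                            # nonzeros form a union of groups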


We also have several posters:

Two talks:

SPARSE CODING NETWORKS AND COMPRESSED SENSING IN NEURAL SYSTEMS (INC talk)
When: Thu, October 6, 12:30pm – 1:30pm
Where: San Diego Supercomputer Center, East Annex South Wing, Level B1, EB-129

Description
Many recent results in the signal processing community have shown that signal models based on low-dimensional geometric structure such as sparsity (or manifolds) can be very powerful for many applications. For example, it is clear now that a whole host of inverse problems can be solved more effectively by taking advantage of this structure, with the recent example of compressed sensing (i.e., recovering signals from highly undersampled incoherent measurements) gaining significant attention. Interestingly, neural coding hypotheses based on these same sparse signal models have demonstrated an ability to account for observations such as receptive field properties in sensory systems.

In this talk I will discuss our previous work on implementing sparse coding models in biophysically plausible architectures. We will show that beyond simply accounting for receptive field structure, these networks can account for observed response properties of V1 cells. Specifically, I will highlight our recent results showing that these models can account for a wide variety of non-classical receptive field effects reported in V1. I will also highlight our preliminary results and ongoing work drawing connections between neural computation and the results of compressed sensing. In particular, we will briefly discuss our contributions to the compressed sensing literature that can be used in conjunction with sparse coding networks to model two distinct systems: communication bottlenecks in sensory pathways (e.g., the optic nerve) and recurrent networks for high-capacity sequence memory.

Christopher Rozell, Georgia Institute of Technology, http://users.ece.gatech.edu/~crozell/
Institute for Neural Computation Chalk Talk Series, http://inc.ucsd.edu/contactus.html
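
The abstract above has no equations, but the flavor of a biophysically plausible sparse coding network can be conveyed with a locally competitive algorithm (LCA)-style dynamics, in the spirit of Rozell's earlier work: units are driven by how well their filter matches the input, leak, and laterally inhibit units with similar filters. This is my own minimal rendering, not the speaker's code:

import numpy as np
rng = np.random.default_rng(7)
m, n, k = 30, 60, 4                  # input dimension, neurons, active generators
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)   # unit-norm dictionary / receptive fields
a_true = np.zeros(n)
a_true[rng.choice(n, k, replace=False)] = 1.0
y = Phi @ a_true                     # stimulus built from a few dictionary elements
lam, dt, tau = 0.1, 0.05, 1.0
G = Phi.T @ Phi - np.eye(n)          # lateral inhibition between similar neurons
u = np.zeros(n)                      # membrane potentials
a = np.zeros(n)
for _ in range(2000):
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # thresholded firing rates
    u += (dt / tau) * (Phi.T @ y - u - G @ a)          # leaky competitive dynamics
print(np.flatnonzero(a), np.flatnonzero(a_true))       # same active set (w.h.p.)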



and

Colloquium - Atri Rudra
Oct 10, 2011
from 03:30 pm to 04:30 pm, 333 IST Bldg., Penn State
Group testing was formalized by Dorfman in his 1943 paper and was originally used in WW-II to identify soldiers with syphilis. The main insight in this application is that blood samples from different soldiers can be combined to check if at least one of the soldiers in the pool has the disease. Since then group testing has found numerous applications in many areas such as (computational) biology, combinatorics and (theoretical) computer science.

Theory of error-correcting codes, or coding theory, was born in the works of Shannon in 1948 and Hamming in 1950. Codes are ubiquitous in our daily life and have also found numerous applications in theoretical computer science in general and computational complexity in particular.

Kautz and Singleton connected these two areas in their 1964 paper by using "code concatenation" to design good group testing schemes. All of the (asymptotically) best known explicit constructions of group testing schemes use the code concatenation paradigm. In this talk, we will focus on the "decoding" problem for group testing: i.e., given the outcomes of the tests on the pools, identify the infected soldiers.
Recent applications of group testing in data stream algorithms require sub-linear time decoding, which is not guaranteed by the traditional constructions.
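
For contrast with the sub-linear goal, the straightforward decoder runs in time linear in the size of the pooling matrix: any soldier who appears in a negative pool is healthy, and for a d-disjunct design the survivors are exactly the infected. A minimal sketch (the random pooling design is an illustrative stand-in for the concatenated-code constructions in the talk):

import numpy as np
rng = np.random.default_rng(8)
N, d, T = 1000, 5, 200               # soldiers, infected, tests (pools)
M = rng.random((T, N)) < 0.15        # pooling design: test t contains soldier i
x = np.zeros(N, dtype=bool)
x[rng.choice(N, d, replace=False)] = True         # the unknown infected set
outcome = (M.astype(int) @ x.astype(int)) > 0     # a pool is positive iff it hits one
candidates = set(range(N))           # naive decoding: negative pools eliminate members
for t in np.flatnonzero(~outcome):
    candidates -= set(np.flatnonzero(M[t]).tolist())
print(sorted(candidates) == sorted(np.flatnonzero(x).tolist()))   # True w.h.p.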

We will show that recent developments in list decoding of codes lead in a modular way to sub-linear time decodable group testing schemes that match the best known parameters of group testing schemes (which might not be efficiently decodable).

The talk will be self-contained and is based on joint work with Piotr Indyk, Hung Ngo and Ely Porat.

Credit NASA.

