Monday, March 02, 2009

CS: Wavelet Tour site, Spike trains, RaMP, Sub-Nyquist Sampling of Analog Signals , Circulant and Toeplitz in CS, Accelerating recovery

Back to our regularly scheduled programming on Compressive Sensing:

Gabriel Peyré just let me know of the companion site for the third edition of Stéphane Mallat's A Wavelet Tour of Signal Processing: The Sparse Way. The site is at:

It has all the Matlab code used to produce the figures in the book. This is a great resource.

Volkan Cevher also mentions two papers he co-wrote: one on the RAmP property we have seen before, and the other on neuronal spike trains. Now we have some hope of winning 10,000 euros from EPFL next time :-) Oops, I have said too much to the roughly 500 people reading this blog daily. It's a shame. So here they are:

Compressive sensing recovery of spike trains using a structured sparsity model by Chinmay Hegde, Marco Duarte, and Volkan Cevher. The abstract reads:

The theory of Compressive Sensing (CS) exploits a well-known concept used in signal compression – sparsity – to design new, efficient techniques for signal acquisition. CS theory states that for a length-N signal x with sparsity level K, M = O(K log(N/K)) random linear projections of x are sufficient to robustly recover x in polynomial time. However, richer models are often applicable in real-world settings that impose additional structure on the sparse nonzero coefficients of x. Many such models can be succinctly described as a union of K-dimensional subspaces. In recent work, we have developed a general approach for the design and analysis of robust, efficient CS recovery algorithms that exploit such signal models with structured sparsity. We apply our framework to a new signal model which is motivated by neuronal spike trains. We model the firing process of a single Poisson neuron with absolute refractoriness using a union of subspaces. We then derive a bound on the number of random projections M needed for stable embedding of this signal model, and develop an algorithm that provably recovers any neuronal spike train from M measurements. Numerical experimental results demonstrate the benefits of our model-based approach compared to conventional CS recovery techniques.
In the meantime, we can all hope to use a similar approach on the CRCNS - Collaborative Research in Computational Neuroscience biological / neuroscience datasets available for download here. A particular interest of mine is the eye-gaze data.
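To give a flavor of the refractoriness constraint in the paper's signal model, here is a minimal sketch (not the authors' algorithm) of a structured K-term approximation: greedily keep the largest-magnitude spikes subject to a minimum separation `delta` between any two of them. The function name and the greedy rule are my own; in model-based CS such a projection step would be plugged into a recovery loop like CoSaMP in place of plain hard thresholding.

```python
import numpy as np

def spike_approx(x, K, delta):
    """Greedy K-term approximation of x whose nonzeros are pairwise
    separated by at least `delta` samples -- a simple stand-in for a
    spike-train model with an absolute refractory period."""
    order = np.argsort(-np.abs(x))   # indices by decreasing magnitude
    chosen = []
    for i in order:
        if len(chosen) == K:
            break
        if all(abs(i - j) >= delta for j in chosen):
            chosen.append(i)
    out = np.zeros_like(x)
    out[chosen] = x[chosen]
    return out
```

Note how the separation constraint can force the approximation to skip a large coefficient that sits inside another spike's refractory window, which is exactly the structure a plain K-sparse model cannot express.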

Recovery of Compressible Signals in Unions of Subspaces by Marco Duarte, Chinmay Hegde, Volkan Cevher, and Richard Baraniuk. The abstract reads:

Compressive sensing (CS) is an alternative to Shannon/Nyquist sampling for acquisition of sparse or compressible signals; instead of taking periodic samples, we measure inner products with M < N random vectors and then recover the signal via a sparsity-seeking optimization or greedy algorithm. Initial research has shown that by leveraging stronger signal models than standard sparsity, the number of measurements required for recovery of a structured sparse signal can be much lower than that of standard recovery. In this paper, we introduce a new framework for structured compressible signals based on the unions of subspaces signal model, along with a new sufficient condition for their recovery that we dub the restricted amplification property (RAmP). The RAmP is the natural counterpart to the restricted isometry property (RIP) of conventional CS. Numerical simulations demonstrate the validity and applicability of our new framework using wavelet-tree compressible signals as an example.

We also have three papers from arXiv:

From Theory to Practice: Sub-Nyquist Sampling of Sparse Wideband Analog Signals by Moshe Mishali and Yonina Eldar. The abstract reads:
Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of spectrum-blind sub-Nyquist sampling of multiband signals. The Fourier transform of such signals occupies only a small portion of a wide spectrum, with unknown frequency support. Our primary design goals are efficient hardware implementation and low computational load on the supporting digital processing. We suggest a system, named the modulated wideband converter, which first multiplies the analog signal by a bank of periodic waveforms. The product is then lowpass filtered and sampled uniformly at low rate. We derive necessary and sufficient conditions ensuring perfect recovery from the proposed system. Reconstruction relies on recent ideas developed in the context of analog compressed sensing, and comprises a digital step which recovers the spectral support. Our approach enables baseband processing, namely generating a low rate sequence corresponding to any information band of interest from the given samples, without going through the high Nyquist rate. Numerical simulations demonstrate robustness as well as several further hardware simplifications. We compare our system with two previous approaches: periodic nonuniform sampling, which is bandwidth limited by existing hardware devices, and the random demodulator, which is sensitive to parameter choice, has a high computational load, and is restricted to multitone signals. In addition, neither of these methods allows baseband processing. In the broader context of Nyquist sampling, our scheme has the potential to break through the bandwidth barrier of state-of-the-art analog conversion technologies such as interleaved converters.
An attendant presentation can be found here.
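For readers who want to play with the measurement model, here is a crude discrete emulation of the front end, under assumptions of my own: random ±1 sign sequences stand in for the paper's periodic waveforms, and block averaging stands in for the lowpass filter before decimation. The function name `mwc_measure` is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mwc_measure(x, num_channels, decim):
    """Toy discrete emulation of a modulated-wideband-converter front end:
    each channel multiplies x by a +/-1 chipping sequence, then
    'lowpass filters' by averaging blocks of `decim` samples.
    (Random signs here; the actual system uses periodic waveforms.)"""
    N = len(x)
    signs = rng.choice([-1.0, 1.0], size=(num_channels, N))
    mixed = signs * x                                     # channel-wise mixing
    y = mixed.reshape(num_channels, N // decim, decim).mean(axis=2)
    return y, signs
```

Each channel thus delivers N/decim samples instead of N, and the spectral support must be untangled from the few mixed channels in the digital reconstruction step the abstract describes.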

Circulant and Toeplitz matrices in compressed sensing by Holger Rauhut. The abstract reads:
Compressed sensing seeks to recover a sparse vector from a small number of linear and non-adaptive measurements. While most work so far focuses on Gaussian or Bernoulli random measurements we investigate the use of partial random circulant and Toeplitz matrices in connection with recovery by $\ell_1$-minimization. In contrast to recent work in this direction we allow the use of an arbitrary subset of rows of a circulant or Toeplitz matrix. Our recovery result predicts that the necessary number of measurements to ensure sparse reconstruction by $\ell_1$-minimization with random partial circulant or Toeplitz matrices scales linearly in the sparsity up to a $\log$-factor in the ambient dimension. This represents a significant improvement over previous recovery results for such matrices. As a main tool for the proofs we use a new version of the non-commutative Khintchine inequality.
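One practical appeal of circulant measurement matrices, beyond the theory in the paper, is that they can be applied in O(N log N) via the FFT instead of O(MN) with an explicit matrix. A minimal sketch (my own helper, not code from the paper): a circulant matrix with first column `c` acts on `x` as a circular convolution, and a partial matrix just keeps a subset of the output rows.

```python
import numpy as np

rng = np.random.default_rng(1)

def partial_circulant(c, rows):
    """Return a function applying the partial circulant matrix generated
    by vector c, restricted to the given row subset. C @ x equals the
    circular convolution of c and x, computed here with the FFT."""
    def apply(x):
        full = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real
        return full[rows]
    return apply
```

Drawing `c` at random and choosing an arbitrary row subset gives exactly the kind of partial random circulant measurement operator the abstract analyzes.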

and Accelerating gradient projection methods for $\ell_1$-constrained signal recovery by steplength selection rules by Ignace Loris, Mario Bertero, Christine De Mol, Ricardo Zanella, and Luca Zanni. The abstract reads:
We propose a new gradient projection algorithm that compares favorably with the fastest algorithms available to date for $\ell_1$-constrained sparse recovery from noisy data, both in the compressed sensing and inverse problem frameworks. The method exploits a line-search along the feasible direction and an adaptive steplength selection based on recent strategies for the alternation of the well-known Barzilai-Borwein rules. The convergence of the proposed approach is discussed and a computational study on both well-conditioned and ill-conditioned problems is carried out for performance evaluations in comparison with five other algorithms proposed in the literature.
I would expect this algorithm to eventually run on GPUs, as one of the authors has done similar work recently (V. Ruggiero, T. Serafini, R. Zanella, L. Zanni, Iterative regularization algorithms for constrained image deblurring on graphics processors).
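To make the ingredients concrete, here is a bare-bones sketch of gradient projection onto the $\ell_1$-ball with a single Barzilai-Borwein steplength, under my own assumptions; the paper's contribution is a smarter line search and an alternation between the two BB rules, which this toy version does not implement.

```python
import numpy as np

def project_l1(v, tau):
    """Euclidean projection of v onto the l1-ball of radius tau
    (standard sort-based method)."""
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u > (css - tau) / np.arange(1, len(u) + 1))[0][-1]
    theta = (css[rho] - tau) / (rho + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def gp_bb(A, y, tau, iters=300):
    """Gradient projection for min ||Ax - y||^2 s.t. ||x||_1 <= tau,
    with one Barzilai-Borwein steplength rule (a sketch, not the
    authors' adaptive alternation strategy)."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - y)                 # gradient of 0.5*||Ax - y||^2
    step = 1.0
    for _ in range(iters):
        x_new = project_l1(x - step * g, tau)
        g_new = A.T @ (A @ x_new - y)
        s, z = x_new - x, g_new - g
        denom = s @ z
        step = (s @ s) / denom if denom > 1e-12 else 1.0   # BB1 step
        x, g = x_new, g_new
    return x
```

Every iterate stays feasible because the projection is exact, and the BB step adapts to the local curvature without any matrix factorization, which is what makes this family of methods attractive for large problems.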

Credit Photo: Olivier Godard, Mount Rainier, WA.
