
Sunday, February 07, 2010

CS: This week's longest entry. New Preprints, a Job and two Talks.

I know, I know... for most of you today is Sunday, but for those readers close to the International Date Line it is already Monday, in the middle of the night, so what better way to start the week than by reading Nuit Blanche and all the good stuff that comes with it. Enjoy!

Once again some goodness is coming out of Rice; today it's Spectral Compressive Sensing by Marco Duarte and Rich Baraniuk. The abstract reads:
Compressive sensing (CS) is a new approach to simultaneous sensing and compression of sparse and compressible signals. A great many applications feature smooth or modulated signals that can be modeled as a linear combination of a small number of sinusoids; such signals are sparse in the frequency domain. In practical applications, the standard frequency domain signal representation is the discrete Fourier transform (DFT). Unfortunately, the DFT coefficients of a frequency-sparse signal are themselves sparse only in the contrived case where the sinusoid frequencies are integer multiples of the DFT's fundamental frequency. As a result, practical DFT-based CS acquisition and recovery of smooth signals does not perform nearly as well as one might expect. In this paper, we develop a new spectral compressive sensing (SCS) theory for general frequency-sparse signals. The key ingredients are an over-sampled DFT frame, a signal model that inhibits closely spaced sinusoids, and classical sinusoid parameter estimation algorithms from the field of spectrum estimation. Using periodogram and eigen-analysis based spectrum estimates (e.g., MUSIC), our new SCS algorithms significantly outperform the current state-of-the-art CS algorithms while providing provable bounds on the number of measurements required for stable recovery.
The attendant toolbox is available at: http://dsp.rice.edu/scs
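To get a feel for the issue, here is a small Python sketch (mine, not part of the SCS toolbox) illustrating the key ingredient of the abstract: an off-grid sinusoid is poorly represented in the DFT basis, yet it can be recovered from random measurements by running a greedy pursuit over an oversampled DFT frame. All sizes and parameters below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, c = 256, 80, 4                  # signal length, number of measurements, DFT oversampling
f0 = 10.3                             # frequency that is NOT an integer multiple of 1/n
t = np.arange(n)
x = np.cos(2 * np.pi * f0 * t / n)    # frequency-sparse, but not sparse in the DFT basis

# Oversampled DFT frame: unit-norm sinusoid atoms on a grid c times finer than the DFT grid
freqs = np.arange(c * n) / (c * n)
D = np.exp(2j * np.pi * np.outer(t, freqs)) / np.sqrt(n)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurement matrix
y = (Phi @ x).astype(complex)
A = Phi @ D                                      # effective dictionary seen by the pursuit

# Plain OMP over the redundant frame (the paper's SCS algorithms additionally use a model
# that inhibits closely spaced sinusoids and spectrum-estimation tools such as MUSIC)
k = 4                                            # a real cosine needs a conjugate pair of atoms
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.conj().T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

x_hat = np.real(D[:, support] @ coef)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
print("selected frequencies (cycles/sample):", np.sort(freqs[support]))
```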


The Google webcrawler just found the following PhD and postdoc positions at EPFL. I'll add them to the compressive sensing jobs page. Here is the ad:

In the context of a new ERC award, several PhD and Post-Doc positions are available in the field of mathematical signal processing at the Laboratory of Audiovisual Communications, Ecole Polytechnique Fédérale de Lausanne (EPFL) http://lcavwww.epfl.ch/.

Abstract of project: The subject of signal representations with Fourier and wavelet bases is central to signal processing and communications. Non-linear approximation methods in such bases are key for problems like denoising, compression and inverse problems.
Recently, the idea that signals that are sparse in some domain can be acquired at low sampling density has generated strong interest, under various names like compressed sensing, compressive sampling and sparse sampling.
We aim to study the central problem of acquiring continuous-time signals for discrete-time processing and reconstruction using the methods of sparse sampling. Solving this involves developing theory and algorithms for sparse sampling, both in continuous and discrete time. In addition, in order to acquire physical signals, we plan to develop a sampling theory for signals obeying physical laws, like the wave and diffusion equation, and light fields.
Together, this will lead to a sparse sampling theory and framework for signal processing and communications, with applications from analog-to-digital conversion to new compression methods, to super-resolution data acquisition and to inverse problems in imaging. In sum, we aim to develop the theory and algorithms for sparse signal processing, with impact on a broad range of applications.

Please send applications with a CV, list of publications, and the names of 3 references for
Post-doctoral positions to:
EPFL/IC/LCAV
Jacqueline Aeberhard
Station 14
1015 Lausanne
jacqueline.aeberhard@epfl.ch
Before March 15th, 2010
PhD positions to:
http://phd.epfl.ch/page57750-en.html
Before April 15th, 2010


Laurent Duval just mentioned to me this announcement for a seminar in Paris this coming week:
The next talk of the analysis and probability seminar at Paris-Dauphine will take place this

Tuesday, February 9.

Josselin Garnier, from the Université de Paris 7, will speak on "Passive sensor imaging using cross correlations of ambient noise signals / Imagerie passive par exploitation du bruit de fond". The seminar takes place at 11 am in room A 707 (7th floor of the new wing).
It reminds me a lot of the Imaging with Nature concept (which I should add to the list of Technologies that do not Exist). Meanwhile, Laurent Daudet mentioned to me this other seminar (also in Paris):
Maxime Dahan
Thursday, February 11, 2010, from 2:30 to 3:30 pm (Amphithéâtre Boreau, ESPCI)
Fluorescence imaging using compressed sensing
This brings us back to the Spectral Compressive Sensing concept of Marco Duarte and Rich Baraniuk introduced at the beginning of this entry. Thanks Laurent and Laurent.


There are three new papers on the Rice Compressive Sensing Repository:

Collaborative spectrum sensing from sparse observations using matrix completion for cognitive radio networks by Jia (Jasmine) Meng, Wotao Yin, Husheng Li, Ekram Hossain, and Zhu Han. The abstract reads:

In cognitive radio, spectrum sensing is a key component to detect spectrum holes (i.e., channels not used by any primary users). Collaborative spectrum sensing among the cognitive radio nodes is expected to improve the ability of checking complete spectrum usage states. Unfortunately, due to power limitation and channel fading, available channel sensing information is far from being sufficient to tell the unoccupied channels directly. Aiming at breaking this bottleneck, we apply recent matrix completion techniques to greatly reduce the sensing information needed. We formulate the collaborative sensing problem as a matrix completion subproblem and a joint-sparsity reconstruction subproblem. Results of numerical simulations that validated the effectiveness and robustness of the proposed approach are presented. In particular, in noiseless cases, when the number of primary users is small, exact detection was obtained with no more than 8% of the complete sensing information, while as the number of primary users increases, to achieve a detection rate of 95.55%, the required information percentage was merely 16.8%.
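As a quick illustration of the matrix completion step only (this is a generic low-rank completion sketch in Python with made-up sizes, not the authors' joint matrix-completion/joint-sparsity formulation), one can simply alternate between enforcing the observed entries and projecting onto low-rank matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_nodes, rank = 40, 20, 2          # illustrative sizes, not taken from the paper
M = rng.standard_normal((n_channels, rank)) @ rng.standard_normal((rank, n_nodes))
mask = rng.random(M.shape) < 0.3               # only ~30% of the entries are "sensed"

# Alternate between (i) agreeing with the observed entries and (ii) projecting onto
# rank-`rank` matrices via a truncated SVD (a simple "hard impute" style iteration).
X = np.zeros_like(M)
for _ in range(300):
    X[mask] = M[mask]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[rank:] = 0.0
    X = U @ np.diag(s) @ Vt

print("relative completion error:",
      np.linalg.norm(X - M) / np.linalg.norm(M))
```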
Aha! Here is something that is bound to have an impact in making compressive sensing mainstream. As you recall, the connection between normal coded aperture and compressive sensing coded aperture was made clear through the use of Toeplitz matrices (see this entry and this one); today we have the following: Practical compressive sensing with Toeplitz and circulant matrices by Wotao Yin, Simon Morgan, Junfeng Yang, Yin Zhang. The abstract reads:
Compressive sensing encodes a signal into a relatively small number of incoherent linear measurements. In theory, the optimal incoherence is achieved by completely random measurement matrices. However, such matrices are difficult and/or costly to implement in hardware realizations. After summarizing recent results of how random Toeplitz and circulant matrices can be easily (or even naturally) realized in various applications, we introduce fast algorithms for reconstructing signals from incomplete Toeplitz and circulant measurements. We present computational results showing that Toeplitz and circulant matrices are not only as effective as random matrices for signal encoding, but also permit much faster signal decoding.
The full site, hosting the soon-to-be-available code as well as most of the paper in a visually attractive fashion, is at:


I'll keep you posted as to when the code becomes available.
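In the meantime, the practical appeal of circulant measurements is easy to demonstrate: the measurement operator and its adjoint reduce to FFTs, so even a naive solver never forms a dense matrix. Here is a small Python sketch of that idea (mine, with arbitrary sizes and a plain ISTA solver, so in no way a substitute for the authors' forthcoming code):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 512, 120, 8                       # signal length, measurements, sparsity
c = rng.standard_normal(n)                  # first column of a random circulant matrix
rows = rng.choice(n, size=m, replace=False) # keep a random subset of the rows
C_fft = np.fft.fft(c)

def A(x):
    """Partial circulant measurement: subsampled circular convolution, done with FFTs."""
    return np.real(np.fft.ifft(C_fft * np.fft.fft(x)))[rows] / np.sqrt(m)

def At(y):
    """Adjoint: zero-fill the kept rows, then correlate with the circulant kernel."""
    z = np.zeros(n)
    z[rows] = y / np.sqrt(m)
    return np.real(np.fft.ifft(np.conj(C_fft) * np.fft.fft(z)))

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A(x)

# Step size from a few power iterations on A^T A
v = rng.standard_normal(n)
for _ in range(20):
    v = At(A(v))
    v /= np.linalg.norm(v)
L = np.linalg.norm(At(A(v)))

# ISTA for  min_x 0.5 ||A x - y||^2 + lam ||x||_1  (each iteration costs four FFTs)
lam, x_hat = 0.01, np.zeros(n)
for _ in range(1500):
    z = x_hat - At(A(x_hat) - y) / L
    x_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```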
The third paper from the Rice repository introduces EdgeCS, an edge-guided compressive sensing reconstruction scheme; the abstract reads:
Accurate edge information can significantly improve image recovery quality and speed, but such information is encoded in the CS measurements of an image. To take advantage of edge information in CS recovery, EdgeCS alternately performs CS reconstruction and edge detection in a way that each benefits from the latest solution of the other. EdgeCS is fast and returns high-quality images. It exactly recovers the 256 × 256 Shepp-Logan phantom from merely 7 radial lines (or 3.03% k-space), which is impossible for most existing algorithms. It accurately reconstructs a 512 × 512 magnetic resonance image from 21% noisy samples. Moreover, it is also able to reconstruct complex-valued images. Each took about 30 seconds on an ordinary laptop. The algorithm can be easily ported to GPUs for a speedup of more than 10-fold.
My webcrawler also found the following three papers. The first is Compressive through-focus imaging by Oren Mangoubi and Edwin A. Marengo; the abstract reads:
Optical sensing and imaging applications often suffer from a combination of low resolution object reconstructions and a large number of sensors (thousands), which depending on the frequency can be quite expensive or bulky. A key objective in optical design is to minimize the number of sensors (which reduces cost) for a given target resolution level (image quality) and permissible total sensor array size (compactness). Equivalently, for a given imaging hardware one seeks to maximize image quality, which in turn means fully exploiting the available sensors as well as all priors about the properties of the sought-after objects such as sparsity properties and others, which can be incorporated into data processing schemes for object reconstructions. In this paper we propose a compressive-sensing-based method to process through-focus optical field data captured at a sensor array. This method applies to both two-dimensional (2D) and three-dimensional (3D) objects. The proposed approach treats in-focus and out-of-focus data as projective measurements for compressive sensing, and assumes that the objects are sparse under known linear transformations applied to them. This prior allows reconstruction via familiar compressive sensing methods based on 1-norm minimization. The proposed compressive through-focus imaging is illustrated in the reconstruction of canonical 2D and 3D objects, using either coherent or incoherent light. The obtained results illustrate the combined use of through-focus imaging and compressive sensing techniques, and also shed light onto the nature of the information that is present in in-focus and out-of-focus images.

Distilled Sensing: Adaptive Sampling for Sparse Detection and Estimation by Jarvis Haupt, Rui Castro and Robert Nowak. The abstract reads:
Adaptive sampling results in dramatic improvements in the recovery of sparse signals in white Gaussian noise. A sequential adaptive sampling-and-refinement procedure called distilled sensing (DS) is proposed and analyzed. DS is a form of multi-stage experimental design and testing. Because of the adaptive nature of the data collection, DS can detect and localize far weaker signals than possible from non-adaptive measurements. In particular, reliable detection and localization (support estimation) using non-adaptive samples is possible only if the signal amplitudes grow logarithmically with the problem dimension. Here it is shown that using adaptive sampling, reliable detection is possible provided the amplitude exceeds a constant, and localization is possible when the amplitude exceeds any arbitrarily slowly growing function of the dimension.
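The distillation idea is simple enough to fit in a few lines. Below is a Python sketch under my own simplifying assumptions (non-negative spikes, and the sensing budget split evenly across stages rather than the allocation analyzed in the paper); the amplitude is chosen below the usual sqrt(2 log n) level, so a single non-adaptive pass with the same total budget would miss a large fraction of the spikes.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, amp, stages = 2**16, 100, 3.0, 5
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = amp       # sparse non-negative signal

budget = 2.0 * n                                    # total sensing energy
active = np.arange(n)                               # coordinates still under consideration
for s in range(stages):
    energy = budget / stages / active.size          # energy per retained coordinate
    obs = np.sqrt(energy) * x[active] + rng.standard_normal(active.size)
    if s < stages - 1:
        active = active[obs > 0]                    # distill: true spikes rarely look negative
    else:
        detected = active[obs > np.sqrt(2 * np.log(n))]

true_support = set(np.flatnonzero(x))
print("coordinates surviving distillation:", active.size)
print("true positives:", len(set(detected) & true_support), "out of", k)
print("false positives:", len(set(detected) - true_support))
```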

The third one deals with matched subspace detection from highly incomplete observations; the abstract reads:
We consider the problem of deciding whether a highly incomplete signal lies within a given subspace. This problem, Matched Subspace Detection, is a classical, well-studied problem when the signal is completely observed. High-dimensional testing problems in which it may be prohibitive or impossible to obtain a complete observation motivate this work. The signal is represented as a vector in R^n, but we only observe m ≪ n of its elements. We show that reliable detection is possible, under mild incoherence conditions, as long as m is slightly greater than the dimension of the subspace in question.
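The test itself is short: restrict a basis of the subspace to the observed coordinates, project the observed entries, and look at the residual energy. Here is a minimal Python sketch (illustrative dimensions, noiseless, and not the exact statistic analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
n, r, m = 1000, 5, 15                 # ambient dimension, subspace dimension, observed entries
U = np.linalg.qr(rng.standard_normal((n, r)))[0]   # orthonormal basis of the subspace S

def residual_energy(v, omega):
    """Energy of the observed entries of v left unexplained by the rows U[omega] of the basis."""
    U_o, v_o = U[omega], v[omega]
    coef, *_ = np.linalg.lstsq(U_o, v_o, rcond=None)
    return float(np.sum((v_o - U_o @ coef) ** 2))

omega = rng.choice(n, size=m, replace=False)       # indices of the m observed entries
x_in = U @ rng.standard_normal(r)                  # a vector lying in S
x_out = rng.standard_normal(n)                     # a generic vector, almost surely not in S

print("residual, signal in S     :", residual_energy(x_in, omega))
print("residual, signal not in S :", residual_energy(x_out, omega))
```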

The next papers were presented at SAMPTA'09. I do not think I have covered all of them before (I may be wrong), but here they are for completeness:

Quantization for Compressed Sensing Reconstruction by John Z. Sun, Vivek K. Goyal. The abstract reads:
Quantization is an important but often ignored consideration in discussions about compressed sensing. This paper studies the design of quantizers for random measurements of sparse signals that are optimal with respect to mean-squared error of the lasso reconstruction. We utilize recent results in high-resolution functional scalar quantization and homotopy continuation to approximate the optimal quantizer. Experimental results compare this quantizer to other practical designs and show a noticeable improvement in the operational distortion-rate performance.

Analysis of High-Dimensional Signal Data by Manifold Learning and Convolutions by Mijail Guillemard, Armin Iske. The abstract reads:
A novel concept for the analysis of high-dimensional signal data is proposed. To this end, customized techniques from manifold learning are combined with convolution transforms, being based on wavelets. The utility of the resulting method is supported by numerical examples concerning low-dimensional parameterizations of scale modulated signals and solutions to the wave equation at varying initial conditions.

Linear Signal Reconstruction from Jittered Sampling by Alessandro Nordio, Carla-Fabiana Chiasserini, Emanuele Viterbo. The abstract reads:
This paper presents an accurate and simple method to evaluate the performance of AD/DA converters affected by clock jitter, which is based on the analysis of the mean square error (MSE) between the reconstructed signal and the original one. Using an approximation of the linear minimum MSE (LMMSE) filter as reconstruction technique, we derive analytic expressions of the MSE. Through asymptotic analysis, we evaluate the performance of digital signal reconstruction as a function of the clock jitter, number of quantization bits, signal bandwidth and sampling rate.

Finite Range Scalar Quantization for Compressive Sensing by Jason Laska, Petros Boufounos, Richard Baraniuk. The abstract reads:
Analog-to-digital conversion comprises two fundamental discretization steps: sampling and quantization. Recent results in compressive sensing (CS) have overhauled the conventional wisdom related to the sampling step, by demonstrating that sparse or compressible signals can be sampled at rates much closer to their sparsity rate, rather than their bandwidth. This work further overhauls the conventional wisdom related to the quantization step by demonstrating that quantizer overflow can be treated differently in CS and by exploiting the tradeoff between quantization error and overflow. We demonstrate that contrary to classical approaches that avoid quantizer overflow, a better finite-range scalar quantization strategy for CS is to amplify the signal such that the finite range quantizer overflows at a pre-determined rate, and subsequently reject the overflowed measurements from the reconstruction. Our results further suggest a simple and effective automatic gain control strategy which uses feedback from the saturation rate to control the signal gain.

A method for generalized sampling and reconstruction of finite-rate-of-innovation signals by Chandra Sekhar Seelamantula, Michael Unser. The abstract reads:
We address the problem of generalized sampling and reconstruction of finite-rate-of-innovation signals. Specifically, we consider the problem of sampling streams of Dirac impulses and propose a two-channel method that enables fast, local reconstruction under some suitable conditions. We also specify some acquisition kernels and give the associated reconstruction formulas. It turns out that these kernels can also be combined into one kernel, which can be employed in the single-channel sampling scenario. The single-kernel approach carries over all the advantages of the two-channel counterpart. Simulation results are presented to validate the theoretical calculations.

The Generalized Annihilation Property: A Tool For Solving Finite Rate of Innovation Problems by Thierry Blu. The abstract reads:
We describe a property satisfied by a class of nonlinear systems of equations that are of the form F(Ω)X = Y. Here F(Ω) is a matrix that depends on an unknown K-dimensional vector Ω, X is an unknown K-dimensional vector, and Y is a vector of N (≥ K) given measurements. Such equations are encountered in superresolution or sparse signal recovery problems known as “Finite Rate of Innovation” signal reconstruction. We show how this property allows one to solve explicitly for the unknowns Ω and X by a direct, non-iterative algorithm that involves the resolution of two linear systems of equations and the extraction of the roots of a polynomial, and we give examples of problems where this type of solution has been found useful.
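For the canonical instance of this setting (a stream of K Diracs observed through N ≥ 2K Fourier coefficients), the recipe in the abstract, namely two linear systems plus one polynomial root extraction, looks like the following in Python. This is a generic annihilating-filter/Prony sketch with made-up parameters, not the generalized algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
K = 3                                             # number of Diracs (rate of innovation)
t_true = np.sort(rng.random(K))                   # locations in [0, 1)
a_true = rng.uniform(1.0, 2.0, K)                 # amplitudes

# N >= 2K consecutive Fourier coefficients of the Dirac stream: y[m] = sum_k a_k e^{-2πi m t_k}
N = 2 * K + 1
mvals = np.arange(N)
y = (a_true[None, :] * np.exp(-2j * np.pi * np.outer(mvals, t_true))).sum(axis=1)

# Linear system 1: find the annihilating filter h (h * y = 0), i.e. a null vector
# of the Toeplitz matrix built from y.
T = np.array([y[K + i - np.arange(K + 1)] for i in range(N - K)])
_, _, Vh = np.linalg.svd(T)
h = Vh[-1].conj()                                 # filter whose roots encode the locations

# Root extraction: the roots of h(z) are e^{-2πi t_k}
roots = np.roots(h)
t_hat = np.sort(np.mod(-np.angle(roots) / (2 * np.pi), 1.0))

# Linear system 2: given the locations, the amplitudes follow from a Vandermonde solve.
V = np.exp(-2j * np.pi * np.outer(mvals, t_hat))
a_hat, *_ = np.linalg.lstsq(V, y, rcond=None)

print("locations :", t_true, "->", t_hat)
print("amplitudes:", a_true, "->", np.real(a_hat))
```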


Distributed Sensing of Signals Under a Sparse Filtering Model by Ali Hormati, Olivier Roy, Yue M. Lu, Martin Vetterli. The abstract reads:
We consider the task of recovering correlated vectors at a central decoder based on fixed linear measurements obtained by distributed sensors. Two different scenarios are considered: In the case of universal reconstruction, we look for a sensing and recovery mechanism that works for all possible signals, whereas in the case of almost sure reconstruction, we allow for a small set (with measure zero) of unrecoverable signals. We provide achievability bounds on the number of samples needed for both scenarios. The bounds show that only in the almost sure setup can we effectively exploit the signal correlations to achieve effective gains in sampling efficiency. In addition, we propose an efficient and robust distributed sensing and reconstruction algorithm based on annihilating filters.

An “algebraic” reconstruction of piecewise-smooth functions from integral measurements by Dima Batenkov, Niv Sarig, Yosef Yomdin. The abstract reads:
This paper presents some results on a well-known problem in Algebraic Signal Sampling and in other areas of applied mathematics: reconstruction of piecewise-smooth functions from their integral measurements (like moments, Fourier coefficients, Radon transform, etc.). Our results concern reconstruction (from the moments) of signals in two specific classes: linear combinations of shifts of a given function, and “piecewise D-finite functions” which satisfy on each continuity interval a linear differential equation with polynomial coefficients.


Average Case Analysis of Multichannel Basis Pursuit by Holger Rauhut, Yonina Eldar. The abstract reads:
We consider the recovery of jointly sparse multichannel signals from incomplete measurements using convex relaxation methods. Worst case analysis is not able to provide insights into why joint sparse recovery is superior to applying standard sparse reconstruction methods to each channel individually. Therefore, we analyze an average case by imposing a probability model on the measured signals. We show that under a very mild condition on the sparsity and on the dictionary characteristics, measured for example by the coherence, the probability of recovery failure decays exponentially in the number of channels. This demonstrates that most of the time, multichannel sparse recovery is indeed superior to single channel methods.
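A close relative of the multichannel basis pursuit program analyzed here (written in its penalized, group-lasso form) can be sketched with a simple proximal-gradient loop; the sizes and the regularization parameter below are arbitrary, and a real implementation would use a proper solver.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, k, L = 200, 60, 8, 10            # dimension, measurements, row sparsity, channels
A = rng.standard_normal((m, n)) / np.sqrt(m)
rows = rng.choice(n, size=k, replace=False)
X = np.zeros((n, L))
X[rows] = rng.standard_normal((k, L))  # the channels share a common sparse support
Y = A @ X

# Proximal gradient for the penalized multichannel program
#   min_X  0.5 ||A X - Y||_F^2 + lam * sum_i ||X[i, :]||_2
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
Xh = np.zeros((n, L))
for _ in range(2000):
    Z = Xh - step * (A.T @ (A @ Xh - Y))
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    Xh = Z * np.maximum(1.0 - step * lam / np.maximum(norms, 1e-12), 0.0)

print("relative error:", np.linalg.norm(Xh - X) / np.linalg.norm(X))
print("rows with significant energy:",
      int(np.sum(np.linalg.norm(Xh, axis=1) > 0.1)), "(true row sparsity:", k, ")")
```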


Orthogonal Matching Pursuit with random dictionaries by Pawel Bechler, Przemyslaw Wojtaszczyk. The abstract reads:
In this paper we investigate the efficiency of Orthogonal Matching Pursuit for random dictionaries. We concentrate on dictionaries satisfying the Restricted Isometry Property. We introduce a stronger Homogeneous Restricted Isometry Property which is satisfied with overwhelming probability for random dictionaries used in compressed sensing. We also present and discuss some open problems about OMP.
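Since OMP itself is only a handful of lines, here is a plain Python version run on a random Gaussian dictionary (a generic sketch of the algorithm being studied, with arbitrary sizes):

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, k = 400, 150, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)      # random dictionary (RIP with high probability)
x = np.zeros(n)
true_support = rng.choice(n, size=k, replace=False)
x[true_support] = rng.standard_normal(k)
y = A @ x

# Orthogonal Matching Pursuit: pick the column most correlated with the residual,
# then re-fit the coefficients on all selected columns by least squares.
S, r = [], y.copy()
for _ in range(k):
    S.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    r = y - A[:, S] @ coef

x_hat = np.zeros(n)
x_hat[S] = coef
print("support recovered exactly:", set(S) == set(true_support))
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```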


A short note on non-convex compressed sensing by Rayan Saab, Özgür Yilmaz. The abstract reads:
In this note, we summarize the results we recently proved in [14] on the theoretical performance guarantees of the ℓ_p decoders. These decoders rely on ℓ_p minimization with p ∈ (0,1) to recover estimates of sparse and compressible signals from incomplete and inaccurate measurements. Our guarantees generalize the results of [2] and [16] about decoding by ℓ_p minimization with p = 1 to the setting where p ∈ (0,1), and are obtained under weaker sufficient conditions. We also present novel extensions of our results in [14] that follow from the recent work of DeVore et al. in [8]. Finally, we show some insightful numerical experiments displaying the trade-off in the choice of p ∈ (0,1] depending on certain properties of the input signal.
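For readers who want to experiment with p < 1, a standard way to approximate the ℓ_p decoder is iteratively reweighted least squares. Below is a generic IRLS sketch in Python with a deliberately simple epsilon schedule; it is not the decoder or the conditions analyzed in the note.

```python
import numpy as np

rng = np.random.default_rng(8)
n, m, k, p = 256, 80, 10, 0.5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x

# IRLS approximation of  min ||x||_p^p  subject to  A x = y,  with 0 < p < 1.
# Each iteration solves a weighted minimum-norm problem; eps is decreased gradually.
x_hat = A.T @ np.linalg.solve(A @ A.T, y)          # start from the minimum-energy solution
eps = 1.0
for _ in range(60):
    w_inv = (x_hat**2 + eps) ** (1.0 - p / 2.0)    # 1/w_i with w_i ~ (x_i^2 + eps)^{p/2 - 1}
    Aw = A * w_inv                                 # A @ diag(1/w)
    x_hat = w_inv * (A.T @ np.linalg.solve(Aw @ A.T, y))
    eps = max(eps * 0.5, 1e-9)

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```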

Credit: Figure from the paper Practical compressive sensing with Toeplitz and circulant matrices by Wotao Yin, Simon Morgan, Junfeng Yang, Yin Zhang. The site hosting this figure is at:

3 comments:

  1. check out the conference papers here - looks like a lot of interesting papers on CS and nearby fields.

    http://lts4www.epfl.ch/~frossard/publications.php

  2. MS seems to have made a nice search engine for academic research:

    http://academic.research.microsoft.com/

  3. Thanks Anonymous,

Pascal Frossard is on my webcrawler's list already. His papers show up here with regular frequency.

    Igor.
