
Wednesday, May 09, 2012

Compressive Sensing This Week

This blog just passed 1,111,111 pageviews since July 2010. The tag for the streaming conference at Berkeley is #berkelystreaming, and the tag for the iTWIST conference in Marseille is #itwist2012. Suresh tells us that the presentations at the Berkeley meeting will be posted as videos. Pierre is the only one tweeting from iTWIST.

In the meantime, we have a slew of new and interesting papers this week, enjoy!


Compressed sensing is the art of reconstructing a sparse vector from its inner products with respect to a small set of randomly chosen measurement vectors. It is usually assumed that the ensemble of measurement vectors is in isotropic position in the sense that the associated covariance matrix is proportional to the identity matrix. In this paper, we establish bounds on the number of required measurements in the anisotropic case, where the ensemble of measurement vectors possesses a non-trivial covariance matrix. Essentially, we find that the required sampling rate grows proportionally to the condition number of the covariance matrix. In contrast to other recent contributions to this problem, our arguments do not rely on any restricted isometry properties (RIP's), but rather on ideas from convex geometry which have been systematically studied in the theory of low-rank matrix recovery. This allows for a simple argument and slightly improved bounds, but may lead to a worse dependency on noise (which we do not consider in the present paper).
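
The anisotropic setting is easy to play with numerically: draw measurement vectors with a non-trivial covariance and watch how $\ell_1$ recovery behaves as the condition number grows. Here is a minimal sketch; the sizes, the condition number, and the plain ISTA solver are my own illustrative choices, not anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, s = 200, 80, 5

# anisotropic covariance with condition number kappa = 10
eigs = np.linspace(1.0, 10.0, N)
Sigma_sqrt = np.diag(np.sqrt(eigs))

A = (rng.standard_normal((m, N)) @ Sigma_sqrt) / np.sqrt(m)  # rows ~ N(0, Sigma/m)

x = np.zeros(N)
x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
y = A @ x

# basic ISTA for min 0.5*||Az - y||^2 + lam*||z||_1
lam = 1e-3
L = np.linalg.norm(A, 2) ** 2
z = np.zeros(N)
for _ in range(5000):
    g = z - A.T @ (A @ z - y) / L
    z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

print("relative error:", np.linalg.norm(z - x) / np.linalg.norm(x))
```

Increasing the spread of `eigs` (i.e., the condition number of the covariance) and shrinking `m` is a quick way to see the sampling-rate dependence the abstract describes.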

Non-convex constraints have recently proven a valuable tool in many optimisation problems. In particular sparsity constraints have had a significant impact on sampling theory, where they are used in Compressed Sensing and allow structured signals to be sampled far below the rate traditionally prescribed. Nearly all of the theory developed for Compressed Sensing signal recovery assumes that samples are taken using linear measurements. In this paper we instead address the Compressed Sensing recovery problem in a setting where the observations are non-linear. We show that, under conditions similar to those required in the linear setting, the Iterative Hard Thresholding algorithm can be used to accurately recover sparse or structured signals from few non-linear observations. Similar ideas can also be developed in a more general non-linear optimisation framework. In the second part of this paper we therefore present related results that show how this can be done under sparsity and union of subspaces constraints, whenever a generalisation of the Restricted Isometry Property traditionally imposed on the Compressed Sensing system holds.
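
To make the idea concrete, here is a toy version of iterative hard thresholding applied to one arbitrary smooth non-linearity, y = tanh(Ax); the paper's framework is far more general, and everything below (model, sizes, step size) is my own illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)
N, m, s = 100, 60, 4
A = rng.standard_normal((m, N)) / np.sqrt(m)

x_true = np.zeros(N)
idx = rng.choice(N, s, replace=False)
x_true[idx] = rng.standard_normal(s)
y = np.tanh(A @ x_true)                    # non-linear observations

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

x = np.zeros(N)
mu = 1.0 / np.linalg.norm(A, 2) ** 2       # conservative step size
for _ in range(500):
    t = np.tanh(A @ x)
    grad = -A.T @ ((y - t) * (1 - t ** 2)) # gradient of 0.5*||y - tanh(Ax)||^2
    x = hard_threshold(x - mu * grad, s)

print("support recovered:", set(np.flatnonzero(x)) == set(idx))
```

The only change from linear IHT is the gradient step, which here accounts for the derivative of the non-linearity.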

Reconstruction of biochemical reaction networks is a central topic in systems biology which raises crucial theoretical challenges in system identification. Nonlinear Ordinary Differential Equations (ODEs) that involve polynomial and rational functions are typically used to model biochemical reaction networks. Such nonlinear models make the problem of determining the connectivity of biochemical networks from time-series experimental data quite difficult. In this paper, we present a network reconstruction algorithm that can deal with model descriptions in the form of polynomial and rational functions. Rather than identifying the parameters of linear or nonlinear ODEs characterised by pre-defined equation structures, our methodology allows us to determine the structure of the nonlinear ODEs together with their associated reaction constants. To solve the network reconstruction problem, we cast it as a Compressive Sensing (CS) problem and use Bayesian Sparse Learning (BSL) algorithms as an efficient way to obtain its solution.
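
The general shape of such a formulation — regress measured derivatives on a dictionary of candidate monomial terms and solve sparsely — fits in a few lines. Below, a simple sequential-thresholding solver stands in for the Bayesian Sparse Learning the authors actually use, and the dynamics and dictionary are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
x1 = rng.uniform(0.5, 2.0, T)
x2 = rng.uniform(0.5, 2.0, T)

# ground-truth dynamics: dx1/dt = -2*x1 + 0.5*x1*x2 (plus small noise)
dx1 = -2.0 * x1 + 0.5 * x1 * x2 + 0.01 * rng.standard_normal(T)

# dictionary of candidate reaction terms
Theta = np.column_stack([x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
names = ["x1", "x2", "x1*x2", "x1^2", "x2^2"]

# sequential thresholded least squares as a simple sparse solver
w = np.linalg.lstsq(Theta, dx1, rcond=None)[0]
for _ in range(10):
    small = np.abs(w) < 0.1
    w[small] = 0.0
    active = ~small
    w[active] = np.linalg.lstsq(Theta[:, active], dx1, rcond=None)[0]

for name, coef in zip(names, w):
    if coef != 0.0:
        print(f"{coef:+.2f} * {name}")     # expect -2.00*x1 and +0.50*x1*x2
```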

Fetal ECG (FECG) telemonitoring is an important branch in telemedicine. The design of a telemonitoring system via a low-power wireless body-area network for ambulatory use is highly desirable. As an emerging technique, compressed sensing (CS) shows great promise in compressing data with low power consumption. However, due to some specific characteristics of FECG recordings such as non-sparsity and strong noise contamination, current CS algorithms generally fail in this application. In this work we utilize the block sparse Bayesian learning (bSBL) framework, a recently developed framework for solving CS problems. To illustrate the ability of the bSBL method, we apply it to two representative FECG datasets. In one dataset the fetal heartbeat signals are visible, while in the other they are barely visible. The experimental results show that the bSBL framework is capable of compressing FECG raw recordings and successfully reconstructing them. These successes rely on two unique features of the bSBL framework: one is the ability to reconstruct less-sparse but structured signals, and the other is the ability to learn and exploit correlation structure of signals to improve performance. These two abilities of the framework greatly enhance the potential use of bSBL in telemonitoring of other physiological signals.
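
For intuition about why block structure helps with less-sparse signals, here is a toy block-sparse recovery via group soft-thresholding inside ISTA — a deliberately simple stand-in for bSBL (which additionally learns the intra-block correlation), with all sizes invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(12)
N, m, B = 120, 60, 10                      # signal length, measurements, block size
A = rng.standard_normal((m, N)) / np.sqrt(m)

x = np.zeros(N)
for b in rng.choice(N // B, 2, replace=False):    # two active, dense blocks
    x[b * B:(b + 1) * B] = rng.standard_normal(B)
y = A @ x

lam, L = 0.02, np.linalg.norm(A, 2) ** 2
z = np.zeros(N)
for _ in range(4000):
    g = z - A.T @ (A @ z - y) / L
    for b in range(N // B):                       # block-wise soft threshold
        seg = g[b * B:(b + 1) * B]
        nrm = np.linalg.norm(seg)
        scale = max(0.0, 1.0 - (lam / L) / nrm) if nrm > 0 else 0.0
        z[b * B:(b + 1) * B] = scale * seg

print("relative error:", np.linalg.norm(z - x) / np.linalg.norm(x))
```

Note that 20 of the 120 entries are non-zero here, a regime where plain entrywise thresholding with the same number of measurements struggles.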

Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization by Martin Slawski, Matthias Hein. The abstract reads:
Least squares fitting is in general not useful for high-dimensional linear models, in which the number of predictors is of the same or even larger order of magnitude than the number of samples. Theory developed in recent years has coined a paradigm according to which sparsity-promoting regularization is regarded as a necessity in such a setting. Deviating from this paradigm, we show that non-negativity constraints on the regression coefficients may be as effective as explicit regularization. For a broad range of designs with Gram matrix having non-negative entries, we establish bounds on the $\ell_2$-prediction error of non-negative least squares (NNLS) whose form qualitatively matches corresponding results for $\ell_1$-regularization. Under slightly stronger conditions, it is established that NNLS followed by hard thresholding performs excellently in terms of support recovery of an (approximately) sparse target, in some cases improving over $\ell_1$-regularization. A substantial advantage of NNLS over regularization-based approaches is the absence of tuning parameters, which is convenient from a computational as well as from a practitioner's point of view. Deconvolution of positive spike trains is presented as an application.
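
The pipeline is pleasantly short to try, since SciPy ships an NNLS solver. A sketch with arbitrary illustrative sizes, noise level, and threshold (only the "non-negative Gram entries" condition is taken from the abstract):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n, p, s = 100, 300, 5

# design whose Gram matrix has non-negative entries (entrywise |Gaussian|)
X = np.abs(rng.standard_normal((n, p)))
X /= np.linalg.norm(X, axis=0)

beta = np.zeros(p)
supp = rng.choice(p, s, replace=False)
beta[supp] = rng.uniform(1.0, 2.0, s)
y = X @ beta + 0.05 * rng.standard_normal(n)

beta_hat, _ = nnls(X, y)                             # no tuning parameter at all
beta_thr = np.where(beta_hat > 0.2, beta_hat, 0.0)   # then hard threshold

print("true support:     ", sorted(supp))
print("recovered support:", sorted(np.flatnonzero(beta_thr)))
```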

We study the problem of sampling a random signal with sparse support in frequency domain. Shannon famously considered a scheme that instantaneously samples the signal at equispaced times. He proved that the signal can be reconstructed as long as the sampling rate exceeds twice the bandwidth (Nyquist rate). Candès, Romberg, and Tao introduced a scheme that acquires instantaneous samples of the signal at random times. They proved that the signal can be uniquely and efficiently reconstructed, provided the sampling rate exceeds the frequency support of the signal, times logarithmic factors. In this paper we consider a probabilistic model for the signal, and a sampling scheme inspired by the idea of spatial coupling in coding theory. Namely, we propose to acquire non-instantaneous samples at random times. Mathematically, this is implemented by acquiring a small random subset of Gabor coefficients. We show empirically that this scheme achieves correct reconstruction as soon as the sampling rate exceeds the frequency support of the signal, thus reaching the information theoretic limit.
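
The measurement primitive is easy to picture: instead of reading off x(t0) instantaneously, correlate the signal with a short window centered at a random time and modulated at a random frequency — i.e., pick a random Gabor coefficient. A sketch of just the acquisition step (the spatially coupled design and the reconstruction algorithm from the paper are not reproduced here; window width and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, m = 256, 5, 60                       # length, sparsity, measurements

# a signal with sparse support in the frequency domain
X = np.zeros(n, dtype=complex)
X[rng.choice(n, k, replace=False)] = (rng.standard_normal(k)
                                      + 1j * rng.standard_normal(k))
x = np.fft.ifft(X)

def gabor_sample(x, t0, w, sigma=8.0):
    """Non-instantaneous sample: windowed, modulated average around time t0."""
    n = len(x)
    t = np.arange(n)
    d = (t - t0 + n // 2) % n - n // 2     # circular distance to t0
    window = np.exp(-0.5 * (d / sigma) ** 2)
    return np.sum(x * window * np.exp(-2j * np.pi * w * t / n))

samples = [gabor_sample(x, rng.integers(n), rng.integers(n)) for _ in range(m)]
print(len(samples), "Gabor measurements acquired")
```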

Adaptive feature specific spectroscopy for rapid chemical identification by D. V. Dinakarababu, D. R. Golish, and Mike Gehm. The abstract reads:
Spectroscopic chemical classification based on adaptive, feature-specific measurements has been implemented and demonstrated to provide significant performance gain over traditional systems. The measurement scheme and the decision model are discussed. A prototype system with a digital micro-mirror device as the adaptive element has been constructed and validates the theoretical findings and simulation results.

Accelerated Contrast-Enhanced Whole-Heart Coronary MRI Using Low-Dimensional-Structure Self Learning and Thresholding by Mehmet Akçakaya, Tamer A. Basha, Raymond H. Chan, Hussein Rayatzadeh, Kraig V. Kissinger, Beth Goddu, Lois A. Goepfert, Warren J. Manning, and Reza Nezafat. The abstract reads:
We sought to evaluate the efficacy of prospective random undersampling and low-dimensional-structure self-learning and thresholding reconstruction for highly accelerated contrast-enhanced whole-heart coronary MRI. A prospective random undersampling scheme was implemented using phase ordering to minimize artifacts due to gradient switching and was compared to a randomly undersampled acquisition with no profile ordering. This profile-ordering technique was then used to acquire contrast-enhanced whole-heart coronary MRI in 10 healthy subjects with 4-fold acceleration. Reconstructed images and the acquired zero-filled images were compared for depicted vessel length, vessel sharpness, and subjective image quality on a scale of 1 (poor) to 4 (excellent). In a pilot study, contrast-enhanced whole-heart coronary MRI was also acquired in four patients with suspected coronary artery disease with 3-fold acceleration. The undersampled images were reconstructed using low-dimensional-structure self-learning and thresholding, which showed significant improvement over the zero-filled images in both objective and subjective measures, with an overall score of 3.6 ± 0.5. Reconstructed images in patients were all diagnostic. Low-dimensional-structure self-learning and thresholding reconstruction allows contrast-enhanced whole-heart coronary MRI with acceleration as high as 4-fold using a clinically available five-channel phased-array coil.

They even made the cover of the issue.



The framework of computing Higher Order Cyclostationary Statistics (HOCS) from an incoming signal has proven useful in a variety of applications over the past half century, from Automatic Modulation Recognition (AMR) to Time Difference of Arrival (TDOA) estimation. Much more recently, a theory known as Compressive Sensing (CS) has emerged that enables the efficient acquisition of high-bandwidth (but sparse) signals via nonuniform low-rate sampling protocols. While most work in CS has focused on reconstructing the high-bandwidth signals from nonuniform low-rate samples, in this work, we consider the task of inferring the modulation of a communications signal directly in the compressed domain, without requiring signal reconstruction. We show that the HOCS features used for AMR are compressible in the Fourier domain, and hence, that AMR of various linearly modulated signals is possible by estimating the same HOCS features from nonuniform compressive samples. We provide analytical support for the accurate approximation of HOCS features from nonuniform samples and derive practical rules for classification of modulation type using these samples, based on simulated data.
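
As a flavor of the (second-order) idea: a cyclostationary feature such as the lag-τ cyclic autocorrelation is just a Fourier coefficient of a lag product, so it can be estimated by averaging over whatever sample times happen to be available. A toy sketch with a rectangular-pulse BPSK signal and a random 25% subset of sample times — the paper works with higher-order statistics and a proper compressive front end, so treat this purely as intuition:

```python
import numpy as np

rng = np.random.default_rng(9)
n, sps = 4096, 8                           # total samples, samples per symbol
bits = rng.choice([-1.0, 1.0], n // sps)
x = np.repeat(bits, sps)                   # rectangular-pulse BPSK baseband

tau = sps // 2
keep = np.sort(rng.choice(n - tau, n // 4, replace=False))  # 25% of the times
c = x[keep] * x[keep + tau]                # lag-tau products at kept times

# cyclic autocorrelation estimated at candidate cyclic frequencies k/sps
cands = np.arange(1, 5) / sps
R = [abs(np.mean(c * np.exp(-2j * np.pi * a * keep))) for a in cands]
print("strongest cyclic frequency: k =", int(np.argmax(R)) + 1)  # expect k = 1
```

The peak at the symbol rate (k = 1) survives the nonuniform subsampling because the feature itself is sparse in cyclic frequency.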


Jason McEwen let us know through Twitter of a new paper on wavelets on the ball: Exact Wavelets on the Ball by Boris Leistedt and Jason D. McEwen. The abstract reads:

We develop an exact wavelet transform on the three-dimensional ball (i.e. on the solid sphere), which we name the flaglet transform. For this purpose we first construct an exact harmonic transform on the radial line using damped Laguerre polynomials and develop a corresponding quadrature rule. Combined with the spherical harmonic transform, this approach leads to a sampling theorem on the ball and a novel three-dimensional decomposition which we call the Fourier-Laguerre transform. We relate this new transform to the well-known Fourier-Bessel decomposition and show that band-limitedness in the Fourier-Laguerre basis is a sufficient condition to compute the Fourier-Bessel decomposition exactly. We then construct the flaglet transform on the ball through a harmonic tiling, which is exact thanks to the exactness of the Fourier-Laguerre transform (from which the name flaglets is coined). The corresponding wavelet kernels have compact localisation properties in real and harmonic space and their angular aperture is invariant under radial translation. We introduce a multiresolution algorithm to perform the flaglet transform rapidly, while capturing all information at each wavelet scale in the minimal number of samples on the ball. Our implementation of these new tools achieves floating point precision and is made publicly available. We perform numerical experiments demonstrating the speed and accuracy of these libraries and illustrate their capabilities on a simple denoising example.
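
The radial ingredient — an exact quadrature rule attached to Laguerre polynomials — has a classical analogue worth recalling. A quick numerical reminder of why such rules can be exact, using standard Gauss-Laguerre from NumPy (the paper constructs its own rule for damped Laguerre polynomials, which this does not reproduce):

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

nodes, weights = laggauss(10)              # 10-point Gauss-Laguerre rule

# exact for integrals of e^{-r} p(r) with deg(p) <= 19;
# e.g. the integral of e^{-r} r^5 over [0, inf) is 5! = 120
print(np.sum(weights * nodes ** 5))        # ~120.0 up to floating point
```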




Remote Sensing via $\ell_1$ Minimization by Max Hügel, Holger Rauhut, and Thomas Strohmer. The abstract reads:

We consider the problem of detecting the locations of targets in the far field by sending probing signals from an antenna array and recording the reflected echoes. Drawing on key concepts from the area of compressive sensing, we use an $\ell_1$-based regularization approach to solve this, in general ill-posed, inverse scattering problem. As common in compressed sensing, we exploit randomness, which in this context comes from choosing the antenna locations at random. With $n$ antennas we obtain $n^2$ measurements of a vector $x \in \mathbb{C}^N$ representing the target locations and reflectivities on a discretized grid. It is common to assume that the scene $x$ is sparse due to a limited number of targets. Under a natural condition on the mesh size of the grid, we show that an $s$-sparse scene can be recovered via $\ell_1$-minimization with high probability if $n^2 \geq C s \log^2(N)$. The reconstruction is stable under noise and under passing from sparse to approximately sparse vectors. Our theoretical findings are confirmed by numerical simulations.
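
A toy 1-D version of the measurement model is instructive: each of the $n^2$ transmit/receive pairs contributes a product of steering-vector entries, and the scene is recovered by $\ell_1$ minimization. Everything below (sizes, aperture, the ISTA solver) is an illustrative guess, not the paper's setup; note how recovery quality hinges on the grid spacing being matched to the aperture, echoing the mesh-size condition:

```python
import numpy as np

rng = np.random.default_rng(5)
n, N, s = 12, 128, 3                       # antennas, grid points, targets
ant = rng.uniform(0, N, n)                 # random antenna positions, aperture ~ N

grid = np.linspace(-1, 1, N, endpoint=False)
S = np.exp(1j * np.pi * np.outer(ant, grid))          # steering matrix

# n^2 measurements: products of transmit and receive steering vectors
A = (S[:, None, :] * S[None, :, :]).reshape(n * n, N) / n

x = np.zeros(N)
supp = rng.choice(N, s, replace=False)
x[supp] = 1.0 + rng.uniform(0, 1, s)
y = A @ x

# l1 recovery by ISTA with complex soft thresholding
lam, L = 1e-3, np.linalg.norm(A, 2) ** 2
z = np.zeros(N, dtype=complex)
for _ in range(3000):
    g = z - A.conj().T @ (A @ z - y) / L
    z = np.exp(1j * np.angle(g)) * np.maximum(np.abs(g) - lam / L, 0.0)

print("true targets:", np.sort(supp))
print("top |z| bins:", np.sort(np.argsort(np.abs(z))[-s:]))
```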

Distributed Sparse Approximation for Frog Sound Classification by Bo Wei, Mingrui Yang, Rajib Kumar Rana, Chun Tung Chou, and Wen Hu. The abstract reads:
Sparse approximation has now become a buzzword for classification in numerous research domains. We propose a distributed sparse approximation method based on $\ell_1$-minimization for frog sound classification, which is tailored to resource-constrained wireless sensor networks. Our pilot study demonstrates that $\ell_1$-minimization can run on wireless sensor nodes producing satisfactory classification accuracy.
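
For readers new to the idea, the generic sparse-representation classification scheme the abstract alludes to goes as follows: code the test sample sparsely over a dictionary of training examples, then pick the class whose atoms explain it best. A sketch on synthetic stand-in data (not the authors' distributed, on-mote variant):

```python
import numpy as np

rng = np.random.default_rng(6)
d, per_class, C = 50, 20, 3

# dictionary: training examples stacked class by class (synthetic)
centers = rng.standard_normal((C, d))
D = np.hstack([centers[c][:, None] + 0.3 * rng.standard_normal((d, per_class))
               for c in range(C)])
D /= np.linalg.norm(D, axis=0)

test = centers[1] + 0.3 * rng.standard_normal(d)   # a class-1 test sample

# l1 sparse code via ISTA, then classify by smallest per-class residual
lam, L = 0.01, np.linalg.norm(D, 2) ** 2
a = np.zeros(C * per_class)
for _ in range(2000):
    g = a - D.T @ (D @ a - test) / L
    a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

resid = []
for c in range(C):
    block = D[:, c * per_class:(c + 1) * per_class]
    code = a[c * per_class:(c + 1) * per_class]
    resid.append(np.linalg.norm(test - block @ code))
print("predicted class:", int(np.argmin(resid)))   # expect 1
```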

Efficient Background Subtraction for Tracking in Embedded Camera Networks by Yiran Shen, Wen Hu, Mingrui Yang, Junbin Liu, Chun Tung Chou, Bo Wei. The abstract reads:
Background subtraction is often the first step in many computer vision applications such as object localisation and tracking. It aims to segment out moving parts of a scene that represent objects of interest. In the field of computer vision, researchers have dedicated their efforts to improving the robustness and accuracy of such segmentations, but most of their methods are computationally intensive, making them non-viable options for our targeted embedded camera platform, whose energy and processing power are significantly more constrained. To address this problem while maintaining an acceptable level of performance, we introduce Compressive Sensing (CS) to the widely used Mixture of Gaussians to create a new background subtraction method. The results show that our method not only decreases the computation significantly (a factor of 7 in a DSP setting) but remains comparably accurate.
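
The computational win comes from modeling a handful of random projections of each image block instead of every pixel. A heavily simplified, single-Gaussian-per-feature sketch of that idea (not the authors' implementation; all dimensions and thresholds are made up):

```python
import numpy as np

rng = np.random.default_rng(10)
block, m, k = 64, 8, 3.0                   # 8x8 block -> 8 compressed features
Phi = rng.standard_normal((m, block)) / np.sqrt(m)

bg = rng.normal(100.0, 2.0, (8, 8))        # static background block
mu = Phi @ bg.ravel()                      # background model mean (compressed)
var = np.full(m, 10.0)                     # background model variance (guess)

def is_foreground(pixels):
    """Flag the block if any compressed feature deviates > k sigma."""
    f = Phi @ pixels.ravel()
    return bool(np.any(np.abs(f - mu) > k * np.sqrt(var)))

print(is_foreground(bg + rng.normal(0, 1, (8, 8))))   # noise only -> False
obj = bg.copy()
obj[2:6, 2:6] += 80.0                                  # object enters the block
print(is_foreground(obj))                              # -> True
```

Per block, the model update and test now touch 8 numbers instead of 64 pixels, which is where the DSP-side savings come from.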

Efficient Cross-Correlation Via Sparse Representation In Sensor Networks by Prasant Misra, Wen Hu, Mingrui Yang, Sanjay Jha. The abstract reads:
Cross-correlation is a popular signal processing technique used in numerous localization and tracking systems for obtaining reliable range information. However, a practical, efficient implementation has not yet been achieved on resource-constrained wireless sensor network platforms. We propose cross-correlation via sparse representation: a new framework for ranging based on $\ell_1$-minimization. The key idea is to compress the signal samples on the mote platform by efficient random projections and transfer them to a central device, where a convex optimization process estimates the range by exploiting its sparsity in our proposed correlation domain. Through sparse representation theory validation, extensive empirical studies, and experiments on an end-to-end acoustic ranging system implemented on resource-limited off-the-shelf sensor nodes, we show that the proposed framework, together with the proposed correlation domain, achieves up to two orders of magnitude better performance compared to naive approaches such as working in the DCT domain and downsampling. Furthermore, compared to cross-correlation results, 30-40% of the measurements are sufficient to obtain precise range estimates with an additional bias of only 2-6 cm for high-accuracy application requirements, while 5% of the measurements are adequate to achieve approximately 100 cm precision for lower-accuracy applications.
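
The core idea can be sketched directly: the received signal is (approximately) a shifted copy of a known reference, hence 1-sparse in a dictionary of shifts, and the delay survives a small number of random projections. This is an illustrative toy, not the authors' on-mote pipeline:

```python
import numpy as np

rng = np.random.default_rng(11)
n, m, delay = 256, 40, 57
s = rng.standard_normal(n)                 # known reference/probe signal
r = np.roll(s, delay) + 0.05 * rng.standard_normal(n)

D = np.column_stack([np.roll(s, d) for d in range(n)])   # shift dictionary
Phi = rng.standard_normal((m, n)) / np.sqrt(m)           # random projections
A, y = Phi @ D, Phi @ r

# ISTA for min 0.5*||Ac - y||^2 + lam*||c||_1
lam, L = 0.05, np.linalg.norm(A, 2) ** 2
c = np.zeros(n)
for _ in range(3000):
    g = c - A.T @ (A @ c - y) / L
    c = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

print("estimated delay:", int(np.argmax(np.abs(c))))     # expect 57
```

Only m = 40 of the 256 samples' worth of information crosses the radio, which is the point of the framework.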

Complexity Analysis of the Lasso Regularization Path by Julien Mairal, Bin Yu. The abstract reads:
The regularization path of the Lasso can be shown to be piecewise linear, making it possible to "follow" and explicitly compute the entire path. We analyze in this paper this popular strategy, and prove that its worst case complexity is exponential in the number of variables. We then oppose this pessimistic result to an (optimistic) approximate analysis: We show that an approximate path with at most O(1/sqrt(epsilon)) linear segments can always be obtained, where every point on the path is guaranteed to be optimal up to a relative epsilon-duality gap. We complete our theoretical analysis with a practical algorithm to compute these approximate paths.
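
The exact path is what the LARS homotopy computes, so counting its kinks on random instances is a one-liner and shows how benign typical cases are, in contrast with the exponential worst case the paper constructs (sklearn's lars_path is used here purely for illustration):

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(7)
n, p = 50, 100
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[rng.choice(p, 5, replace=False)] = rng.standard_normal(5)
y = X @ beta + 0.1 * rng.standard_normal(n)

# exact piecewise-linear Lasso path via the LARS homotopy
alphas, active, coefs = lars_path(X, y, method="lasso")
print("kinks on this instance:", len(alphas) - 1)
```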


Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators by Patrick L. Combettes and Jean-Christophe Pesquet. The abstract reads:
We propose a primal-dual splitting algorithm for solving monotone inclusions involving a mixture of sums, linear compositions, and parallel sums of set-valued and Lipschitzian operators. An important feature of the algorithm is that the Lipschitzian operators present in the formulation can be processed individually via explicit steps, while the set-valued operators are processed individually via their resolvents. In addition, the algorithm is highly parallel in that most of its steps can be executed simultaneously. This work brings together and notably extends various types of structured monotone inclusion problems and their solution methods. The application to convex minimization problems is given special attention.
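
To see what "Lipschitzian/smooth parts via explicit steps, set-valued parts via resolvents" means in practice, here is a basic primal-dual iteration of the Chambolle-Pock type — a well-known special case of the class of schemes treated in this line of work, applied to 1-D total-variation denoising. The problem and parameters are my own illustrative choices, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(8)
n, lam = 200, 2.0
truth = np.repeat([0.0, 1.0, -0.5, 2.0], n // 4)   # piecewise-constant signal
b = truth + 0.3 * rng.standard_normal(n)

D = np.diff(np.eye(n), axis=0)             # forward-difference operator, ||D|| <= 2
tau = sigma = 0.9 / 2.0                    # step sizes with tau*sigma*||D||^2 < 1

# solve min_x 0.5*||x - b||^2 + lam*||Dx||_1
x, y, x_bar = b.copy(), np.zeros(n - 1), b.copy()
for _ in range(2000):
    # dual step: resolvent (prox) of the conjugate of lam*||.||_1 is a clip
    y = np.clip(y + sigma * (D @ x_bar), -lam, lam)
    # primal step: prox of 0.5*||. - b||^2 is a simple averaging
    x_new = (x - tau * (D.T @ y) + tau * b) / (1 + tau)
    x_bar = 2 * x_new - x                  # extrapolation
    x = x_new

print("RMSE vs truth:", np.sqrt(np.mean((x - truth) ** 2)))
```

Each operator in the objective is handled individually — the clip is the resolvent of the dual non-smooth term, the averaging the prox of the data term — which is exactly the modularity the abstract emphasizes.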



Image Credit: NASA/JPL/Space Science Institute, W00074105.jpg was taken on May 06, 2012 and received on Earth May 07, 2012. The camera was pointing toward TITAN at approximately 760,141 kilometers away, and the image was taken using the CL1 and CL2 filters.



