Friday, November 11, 2011

Compressive Sensing Literature This Week (part two)

For whatever reason, this entry looks very new to me. By that I mean the subjects, solvers, etc. do not fit the usual run-of-the-mill compressive sensing entry. It's just a feeling, nothing more. Enjoy!

First LinkedIn now provides some statistics on their groups. The Compressive Sensing Group Stats are here.

Jeremy Vila and Phil Schniter have updated the EM-BG-AMP page with examples and interesting reasons as to why you should use this solver:

Advantages of EM-BG-AMP
There are plenty of compressive-sensing algorithms available. Why use EM-BG-AMP? Here are a few advantages of our approach.
  • Excellent recovery performance over a wide range of signal/matrix types.
  • Very good complexity scaling with problem dimensions.
  • No required tuning parameters.

Gael Varoquaux sent me the following:

Hi Igor,
I was reading your "compressive sensing; the big picture" page. I wanted to mention that the Mairal et al algorithm for "dictionary learning for sparse coding" is implemented in the scikit-learn:
the Mairal et al algorithm is called 'MiniBatchDictionaryLearning':
Also, in the list of sparse-coding solvers, you could add the scikit-learn: we have a bunch of L1 and greedy solvers, and the list doesn't have much Python on it yet :)
....Thanks for your blog, always a pleasure to read.
Thanks Gael for the heads-up.
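For the curious, here is a minimal sketch of what Gael is pointing at, using scikit-learn's MiniBatchDictionaryLearning on random data (the data, dictionary size, and hyperparameters below are purely illustrative choices of mine, not a recommendation):

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Toy data: 200 signals of dimension 64 (think flattened 8x8 image patches).
rng = np.random.RandomState(0)
X = rng.randn(200, 64)

# Learn a 32-atom dictionary with the Mairal et al. online algorithm.
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                   batch_size=10, random_state=0)
codes = dico.fit_transform(X)   # sparse codes, shape (200, 32)
D = dico.components_            # learned dictionary atoms, shape (32, 64)

print(codes.shape, D.shape)
```

The same module also exposes the greedy and L1 sparse coders Gael mentions (e.g. `sparse_encode`), so the learned dictionary can be reused as a fixed sensing/synthesis basis.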

Here is a fascinating paper that is mostly behind a paywall: Direct inference of protein–DNA interactions using compressed sensing methods by Mohammed AlQuraishi and Harley H. McAdams. The abstract reads:
Compressed sensing has revolutionized signal acquisition, by enabling complex signals to be measured with remarkable fidelity using a small number of so-called incoherent sensors. We show that molecular interactions, e.g., protein–DNA interactions, can be analyzed in a directly analogous manner and with similarly remarkable results. Specifically, mesoscopic molecular interactions act as incoherent sensors that measure the energies of microscopic interactions between atoms. We combine concepts from compressed sensing and statistical mechanics to determine the interatomic interaction energies of a molecular system exclusively from experimental measurements, resulting in a “de novo” energy potential. In contrast, conventional methods for estimating energy potentials are based on theoretical models premised on a priori assumptions and extensive domain knowledge. We determine the de novo energy potential for pairwise interactions between protein and DNA atoms from (i) experimental measurements of the binding affinity of protein–DNA complexes and (ii) crystal structures of the complexes. We show that the de novo energy potential can be used to predict the binding specificity of proteins to DNA with approximately 90% accuracy, compared to approximately 60% for the best performing alternative computational methods applied to this fundamental problem. This de novo potential method is directly extendable to other biomolecule interaction domains (enzymes and signaling molecule interactions) and to other classes of molecular interactions.
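Stripped of the biology, the recovery step described in the abstract is a standard sparse linear inverse problem: the measured affinities are (approximately) linear in a large vector of interatomic energy terms, most of which are negligible. A toy numpy sketch of that step using ISTA, a basic L1 solver (all dimensions, the random "sensor" matrix, and the solver choice are my own illustrative assumptions, not the authors' actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 100, 256, 8                          # measurements, energy terms, nonzeros
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in "incoherent sensor" matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                 # noiseless affinity measurements

lam = 0.01
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - y)                      # gradient of 0.5 * ||Ax - y||^2
    z = x - g / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error: {err:.3f}")
```

The point of the sketch is only the structure of the problem: a fat linear system plus a sparsity prior is enough to pin down the "de novo" coefficients.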

Supporting information is here. The supplemental information (no paywall for that) starts with:

SI Methods
SI Methods describe the general methodology for de novo potential determination using compressed sensing, the determination of protein–DNA potentials, and the prediction of protein–DNA-binding sites and testing methodology. There are three sections. Section I contains the derivation of the general methodology for de novo potential determination using compressed sensing. The formulation in Section I is not specific to a particular application, but is applicable to a range of biological and chemical systems. The mathematical notation and terminology used throughout are defined in Section I.
This work brings together disparate concepts from information theory, statistical mechanics, and structural biology. The following background references may be useful to the reader:
• Linear and logistic regression (1).
• Compressed sensing (2, 3).
• Statistical mechanical ensembles (4, 5).
• Structural basis of protein–DNA interactions (6, 7).
Section II describes the application of the general methodology from Section I to the determination of a de novo protein–DNA potential. We reformulate the abstract constructs of the general methodology to the specifics of protein–DNA interactions and introduce several modifications that exploit the unique properties of protein–DNA interactions. Our choices for metaparameters and implementation details are also described in Section II.
Section III contains a description of the use of the protein–DNA potentials described in Section II to predict protein–DNA-binding sites. We detail our structure-based approach to protein–DNA-binding site prediction, the dataset used for training and testing, and the quantitative metrics used to compare results between our de novo potential method and previously published methods.
In a different direction, here is Neural Reconstruction with Approximate Message Passing (NeuRAMP) by Alyson K. Fletcher, Sundeep Rangan, Lav R. Varshney and Aniruddha Bhargava. The abstract reads:
Many functional descriptions of spiking neurons assume a cascade structure where inputs are passed through an initial linear filtering stage that produces a low dimensional signal that drives subsequent nonlinear stages. This paper presents a novel and systematic parameter estimation procedure for such models and applies the method to two neural estimation problems: (i) compressed-sensing based neural mapping from multi-neuron excitation, and (ii) estimation of neural receptive fields in sensory neurons. The proposed estimation algorithm models the neurons via a graphical model and then estimates the parameters in the model using a recently-developed generalized approximate message passing (GAMP) method. The GAMP method is based on Gaussian approximations of loopy belief propagation. In the neural connectivity problem, the GAMP-based method is shown to be computationally efficient, provides a more exact modeling of the sparsity, can incorporate nonlinearities in the output and significantly outperforms previous compressed-sensing methods. For the receptive field estimation, the GAMP method can also exploit inherent structured sparsity in the linear weights. The method is validated on estimation of linear nonlinear Poisson (LNP) cascade models for receptive fields of salamander retinal ganglion cells.
The GAMPMatlab package can be found here.
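For readers new to message passing: GAMP generalizes, to arbitrary output channels, the kind of scalar soft-thresholding AMP iteration sketched below for the plain sparse linear model. This is a toy numpy version with my own threshold rule and problem sizes, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 120, 300, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

def soft(r, tau):
    return np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

x = np.zeros(n)
z = y.copy()
for _ in range(30):
    r = x + A.T @ z                        # "pseudo-data" with near-Gaussian noise
    tau = 2.0 * np.sqrt(np.mean(z ** 2))   # threshold ~ effective noise level
    x_new = soft(r, tau)
    onsager = z * (np.count_nonzero(x_new) / m)   # correction from eta'(r)
    z = y - A @ x_new + onsager            # residual with Onsager term
    x = x_new

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error: {err:.3f}")
```

The Onsager correction in the residual update is what separates AMP from plain iterative thresholding and is what makes its per-iteration behavior track a simple scalar state evolution.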

This one was another surprise; it's not too deep, but it's a beginning: Opportunities and Challenges in Applying the Compressive Sensing Framework to Nuclear Science and Engineering by Matthew Mille, Lin Su, Birsen Yazici and X. George Xu. The abstract reads:
Compressive sensing is a 5-year-old theory that has already resulted in an extremely large number of publications in the literature and that has the potential to impact every field of engineering and applied science that has to do with data acquisition and processing. This paper introduces the mathematics, presents a simple demonstration of radiation dose reduction in x-ray CT imaging, and discusses potential application in nuclear science and engineering.
I liked reference 10:

10. I. Carron. “CS: These Technologies Do Not Exist: Time Modulated Photo-Multiplier Tubes” (2010)
The following showed up on my radar screen: Sparsity Based Feedback Design: A New Paradigm in Opportunistic Sensing by Sourabh Bhattacharya and Tamer Basar. The abstract reads:
We introduce the concept of using compressive sensing techniques to provide feedback in order to control dynamical systems. Compressive sensing algorithms use l1-regularization for reconstructing data from a few measurement samples. These algorithms provide highly efficient reconstruction for sparse data. For data that is not sparse enough, the reconstruction technique produces a bounded error in the estimate. In a dynamical system, such erroneous state estimation can lead to undesirable effects in the output of the plant. In this work, we present some techniques to overcome the aforementioned restriction. Our efforts fall into two main categories. First, we present some techniques to design feedback systems that sparsify the state in order to perfectly reconstruct it using compressive sensing algorithms. We study the effect of such sparsification schemes on the stability and regulation of the plant. Second, we study the characteristics of dynamical systems that produce sparse states so that compressive sensing techniques can be used for feedback in such scenarios without any additional modification in the feedback loop.
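The first idea in the abstract, exactly recovering a sparse plant state from a few compressive measurements before closing the loop, can be mimicked with any sparse solver. Here is a toy numpy sketch using greedy orthogonal matching pursuit with a known sparsity level (all sizes, the measurement matrix, and the solver choice are illustrative assumptions on my part):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 50, 80, 4
C = rng.standard_normal((m, n)) / np.sqrt(m)    # compressive measurement matrix
x = np.zeros(n)                                 # sparse plant state
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = C @ x                                       # the few samples fed back

# Orthogonal matching pursuit: greedily pick k atoms, least-squares on support.
support, residual = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(C.T @ residual)))  # atom most correlated with residual
    support.append(j)
    sol, *_ = np.linalg.lstsq(C[:, support], y, rcond=None)
    residual = y - C[:, support] @ sol

x_hat = np.zeros(n)
x_hat[support] = sol
# x_hat would then drive a control law, e.g. u = -K @ x_hat, closing the loop.
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

The paper's contribution is precisely about what happens when the state is *not* this sparse, and how to sparsify it or characterize the plants for which it already is.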

We also have two presentations:

Depeng Yang's PhD thesis: Turbo Bayesian Compressed Sensing. The abstract reads:
Compressed sensing (CS) theory specifies a new signal acquisition approach, potentially allowing the acquisition of signals at a much lower data rate than the Nyquist sampling rate. In CS, the signal is not directly acquired but reconstructed from a few measurements. One of the key problems in CS is how to recover the original signal from measurements in the presence of noise. This dissertation addresses signal reconstruction problems in CS. First, a feedback structure and signal recovery algorithm, orthogonal pruning pursuit (OPP), is proposed to exploit the prior knowledge to reconstruct the signal in the noise-free situation. To handle the noise, a noise-aware signal reconstruction algorithm based on Bayesian Compressed Sensing (BCS) is developed. Moreover, a novel Turbo Bayesian Compressed Sensing (TBCS) algorithm is developed for joint signal reconstruction by exploiting both spatial and temporal redundancy. Then, the TBCS algorithm is applied to a UWB positioning system for achieving mm-accuracy with low sampling rate ADCs. Finally, hardware implementation of BCS signal reconstruction on FPGAs and GPUs is investigated. Implementation on GPUs and FPGAs of parallel Cholesky decomposition, which is a key component of BCS, is explored. Simulation results on software and hardware have demonstrated that OPP and TBCS outperform previous approaches, with UWB positioning accuracy improved by 12.8x. The accelerated computation helps enable real-time application of this work.
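The reason the thesis' hardware chapters center on Cholesky decomposition is that the core BCS update is a Gaussian posterior solve. A minimal numpy sketch of that linear-algebra kernel (dimensions and prior variances are illustrative, and this cheats by fixing the prior variances rather than re-estimating them as real BCS does):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 60, 128
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 6, replace=False)] = rng.standard_normal(6)
sigma2 = 1e-3
y = A @ x_true + np.sqrt(sigma2) * rng.standard_normal(m)

# Gaussian prior x ~ N(0, diag(gamma)); in BCS gamma is learned by evidence
# maximization, here we simply set it large on the true support, small elsewhere.
gamma = np.full(n, 1e-4)
gamma[x_true != 0] = 1.0

# Posterior mean mu = G A^T (A G A^T + sigma2 I)^{-1} y, via a Cholesky solve --
# this factorization is the kernel one would port to an FPGA or GPU.
G = np.diag(gamma)
S = A @ G @ A.T + sigma2 * np.eye(m)
L = np.linalg.cholesky(S)
w = np.linalg.solve(L.T, np.linalg.solve(L, y))   # two triangular solves
mu = G @ A.T @ w

err = np.linalg.norm(mu - x_true) / np.linalg.norm(x_true)
```

Since this dense factor-and-solve is repeated at every evidence-maximization step, it dominates the runtime, which is why parallelizing Cholesky is the natural acceleration target.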
and a talk: Design and Analysis of High-Performance Compressed Sensing Receivers
Date: Thursday, November 17, 2011
Time: 1:30 PM - 2:30 PM
Location: 180-101
Speaker: Azita Emami, Caltech

The JPL and Caltech Seminar Series

The Shannon-Nyquist sampling theorem is implicit in the design of most conventional signal-acquisition systems. As contemporary/emerging applications demand increased bandwidth and data resolution, realizing an ADC that meets system requirements can become a serious performance bottleneck. This is due to the unfavorable scaling relationship that exists between power consumption and sampling rate. Recently, results from the field of Compressed Sensing (CS) have provided an alternative means to realize signal acquisition systems. Specifically, CS enables sub-Nyquist rate signal acquisition provided the acquired signal is sparse in the sense that it depends on a number of degrees of freedom much lower than its bandwidth suggests.
In this talk, details of the design and implementation of integrated front-ends for compressed sensing receivers are discussed. In particular, we will focus on challenges and requirements of such receivers in deep sub-micron CMOS technologies. We will also present simulation and measurement results that examine the robustness of CS-based receivers against noise, clock jitter, mismatch and nonlinearities. Our design methodology and parameter optimization will be explained via simulations that include precise hardware and noise models.
Finally, measurement results from a fully integrated CS-based receiver in 90 nm CMOS technology that achieves more than 2.0 GHz of instantaneous bandwidth at a 12.5x sub-Nyquist rate will be presented.
