The compressive sensing literature this week includes some new sensors, some meetings and everything in between. Enjoy!

Laurent Duval let me know of the Program book for ASILOMAR 2011 and the compressive sensing session of the EUSIPCO 2011 conference. Thanks Laurent. Also of interest are a series of meetings during the Mathematics of Information program (September 2011 - June 2012) at IMA and this meeting on Wavelets and Sparsity.

Real-time high-speed volumetric imaging using compressive sampling optical coherence tomography by Mei Young, Evgeniy Lebed, Yifan Jian, Paul J. Mackenzie, Mirza Faisal Beg, and Marinko V. Sarunic. The abstract reads:

Volumetric imaging of the Optic Nerve Head (ONH) morphometry with Optical Coherence Tomography (OCT) requires dense sampling and relatively long acquisition times. Compressive Sampling (CS) is an emerging technique to reduce volume acquisition time with minimal image degradation by sparsely sampling the object and reconstructing the missing data in software. In this report, we demonstrated real-time CS-OCT for volumetric imaging of the ONH using a 1060nm Swept-Source OCT prototype. We also showed that registration and averaging of CS-recovered volumes enhanced visualization of deep structures of the sclera and lamina cribrosa. This work validates CS-OCT as a means for reducing volume acquisition time and for preserving high-resolution in volume-averaged images. Compressive sampling can be integrated into new and existing OCT systems without changes to the optics, requiring only software changes and post-processing of acquired data.

Thanks Ori.
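For readers new to the recovery step, here is a minimal sketch (my illustration, not the authors' OCT pipeline) of how missing data can be reconstructed in software: a synthetic sparse signal is recovered exactly from far fewer random measurements than samples, using Orthogonal Matching Pursuit.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(A.shape[1])
    for _ in range(k):
        # Greedily pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Least-squares fit on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 128, 64, 3                           # ambient size, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                      # m < n compressive measurements
x_rec = omp(A, y, k)
print(np.linalg.norm(x_rec - x))               # should be near zero in this easy regime
```

In a CS-OCT system the random sensing matrix would be replaced by the actual subsampling pattern of the scanner, and the sparsity basis chosen to match the image statistics.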

Open-target sparse sensing of biological agents using DNA microarray by Mojdeh Mohtashemi, David K. Walburger, Matthew W. Peterson, Felicia N. Sutton, Haley B. Skaer and James C. Diggans. The abstract reads:

Background: Current biosensors are designed to target and react to specific nucleic acid sequences or structural epitopes. These 'target-specific' platforms require creation of new physical capture reagents when new organisms are targeted. An 'open-target' approach to DNA microarray biosensing is proposed and substantiated using laboratory generated data. The microarray consisted of 12,900 25 bp oligonucleotide capture probes derived from a statistical model trained on randomly selected genomic segments of pathogenic prokaryotic organisms. Open-target detection of organisms was accomplished using a reference library of hybridization patterns for three test organisms whose DNA sequences were not included in the design of the microarray probes.

Results: A multivariate mathematical model based on the partial least squares regression (PLSR) was developed to detect the presence of three test organisms in mixed samples. When all 12,900 probes were used, the model correctly detected the signature of three test organisms in all mixed samples (mean(R2) = 0.76, CI = 0.95), with a 6% false positive rate. A sampling algorithm was then developed to sparsely sample the probe space for a minimal number of probes required to capture the hybridization imprints of the test organisms. The PLSR detection model was capable of correctly identifying the presence of the three test organisms in all mixed samples using only 47 probes (mean(R2) = 0.77, CI = 0.95) with nearly 100% specificity.

Conclusions: We conceived an 'open-target' approach to biosensing, and hypothesized that a relatively small, non-specifically designed, DNA microarray is capable of identifying the presence of multiple organisms in mixed samples. Coupled with a mathematical model applied to laboratory generated data, and sparse sampling of capture probes, the prototype microarray platform was able to capture the signature of each organism in all mixed samples with high sensitivity and specificity. It was demonstrated that this new approach to biosensing closely follows the principles of sparse sensing.

A General Theory of Concave Regularization for High Dimensional Sparse Estimation Problems by Cun-Hui Zhang, Tong Zhang. The abstract reads:

Concave regularization methods provide natural procedures for sparse recovery. However, they are difficult to analyze in the high dimensional setting. Only recently have a few sparse recovery results been established for some specific local solutions obtained via specialized numerical procedures. Still, the fundamental relationships between these solutions, such as whether they are identical, and their relationship to the global minimizer of the underlying nonconvex formulation, remain unknown. The current paper fills this conceptual gap by presenting a general theoretical framework showing that under appropriate conditions, the global solution of nonconvex regularization leads to desirable recovery performance; moreover, under suitable conditions, the global solution corresponds to the unique sparse local solution, which can be obtained via different numerical procedures. Within this unified framework, we present an overview of existing results and discuss their connections. The unified view of this work leads to a more satisfactory treatment of concave high dimensional sparse estimation procedures, and serves as a guideline for developing further numerical procedures for concave regularization.
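To see why concave penalties are attractive in the first place, here is a small sketch (my illustration, not from the paper) comparing the scalar proximal operators of the convex l1 penalty and of the minimax concave penalty (MCP), one of the concave regularizers this kind of framework covers: MCP zeroes small coefficients just like soft thresholding, but leaves large coefficients unbiased.

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of the convex l1 penalty lam * |x|."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def mcp_threshold(z, lam, gamma=3.0):
    """Proximal operator of the MCP concave penalty (gamma > 1).
    Inputs below lam are zeroed as in soft thresholding, but inputs
    above gamma * lam pass through unchanged: no shrinkage bias."""
    z = np.asarray(z, dtype=float)
    return np.where(np.abs(z) <= gamma * lam,
                    soft_threshold(z, lam) * gamma / (gamma - 1.0),
                    z)

lam = 1.0
z = np.array([0.5, 1.2, 2.0, 5.0])
print(soft_threshold(z, lam))   # the large entry 5.0 is shrunk to 4.0 (biased)
print(mcp_threshold(z, lam))    # the large entry 5.0 is returned as-is (unbiased)
```

Plugging a thresholding rule like this into an iterative scheme gives one of the "specialized numerical procedures" whose local solutions the paper relates to the global minimizer.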

Intrinsic advantages of the w component and spherical imaging for wide-field radio interferometry by Jason D. McEwen, Yves Wiaux. The abstract reads:

Incorporating wide-field considerations in interferometric imaging is of increasing importance for next-generation radio telescopes. Compressed sensing techniques for interferometric imaging have been extended to wide fields recently, recovering images in the spherical coordinate space in which they naturally live. We review these techniques, highlighting: (i) how the effectiveness of the spread spectrum phenomenon, due to the w component inducing an increase of measurement incoherence, is enhanced when going to wide fields; and (ii) how sparsity is reduced by recovering images directly on the sphere. Both of these properties act to improve the quality of reconstructed images.
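The spread spectrum phenomenon in point (i) can be illustrated in one dimension (a toy sketch, not the authors' wide-field formulation): a chirp modulation standing in for the w component spreads a signal's spectrum, lowering its coherence with Fourier-domain measurements.

```python
import numpy as np

n = 256
t = np.arange(n)
x = np.exp(2j * np.pi * 10 * t / n)       # toy signal: a single Fourier mode
chirp = np.exp(1j * np.pi * t**2 / n)     # quadratic phase standing in for the w term

def peak_coherence(sig):
    """Peak of the unitary DFT relative to the signal norm: 1.0 means all
    energy sits in one frequency bin; lower means the spectrum is spread."""
    spec = np.fft.fft(sig) / np.sqrt(len(sig))
    return np.max(np.abs(spec)) / np.linalg.norm(sig)

print(peak_coherence(x))          # 1.0: perfectly concentrated spectrum
print(peak_coherence(chirp * x))  # much smaller: the chirp spreads the spectrum
```

Interferometric measurements sample the Fourier domain, so the more spread out each image atom's spectrum, the more information every visibility carries, which is exactly why a stronger w component helps recovery.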

Presentations:

- A Hierarchical Re-weighted-l_1 Approach for Dynamic Sparse Signal Estimation by Adam Charles, Christopher Rozell
- Gradient Methods for Regularized Optimization by Stephen Wright
- How many measurements: the gap between tractability and intractability by Deanna Needell

Finally, here are some papers behind a paywall:

(Compressed) sensing and sensibility by Vijay S. Pande

BAN with Low Power Consumption Based on Compressed Sensing Point-to-Point Transmission by Shasha Li, Fengye Hu and Guofeng Li. The abstract reads:

A new transmission model, compressed sensing point-to-point transmission, is presented in this paper for lowering the power consumption of Body Area Networks. As a novel kind of information source coding and decoding technology, compressed sensing reduces the redundancy in a signal, compressing a long signal into a short one, and then recovers the original signal through a corresponding recovery algorithm. Theoretical analysis and simulation results show that compressed sensing not only reduces the power consumption of Body Area Networks, but also recovers the original signal accurately. When the sparsity is 16, more than 70% of the power is saved. Finally, distributed compressed sensing is introduced as future work.
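The power argument is easy to sketch (a toy model, not the authors' hardware): the sensor node transmits only m random projections per block instead of n raw samples, so radio traffic, which typically dominates the power budget of a BAN node under this simplification, drops by roughly 1 - m/n.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512                  # raw samples per block at the sensor node
k = 16                   # assumed sparsity (the paper's example value)
m = 4 * k                # measurements to transmit (common rule of thumb, m ~ 4k)

x = np.zeros(n)          # toy sparse bio-signal block
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

phi = rng.standard_normal((m, n))   # measurement matrix known to both ends
y = phi @ x                         # only y (length m) goes over the radio

saving = 1 - m / n
print(f"transmit {m} of {n} samples: {saving:.0%} fewer values over the air")
```

The receiving end, which is not power-constrained, then runs the (expensive) sparse recovery algorithm to get the original block back.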

Depth map resolution enhancement for 2D/3D imaging system via compressive sensing by Juanjuan Han, Otmar Loffeld, and Klaus Hartmann. The abstract reads:

This paper introduces a novel approach for post-processing of depth maps that enhances depth map resolution, in order to achieve visually pleasing 3D models from a new monocular 2D/3D imaging system consisting of a Photonic Mixer Device (PMD) range camera and a standard color camera. The proposed method adopts the inversion framework of Compressive Sensing (CS). The low-resolution depth map is modeled as the result of blurring and down-sampling the high-resolution one. Based on the underlying assumption that the high-resolution depth map is compressible in the frequency domain, and on recent theoretical work on CS, the high-resolution version can be estimated and reconstructed by solving a non-linear optimization problem. The improved depth map reconstruction in turn helps build an improved 3D model of the scene. Experimental results on real data are presented. The proposed scheme also opens new possibilities for applying CS to a multitude of potential applications in multimodal data analysis and processing.

Optical imaging based on compressive sensing by Shen Li, Cai-wen Ma, and Ai-li Xia. The abstract reads:

Compressive Sensing (CS) is a new sampling framework that provides an alternative to the well-known Shannon sampling theory. The basic idea of CS theory is that a signal or image, unknown but supposed to be sparse or compressible in some basis, can be subjected to fewer measurements than the nominal number of pixels and yet be accurately reconstructed. By designing optical sensors to measure inner products between the scene and a set of test functions according to CS theory, we can use sophisticated computational methods to infer critical scene structure and content, significantly economizing resources in data acquisition, storage, and transmission. In this paper, we investigate how CS can provide new insights into optical imaging, including optical devices. We first give a brief overview of the CS theory and review associated fast numerical reconstruction algorithms. Next, we explore the potential of several different physically realizable optical systems based on CS principles. Finally, we briefly discuss possible implications in the areas of data compression and optical imaging.

Compressed sensing of ECG bio-signals using one-bit measurement matrices by Emily G. Allstot, Andrew Y. Chen, Anna M. R. Dixon, Daibashish Gangopadhyay, Heather Mitsuda, and David J. Allstot. The abstract reads:

Compressed sensing (CS) is an emerging signal processing technique that enables sub-Nyquist sampling of sparse signals such as electrocardiogram (ECG), electromyogram (EMG), and electroencephalogram (EEG) bio-signals. Future CS signal processing systems will exploit significant time- and/or frequency-domain sparsity to achieve ultra-low-power bio-signal acquisition in the analog, digital, or mixed-signal domains. A measurement matrix of random values is key to one form of CS computation. It has been shown for ECG and EMG signals that signal-to-quantization noise ratios (SQNR) > 60 dB with compression factors up to 16X are achievable using uniform or Gaussian 6-bit random coefficients. In this paper, 1-bit random coefficients are shown also to give compression factors up to 16X with similar SQNR performance. This approach reduces hardware complexity and saves energy by using 1-bit rather than 6-bit signal processing.
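The appeal of 1-bit coefficients is easy to illustrate (a sketch under simplified assumptions, not the authors' circuit): a random ±1 measurement matrix preserves the energy of a sparse signal about as well as a dense Gaussian one, while each multiply-accumulate reduces to a sign flip in hardware.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 256, 64
x = np.zeros(n)
x[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)   # sparse "bio-signal"

phi_gauss = rng.standard_normal((m, n)) / np.sqrt(m)          # multi-bit coefficients
phi_1bit = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # one-bit coefficients

ratios = {}
for name, phi in [("gaussian", phi_gauss), ("one-bit", phi_1bit)]:
    # How well does y = phi @ x preserve the signal's energy?
    ratios[name] = np.linalg.norm(phi @ x) / np.linalg.norm(x)
    print(name, round(ratios[name], 2))   # both close to 1
```

Both matrices act as near-isometries on the sparse signal, which is the property the recovery guarantees rest on, so the 1-bit version gives up little accuracy for a large hardware saving.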

Liked this entry ? subscribe to the Nuit Blanche feed, there's more where that came from
