Here are some of the papers that showed up on my radar screen, enjoy!

First, there is this presentation by Henry Pfister entitled Achieving Universality in Practice via Spatially-Coupled LDPC Codes.

Next, we have a few openly accessible papers:

Bandit Theory meets Compressed Sensing for high dimensional Stochastic Linear Bandit by Alexandra Carpentier, Rémi Munos. The abstract reads:

We consider a linear stochastic bandit problem where the dimension $K$ of the unknown parameter $\theta$ is larger than the sampling budget $n$. In such cases, it is in general impossible to derive sub-linear regret bounds since usual linear bandit algorithms have a regret in $O(K\sqrt{n})$. In this paper we assume that $\theta$ is $S$-sparse, i.e.~has at most $S$ non-zero components, and that the space of arms is the unit ball for the $\|\cdot\|_2$ norm. We combine ideas from Compressed Sensing and Bandit Theory and derive algorithms with regret bounds in $O(S\sqrt{n})$.
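The regret gain comes from exploiting sparsity when estimating $\theta$ from fewer samples than dimensions. As a toy illustration (not the authors' algorithm; all names and parameter values below are hypothetical), here is a minimal numpy sketch of the compressed-sensing half of the idea: play random unit-norm arms, then estimate $\theta$ by correlating arms with rewards and hard-thresholding to the $S$ largest entries.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n, S = 100, 30, 3            # ambient dimension K > budget n, sparsity S

theta = np.zeros(K)
theta[:S] = 1.0                  # S-sparse unknown parameter

# Play n random arms on the unit sphere and observe noisy rewards.
arms = rng.standard_normal((n, K))
arms /= np.linalg.norm(arms, axis=1, keepdims=True)
rewards = arms @ theta + 0.01 * rng.standard_normal(n)

# For uniform arms, E[a a^T] = I/K, so K * mean(a_t * r_t) estimates theta;
# keep only the S largest entries (hard thresholding).
corr = K * (arms.T @ rewards) / n
theta_hat = np.zeros(K)
top = np.argsort(np.abs(corr))[-S:]
theta_hat[top] = corr[top]
```

The point of the sketch is that the estimate is forced to be $S$-sparse, which is what lets the regret scale with $S$ rather than $K$.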

Adaptive Compressed Sensing for Video Acquisition by Hassan Mansour and Özgür Yılmaz. The abstract reads:

In this paper, we propose an adaptive compressed sensing scheme that utilizes a support estimate to focus the measurements on the large valued coefficients of a compressible signal. We embed a “sparse-filtering” stage into the measurement matrix by weighting down the contribution of signal coefficients that are outside the support estimate. We present an application which can benefit from the proposed sampling scheme, namely, video compressive acquisition. We demonstrate that our proposed adaptive CS scheme results in a significant improvement in reconstruction quality compared with standard CS as well as adaptive recovery using weighted ℓ1 minimization.
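The weighting idea is easy to sketch: coefficients inside the support estimate get a lighter ℓ1 penalty, so the solver is biased toward keeping them. Below is a minimal sketch assuming a simple weighted ISTA solver (a recovery-side stand-in, not the authors' measurement-side scheme); the setup and parameters are illustrative only.

```python
import numpy as np

def weighted_ista(A, y, weights, lam=0.1, iters=200):
    """ISTA for min_x 0.5*||y - Ax||^2 + lam * sum_i weights[i] * |x_i|."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L    # gradient step on the quadratic term
        t = lam * weights / L            # per-coefficient soft threshold
        x = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    return x

rng = np.random.default_rng(1)
n, N = 40, 120
A = rng.standard_normal((n, N)) / np.sqrt(n)
x_true = np.zeros(N)
x_true[:5] = rng.standard_normal(5)
y = A @ x_true

# Support estimate: weight down the penalty on coefficients believed active.
w = np.ones(N)
w[:5] = 0.1
x_hat = weighted_ista(A, y, w)
```

Coefficients with small weights are thresholded less aggressively, which is the same effect as concentrating measurement energy on the estimated support.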

Blind Spectrum Detector for Cognitive Radio using Compressed Sensing by Ziad Khalaf, Amor Nafkha, Jacques Palicot. The abstract reads:

Based on the sparse property of the cyclic autocorrelation in the cyclic frequency domain, this paper proposes a new blind spectrum sensing method which uses the compressed sensing technique to detect free bands in the radio spectrum. This new sensing method, which has relatively low complexity, performs blind and robust detection with only a few samples (a short observation time) and without any knowledge of the cyclic frequency of the signal, in contrast to cyclostationary detection methods, which are not robust when the sample size is small and may need some information about the signal in order to detect. ROC curves obtained by simulation show the superiority of the newly proposed technique over cyclostationary detection under the same conditions, in particular the same observation time.

Approximate Message Passing under Finite Alphabet Constraints by Andreas Muller, Dino Sejdinovic, Robert Piechocki. The abstract reads:

In this paper we consider Basis Pursuit De-Noising (BPDN) problems in which the sparse original signal is drawn from a finite alphabet. To solve this problem we propose an iterative message passing algorithm, which capitalises not only on the sparsity but by means of a prior distribution also on the discrete nature of the original signal. In our numerical experiments we test this algorithm in combination with a Rademacher measurement matrix and a measurement matrix derived from the random demodulator, which enables compressive sampling of analogue signals. Our results show in both cases significant performance gains over a linear programming based approach to the considered BPDN problem. We also compare the proposed algorithm to a similar message passing based algorithm without prior knowledge and observe an even larger performance improvement.
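The key ingredient such a prior-aware scheme adds over plain BPDN is a scalar denoiser matched to the discrete prior. As a hedged illustration, assuming a Bernoulli-Rademacher prior (zero with probability $1-p$, $\pm 1$ otherwise, which is not necessarily the paper's exact model), here is the posterior-mean denoiser under Gaussian noise that a message-passing iteration would apply componentwise:

```python
import numpy as np

def denoise_bernoulli_rademacher(y, sigma, p=0.1):
    """MMSE (posterior-mean) denoiser for y = x + N(0, sigma^2), where the
    prior on x is: 0 with prob. 1-p, and +1 or -1 with prob. p/2 each."""
    g = lambda m: np.exp(-(y - m) ** 2 / (2 * sigma ** 2))  # unnormalized Gaussian likelihoods
    num = (p / 2) * g(1.0) - (p / 2) * g(-1.0)
    den = (1 - p) * g(0.0) + (p / 2) * g(1.0) + (p / 2) * g(-1.0)
    return num / den

y = np.array([-1.2, -0.05, 0.0, 0.4, 1.1])
x_hat = denoise_bernoulli_rademacher(y, sigma=0.3)
```

Because the prior puts mass only on {-1, 0, +1}, the output is pulled toward those values, which is exactly the extra information a plain linear-programming decoder ignores.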

Compressive Acquisition of Dynamic Scenes by Aswin C Sankaranarayanan, Pavan K Turaga, Rama Chellappa, Richard G Baraniuk. The abstract reads:

Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events, which makes direct extensions of standard CS imaging architectures and signal models difficult. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements, and then reconstructing the image frames. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel compressive measurement strategy that measures only the dynamic part of the scene at each instant and accumulates measurements over time to estimate the static parameters. This enables us to lower the compressive measurement rate considerably. We validate our approach with a range of experiments involving video recovery, sensing hyper-spectral data, and classification of dynamic scenes from compressive data. Together, these applications demonstrate the effectiveness of the approach.

On Detection-Directed Estimation Approach for Noisy Compressive Sensing by Jaewook Kang, Heung-No Lee, Kiseon Kim. The abstract reads:

In this paper, we investigate a Bayesian sparse reconstruction algorithm called compressive sensing via Bayesian support detection (CS-BSD). This algorithm is quite robust against measurement noise and achieves the performance of a minimum mean square error (MMSE) estimator that has support knowledge beyond a certain SNR threshold. The key idea behind CS-BSD is that reconstruction takes a detection-directed estimation structure consisting of two parts: support detection and signal value estimation. Belief propagation (BP) and a Bayesian hypothesis test perform support detection, and an MMSE estimator finds the signal values belonging to the support set. CS-BSD converges faster than other BP-based algorithms, and it can be converted to a parallel architecture to become much faster. Numerical results are provided to verify the superiority of CS-BSD compared to recent algorithms.

On the Lagrangian Biduality of Sparsity Minimization Problems by Dheeraj Singaraju, Ehsan Elhamifar, Roberto Tron, Allen Y. Yang, S. Shankar Sastry. The abstract reads:

Recent results in Compressive Sensing have shown that, under certain conditions, the solution to an underdetermined system of linear equations with sparsity-based regularization can be accurately recovered by solving convex relaxations of the original problem. In this work, we present a novel primal-dual analysis on a class of sparsity minimization problems. We show that the Lagrangian bidual (i.e., the Lagrangian dual of the Lagrangian dual) of the sparsity minimization problems can be used to derive interesting convex relaxations: the bidual of the $\ell_0$-minimization problem is the $\ell_1$-minimization problem; and the bidual of the $\ell_{0,1}$-minimization problem for enforcing group sparsity on structured data is the $\ell_{1,\infty}$-minimization problem. The analysis provides a means to compute per-instance non-trivial lower bounds on the (group) sparsity of the desired solutions. In a real-world application, the bidual relaxation improves the performance of a sparsity-based classification framework applied to robust face recognition.
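The bidual relation between ℓ0 and ℓ1 minimization is the familiar basis-pursuit relaxation. As a quick illustration (assuming scipy is available; this is generic basis pursuit, not the paper's bidual machinery), ℓ1 minimization subject to $Ax = b$ can be solved as a linear program over the positive and negative parts of $x$:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
m, N = 12, 30
A = rng.standard_normal((m, N))
x_true = np.zeros(N)
x_true[[3, 17]] = [1.5, -2.0]        # a 2-sparse ground truth
b = A @ x_true

# min ||x||_1  s.t. Ax = b, rewritten with x = u - v, u, v >= 0:
# min 1^T (u + v)  s.t.  [A, -A] [u; v] = b
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]
```

At the LP optimum, $u$ and $v$ have disjoint supports, so $1^T(u+v)$ equals $\|x\|_1$; the sparse solution is recovered whenever the usual ℓ1-recovery conditions hold for $A$.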

A General Framework of Dual Certificate Analysis for Structured Sparse Recovery Problems by Cun-Hui Zhang, Tong Zhang. The abstract reads:

This paper develops a general theoretical framework to analyze structured sparse recovery problems using the notion of a dual certificate. Although certain aspects of the dual certificate idea have already been used in some previous work, due to the lack of a general and coherent theory, the analysis has so far only been carried out in limited scopes for specific problems. In this context the current paper makes two contributions. First, we introduce a general definition of dual certificate, which we then use to develop a unified theory of sparse recovery analysis for convex programming. Second, we present a class of structured sparsity regularization called structured Lasso for which calculations can be readily performed under our theoretical framework. This new theory includes many seemingly loosely related previous works as special cases; it also implies new results that improve existing ones even for standard formulations such as L1 regularization.

Compressed Beamforming Applied to B-Mode Ultrasound Imaging by Noam Wagner, Yonina C. Eldar, Arie Feuer, Zvi Friedman. The abstract reads:

Emerging sonography techniques often imply an increase in the number of transducer elements involved in the imaging process. Consequently, larger amounts of data must be acquired and processed by the beamformer. The significant growth in the amount of data affects both machinery size and power consumption. Within the classical sampling framework, state-of-the-art systems reduce processing rates by exploiting the bandpass bandwidth of the detected signals. It has recently been shown that a much more significant sample-rate reduction may be obtained by treating ultrasound signals within the Finite Rate of Innovation framework. These ideas follow the spirit of Xampling, which combines classic methods from sampling theory with recent developments in Compressed Sensing. Applying such low-rate sampling schemes to individual transducer elements, which detect energy reflected from biological tissues, is limited by the noisy nature of the signals. This often results in erroneous parameter extraction, bringing forward the need to enhance the SNR of the low-rate samples. In our work, we manage to achieve such SNR enhancement by beamforming the sub-Nyquist samples obtained from multiple elements. We refer to this process as "compressed beamforming". Applying it to cardiac ultrasound data, we successfully image macroscopic perturbations, while achieving a nearly eight-fold reduction in sample rate compared to standard techniques.

Reconstruction of HARDI using Compressed Sensing and its Application to Contrast HARDI by Sudipto Dolui, Ivan C. Salgado Patarroyo, Oleg V. Michailovich, Yogesh Rathi. The abstract reads:

High angular resolution diffusion imaging (HARDI) is known to excel in delineating multiple diffusion flows through a given location within the white matter of the brain. Unfortunately, many current methods of implementation of HARDI require collecting a relatively large number of diffusion-encoded images, which in turn translates into prohibitively long acquisition times. As a possible solution to this problem, one can undersample HARDI data by using fewer diffusion-encoding gradients than is prescribed by classical sampling theory, while exploiting the tools of compressed sensing (CS). Accordingly, the goal of the present paper is twofold. First, the paper presents a novel CS-based framework for the reconstruction of HARDI data using a reduced set of diffusion-encoding gradients. As opposed to similar studies reported in the literature, the proposed method has been optimized for the Rician statistics of measurement noise, which is known to be prevalent in HARDI, and in fact in MRI in general. Second, we introduce the concept of rotationally invariant Fourier signatures (RIFS), and show how they can be used to generate a composite HARDI contrast, which we refer to as colour-HARDI (cHARDI). Finally, via a series of experiments with both simulated and in vivo MRI data, we demonstrate that the quality and informativeness of the proposed contrast deteriorates little when used in conjunction with the proposed CS-based reconstruction framework. Thus the present work proposes a way to improve the time efficiency of HARDI, and shows its application to the computation of a new HARDI-based contrast which has the potential to improve the clinical value of this important imaging modality.

Sparse Binary Matrices of LDPC codes for Compressed Sensing by Weizhi Lu, Kidiyo Kpalma, Joseph Ronsin. The abstract reads:

Compressed sensing shows that an underdetermined measurement matrix can losslessly compress sparse signals if the matrix satisfies the Restricted Isometry Property (RIP). However, in practice there are still no explicit approaches to construct such matrices. Gaussian matrices and Fourier matrices were first proved to satisfy the RIP with high probability. Recently, sparse random binary matrices with a lower computational load have also exhibited performance comparable to Gaussian matrices. But they are all constructed randomly, and are unstable in orthogonality. In this paper, inspired by these observations, we propose to construct structured sparse binary matrices that are stable in orthogonality. The solution lies in the algorithms that construct parity-check matrices of low-density parity-check (LDPC) codes. Experiments verify that the proposed matrices significantly outperform the aforementioned three types of matrices. Significantly, for this type of matrix with a given size, the optimal matrix for compressed sensing can be approximated and constructed according to some rules.
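For a flavor of such matrices (this is a generic column-regular construction, not the paper's LDPC-based one), here is a sketch that builds a sparse binary sensing matrix with a fixed number of ones per column, the structure shared with LDPC parity-check matrices; the dimensions below are arbitrary.

```python
import numpy as np

def sparse_binary_matrix(m, N, d, seed=0):
    """Random m x N binary matrix with exactly d ones per column
    (column-regular, as in LDPC parity-check matrices)."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((m, N), dtype=np.int8)
    for j in range(N):
        rows = rng.choice(m, size=d, replace=False)  # d distinct row positions
        Phi[rows, j] = 1
    return Phi

Phi = sparse_binary_matrix(m=32, N=128, d=4)
```

With only $d \ll m$ ones per column, matrix-vector products cost $O(dN)$ additions instead of $O(mN)$ multiplications, which is the computational advantage sparse binary matrices offer over dense Gaussian ones.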

Parallel imaging technique using localized gradients (PatLoc) reconstruction using compressed sensing (CS) by F-H. Lin, P. Vesanen, T. Witzel, R. Ilmoniemi, and J. Hennig. The abstract reads:

Parallel acquisition with localized gradients (PatLoc) is a new approach to further increase the degrees of freedom in spatial encoding by using a combination of surface gradient coils [1]. Due to non-bijective encoding, PatLoc requires a radio-frequency coil array to uniquely localize the magnetization using the parallel MRI approach [1]. Preliminary results of PatLoc image reconstruction focused on accelerated acquisitions not exceeding the number of RF coils in the array [2,3,4,5]. Recently, based on the assumption of image sparsity, compressed sensing (CS) has been proposed to achieve MRI acceleration by randomly sampling k-space and reconstructing the vastly reduced data using a nonlinear algorithm [6]. In this study, we investigate the feasibility of reconstructing highly accelerated PatLoc images using CS. Specifically, we hypothesize that PatLoc can provide better reconstructed images compared to traditional orthogonal linear gradient systems because of its higher degree of freedom in spatial encoding.

On the Benefit of using Tight Frames for Robust Data Transmission and Compressive Data Gathering in Wireless Sensor Networks by Wei Chen, Miguel R. D. Rodrigues and Ian J. Wassell. The abstract reads:

Compressive sensing (CS), a new sampling paradigm, has recently found several applications in wireless sensor networks (WSNs). In this paper, we investigate the design of novel sensing matrices which lead to good expected-case performance - a typical performance indicator in practice - rather than the conventional worst-case performance that is usually employed when assessing CS applications. In particular, we show that tight frames perform much better than the common CS Gaussian matrices in terms of the reconstruction average mean squared error (MSE). We also showcase the benefits of tight frames in two WSN applications, which involve: i) robustness to data sample losses; and ii) reduction of the communication cost.

Spatially sparse source cluster modeling by compressive neuromagnetic tomography by Wei-Tang Chang, Aapo Nummenmaa, Jen-Chuen Hsieh, Fa-Hsuan Lin. The abstract reads:

Magnetoencephalography enables non-invasive detection of weak cerebral magnetic fields by utilizing superconducting quantum interference devices (SQUIDs). Solving the MEG inverse problem requires reconstructing the locations and orientations of the underlying neuronal current sources based on the extracranial measurements. Most inverse problem solvers explicitly favor either spatially more focal or diffuse current source patterns. Naturally, in a situation where both focal and spatially extended sources are present, such reconstruction methods may yield inaccurate estimates. To address this problem, we propose a novel ComprEssive Neuromagnetic Tomography (CENT) method based on the assumption that the current sources are compressible. The compressibility is quantified by the joint sparsity of the source representation in the standard source space and in a transformed domain. The purpose of the transformation sparsity constraint is to incorporate local spatial structure adaptively by exploiting the natural redundancy of the source configurations in the transform domain. By combining these complementary constraints of standard and transformed domain sparsity we obtain source estimates, which are not only locally smooth and regular but also form globally separable clusters. In this work, we use the ℓ1-norm as a measure of sparsity and convex optimization to yield compressive estimates in a computationally tractable manner. We study the Laplacian matrix (CENTL) and spherical wavelets (CENTW) as alternatives for the transformation in the compression constraint. In addition to the two prior constraints on the sources, we control the discrepancy between the modeled and measured data by restricting the power of the residual error below a specified value. The results show that both CENTL and CENTW are capable of producing robust spatially regular source estimates with high computational efficiency. For simulated sources of focal, diffuse, or combined types, the CENT method shows better accuracy in estimating the source locations and spatial extents than the minimum ℓ1-norm or minimum ℓ2-norm constrained inverse solutions. Different transformations yield different benefits: by utilizing CENT with the Laplacian matrix it is possible to suppress physiologically atypical activations extending across two opposite banks of a deep sulcus. With the spherical wavelet transform CENT can improve the detection of two nearby yet not directly connected sources. As demonstrated by simulations, CENT is capable of reflecting the spatial extent of both focal and spatially extended current sources. The analysis of in vivo MEG data by CENT produces fewer physiologically inconsistent “clutter” current sources in somatosensory and auditory MEG measurements. Overall, the CENT method is demonstrated to be a promising tool for adaptive modeling of distributed neuronal currents associated with cognitive tasks.

The following papers are behind a paywall:

- On-board compression of hyperspectral satellite data using band-reordering by Gaucel, Jean-Michel; Thiebaut, Carole; Hugues, Romain; Camarero, Roberto (of interest is the following study: CNES Studies for On-Board Compression of High-Resolution Satellite Images by Carole Thiebaut and Roberto Camarero)
- Compressed hyperspectral image sensing with joint sparsity reconstruction by Liu, Haiying; Li, Yunsong; Zhang, Jing; Song, Juan; Lv, Pei
- Distributed video coding with compressive measurements by Tseng, Hsiao-Yun; Shen, Yun-Chung; Wu, Ja-Ling
- Decentralized cooperative compressed spectrum sensing for block sparse signals by Lu, Yang; Wang, Wenbo; Wang, Xing; Guo, Wenbin; Ai, Hua
- A specific measurement matrix in compressive imaging system by Wang, Fen; Wei, Ping; Ke, Jun
- An accelerated random coordinate descent algorithm for compressed sensing by Chen, Dongfang; Zhang, Ningtao
- Distinguishing ability analysis of compressed sensing radar imaging based on information theory model by Jiang, Hai; Zhang, Bingchen; Lin, Yueguan; Hong, Wen; Wu, Yirong
- Compressed Sensing for Low-dose 4D CBCT Image Reconstruction by Choi, K.; Li, T.; Suh, T.; Xing, L.

Image Credit: NASA/JPL/Space Science Institute. W00071921.jpg was taken on January 22, 2012 and received on Earth on January 24, 2012. The camera was pointing toward SATURN at approximately 2,479,787 kilometers away, and the image was taken using the MT2 and CL2 filters.

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
