Compressed sensing with preconditioning for sparse recovery with subsampled matrices of Slepian prolate functions by Laurent Gosse. The abstract reads:

Efficient recovery of smooth functions which are s-sparse with respect to the basis of so-called Prolate Spheroidal Wave Functions from a small number of random sampling points is considered. The main ingredient in the design of both algorithms we propose here consists in establishing a uniform L∞ bound on the measurement ensembles which constitute the columns of the sensing matrix. Such a bound provides us with the Restricted Isometry Property for this rectangular random matrix, which leads to either the exact recovery property or the "best s-term approximation" of the original signal by means of the ℓ1 minimization program. The first algorithm considers only a restricted number of columns for which the L∞ bound holds as a consequence of the fact that the eigenvalues of Bergman's restriction operator are close to 1, whereas the second one allows for a wider system of PSWFs by taking advantage of a preconditioning technique. Numerical examples are spread throughout the text to illustrate the results.
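The preconditioning idea behind the second algorithm can be illustrated generically: rescale the columns of the sensing matrix so that a uniform bound (here, unit column norms) holds, solve in the rescaled variables, and undo the scaling afterwards. A minimal sketch, assuming simple diagonal column normalization as a stand-in for the paper's PSWF-specific preconditioner:

```python
import numpy as np

# Generic diagonal preconditioning sketch (an assumed stand-in for the
# paper's construction): rescale the columns of a sensing matrix A so that
# every column has unit norm before running an l1 solver.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100)) * rng.uniform(0.1, 10.0, size=100)

d = np.linalg.norm(A, axis=0)   # per-column scales
A_pre = A / d                   # preconditioned matrix: unit-norm columns

# After recovering z from y = A_pre @ z with any l1 solver,
# the coefficients in the original scaling are x = z / d.
print(np.allclose(np.linalg.norm(A_pre, axis=0), 1.0))  # True
```

The point of the rescaling is that uniformity of the column norms is exactly the kind of L∞-type control the abstract says drives the Restricted Isometry Property.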

Robust Matrix Decomposition with Sparse Corruptions by Daniel Hsu, Sham M. Kakade, and Tong Zhang. The abstract reads:

Suppose a given observation matrix can be decomposed as the sum of a low-rank matrix and a sparse matrix, and the goal is to recover these individual components from the observed sum. Such additive decompositions have applications in a variety of numerical problems including system identification, latent variable graphical modeling, and principal components analysis. We study conditions under which recovering such a decomposition is possible via a combination of ℓ1 norm and trace norm minimization. We are specifically interested in the question of how many sparse corruptions are allowed so that convex programming can still achieve accurate recovery, and we obtain stronger recovery guarantees than previous studies. Moreover, we do not assume that the spatial pattern of corruptions is random, which stands in contrast to related analyses under such assumptions via matrix completion.
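The ℓ1-plus-trace-norm program rests on two proximal maps: entrywise soft-thresholding for the sparse part and singular-value thresholding for the low-rank part. A minimal alternating-proximal sketch, a heuristic stand-in rather than the authors' exact solver or analysis:

```python
import numpy as np

def soft_threshold(X, tau):
    # prox of tau * l1 norm: shrink every entry toward zero
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # prox of tau * trace (nuclear) norm: shrink the singular values
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def decompose(Y, tau_low, tau_sparse, n_iter=200):
    """Heuristic alternating proximal steps for Y ~ L (low rank) + S (sparse)."""
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(n_iter):
        L = svt(Y - S, tau_low)
        S = soft_threshold(Y - L, tau_sparse)
    return L, S
```

The two thresholds play the role of the trade-off weight between the trace norm and the ℓ1 norm; the paper's guarantees concern the convex program itself, not this alternating heuristic.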

Group Sparsity with Overlapping Partition Functions by Gabriel Peyré and Jalal Fadili. The abstract reads:

This paper introduces a novel and versatile group sparsity prior for denoising and for regularizing inverse problems. The sparsity is enforced through arbitrary block-localization operators, such as smooth localized partition functions. The resulting blocks can have an arbitrary overlap, which is important to reduce visual artifacts thanks to the increased translation invariance of the prior. They are moreover not necessarily binary, and allow for non-integer block sizes. We develop two schemes, one primal and another primal-dual, originating from the non-smooth convex optimization realm, to efficiently solve a wide class of inverse problems regularized using this overlapping group sparsity prior. These schemes are flexible enough to handle both penalized and constrained versions of the optimization problems at hand. Numerical results on denoising and compressed sensing are reported and show the improvement brought by the overlap and the smooth partition functions with respect to classical group sparsity.
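The prior itself is easy to state: a sum over blocks of weighted ℓ2 energies, where the weights are arbitrary non-negative localization functions rather than binary indicators. A small sketch with overlapping raised-cosine partition functions — an assumed concrete choice; the paper allows arbitrary block-localization operators:

```python
import numpy as np

def raised_cosine_blocks(n, size, hop):
    """Overlapping, non-binary localization weights: one smooth bump per block."""
    bump = np.sin(np.linspace(0.0, np.pi, size)) ** 2
    weights = []
    for start in range(0, n - size + 1, hop):
        w = np.zeros(n)
        w[start:start + size] = bump
        weights.append(w)
    return np.array(weights)

def group_prior(x, weights):
    """Overlapping group-sparsity prior: sum_b sqrt(sum_i w_b[i] * x[i]**2)."""
    return float(np.sum(np.sqrt(weights @ (x ** 2))))

n = 64
W = raised_cosine_blocks(n, size=16, hop=8)   # 50% overlap between blocks

x_sparse = np.zeros(n)
x_sparse[10] = 1.0                            # energy concentrated in one block
x_spread = np.full(n, 1.0 / np.sqrt(n))       # same l2 norm, spread everywhere
print(group_prior(x_sparse, W) < group_prior(x_spread, W))  # True
```

As with classical group sparsity, block-concentrated signals get a smaller prior value than spread-out ones of the same energy, which is what makes it a useful regularizer; the overlap and smoothness of the weights add the translation invariance the abstract emphasizes.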

Sample Size and Power Analysis for Sparse Signal Recovery in Genome-Wide Association Studies by Jichun Xie, T. Tony Cai and Hongzhe Li. The summary reads:

Genome-wide association studies have successfully identified hundreds of novel genetic variants associated with many complex human diseases. However, there is a lack of rigorous work on evaluating the statistical power for identifying these variants. In this paper, we consider the problem of sparse signal identification in genome-wide association studies and present two analytical frameworks for detailed analysis of the statistical power for detecting and identifying the disease-associated variants. We present an explicit sample size formula for achieving a given false non-discovery rate while controlling the false discovery rate based on an optimal false discovery rate procedure. The problem of sparse genetic variants recovery is also considered and a boundary condition is established in terms of sparsity and signal strength for almost exact recovery of disease-associated variants as well as non-disease-associated variants. A data-adaptive procedure is proposed to achieve this bound. These results provide important tools for sample size calculation and power analysis for large-scale multiple testing problems. The analytical results are illustrated with a genome-wide association study of neuroblastoma.

Lower Bounds for Sparse Recovery by Khanh Do Ba, Piotr Indyk, Eric Price, David P. Woodruff. The abstract reads:

We consider the following k-sparse recovery problem: design an m × n matrix A such that for any signal x, given Ax we can efficiently recover x̂ satisfying ‖x − x̂‖₁ ≤ C · min_{k-sparse x′} ‖x − x′‖₁. It is known that there exist matrices A with this property that have only O(k log(n/k)) rows. In this paper we show that this bound is tight. Our bound holds even for the more general randomized version of the problem, where A is a random variable and the recovery algorithm is required to work for any fixed x with constant probability (over A).

Generating Functional Analysis of Iterative Algorithms for Compressed Sensing by Kazushi Mimura. The abstract reads:

It has been shown that the approximate message passing (AMP) algorithm is effective in reconstruction problems for compressed sensing. To evaluate the dynamics of such an algorithm, state evolution (SE) has been proposed. If an algorithm can cancel the correlation between the present messages and their past values, SE can accurately track its dynamics via a simple one-dimensional map. In this paper, we focus on the dynamics of algorithms which cannot cancel this correlation and evaluate it by generating functional analysis (GFA), which allows us to study the dynamics in an exact way in the large-system limit.

On the Design of Deterministic Matrices for Fast Recovery of Fourier Compressible Functions by J. Bailey, Mark Iwen, and C. V. Spencer. The abstract reads:

We present a general class of compressed sensing matrices which are then demonstrated to have associated sublinear-time sparse approximation algorithms. We then develop methods for constructing specialized matrices from this class which are sparse when multiplied with a discrete Fourier transform matrix. Ultimately, these considerations improve previous sampling requirements for deterministic sparse Fourier transform methods.

Reconstruction of Fractional Brownian Motion Signals From Its Sparse Samples Based on Compressive Sampling by Andriyan Bayu Suksmono. The abstract reads:

This paper proposes a new fBm (fractional Brownian motion) interpolation/reconstruction method from partially known samples based on CS (Compressive Sampling). Since the 1/f property implies power-law decay of the fBm spectrum, fBm signals should be sparse in the frequency domain. This property motivates the adoption of CS in the development of the reconstruction method. The Hurst parameter H that occurs in the power law determines the sparsity level; therefore, the CS reconstruction quality of an fBm signal for a given number of known subsamples will depend on H. However, the proposed method does not require knowledge of H to reconstruct the fBm signal from its partial samples. The method employs the DFT (Discrete Fourier Transform) as the sparsity basis and a random matrix derived from the known sample positions as the projection basis. Simulated fBm signals with various values of H are used to show the relationship between the Hurst parameter and the reconstruction quality. Additionally, US-DJIA (Dow Jones Industrial Average) stock index monthly time-series values are also used to show the applicability of the proposed method to reconstructing real-world data.
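The pipeline the abstract describes — DFT sparsity basis, a row-selection projection built from the known sample positions, and a sparse solver — can be sketched generically. A minimal version using ISTA as the (assumed) ℓ1 solver and a synthetic frequency-sparse signal standing in for the fBm data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_known, k = 128, 48, 6

# frequency-sparse test signal (a stand-in for a 1/f-type fBm signal)
X = np.zeros(n, dtype=complex)
X[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
x = np.fft.ifft(X)

# sensing matrix: rows of the inverse-DFT matrix at the known sample positions
positions = np.sort(rng.choice(n, n_known, replace=False))
F_inv = np.fft.ifft(np.eye(n), axis=0)
A = F_inv[positions]
y = x[positions]            # the partially known samples

# ISTA for min 0.5*||A Z - y||^2 + lam*||Z||_1 with complex soft-thresholding
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 1e-4
Z = np.zeros(n, dtype=complex)
for _ in range(3000):
    Z = Z - step * (A.conj().T @ (A @ Z - y))
    mag = np.abs(Z)
    Z = Z * (np.maximum(mag - step * lam, 0.0) / np.maximum(mag, 1e-30))

x_rec = np.fft.ifft(Z)      # reconstructed time-domain signal
```

With enough known samples relative to the sparsity, Z approaches the true spectrum X; note that nothing in the solver uses H, which mirrors the paper's point that the method needs no knowledge of the Hurst parameter.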

Orthogonal Matching Pursuit: A Brownian Motion Analysis by Alyson K. Fletcher and Sundeep Rangan. The abstract reads:

A well-known analysis of Tropp and Gilbert shows that orthogonal matching pursuit (OMP) can recover a k-sparse n-dimensional real vector from m = 4k log(n) noise-free linear measurements obtained through a random Gaussian measurement matrix with a probability that approaches one as n → ∞. This work strengthens this result by showing that a lower number of measurements, m = 2k log(n − k), is in fact sufficient for asymptotic recovery. More generally, when the sparsity level satisfies kmin ≤ k ≤ kmax but is unknown, m = 2kmax log(n − kmin) measurements are sufficient. Furthermore, this number of measurements is also sufficient for detection of the sparsity pattern (support) of the vector with measurement errors, provided the signal-to-noise ratio (SNR) scales to infinity. The scaling m = 2k log(n − k) exactly matches the number of measurements required by the more complex lasso method for signal recovery with a similar SNR scaling.
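For readers who have not seen it spelled out, OMP is a short greedy loop: pick the column most correlated with the residual, then re-fit by least squares on the chosen support. A minimal sketch in the measurement regime of the result (the scaling is asymptotic, so at small n exact recovery at this m is likely but not guaranteed):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then re-solve least squares on the chosen support."""
    n = A.shape[1]
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# the regime of the result: m = 2k log(n - k) Gaussian measurements
rng = np.random.default_rng(0)
n, k = 256, 5
m = int(np.ceil(2 * k * np.log(n - k)))
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x
x_hat = omp(A, y, k)
```

The least-squares re-fit at every step is what distinguishes OMP from plain matching pursuit and is what the Brownian-motion analysis in the paper tracks.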

Adaptive compressive learning for prediction of protein-protein interactions from primary sequence by Ya-Nan Zhang, Xiao-Yong Pan, Huang Y, and Hong-Bin Shen. The abstract reads:

Protein-protein interactions (PPIs) play an important role in biological processes. Although much effort has been devoted to identifying novel PPIs by integrating experimental biological knowledge, many difficulties remain owing to the lack of sufficient protein structural and functional knowledge. It is therefore highly desirable to develop methods for predicting PPIs based only on amino acid sequences. However, sequence-based predictors often struggle with high dimensionality, which causes over-fitting and high computational complexity, as well as with redundancy in the sequential feature vectors. In this paper, a novel computational approach based on compressed sensing theory is proposed to predict yeast Saccharomyces cerevisiae PPIs from primary sequence, and it achieves promising results. The key advantage of the proposed compressed sensing algorithm is that it can compress the original high-dimensional protein sequential feature vector into a much lower-dimensional but more condensed space by taking the sparsity of the original signal into account. What makes compressed sensing particularly attractive in protein sequence analysis is that the compressed signal can be reconstructed from far fewer measurements than are usually considered necessary under traditional Nyquist sampling theory. Experimental results demonstrate that the proposed compressed sensing method is powerful for analyzing noisy biological data and reducing redundancy in feature vectors. The proposed method represents a new strategy for dealing with high-dimensional discrete protein models and has great potential to be extended to many other complicated biological systems.

Parallel Computation Toolbox for PC-Based Medical Image Reconstruction by Ahmed Abdel Salam, Norhan Ahmed Shaker, Fadwa Fawzy Fouad, and Yasser M. Kadah. The introduction reads:

Magnetic resonance imaging (MRI) has been used extensively for clinical purposes to depict anatomy because of its non-invasive nature to the human body. It is always desirable to enhance the resolution of MR images, in order to confirm the presence of any suspicious behavior inside the body, while keeping the imaging time short. At present, MR imaging is often limited by high noise levels, significant imaging artifacts, and/or long data acquisition (scan) times [1]. Advanced image reconstruction algorithms can mitigate these limitations and improve image quality. In this paper we aim to enhance image quality and shorten imaging time using compressed sensing (CS) and parallel computing techniques.

Finally, there is a talk this week at the University of Bonn, Hard Thresholding Pursuit: an algorithm for compressive sensing by Barbara Fuchs.
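For context, Hard Thresholding Pursuit alternates a gradient step with hard thresholding to the k largest entries, followed by a least-squares fit on that support. A minimal sketch of the iteration, following the standard published description of the algorithm rather than the talk itself:

```python
import numpy as np

def htp(A, y, k, n_iter=50):
    """Hard Thresholding Pursuit: gradient step, keep the k largest entries,
    then solve least squares restricted to that support."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x)             # gradient step on 0.5*||Ax - y||^2
        support = np.argsort(np.abs(g))[-k:]  # hard thresholding: top-k support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
    return x

rng = np.random.default_rng(2)
m, n, k = 60, 200, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = htp(A, y, k)
```

Compared with OMP, HTP commits to a full candidate support at every iteration and can swap columns in and out, which is what gives it its strong RIP-based guarantees.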
