Tuesday, November 26, 2013

The Long Post of the Week: Compressive Sensing Edition

Here are a few papers I have not covered since November 1st; enjoy:

Sparse Phase Retrieval: Uniqueness Guarantees and Recovery Algorithms

The problem of signal recovery from its Fourier transform magnitude, or equivalently, autocorrelation, is of paramount importance in various fields of engineering and has been around for over 100 years. To achieve unique recovery, additional structural information about the signal is necessary. In this work, we first provide simple and general conditions, which when satisfied, allow unique recovery almost surely. In particular, we focus our attention on sparse signals and show that most O(n)-sparse signals, i.e., signals with O(n) non-zero components, have distinct Fourier transform magnitudes (up to time-shift, time-reversal and global sign). Our results are a significant improvement over the existing identifiability results, which provide such guarantees for only O(n^{1/4})-sparse signals. 
Then, we exploit the sparse nature of the signals and develop a Two-stage Sparse Phase Retrieval algorithm (TSPR), which involves: (i) identifying the support, i.e., the locations of the non-zero components, of the signal; (ii) identifying the signal values using the support knowledge. We show that the proposed algorithm can provably recover O(n^{1/2-\eps})-sparse signals (up to time-shift, time-reversal and global sign) with arbitrarily high probability in quadratic time. To the best of our knowledge, state-of-the-art phase retrieval algorithms provide such recovery guarantees for only O(n^{1/4})-sparse signals. Numerical experiments complement our theoretical analysis and verify the effectiveness of the proposed algorithms.
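
The equivalence between Fourier-magnitude and autocorrelation measurements invoked above is easy to check numerically. Here is a minimal NumPy sketch (the zero-padding length and test sizes are our choices, not the paper's): the inverse FFT of the squared Fourier magnitude of a zero-padded signal equals its linear autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
x = np.zeros(n)
x[rng.choice(n, 4, replace=False)] = rng.standard_normal(4)

# Zero-pad the FFT to length >= 2n-1 so the circular autocorrelation
# computed below coincides with the linear autocorrelation.
N = 2 * n
X = np.fft.fft(x, N)

# Inverse FFT of the squared Fourier magnitude ...
a_from_magnitude = np.fft.ifft(np.abs(X) ** 2).real   # lags 0..n-1, then negative lags

# ... equals the autocorrelation computed directly in the signal domain.
a_direct = np.correlate(x, x, mode="full")            # lags -(n-1)..(n-1)

assert np.allclose(a_from_magnitude[:n], a_direct[n - 1:])
print("Fourier magnitude and autocorrelation carry the same information.")
```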

Matrix perturbation bounds with random noise and their applications

Matrix perturbation inequalities, such as Weyl's theorem (concerning the singular values) and the Davis-Kahan theorem (concerning the singular vectors), play essential roles in quantitative science; in particular, these bounds have found application in data analysis as well as related areas of engineering and computer science. In many situations, the perturbation is assumed to be random, and the original matrix (data) has certain structural properties (such as having low rank). We show that, in this scenario, classical perturbation results, such as Weyl and Davis-Kahan, can be improved significantly. We believe many of our new bounds are close to optimal. 
As an application, we give a uniform and simple analysis of many matrix reconstruction problems including hidden clique, hidden coloring, clustering, and Netflix-type problems. In certain cases, our method generalizes and improves existing results.

Toward a unified theory of sparse dimensionality reduction in Euclidean space

Let $\Phi\in\mathbb{R}^{m\times n}$ be a sparse Johnson-Lindenstrauss transform [Kane, Nelson, SODA 2012] with $s$ non-zeroes per column. For $T$ a subset of the unit sphere, $\varepsilon\in(0,1/2)$ given, we study settings for $m,s$ required to ensure $$ \mathop{\mathbb{E}}_\Phi \sup_{x\in T} \left|\|\Phi x\|_2^2 - 1 \right| < \varepsilon , $$ i.e. so that $\Phi$ preserves the norm of every $x\in T$ simultaneously and multiplicatively up to $1+\varepsilon$. In particular, our most general theorem shows that it suffices to set $m = \tilde{\Omega}(\gamma_2^2(T) + 1)$ and $s = \tilde{\Omega}(1)$ as long as $s,m$ additionally satisfy a certain tradeoff condition that is governed by the geometry of $T$ (and as we show for several examples of interest, is easy to verify). Here $\gamma_2$ is Talagrand's functional, and we write $f = \tilde{\Omega}(g)$ to mean $f \ge Cg (\varepsilon^{-1}\log n)^c$ for some constants $C,c>0$. 
Our result can be seen as an extension to sparse $\Phi$ of works of [Klartag, Mendelson, J. Funct. Anal. 2005], [Gordon, GAFA 1988], and [Mendelson, Pajor, Tomczak-Jaegermann, GAFA 2007], which were concerned with dense $\Phi$ having i.i.d. (sub)gaussian entries. Our work introduces a theory that qualitatively unifies several results related to the Johnson-Lindenstrauss lemma, subspace embeddings, and Fourier-based methods for obtaining matrices satisfying the restricted isometry property.
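
As a concrete illustration of the object being analyzed, here is one standard way to draw a sparse JL matrix with $s$ nonzeroes per column and to measure the distortion $\sup_{x\in T}|\|\Phi x\|_2^2-1|$ over a finite test set $T$ (a sketch with arbitrary sizes; Kane-Nelson also analyze a block variant of this construction):

```python
import numpy as np

def sparse_jl(m, n, s, rng):
    """Sparse JL matrix: exactly s nonzeros per column, entries +-1/sqrt(s).
    One standard construction; a block variant is also analyzed in the literature."""
    Phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)          # s distinct rows
        Phi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return Phi

rng = np.random.default_rng(0)
n, m, s = 1000, 100, 8
Phi = sparse_jl(m, n, s, rng)

# Estimate sup_{x in T} | ||Phi x||_2^2 - 1 | over a finite test set T
# of points on the unit sphere.
T = rng.standard_normal((50, n))
T /= np.linalg.norm(T, axis=1, keepdims=True)
distortion = np.abs(np.sum((T @ Phi.T) ** 2, axis=1) - 1.0)
print("max distortion over T:", distortion.max())
```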

Recovery of Sparse Matrices via Matrix Sketching

In this paper, we consider the problem of recovering an unknown sparse matrix X from the matrix sketch Y = AX B^T. The dimension of Y is less than that of X, and A and B are known matrices. This problem can be solved using standard compressive sensing (CS) theory after converting it to vector form using the Kronecker operation, in which case the measurement matrix assumes a Kronecker product structure. However, as the matrix dimension increases, the associated computational complexity makes this approach prohibitive. We extend two algorithms, the fast iterative shrinkage threshold algorithm (FISTA) and orthogonal matching pursuit (OMP), to solve this problem in matrix form without employing the Kronecker product. While both FISTA and OMP with matrix inputs are shown to be equivalent in performance to their vector counterparts with the Kronecker product, solving them in matrix form is shown to be computationally more efficient. We show that the computational gain achieved by FISTA with matrix inputs over its vector form is more significant than that achieved by OMP.
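
The Kronecker identity behind the two formulations, vec(AXB^T) = (B ⊗ A) vec(X), also explains the computational gain: the gradient of ½||Y − AXB^T||_F^2 can be formed as A^T(AXB^T − Y)B without ever materializing the Kronecker product. A quick numerical check (the dimensions here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2, n1, n2 = 6, 5, 10, 9
A = rng.standard_normal((m1, n1))
B = rng.standard_normal((m2, n2))
X = rng.standard_normal((n1, n2))

Y = A @ X @ B.T

# vec(Y) = (B kron A) vec(X), with column-major (Fortran-order) vectorization.
assert np.allclose(np.kron(B, A) @ X.flatten(order="F"), Y.flatten(order="F"))

# Gradient of the least-squares fit against noisy observations Yobs:
Yobs = Y + 0.1 * rng.standard_normal(Y.shape)
R = A @ X @ B.T - Yobs
G_matrix = A.T @ R @ B                            # never forms the Kronecker product
G_kron = np.kron(B, A).T @ R.flatten(order="F")   # equivalent, but far more costly
assert np.allclose(G_kron, G_matrix.flatten(order="F"))
```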

Joint recovery algorithms using difference of innovations for distributed compressed sensing

Distributed compressed sensing is concerned with representing an ensemble of jointly sparse signals using as few linear measurements as possible. Two novel joint reconstruction algorithms for distributed compressed sensing are presented in this paper. These algorithms are based on the idea of using one of the signals as side information, which allows joint sparsity to be exploited more effectively than in existing schemes. They provide gains in reconstruction quality, especially when the nodes acquire few measurements, allowing the system to operate with fewer measurements than existing schemes require. We show that the algorithms outperform the state of the art.
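
A minimal sketch of the side-information idea described above, using plain OMP as the inner solver (the paper's joint reconstruction algorithms are more refined; sizes and sparsity levels here are arbitrary): recover one signal first, then recover the much sparser difference of innovations from the difference of measurements.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms, refit by least squares."""
    r, S, xs = y.copy(), [], None
    for _ in range(k):
        S.append(int(np.argmax(np.abs(Phi.T @ r))))
        xs, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
        r = y - Phi[:, S] @ xs
    x = np.zeros(Phi.shape[1])
    x[S] = xs
    return x

rng = np.random.default_rng(0)
n, m, k_common, k_innov = 200, 60, 8, 2
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

x1 = np.zeros(n)
x1[rng.choice(n, k_common, replace=False)] = rng.standard_normal(k_common)
x2 = x1.copy()
x2[rng.choice(n, k_innov, replace=False)] += rng.standard_normal(k_innov)

y1, y2 = Phi @ x1, Phi @ x2
x1_hat = omp(Phi, y1, k_common)               # recover the side-information signal
d_hat = omp(Phi, y2 - Phi @ x1_hat, k_innov)  # innovation: a much sparser problem
x2_hat = x1_hat + d_hat
print("x1 error:", np.linalg.norm(x1_hat - x1),
      "x2 error:", np.linalg.norm(x2_hat - x2))
```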

A Convex Optimization Approach to pMRI Reconstruction

In parallel magnetic resonance imaging (pMRI) reconstruction without estimation of the coil sensitivity functions, one group of algorithms (e.g., GRAPPA) first reconstructs the sensitivity-encoded images of the coils and then the magnitude-only image, while another group jointly computes the image and the sensitivity functions by regularized optimization, a non-convex problem with only local solutions. For magnitude-only image reconstruction, this paper derives a reconstruction formulation that is linear in the magnitude image, together with an associated convex hull in the solution space of the formulated equation that contains the magnitude of the image. As a result, magnitude-only image reconstruction for pMRI is formulated as a two-step convex optimization problem, which has a globally optimal solution. An algorithm based on split-Bregman and nuclear-norm-regularized optimization is proposed to implement the two-step convex optimization; its applications to phantom and in-vivo MRI data sets result in superior reconstruction performance compared with other algorithms.

Joint Sparsity Recovery for Spectral Compressed Sensing

Compressed Sensing (CS) is an effective approach to reduce the required number of samples for reconstructing a sparse signal in an a priori basis, but it may suffer severely from the issue of basis mismatch. In this paper we study the problem of simultaneously recovering multiple spectrally-sparse signals that are supported on the same frequencies lying arbitrarily on the unit circle. We propose an atomic norm minimization problem, which can be regarded as a continuous counterpart of the discrete CS formulation and solved efficiently via semidefinite programming. Through numerical experiments, we show that the number of samples per signal can be further reduced by harnessing the joint sparsity pattern of multiple signals. When the number of measurement vectors goes to infinity, we solve the problem via completion of a low-rank positive semidefinite Toeplitz covariance matrix, which exhibits superior performance.
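
For context, in the single measurement vector case the atomic norm referred to above admits the semidefinite characterization of Tang et al., giving the recovery program (a sketch of the single-vector form; the MMV problem studied in this paper generalizes the vector $x$ to a matrix of jointly sparse signals, so its exact program differs):

$$\min_{x,\,u,\,t}\ \frac{1}{2n}\,\mathrm{tr}\big(\mathrm{Toep}(u)\big)+\frac{t}{2}\quad\text{subject to}\quad\begin{bmatrix}\mathrm{Toep}(u) & x\\ x^{*} & t\end{bmatrix}\succeq 0,\qquad x_{j}=y_{j},\ j\in\Omega,$$

where $\mathrm{Toep}(u)$ is the Hermitian Toeplitz matrix with first column $u$ and $\Omega$ indexes the observed time-domain samples.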


PDEs with Compressed Solutions

Sparsity plays a central role in recent developments in signal processing, linear algebra, statistics, optimization, and other fields. In these developments, sparsity is promoted through the addition of an $L^1$ norm (or related quantity) as a constraint or penalty in a variational principle. We apply this approach to partial differential equations that come from a variational quantity, either by minimization (to obtain an elliptic PDE) or by gradient flow (to obtain a parabolic PDE). Addition of an $L^1$ term in the variational method leads to a modified PDE in which there is a subgradient term with a simple explicit form. The resulting parabolic PDEs are $L^1$ contractive, total variation diminishing, and positivity preserving. Also, the solutions have finite support and finite speed of propagation. Numerical solutions illustrate these analytic results and suggest additional properties of the solutions.
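
To make the construction concrete, here is the simplest elliptic/parabolic instance (our illustration of the recipe above, not necessarily the paper's exact example): adding $\lambda\int|u|$ to a Dirichlet-type energy and taking the $L^2$ gradient flow gives

$$E(u)=\int\Big(\tfrac{1}{2}|\nabla u|^{2}-fu+\lambda|u|\Big)\,dx \quad\Longrightarrow\quad u_{t}=\Delta u+f-\lambda\,\mathrm{sgn}(u),$$

where $\mathrm{sgn}(u)$ is understood as the subgradient of $|u|$ (equal to $\pm 1$ where $u\neq 0$, and taking any value in $[-1,1]$ where $u=0$); it is this bounded subgradient term that clips small values of $u$ and produces the compactly supported solutions described above.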

Dictionary-Learning-Based Reconstruction Method for Electron Tomography

Electron tomography usually suffers from so-called missing-wedge artifacts caused by the limited tilt-angle range. An equally sloped tomography (EST) acquisition scheme (which should be called the linogram sampling scheme) was recently applied to achieve 2.4-angstrom resolution. On the other hand, a compressive sensing-inspired reconstruction algorithm, known as adaptive dictionary based statistical iterative reconstruction (ADSIR), has been reported for x-ray computed tomography. In this paper, we evaluate EST, ADSIR, and an ordered-subset simultaneous algebraic reconstruction technique (OS-SART), and compare the equally sloped (ES) and equally angled (EA) data acquisition modes. Our results show that OS-SART is comparable to EST, and that ADSIR outperforms both EST and OS-SART. Furthermore, the equally sloped projection data acquisition mode has no advantage over the conventional equally angled mode in this context.

An RKHS Approach to Estimation with Sparsity Constraints

The investigation of the effects of sparsity or sparsity constraints in signal processing problems has received considerable attention recently. Sparsity constraints refer to the a priori information that the object or signal of interest can be represented by using only a few elements of a predefined dictionary. Within this thesis, sparsity refers to the fact that a vector to be estimated has only a few nonzero entries. One specific field concerned with sparsity constraints has become popular under the name Compressed Sensing (CS). Within CS, the sparsity is exploited in order to perform (nearly) lossless compression. Moreover, this compression is carried out jointly or simultaneously with the process of sensing a physical quantity. In contrast to CS, one can alternatively use sparsity to enhance signal processing methods. Obviously, sparsity constraints can only improve the obtainable estimation performance, since the constraints can be interpreted as additional prior information about the unknown parameter vector to be estimated. Our main focus will be on this aspect of sparsity, i.e., we analyze how much estimation performance can be gained from the sparsity constraints.

Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization

Hard Thresholding Pursuit (HTP) is an iterative greedy selection procedure for finding sparse solutions of underdetermined linear systems. This method has been shown to have strong theoretical guarantees and impressive numerical performance. In this paper, we generalize HTP from compressive sensing to a generic problem setup of sparsity-constrained convex optimization. The proposed algorithm iterates between a standard gradient descent step and a hard thresholding step with or without debiasing. We prove that our method enjoys strong guarantees analogous to those of HTP in terms of rate of convergence and parameter estimation accuracy. Numerical evidence shows that our method is superior to state-of-the-art greedy selection methods in sparse logistic regression and sparse precision matrix estimation tasks.
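
A minimal sketch of the iteration described above (gradient step, hard threshold, optional debiasing), instantiated here for least squares; the paper targets general convex losses such as logistic, and the step size, iteration count, and stopping rule below are placeholder choices of ours:

```python
import numpy as np

def ghtp(grad, x0, s, step=1.0, iters=100, debias=None):
    """Gradient hard thresholding iteration (sketch).
    grad: gradient of the smooth loss; s: target sparsity;
    debias: optional callable refitting the coefficients on a given support."""
    x = x0.copy()
    for _ in range(iters):
        z = x - step * grad(x)              # gradient descent step
        S = np.argsort(np.abs(z))[-s:]      # keep the s largest entries
        x = np.zeros_like(z)
        x[S] = z[S]
        if debias is not None:              # optional debiasing step
            x[S] = debias(S)
    return x

# Example: sparse least squares; debiasing is a least-squares refit on the support.
rng = np.random.default_rng(0)
m, n, s = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true

grad = lambda x: A.T @ (A @ x - y)
debias = lambda S: np.linalg.lstsq(A[:, S], y, rcond=None)[0]
x_hat = ghtp(grad, np.zeros(n), s, debias=debias)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```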

Compressive Measurement Designs for Estimating Structured Signals in Structured Clutter: A Bayesian Experimental Design Approach

This work considers an estimation task in compressive sensing, where the goal is to estimate an unknown signal from compressive measurements that are corrupted by additive pre-measurement noise (interference, or clutter) as well as post-measurement noise, in the specific setting where some (perhaps limited) prior knowledge on the signal, interference, and noise is available. The specific aim here is to devise a strategy for incorporating this prior information into the design of an appropriate compressive measurement strategy. Here, the prior information is interpreted as statistics of a prior distribution on the relevant quantities, and an approach based on Bayesian Experimental Design is proposed. Experimental results on synthetic data demonstrate that the proposed approach outperforms traditional random compressive measurement designs, which are agnostic to the prior information, as well as several other knowledge-enhanced sensing matrix designs based on more heuristic notions.

An Inexact Proximal Path-Following Algorithm for Constrained Convex Minimization

Many scientific and engineering applications feature large-scale non-smooth convex minimization problems over convex sets. In this paper, we address an important instance of this broad class where we assume that the non-smooth objective is equipped with a tractable proximity operator and that the convex constraints afford a self-concordant barrier. We provide a new joint treatment of proximal and self-concordant barrier concepts and illustrate that such problems can be efficiently solved without lifting problem dimensions. We propose an inexact path-following algorithmic framework and theoretically characterize the worst case convergence as well as computational complexity of this framework, and also analyze its behavior when the proximal subproblems are solved inexactly. To illustrate our framework, we apply its instances to both synthetic and real-world applications and illustrate their accuracy and scalability in large-scale settings. As an added bonus, we describe how our framework can obtain points on the Pareto frontier of regularized problems with self-concordant objectives in a tuning free fashion.

Quasi-Linear Compressed Sensing

Inspired by significant real-life applications, in particular, sparse phase retrieval and sparse pulsation frequency detection in Asteroseismology, we investigate a general framework for compressed sensing, where the measurements are quasi-linear. We formulate natural generalizations of the well-known Restricted Isometry Property (RIP) towards nonlinear measurements, which allow us to prove both unique identifiability of sparse signals as well as the convergence of recovery algorithms to compute them efficiently. We show that for certain randomized quasi-linear measurements, including Lipschitz perturbations of classical RIP matrices and phase retrieval from random projections, the proposed restricted isometry properties hold with high probability. We analyze a generalized Orthogonal Least Squares (OLS) under the assumption that magnitudes of signal entries to be recovered decay fast. Greed is good again, as we show that this algorithm performs efficiently in phase retrieval and asteroseismology. For situations where the decay assumption on the signal does not necessarily hold, we propose two alternative algorithms, which are natural generalizations of the well-known iterative hard and soft-thresholding. While these algorithms are rarely successful for the mentioned applications, we show their strong recovery guarantees for quasi-linear measurements which are Lipschitz perturbations of RIP matrices.

Detection of Correlations with Adaptive Sensing

The problem of detecting correlations from samples of a high-dimensional Gaussian vector has recently received a lot of attention. In most existing work, detection procedures are provided with a full sample. However, following common wisdom in experimental design, the experimenter may have the capacity to make targeted measurements in an on-line and adaptive manner. In this work, we investigate such adaptive sensing procedures for detecting positive correlations. It is shown that, using the same number of measurements, adaptive procedures are able to detect significantly weaker correlations than their non-adaptive counterparts. We also establish minimax lower bounds that show the limitations of any procedure.

Increasing Compression Ratio of Low Complexity Compressive Sensing Video Encoder with Application-Aware Configurable Mechanism

With the development of embedded video acquisition nodes and wireless video surveillance systems, traditional video coding methods can no longer meet the demands for low computational complexity and low power consumption. We therefore propose a low-complexity compressive sensing video encoder framework with an application-aware configurable mechanism, in which novel encoding methods based on the practical purposes of real applications effectively reduce the coding complexity and improve the compression ratio (CR). Moreover, the group-of-processing (GOP) size and the measurement matrix size can be configured on the encoder side according to the post-analysis requirements of the application, exemplified here by object tracking, to increase the CR of the encoder as much as possible. Simulations show that the proposed encoder framework can achieve a CR of 60X while keeping the tracking success rate (SR) above 90%.

Low-complexity Multiclass Encryption by Compressed Sensing, Part II: Known-Plaintext Attacks

Despite its intrinsic linearity, compressed sensing may be exploited to at least partially encrypt acquired signals against unintended receivers: in the companion paper we have shown that the simplicity of its encoding allows the definition of a general, lightweight scheme in which transmitters distribute the same information to receivers of different classes enabled to recover it with different quality levels. In this investigation we quantify the robustness of such a scheme with respect to known-plaintext attacks. The odds of such an attack are assessed by theoretical means, proving that the number of candidate encoding matrices matching a typical plaintext-ciphertext pair is astronomically large, thus making the search for the true encoding infeasible. These attacks are also simulated by applying compressed sensing to a variety of signals (speech, images and electrocardiographic traces), showing how this difficulty in extracting information on the true encoding matrix from a plaintext-ciphertext pair is reflected in the quality of the signals recovered by the attacker. The results clarify that, although not perfectly secure, CS grants a noteworthy level of security that may come at almost-zero cost and especially benefit resource-limited applications.

Mirror Prox Algorithm for Multi-Term Composite Minimization and Alternating Directions

In this paper, we develop a composite version of the Mirror Prox algorithm for solving convex-concave saddle point problems and monotone variational inequalities of special structure, allowing us to cover saddle point/variational analogies of what is usually called "composite minimization" (minimizing a sum of an easy-to-handle nonsmooth convex function and a general smooth convex function "as if" there were no nonsmooth component at all). We demonstrate that the composite Mirror Prox inherits the favourable (and unimprovable already in the large-scale bilinear saddle point case) $O(1/\epsilon)$ efficiency estimate of its prototype. We demonstrate that the proposed approach can be naturally applied to Lasso-type problems with several penalizing terms (e.g., $\ell_1$ and nuclear norm regularization acting together) and to problems of the structure considered in the alternating directions methods, implying in both cases methods with $O(\epsilon^{-1})$ complexity bounds.

Off-The-Grid Spectral Compressed Sensing With Prior Information

Recent research in off-the-grid compressed sensing (CS) has demonstrated that, under certain conditions, one can successfully recover a spectrally sparse signal from a few time-domain samples even though the dictionary is continuous. In this paper, we extend off-the-grid CS to applications where some prior information about the spectrally sparse signal is known. We specifically consider cases where a few contributing frequencies or poles, but not their amplitudes or phases, are known a priori. Our results show that equipping off-the-grid CS with the known-poles algorithm can increase the probability of recovering all the frequency components.

The Squared-Error of Generalized LASSO: A Precise Analysis

We consider the problem of estimating an unknown signal $x_0$ from noisy linear observations $y = Ax_0 + z\in R^m$. In many practical instances, $x_0$ has a certain structure that can be captured by a structure inducing convex function $f(\cdot)$. For example, $\ell_1$ norm can be used to encourage a sparse solution. To estimate $x_0$ with the aid of $f(\cdot)$, we consider the well-known LASSO method and provide sharp characterization of its performance. We assume the entries of the measurement matrix $A$ and the noise vector $z$ have zero-mean normal distributions with variances $1$ and $\sigma^2$ respectively. For the LASSO estimator $x^*$, we attempt to calculate the Normalized Square Error (NSE) defined as $\frac{\|x^*-x_0\|_2^2}{\sigma^2}$ as a function of the noise level $\sigma$, the number of observations $m$ and the structure of the signal. We show that the structure of the signal $x_0$ and choice of the function $f(\cdot)$ enter the error formulae through the summary parameters $D(cone)$ and $D(\lambda)$, which are defined as the Gaussian squared-distances to the subdifferential cone and to the $\lambda$-scaled subdifferential, respectively. The first LASSO estimator assumes a priori knowledge of $f(x_0)$ and is given by $\arg\min_{x}\{{\|y-Ax\|_2}~\text{subject to}~f(x)\leq f(x_0)\}$. We prove that its worst case NSE is achieved when $\sigma\rightarrow 0$ and concentrates around $\frac{D(cone)}{m-D(cone)}$. Secondly, we consider $\arg\min_{x}\{\|y-Ax\|_2+\lambda f(x)\}$, for some $\lambda\geq 0$. This time the NSE formula depends on the choice of $\lambda$ and is given by $\frac{D(\lambda)}{m-D(\lambda)}$. We then establish a mapping between this and the third estimator $\arg\min_{x}\{\frac{1}{2}\|y-Ax\|_2^2+ \lambda f(x)\}$. Finally, for a number of important structured signal classes, we translate our abstract formulae to closed-form upper bounds on the NSE.
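
For $f(\cdot)$ equal to the $\ell_1$ norm, the $\lambda$-scaled subdifferential at $x_0$ is explicit ($\lambda\,\mathrm{sgn}(x_{0,i})$ on the support, the interval $[-\lambda,\lambda]$ off it), so $D(\lambda)$ can be estimated by Monte Carlo and plugged into the formula $D(\lambda)/(m-D(\lambda))$. A sketch, with sizes, $\lambda$, and $m$ chosen arbitrarily by us:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, lam, m, trials = 1000, 50, 3.0, 750, 200

x0 = np.zeros(n)
supp = rng.choice(n, s, replace=False)
x0[supp] = rng.standard_normal(s)
off = np.setdiff1d(np.arange(n), supp)

# D(lambda) = E dist(g, lambda * subdifferential of ||.||_1 at x0)^2, g ~ N(0, I).
d2 = 0.0
for _ in range(trials):
    g = rng.standard_normal(n)
    d2 += np.sum((g[supp] - lam * np.sign(x0[supp])) ** 2)    # support: a single point
    d2 += np.sum(np.maximum(np.abs(g[off]) - lam, 0.0) ** 2)  # off-support: [-lam, lam]
D_lam = d2 / trials

print("estimated D(lambda):", D_lam)
print("predicted NSE D/(m - D):", D_lam / (m - D_lam))
```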

Compression Limits for Random Vectors with Linearly Parameterized Second-Order Statistics

The class of complex random vectors whose covariance matrix is linearly parameterized by a basis of Hermitian Toeplitz (HT) matrices is considered, and the maximum compression ratios that preserve all second-order information are derived: the statistics of the uncompressed vector must be recoverable from a set of linearly compressed observations. This kind of vector typically arises when sampling wide-sense stationary random processes and features a number of applications in signal and array processing. 
Explicit guidelines to design optimal and nearly optimal schemes operating both in a periodic and non-periodic fashion are provided by considering two of the most common linear compression schemes: non-uniform sampling and random sampling. It is seen that the maximum compression ratios depend on the structure of the HT subspace where the covariance matrix of the uncompressed observations is known to be contained. Compression patterns attaining these maximum ratios are found for the case without structure as well as for the cases with circulant or banded structure.

Compressed matched filter for non-Gaussian noise

We consider estimation of a deterministic unknown parameter vector in a linear model with non-Gaussian noise. In the Gaussian case, dimensionality reduction via a linear matched filter provides a simple low dimensional sufficient statistic which can be easily communicated and/or stored for future inference. Such a statistic is usually unknown in the general non-Gaussian case. Instead, we propose a hybrid matched filter coupled with a randomized compressed sensing procedure, which together create a low dimensional statistic. We also derive a complementary algorithm for robust reconstruction given this statistic. Our recovery method is based on the fast iterative shrinkage and thresholding algorithm which is used for outlier rejection given the compressed data. We demonstrate the advantages of the proposed framework using synthetic simulations.


Robust Compressed Sensing and Sparse Coding with the Difference Map

In compressed sensing, we wish to reconstruct a sparse signal $x$ from observed data $y$. In sparse coding, on the other hand, we wish to find a representation of an observed signal $y$ as a sparse linear combination, with coefficients $x$, of elements from an overcomplete dictionary. While many algorithms are competitive at both problems when $x$ is very sparse, it can be challenging to recover $x$ when it is less sparse. We present the Difference Map, which excels at sparse recovery when sparseness is lower and noise is higher. The Difference Map outperforms the state of the art in reconstruction from random measurements and in natural image reconstruction via sparse coding.
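
A sketch of the difference map iteration in Elser's general form, with $\beta=1$ and the two constraint projections specialized by us to $s$-sparsity and measurement consistency (the paper's tuning and read-out may differ, and harder regimes may need more iterations or restarts):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 200, 100, 30                        # deliberately not-so-sparse regime
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true
Apinv = np.linalg.pinv(A)

def P_sparse(x, s):                           # projection onto s-sparse vectors
    z = np.zeros_like(x)
    S = np.argsort(np.abs(x))[-s:]
    z[S] = x[S]
    return z

def P_data(x):                                # projection onto {x : Ax = y}
    return x + Apinv @ (y - A @ x)

# Difference map with beta = 1: x <- x + P_sparse(2*P_data(x) - x) - P_data(x).
x = np.zeros(n)
for _ in range(500):
    pd = P_data(x)
    x = x + P_sparse(2 * pd - x, s) - pd

x_hat = P_sparse(2 * P_data(x) - x, s)        # solution read-out
print("relative error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```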

Reconstruction algorithm in compressed sensing based on maximum a posteriori estimation

We propose a systematic method for constructing a sparse data reconstruction algorithm in compressed sensing at relatively low computational cost for a general observation matrix. It is known that the cost of l1-norm minimization using a standard linear programming algorithm is O(N^3). We show that this cost can be reduced to O(N^2) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, as evaluated from a theoretical argument. We also discuss the relation between the belief propagation-based reconstruction algorithm introduced in preceding works and our approach.

Subspace Thresholding Pursuit: A Reconstruction Algorithm for Compressed Sensing

We propose a new iterative greedy algorithm for reconstruction of sparse signals with or without noisy perturbations in compressed sensing (CS). The proposed algorithm, called subspace thresholding pursuit (STP) in this paper, is a simple combination of subspace pursuit and iterative hard thresholding, two important greedy reconstruction algorithms. In terms of theoretical guarantees, we show that STP with parameter $\mu=1$ can reconstruct arbitrary signals with bounded mean square error under sparsity defect and measurement error if the measurement matrix satisfies the restricted isometry property with $\delta_{3s}<0.5340$. In particular, under the same condition, it can reconstruct any $s$-sparse signal perfectly in a noiseless environment. In terms of empirical performance, STP with proper $\mu$ can greatly outperform other state-of-the-art reconstruction algorithms when reconstructing Gaussian signals. Furthermore, when reconstructing constant amplitude signals with random signs (CARS signals), unlike other known iterative greedy algorithms, which usually perform significantly worse than the well-known $\ell_1$ minimization, the proposed STP algorithm with proper $\mu$ can outperform $\ell_1$ minimization in most practical cases. In addition, we propose a simple but effective method to solve the overfitting problem when the undersampling ratio is large. Finally, we generalize the idea of STP to other state-of-the-art algorithms; the modified algorithms have better empirical performance than the original ones.

Reconstruction of Complex-Valued Fractional Brownian Motion Fields Based on Compressive Sampling and Its Application to PSF Interpolation in Weak Lensing Survey

A new reconstruction method for complex-valued fractional Brownian motion (CV-fBm) fields based on Compressive Sampling (CS) is proposed. The decay of the Fourier coefficient magnitudes of fBm signals/fields indicates that fBms are compressible; therefore, a small number of samples is sufficient for a CS-based method to reconstruct the full field. The effectiveness of the proposed method is shown by simulating, randomly sampling, and reconstructing CV-fBm fields. Performance evaluation shows advantages of the proposed method over boxcar filtering and thin plate methods. It is also found that the reconstruction performance depends on both the fBm's Hurst parameter and the number of samples, which is consistent with CS reconstruction theory. In contrast to other fBm or fractal interpolation methods, the proposed CS-based method does not require knowledge of the fractal parameters in the reconstruction process; the inherent sparsity alone is sufficient for the CS reconstruction. Potential applicability of the proposed method in weak gravitational lensing surveys, particularly for interpolating a non-smooth PSF (Point Spread Function) distribution representing distortion by a turbulent field, is also discussed.

$L_{1/2}$ Regularization: Convergence of Iterative Half Thresholding Algorithm

In recent studies on sparse modeling, nonconvex regularization approaches (particularly, $L_{q}$ regularization with $q\in(0,1)$) have been demonstrated to offer substantial benefits in inducing sparsity and in efficiency. Compared with convex regularization approaches (say, $L_{1}$ regularization), however, the convergence of the corresponding algorithms is more difficult to establish. In this paper, we address this difficult issue for a specific but typical nonconvex regularization scheme, $L_{1/2}$ regularization, which has been successfully applied in many applications. More specifically, we study the convergence of the iterative \textit{half} thresholding algorithm (the \textit{half} algorithm for short), one of the most efficient and important algorithms for solving the $L_{1/2}$ regularization problem. As the main result, we show that under certain conditions the \textit{half} algorithm converges to a local minimizer of the $L_{1/2}$ regularization problem, with an eventually linear convergence rate. The established result provides a theoretical guarantee for a wide range of applications of the \textit{half} algorithm. We also provide a set of simulations to support the correctness of the theoretical assertions and compare the time efficiency of the \textit{half} algorithm with other known typical algorithms for $L_{1/2}$ regularization, such as the iteratively reweighted least squares (IRLS) algorithm and the iteratively reweighted $l_{1}$ minimization (IRL1) algorithm.
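
For reference, the outer iteration of the half algorithm has the standard thresholding-representation form, written here for the linear model $y = Ax$:

$$x^{k+1}=H_{\lambda\mu,\,1/2}\big(x^{k}+\mu A^{T}(y-Ax^{k})\big),$$

where $\mu$ is a step size and $H_{\lambda\mu,1/2}$ applies the closed-form scalar half-threshold function of Xu et al. componentwise, zeroing every entry whose magnitude falls below a threshold proportional to $(\lambda\mu)^{2/3}$ (the exact scalar formula, which involves cosines, is omitted here).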

Nearly Optimal Sample Size in Hypothesis Testing for High-Dimensional Regression

We consider the problem of fitting the parameters of a high-dimensional linear regression model. In the regime where the number of parameters $p$ is comparable to or exceeds the sample size $n$, a successful approach uses an $\ell_1$-penalized least squares estimator, known as Lasso. Unfortunately, unlike for linear estimators (e.g., ordinary least squares), no well-established method exists to compute confidence intervals or p-values on the basis of the Lasso estimator. Very recently, a line of work \cite{javanmard2013hypothesis, confidenceJM, GBR-hypothesis} has addressed this problem by constructing a debiased version of the Lasso estimator. In this paper, we study this approach for the random design model, under the assumption that a good estimator exists for the precision matrix of the design. Our analysis improves over the state of the art in that it establishes nearly optimal \emph{average} testing power if the sample size $n$ asymptotically dominates $s_0 (\log p)^2$, with $s_0$ being the sparsity level (number of non-zero coefficients). Earlier work obtains provable guarantees only for much larger sample size, namely it requires $n$ to asymptotically dominate $(s_0 \log p)^2$. 
In particular, for random designs with a sparse precision matrix we show that an estimator thereof having the required properties can be computed efficiently. Finally, we evaluate this approach on synthetic data and compare it with earlier proposals.
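
For reference, the debiasing construction in the cited line of work takes the form

$$\hat{\theta}^{d}=\hat{\theta}^{\mathrm{Lasso}}+\frac{1}{n}\,M X^{\top}\big(y-X\hat{\theta}^{\mathrm{Lasso}}\big),$$

where $X\in\mathbb{R}^{n\times p}$ is the design matrix and $M$ is the estimator of the precision matrix of the design; the coordinates of $\hat{\theta}^{d}$ are asymptotically Gaussian, which is what makes p-values and confidence intervals tractable.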

Deterministic Sequences for Compressive MIMO Channel Estimation

This paper considers the problem of pilot design for compressive multiple-input multiple-output (MIMO) channel estimation. In particular, we are interested in estimating the channels for multiple transmitters simultaneously when the pilot sequences are shorter than the combined channels. Existing works on this topic demonstrated that tools from compressed sensing theory can yield accurate multichannel estimation provided that each pilot sequence is randomly generated. Here, we propose constructing the pilot sequence for each transmitter from a small set of deterministic sequences. We derive a theoretical lower bound on the length of the pilot sequences that guarantees the multichannel estimation with high probability. Simulation results are provided to demonstrate the performance of the proposed method.

Multi-target Radar Detection within a Sparsity Framework

Traditional radar detection schemes are typically studied for single-target scenarios and can be non-optimal when there are multiple targets in the scene. In this paper, we develop a framework, based on the Neyman-Pearson criterion, for multi-target detection schemes that use sparse reconstruction techniques. We describe an initial result in this framework concerning the false alarm probability when LASSO is the sparse reconstruction technique, and then discuss several simulations validating this result. Finally, we describe several research avenues for pursuing this framework further.

Robust Sparse Signal Recovery for Compressed Sensing with Sampling and Dictionary Uncertainties

Compressed sensing (CS) shows that a signal having a sparse or compressible representation can be recovered from a small set of linear measurements. In classical CS theory, the sampling matrix and dictionary are assumed to be known exactly in advance. However, uncertainties exist due to sampling distortion, the finite grids of the dictionary's parameter space, etc. In this paper, we adopt a generalized sparse signal model that simultaneously accounts for sampling and dictionary uncertainties. Based on the new signal model, a new optimization model for robust sparse signal recovery is proposed, which can be deduced by stochastic robust approximation analysis. Both a convex relaxation and a greedy algorithm are used to solve the optimization problem. For the convex relaxation method, a sufficient condition for recovery and the uniqueness of the solution are given; the greedy sparse algorithm is realized by introducing a pre-processing of the sensing matrix and the measurements. In numerical experiments, results on both simulated data and real-life ECG data show that the proposed method performs better than current methods.


Simultaneous Greedy Analysis Pursuit for Compressive Sensing of Multi-Channel ECG Signals

This paper addresses compressive sensing for multi-channel ECG. Compared to the traditional sparse signal recovery approach, which decomposes the signal into the product of a dictionary and a sparse vector, the recently developed cosparse approach exploits sparsity of the product of an analysis matrix and the original signal. We apply the cosparse Greedy Analysis Pursuit (GAP) algorithm to compressive sensing of ECG signals. Moreover, to reduce processing time, the classical single-channel GAP is generalized to a multi-channel GAP algorithm, which simultaneously reconstructs multiple signals with similar support. Numerical experiments show that the proposed method outperforms the classical sparse multi-channel greedy algorithms in terms of accuracy and the single-channel cosparse approach in terms of processing speed.

Distribution of Compressive Measurements Generated by Structurally Random Matrices

Structurally random matrices (SRMs) have been proposed as a practical alternative to fully random matrices (FRMs) for generating compressive sensing measurements. If the compressive measurements are transmitted over a communication channel, they need to be efficiently quantized and coded, and hence knowledge of the measurements' statistics is required. In this paper we study the statistical distribution of compressive measurements generated by various types of SRMs (and FRMs), give conditions for asymptotic normality, and point out the implications for the measurements' quantization and coding. Simulations on real-world video signals confirm the theoretical findings and show that the signal randomization of SRMs yields a dramatic improvement in quantization properties.
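
A minimal sketch of an SRM of the common global-randomizer type, $\Phi = \sqrt{N/M}\, D\,F\,R$ with $R$ a random sign flip, $F$ a fast orthonormal transform, and $D$ a random subsampler (the paper studies several SRM variants; this is just one of them, with a DCT as the fast transform):

```python
import numpy as np
from scipy.fft import dct

def srm_measure(x, m, rng):
    """SRM measurements: random sign flip (R), fast transform (F), subsample (D)."""
    n = x.size
    signs = rng.choice([-1.0, 1.0], size=n)       # R: diagonal of random +-1
    z = dct(signs * x, norm="ortho")              # F: fast orthonormal transform
    rows = rng.choice(n, size=m, replace=False)   # D: uniform random subsampling
    return np.sqrt(n / m) * z[rows]

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)                     # in practice, a signal or video frame
y = srm_measure(x, 128, rng)
# The pre-randomization by R is what pushes the measurement distribution toward
# Gaussian even for structured inputs, the asymptotic-normality effect studied above.
print(y[:5])
```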

Adaptive Hierarchical Data Aggregation using Compressive Sensing (A-HDACS) for Non-smooth Data Field

Compressive Sensing (CS) has been applied successfully in a wide variety of applications in recent years, including photography, shortwave infrared cameras, optical system research, facial recognition, MRI, etc. In wireless sensor networks (WSNs), significant research has investigated the use of CS to reduce the amount of data communicated, particularly in data aggregation applications, thereby improving energy efficiency. However, most of the previous work in WSNs has used CS under the assumption that the data field is smooth with negligible white Gaussian noise. In these schemes, signal sparsity is estimated globally based on the entire data field, which is then used to determine the CS parameters. In more realistic scenarios, where the data field may have regional fluctuations or be piecewise smooth, existing CS-based data aggregation schemes yield poor compression efficiency. In order to take full advantage of CS in WSNs, we propose an Adaptive Hierarchical Data Aggregation using Compressive Sensing (A-HDACS) scheme. The proposed scheme dynamically chooses sparsity values based on signal variations in local regions. We prove that A-HDACS enables more sensor nodes to employ CS compared to schemes that do not adapt to the changing field. The simulation results also demonstrate the improvement in energy efficiency as well as accurate signal recovery.

Minimum $n$-Rank Approximation via Iterative Hard Thresholding

The problem of recovering a low $n$-rank tensor is an extension of the sparse recovery problem from the low-dimensional space (matrix space) to the high-dimensional space (tensor space) and has many applications in computer vision and graphics, such as image inpainting and video inpainting. In this paper, we consider a new tensor recovery model, named minimum $n$-rank approximation (MnRA), and propose an iterative hard thresholding algorithm that takes an upper bound on the $n$-rank as input. In particular, we prove that the iterative sequence generated by the iterative hard thresholding algorithm is globally linearly convergent with rate 1/2 under some conditions. Additionally, combining an effective heuristic for determining the $n$-rank, we can also apply the proposed algorithm to solve MnRA when the $n$-rank is unknown in advance. Some preliminary numerical results on randomly generated and real low-$n$-rank tensor completion problems are reported, which show the efficiency of the proposed algorithms.
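
The paper's MnRA iteration comes with a specific step-size rule and convergence analysis; below is only a generic sketch of its mode-wise hard thresholding ingredient on a toy completion problem, with sizes and ranks chosen arbitrarily by us:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode-n fibers become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold."""
    rest = [dim for i, dim in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def nrank_threshold(T, ranks):
    """One sweep of mode-wise hard thresholding: truncate each unfolding to rank r_n."""
    for mode, r in enumerate(ranks):
        U, sv, Vt = np.linalg.svd(unfold(T, mode), full_matrices=False)
        T = fold((U[:, :r] * sv[:r]) @ Vt[:r], mode, T.shape)
    return T

# Toy completion: observe half the entries of a low-n-rank (Tucker) tensor and
# iterate "gradient step on observed entries, then mode-wise hard thresholding".
rng = np.random.default_rng(0)
shape, ranks = (10, 10, 10), (2, 2, 2)
G = rng.standard_normal(ranks)
Us = [rng.standard_normal((dim, r)) for dim, r in zip(shape, ranks)]
T_true = np.einsum("abc,ia,jb,kc->ijk", G, *Us)
mask = rng.random(shape) < 0.5

X = np.zeros(shape)
for _ in range(200):
    X = nrank_threshold(X + mask * (T_true - X), ranks)
print("relative error:", np.linalg.norm(X - T_true) / np.linalg.norm(T_true))
```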

Compressed Sensing for Energy-Efficient Wireless Telemonitoring: Challenges and Opportunities

As a lossy compression framework, compressed sensing has drawn much attention in wireless telemonitoring of biosignals due to its ability to reduce energy consumption and make possible the design of low-power devices. However, the non-sparseness of biosignals presents a major challenge to compressed sensing. This study proposes and evaluates a spatio-temporal sparse Bayesian learning algorithm, which has the desired ability to recover such non-sparse biosignals. It exploits both temporal correlation in each individual biosignal and inter-channel correlation among biosignals from different channels. The proposed algorithm was used for compressed sensing of multichannel electroencephalographic (EEG) signals for estimating vehicle drivers' drowsiness. Results showed that the drowsiness estimation was almost unaffected even if raw EEG signals (containing various artifacts) were compressed by 90%.

Determination of Multipath Security Using Efficient Pattern Matching

Multipath routing is the use of multiple potential paths through a network in order to enhance fault tolerance, optimize bandwidth use, and improve security. Selecting data flow paths based on cost addresses performance issues but ignores security threats. Attackers can disrupt the data flows by attacking the links along the paths. Denial-of-service, remote exploitation, and other such attacks launched on any single link can severely limit throughput. Networks can be secured using a secure quality of service approach in which a sender disperses data along multiple secure paths. In this secure multi-path approach, a portion of the data from the sender is transmitted over each path and the receiver assembles the data fragments that arrive. One of the largest challenges in secure multipath routing is determining the security threat level along each path and providing a commensurate level of encryption along that path. The research presented explores the effects of real-world attack scenarios in systems, and gauges the threat levels along each path. Optimal sampling and compression of network data is provided via compressed sensing. The probability of the presence of specific attack signatures along a network path is determined using machine learning techniques. Using these probabilities, information assurance levels are derived such that security measures along vulnerable paths are increased.

Joint Power and Admission Control: Non-Convex Approximation and An Efficient Polynomial Time Deflation Approach

In an interference limited network, joint power and admission control (JPAC) aims at supporting a maximum number of links at their specified signal to interference plus noise ratio (SINR) targets while using a minimum total transmission power. Various convex approximation deflation approaches have been developed for the JPAC problem. In this paper, we propose an efficient polynomial time non-convex approximation deflation approach for solving the problem. The approach is based on the non-convex $\ell_q$-minimization approximation of an equivalent sparse $\ell_0$-minimization reformulation of the JPAC problem where $q\in(0,1).$ We show that, for any instance of the JPAC problem, there exists a $\bar q\in(0,1)$ such that it can be exactly solved by solving its $\ell_q$-minimization approximation problem with any $q\in(0, \bar q]$. Further, we propose a potential reduction interior-point algorithm, which can return an $\epsilon$-KKT solution of the NP-hard $\ell_q$-minimization approximation problem in polynomial time. The returned solution can be used to check the simultaneous supportability of all links in the network and to guide an iterative link removal procedure, resulting in the polynomial time non-convex approximation deflation approach for the JPAC problem. Numerical simulations show that the proposed approach outperforms the existing convex approximation approaches in terms of the number of supported links and the total transmission power, and exhibits particularly good performance in selecting which subset of links to support.


Non-Convex Compressed Sensing Using Partial Support Information

In this paper we address the recovery conditions of weighted $\ell_p$ minimization for signal reconstruction from compressed sensing measurements when partial support information is available. We show that weighted $\ell_p$ minimization with $0<p<1$ …

Fast compressed sensing analysis for super-resolution imaging using L1-homotopy

Hazen P. Babcock, Jeffrey R. Moffitt, Yunlong Cao, and Xiaowei Zhuang


In super-resolution imaging techniques based on single-molecule switching and localization, the time to acquire a super-resolution image is limited by the maximum density of fluorescent emitters that can be accurately localized per imaging frame. In order to increase the imaging rate, several methods have been recently developed to analyze images with higher emitter densities. One powerful approach uses methods based on compressed sensing to increase the analyzable emitter density per imaging frame by several-fold compared to other reported approaches. However, the computational cost of this approach, which uses interior point methods, is high, and analysis of a typical 40 µm x 40 µm field-of-view super-resolution movie requires thousands of hours on a high-end desktop personal computer. Here, we demonstrate an alternative compressed-sensing algorithm, L1-Homotopy (L1H), which can generate super-resolution image reconstructions that are essentially identical to those derived using interior point methods in one to two orders of magnitude less time depending on the emitter density. Moreover, for an experimental data set with varying emitter density, L1H analysis is ~300-fold faster than interior point methods. This drastic reduction in computational time should allow the compressed sensing approach to be routinely applied to super-resolution image analysis.




Compressed sensing with unknown sensor permutation

Valentin Emiya, Antoine Bonnefoy, Rémi Gribonval, Laurent Daudet


Abstract: Compressed sensing is the ability to retrieve a sparse vector from a set of linear measurements. The task gets more difficult when the sensing process is not perfectly known. We address such a problem in the case where the sensors have been permuted, i.e., the order of the measurements is unknown. We propose a branch-and-bound algorithm that converges to the solution. The experimental study shows that our approach always retrieves the unknown permutation, while a simple convex relaxation strategy almost always fails. In terms of time complexity, we show that the proposed algorithm converges quickly relative to the combinatorial nature of the problem.





Abstract: We experiment with a BRDF acquisition method from a single picture, knowing geometry and illumination. To tackle such a severely underconstrained problem, we express the BRDF in a high dimensional basis, and perform the reconstruction using compressive sensing, looking for the sparsest solution to the linear problem of fitting the measurement image.

Learning Pairwise Graphical Models with Nonlinear Sufficient Statistics

We investigate a generic problem of learning pairwise exponential family graphical models with pairwise sufficient statistics defined by a global mapping function, e.g., Mercer kernels. This subclass of pairwise graphical models allows us to flexibly capture complex interactions among variables beyond the pairwise product. We propose two $\ell_1$-norm penalized maximum likelihood estimators to learn the model parameters from i.i.d. samples. The first is a joint estimator, which estimates all the parameters simultaneously. The second is a node-wise conditional estimator, which estimates the parameters individually for each node. For both estimators, we show that under proper conditions the extra flexibility gained in our model comes at almost no cost in statistical and computational efficiency. We demonstrate the advantages of our model over state-of-the-art methods on synthetic and real datasets.

RandomBoost: Simplified Multi-class Boosting through Randomization

We propose a novel boosting approach to multi-class classification problems, in which multiple classes are, in essence, distinguished by a set of random projection matrices. The approach uses random projections to alleviate the proliferation of binary classifiers typically required to perform multi-class classification. The result is a multi-class classifier with a single vector-valued parameter, irrespective of the number of classes involved. Two variants of this approach are proposed. The first method randomly projects the original data into new spaces, while the second method randomly projects the outputs of learned weak classifiers. These methods are not only conceptually simple but also effective and easy to implement. A series of experiments on synthetic, machine learning and visual recognition data sets demonstrate that our proposed methods compare favorably to existing multi-class boosting algorithms in terms of both the convergence rate and classification accuracy.


Image Credit: NASA/JPL/Space Science Institute, W00085133.jpg was taken on November 23, 2013 and received on Earth November 24, 2013. The camera was pointing toward PROMETHEUS, and the image was taken using the MT2 and CL2 filters. 

Join the CompressiveSensing subreddit or the Google+ Community and post there!
