Tuesday, February 14, 2012

Compressive Sensing This Week.

Today we have some good news for sensor networks, two papers on analysis-type approaches, manifold signal processing, compressive binary search, a search for the sparse elements of the nullspace, and more. Enjoy!


Performance Analysis of $\ell_1$-synthesis with Coherent Frames by Yulong Liu, Shidong Li, Tiebin Mi. The abstract reads:
Signals with sparse representations in frames comprise a much more realistic model of nature; it is therefore highly desirable to extend the compressed sensing methodology to redundant dictionaries (or frames) as opposed to orthonormal bases only. In the generalized setting, the standard approach to recover the signal is known as $\ell_1$-synthesis (or Basis Pursuit). In this paper, we present a performance analysis of this approach in which the dictionary may be highly - and even perfectly - correlated. Our results do not depend on an accurate recovery of the coefficients. We demonstrate the validity of the results via several experiments.
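
For readers who want to experiment, here is a minimal sketch of the $\ell_1$-synthesis approach under illustrative assumptions (a random Gaussian frame and sensing matrix, and a generic LP solver standing in for Basis Pursuit). It is not the authors' code, but it shows the synthesis step and the fact that the quantity of interest is the signal error rather than the coefficient error:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d, m, k = 64, 128, 32, 4                     # signal dim, frame size, measurements, sparsity (all assumed)

D = rng.standard_normal((n, d)) / np.sqrt(n)    # redundant (coherent) dictionary
z = np.zeros(d)
z[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
f = D @ z                                       # signal with a sparse frame representation
A = rng.standard_normal((m, n)) / np.sqrt(m)    # compressive measurements
y = A @ f

# Basis Pursuit on the frame coefficients: min ||z||_1  s.t.  A D z = y,
# written as a linear program via the split z = u - v with u, v >= 0.
M = A @ D
c = np.ones(2 * d)
res = linprog(c, A_eq=np.hstack([M, -M]), b_eq=y, bounds=(0, None), method="highs")
z_hat = res.x[:d] - res.x[d:]
f_hat = D @ z_hat                               # synthesis step: back to signal space

# The guarantee of interest is on the signal, not on the coefficients.
print("relative signal error:", np.linalg.norm(f - f_hat) / np.linalg.norm(f))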


Note on RIP-based Co-sparse Analysis by Lianlin Li. The abstract reads:
Over the past years there has been increasing interest in recovering signals from undersampled data when such signals are sparse under some orthogonal dictionary or tight frame, which is referred to as the sparse synthesis model. More recently, its counterpart, i.e., the sparse analysis model, has also attracted researchers' attention, since many practical signals of interest are sparse in a truly redundant dictionary. This short paper presents an important complement to the results in the existing literature on the sparse analysis model. Firstly, we give a natural generalization of the well-known restricted isometry property (RIP) to deal with the sparse analysis model, where a truly arbitrary incoherent dictionary is considered. Secondly, we study the theoretical guarantee for accurate recovery of signals that are sparse in general redundant dictionaries through solving an l1-norm sparsity-promoting optimization problem. This work shows not only that compressed sensing is viable in the context of sparse analysis, but also that accurate recovery is possible by solving an l1-minimization problem.
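
To make the analysis formulation concrete, here is a small sketch of l1-analysis recovery with a finite-difference analysis operator; the operator choice and all sizes are assumptions made only for the demo, not taken from the paper:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m = 80, 40                                   # assumed sizes
Omega = (np.eye(n) - np.eye(n, k=1))[:-1]       # 1-D finite-difference analysis operator

f = np.repeat(rng.standard_normal(4), n // 4)   # piecewise-constant signal: Omega f is sparse
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ f

# l1-analysis: min ||Omega f||_1  s.t.  A f = y, as an LP with slack variables t >= |Omega f|.
p = Omega.shape[0]
c = np.concatenate([np.zeros(n), np.ones(p)])
A_ub = np.block([[Omega, -np.eye(p)], [-Omega, -np.eye(p)]])
b_ub = np.zeros(2 * p)
A_eq = np.hstack([A, np.zeros((m, p))])
bounds = [(None, None)] * n + [(0, None)] * p
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
f_hat = res.x[:n]
print("relative error:", np.linalg.norm(f - f_hat) / np.linalg.norm(f))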

Signal Recovery on Incoherent Manifolds by Chinmay Hegde, Richard G. Baraniuk. The abstract reads:
Suppose that we observe noisy linear measurements of an unknown signal that can be modeled as the sum of two component signals, each of which arises from a nonlinear sub-manifold of a high dimensional ambient space. We introduce SPIN, a first order projected gradient method to recover the signal components. Despite the nonconvex nature of the recovery problem and the possibility of underdetermined measurements, SPIN provably recovers the signal components, provided that the signal manifolds are incoherent and that the measurement operator satisfies a certain restricted isometry property. SPIN significantly extends the scope of current recovery models and algorithms for low dimensional linear inverse problems and matches (or exceeds) the current state of the art in terms of performance.
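
For intuition, here is a hedged, generic sketch of the kind of first-order projected-gradient iteration described in the abstract. It is not the authors' SPIN code: as a stand-in for the two manifold projections it uses k-sparse signals in the canonical and DCT bases, and all parameters are illustrative:

import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(2)
n, m, k = 128, 70, 5                            # assumed sizes and per-component sparsity

def proj_sparse(x, k):                          # keep the k largest entries (a union-of-subspaces "model")
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def proj_dct_sparse(x, k):                      # k-sparse in the DCT domain
    return idct(proj_sparse(dct(x, norm="ortho"), k), norm="ortho")

a = proj_sparse(rng.standard_normal(n), k)      # component living on model 1
b = proj_dct_sparse(rng.standard_normal(n), k)  # component living on model 2 (incoherent with model 1)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ (a + b)                               # underdetermined measurements of the sum

a_hat, b_hat = np.zeros(n), np.zeros(n)
step = 1.0 / np.linalg.norm(Phi, 2) ** 2        # conservative gradient step
for _ in range(300):                            # projected-gradient iterations on both components
    g = Phi.T @ (y - Phi @ (a_hat + b_hat))
    a_hat = proj_sparse(a_hat + step * g, k)
    b_hat = proj_dct_sparse(b_hat + step * g, k)

print("relative component errors:",
      np.linalg.norm(a - a_hat) / np.linalg.norm(a),
      np.linalg.norm(b - b_hat) / np.linalg.norm(b))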

Compressive binary search by Mark A. Davenport, Ery Arias-Castro. The abstract reads:
In this paper we consider the problem of locating a nonzero entry in a high-dimensional vector from possibly adaptive linear measurements. We consider a recursive bisection method which we dub the compressive binary search and show that it improves on what any nonadaptive method can achieve. We establish a non-asymptotic bound that applies to all methods, regardless of their computational complexity. Combined, these results show that the compressive binary search is within a double logarithmic factor of the optimal performance.
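
A toy version of the bisection idea fits in a few lines; the noise level, the known amplitude of the nonzero entry, and the decision threshold below are assumptions made only for this demo:

import numpy as np

rng = np.random.default_rng(3)
n = 1024                                        # ambient dimension (assumed)
mu, sigma = 1.0, 0.005                          # known spike amplitude and noise level (assumed)

x = np.zeros(n)
true_idx = int(rng.integers(n))
x[true_idx] = mu                                # a single nonzero entry

lo, hi = 0, n                                   # current search interval [lo, hi)
while hi - lo > 1:
    mid = (lo + hi) // 2
    a = np.zeros(n)
    a[lo:mid] = 1.0 / np.sqrt(mid - lo)         # unit-norm indicator of the left half
    y = a @ x + sigma * rng.standard_normal()   # one adaptive, noisy linear measurement
    if abs(y) > 0.5 * mu / np.sqrt(mid - lo):   # compare to half the expected contribution
        hi = mid                                # evidence: the nonzero lies in the left half
    else:
        lo = mid

print("true index:", true_idx, " estimate:", lo)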

Achievable Angles Between two Compressed Sparse Vectors Under Norm/Distance Constraints Imposed by the Restricted Isometry Property: A Plane Geometry Approach by Ling-Hua Chang, Jwo-Yuh Wu. The abstract reads:
The angle between two compressed sparse vectors subject to the norm/distance constraints imposed by the restricted isometry property (RIP) of the sensing matrix plays a crucial role in the studies of many compressive sensing (CS) problems. Assuming that (i) u and v are two sparse vectors separated by an angle theta, and (ii) the sensing matrix Phi satisfies RIP, this paper is aimed at analytically characterizing the achievable angles between Phi*u and Phi*v. Motivated by geometric interpretations of RIP and with the aid of the well-known law of cosines, we propose a plane geometry based formulation for the study of the considered problem. It is shown that all the RIP-induced norm/distance constraints on Phi*u and Phi*v can be jointly depicted via a simple geometric diagram in the two-dimensional plane. This allows for a joint analysis of all the considered algebraic constraints from a geometric perspective. By conducting plane geometry analyses based on the constructed diagram, closed-form formulae for the maximal and minimal achievable angles are derived. Computer simulations confirm that the proposed solution is tighter than an existing algebraic-based estimate derived using the polarization identity. The obtained results are used to derive a tighter restricted isometry constant of structured sensing matrices of a certain kind, to wit, those in the form of a product of an orthogonal projection matrix and a random sensing matrix. Follow-up applications to three CS problems, namely, compressed-domain interference cancellation, RIP-based analysis of the orthogonal matching pursuit algorithm, and the study of the democratic nature of random sensing matrices, are investigated.
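
As a quick empirical sanity check (not the paper's plane-geometry derivation), one can compare the angle between two sparse vectors with the angle between their compressed images under a random Gaussian sensing matrix; the sizes and sparsity below are arbitrary assumptions:

import numpy as np

rng = np.random.default_rng(4)
n, m, k = 256, 80, 6                            # assumed sizes and sparsity

def sparse_vec():
    v = np.zeros(n)
    v[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    return v

def angle_deg(a, b):
    c = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

u, v = sparse_vec(), sparse_vec()
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
print("angle(u, v)        :", angle_deg(u, v))
print("angle(Phi u, Phi v):", angle_deg(Phi @ u, Phi @ v))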


D-ADMM: A Communication-Efficient Distributed Algorithm For Separable Optimization by João F. C. Mota, João M. F. Xavier, Pedro M. Q. Aguiar, Markus Püschel. The abstract reads:
We propose a distributed algorithm, named D-ADMM, for solving separable optimization problems in networks of interconnected nodes or agents. In a separable optimization problem, the cost function is the sum of all the agents' private cost functions, and the constraint set is the intersection of all the agents' private constraint sets. We require the private cost function and constraint set of a node to be known by that node only, before and during the execution of the algorithm. The application of our algorithm is illustrated with problems from signal processing and control, namely average consensus, compressed sensing, and support vector machines. It is well known that communicating in distributed environments is the most energy/time-demanding operation. Algorithms that use fewer communications thus tend to make networks live longer, e.g., sensor networks, or to execute faster, e.g., on supercomputing platforms. Through simulations for several network types and problems, we show that our algorithm requires fewer communications than state-of-the-art algorithms.
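
For context, here is a sketch of plain consensus ADMM on the average-consensus example mentioned in the abstract. D-ADMM itself colors the network and exchanges messages only between neighbors, so this centralized-gather simplification is meant only to illustrate the separable structure, with costs and penalty parameter chosen for the demo:

import numpy as np

rng = np.random.default_rng(5)
N, rho = 10, 1.0                                # number of agents, ADMM penalty (assumed)
a = rng.standard_normal(N)                      # each agent's private scalar datum
# agent i privately minimizes f_i(x) = 0.5 * (x - a_i)^2; the consensus optimum is mean(a)

x = np.zeros(N)                                 # local copies of the variable
z = 0.0                                         # global consensus variable
u = np.zeros(N)                                 # scaled dual variables

for _ in range(100):
    x = (a + rho * (z - u)) / (1.0 + rho)       # local, closed-form x-updates (no data sharing)
    z = np.mean(x + u)                          # the only step that needs communication
    u = u + x - z                               # local dual updates

print("consensus value:", z, " true average:", a.mean())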


Sensitivity Considerations in Compressed Sensing by Louis L. Scharf, Edwin K. P. Chong, Ali Pezeshki. The abstract reads:
In [1]–[4], we considered the question of basis mismatch in compressive sensing. Our motivation was to study the effect of mismatch between the mathematical basis (or frame) in which a signal was assumed to be sparse and the physical basis in which the signal was actually sparse. We were motivated by the problem of inverting a complex space-time radar image for the field of complex scatterers that produced the image. In this case there is no a priori known basis in which the image is actually sparse, as radar scatterers do not usually agree to place their ranges and Dopplers on any a priori agreed sampling grid. The consequence is that sparsity in the physical basis is not maintained in the mathematical basis, and a sparse inversion in the mathematical basis or frame does not match up with an inversion for the field in the physical basis. In [1]–[3], this effect was quantified with theorem statements about sensitivity to basis mismatch and with numerical examples for inverting time series records for their sparse set of damped complex exponential modes. These inversions were compared unfavorably to inversions using fancy linear prediction. In [4] and this paper, we continue these investigations by comparing the performance of sparse inversions of sparse images, using a priori selected frames that are mismatched to the physical basis, and by computing the Fisher information matrix for compressions of images that are sparse in a physical basis.
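
The basis-mismatch effect is easy to reproduce numerically; this toy (which has nothing to do with the authors' radar data) shows how a single complex exponential whose frequency sits off the DFT grid stops being sparse in the DFT basis:

import numpy as np

n = 128
t = np.arange(n)
for name, f in [("on-grid ", 10.0 / n), ("off-grid", 10.5 / n)]:   # 10.5/n sits half a bin off the DFT grid
    x = np.exp(2j * np.pi * f * t)              # a single complex exponential "scatterer"
    X = np.fft.fft(x) / np.sqrt(n)
    energy = np.sort(np.abs(X))[::-1] ** 2
    frac = energy[:3].sum() / energy.sum()
    print(name, ": fraction of energy in the 3 largest DFT coefficients =", round(frac, 3))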

Saliency-guided compressive sensing approach to efficient laser range measurement by Shimon Schwartz, Alexander Wong, David A. Clausi. The abstract reads:
The acquisition of laser range measurements can be a time-consuming process for situations where high spatial resolution is required. As such, optimizing the acquisition mechanism is of high importance for many range measurement applications. Acquiring such data through a dynamically chosen small subset of measurement locations can address this problem. In such a case, the measured information can be regarded as incomplete, which necessitates the application of special reconstruction tools to recover the original data set. The reconstruction can be performed based on the concept of sparse signal representation. Recovering signals and images from their sub-Nyquist measurements forms the core idea of compressive sensing (CS). A new saliency-guided CS-based algorithm for improving the reconstruction of range images from sparse laser range measurements has been developed. This system samples the object of interest through an optimized probability density function derived from saliency rather than a uniform random distribution. In particular, we demonstrate a saliency-guided sampling method for simultaneously sensing and coding range images, which requires less than half the samples needed by conventional CS while maintaining the same reconstruction performance, or alternatively reconstructs the range image using the same number of samples as conventional CS with a 16 dB improvement in signal-to-noise ratio. For example, to achieve a reconstruction SNR of 30 dB, the saliency-guided approach required 30% of the samples, compared to 90% for the standard CS approach to achieve similar performance.
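
The sampling side of the idea can be sketched quickly: derive a probability density from a crude saliency map (here, the gradient magnitude of a toy range image) and draw measurement locations from it instead of uniformly. The saliency definition and sizes are assumptions, and the reconstruction stage is omitted:

import numpy as np

rng = np.random.default_rng(6)
h, w, n_samples = 64, 64, 600                   # image size and measurement budget (assumed)

yy, xx = np.mgrid[0:h, 0:w]
depth = (np.hypot(yy - 32, xx - 32) < 15).astype(float)   # toy range image: a disk on a flat background

gy, gx = np.gradient(depth)
saliency = np.hypot(gx, gy) + 1e-3              # crude saliency: gradient magnitude, plus a small floor
pdf = saliency / saliency.sum()                 # saliency-derived sampling density

idx = rng.choice(h * w, size=n_samples, replace=False, p=pdf.ravel())
mask = np.zeros(h * w, dtype=bool)
mask[idx] = True                                # locations where the laser would actually measure

print("fraction of chosen locations on depth edges:",
      float((saliency.ravel()[mask] > 1e-2).mean()))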



Multi-Level Error-Resilient Neural Networks with Learning by Amir Hesam Salavati, Amin Karbasi. The abstract reads:
The problem of neural network association is to retrieve a previously memorized pattern from its noisy version using a network of neurons. An ideal neural network should include three components simultaneously: a learning algorithm, a large pattern retrieval capacity and resilience against noise. Prior works in this area usually improve one or two aspects at the cost of the third. Our work takes a step forward in closing this gap. More specifically, we show that by forcing natural constraints on the set of learning patterns, we can drastically improve the retrieval capacity of our neural network. Moreover, we devise a learning algorithm whose role is to learn those patterns satisfying the above mentioned constraints. Finally we show that our neural network can cope with a fair amount of noise.
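
As background, the pattern-retrieval task itself can be illustrated with a classic Hopfield-style associative memory trained with a Hebbian rule; this is emphatically not the authors' multi-level construction, just a toy for "recall a stored pattern from a noisy probe", with sizes and noise level chosen arbitrarily:

import numpy as np

rng = np.random.default_rng(7)
n, n_patterns, n_flips = 200, 10, 20            # neurons, stored patterns, corrupted bits (assumed)

patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = (patterns.T @ patterns) / n                 # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

probe = patterns[0].copy()
flip = rng.choice(n, n_flips, replace=False)
probe[flip] *= -1                               # noisy version of the first stored pattern

state = probe.copy()
for _ in range(20):                             # synchronous recall iterations
    state = np.sign(W @ state)
    state[state == 0] = 1

print("bits recovered:", int((state == patterns[0]).sum()), "of", n)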


Image Credit: NASA/JPL/Space Science Institute
N00181356.jpg was taken on February 12, 2012 and received on Earth February 12, 2012. The camera was pointing toward TITAN at approximately 3,695,492 kilometers away, and the image was taken using the CL1 and MT1 filters. 

