... That said, I'm writing you because I've recently published a paper in Geophysics which you might be interested in mentioning on Nuit Blanche. It involves the concept of using the eigenbasis of a matrix operator as the measurement basis for compressive sensing.

This is what I understand: the eigenfunctions of the Helmholtz kernel are spread out (exponentials), and the trick is to use the incoherence of this type of basis with the signals typically seen in geophysics (localized wavelets in 1-D/2-D/3-D) to perform compressed sensing. This is outstanding, as it potentially opens the door to other ways of solving PDEs and other integral equations. Tim Lin and Felix Herrmann describe this in detail in Compressed wavefield extrapolation. Tim writes:
I was looking for ways to improve the efficiency of explicit methods of wavefield propagation (via a matrix operator) while working as a research assistant in seismic processing during my undergraduate degree in Physics at UBC. My professor, Felix Herrmann, and I started exploring the one-way propagator, which is diagonalized in the eigenbasis of the Helmholtz operator, and it turns out that under certain boundary conditions the transforms into this eigenbasis greatly resemble the discrete (co)sine transform. Out of curiosity we "compressively sampled" the wavefield under a randomly chosen subset of this basis, applied the diagonalized operation, and then attempted to recover the signal with l1-regularization.

To our surprise, we found that we could successfully recover the propagated wavefield, up to the limit defined by compressive sensing. We are able to get away with using only 30% of the eigenvectors of the original operator in the worst case. We believe that with further research this can be a fresh alternative to "largest eigenvalue" or "largest SVD" based approaches to compressing matrix operators, provided of course that the wavefield is suitably represented in a sparsifying basis such as curvelets.

The abstract reads:
An explicit algorithm for the extrapolation of one-way wavefields is proposed which combines recent developments in information theory and theoretical signal processing with the physics of wave propagation. Because of excessive memory requirements, explicit formulations for wave propagation have proven to be a challenge in 3-D. By using ideas from “compressed sensing”, we are able to formulate the (inverse) wavefield extrapolation problem on small subsets of the data volume, thereby reducing the size of the operators. Compressed sensing entails a new paradigm for signal recovery that provides conditions under which signals can be recovered from incomplete samplings by nonlinear recovery methods that promote sparsity of the to-be-recovered signal. According to this theory, signals can successfully be recovered when the measurement basis is incoherent with the representation in which the wavefield is sparse. In this new approach, the eigenfunctions of the Helmholtz operator are recognized as a basis that is incoherent with curvelets that are known to compress seismic wavefields. By casting the wavefield extrapolation problem in this framework, wavefields can successfully be extrapolated in the modal domain, despite evanescent wave modes. The degree to which the wavefield can be recovered depends on the number of missing (evanescent) wave modes and on the complexity of the wavefield. A proof of principle for the “compressed sensing” method is given for wavefield extrapolation in 2-D, together with a pathway to 3-D during which the multiscale and multiangular properties of curvelets in relation to the Helmholtz operator are exploited. The results show that our method is stable, has reduced dip limitations and handles evanescent waves in inverse extrapolation.
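Purely as an illustration of the idea in Tim's description (and not the authors' code), here is a toy sketch in Python: it assumes the wavefield is sparse in the spike basis rather than in curvelets, models the Helmholtz eigenbasis by the DCT, uses a made-up diagonal symbol for the one-way propagator, and recovers the underlying sparse field (rather than the curvelet-domain extrapolated field of the paper) from roughly 30% of the propagated modal coefficients with a plain ISTA l1 solver. The only point is that modal (DCT-like) measurements are incoherent with a spike-sparse signal, which is what makes the l1 recovery work.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n = 256

# Toy "wavefield": a few isolated spikes, a crude stand-in for a curvelet-sparse field.
u = np.zeros(n)
u[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)

# Hypothetical one-way propagator, diagonal in the modal (here DCT) domain.
k = np.arange(n)
d = np.exp(-0.5 * (k / n) ** 2)                    # made-up diagonal symbol

# Compressive sampling: keep ~30% of the modal coefficients, then apply the
# diagonalized operation to just those modes.
rows = np.sort(rng.choice(n, int(0.3 * n), replace=False))
y = d[rows] * dct(u, norm='ortho')[rows]           # subsampled, propagated wave modes

# l1 recovery (plain ISTA) of the sparse field from the subsampled modal data.
def A(x):                                          # forward operator: x -> subsampled d * DCT(x)
    return d[rows] * dct(x, norm='ortho')[rows]

def At(z):                                         # its adjoint
    full = np.zeros(n)
    full[rows] = d[rows] * z
    return idct(full, norm='ortho')

x = np.zeros(n)
lam = 1e-3                                         # soft-threshold level; unit step is safe since ||A|| <= 1
for _ in range(500):
    x = x + At(y - A(x))
    x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Extrapolated field obtained from the recovered sparse field vs. the true one.
u_prop_hat = idct(d * dct(x, norm='ortho'), norm='ortho')
u_prop     = idct(d * dct(u, norm='ortho'), norm='ortho')
print("relative error of extrapolated field:",
      np.linalg.norm(u_prop_hat - u_prop) / np.linalg.norm(u_prop))
```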
Also noteworthy is the following abstract, on what its authors term in situ compressive sensing:

Compressive sensing (CS) is a framework in which one attempts to measure a signal in a compressive mode, implying that fewer total measurements are required vis-à-vis direct sampling methods. Compressive sensing exploits the fact that the signal of interest is compressible in some basis, and the CS measurements correspond to projections performed on the basis-function coefficients. In this paper we demonstrate that when a target is situated in the presence of a complicated background medium, the frequency-dependent multi-static fields scattered from the target may be measured in a CS manner; these scattered fields are projected onto the two-way Green’s function of the environment. With knowledge of the Green’s function, and using a mutual coherence principle, one may invert for the multi-static scattered fields of the target based on a relatively small number of CS measurements. This is termed in situ CS, because the medium in which the target is employed is used to constitute the CS projections, via the medium Green’s function. We develop a hierarchical statistical algorithm to jointly invert multiple in situ CS measurements performed across a range of frequencies. This framework is demonstrated based on simulated scattering data. In addition, we examine the “restricted isometry property” associated with proper CS projections, and discuss how the observed CS data may be used as a sufficient statistic for a detector or classifier, without the need to know the background Green’s function.

In a different area, Henry Pfister and Fan Zhang released Compressed Sensing and Linear Codes over Real Numbers, which makes a connection between compressed sensing and syndrome source coding. The abstract reads:
Compressed sensing (CS) is a relatively new area of signal processing and statistics that focuses on signal reconstruction from a small number of linear (e.g., dot product) measurements. In this paper, we analyze CS using tools from coding theory because CS can also be viewed as syndrome-based source coding of sparse vectors using linear codes over real numbers. While coding theory does not typically deal with codes over real numbers, there is actually a very close relationship between CS and error-correcting codes over large discrete alphabets. This connection leads naturally to new reconstruction methods and analysis. In some cases, the resulting methods provably require many fewer measurements than previous approaches.

The presentation slides are here.
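To make the coding-theory analogy concrete, here is a toy sketch (not Pfister and Zhang's construction, which relies on structured codes and different decoders): the measurement matrix plays the role of a real-valued parity-check matrix, the sparse vector is the "error pattern", the measurement vector is its syndrome, and plain orthogonal matching pursuit stands in for the syndrome decoder.

```python
# Syndrome view of compressed sensing over the reals: y = H e is the syndrome of the
# sparse error pattern e, and recovery is syndrome decoding. All sizes are toy choices.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 200, 60, 5                       # "code length", number of checks, sparsity
H = rng.standard_normal((m, n)) / np.sqrt(m)

e = np.zeros(n)
support = rng.choice(n, k, replace=False)
e[support] = rng.standard_normal(k)
y = H @ e                                  # the real-valued "syndrome"

# Orthogonal matching pursuit as a stand-in syndrome decoder.
r, S = y.copy(), []
for _ in range(k):
    S.append(int(np.argmax(np.abs(H.T @ r))))
    coef, *_ = np.linalg.lstsq(H[:, S], y, rcond=None)
    r = y - H[:, S] @ coef

e_hat = np.zeros(n)
e_hat[S] = coef
print("support recovered:", sorted(S) == sorted(support.tolist()))
```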
There is also Gradient Pursuit for Non-Linear Sparse Signal Modelling by Thomas Blumensath and Mike Davies. The abstract reads:
In this paper the linear sparse signal model is extended to allow more general, non-linear relationships and more general measures of approximation error. A greedy gradient based strategy is presented to estimate the sparse coefficients. This algorithm can be understood as a generalisation of the recently introduced Gradient Pursuit framework. Using the presented approach with the traditional linear model but with a different cost function is shown to outperform OMP in terms of recovery of the original sparse coefficients. A second set of experiments then shows that for the nonlinear model studied and for highly sparse signals, recovery is still possible in at least a percentage of cases.

This solver is part of version 0.3 of the ‘Sparsify’ Matlab toolbox (a small sketch of the basic gradient pursuit iteration appears after the excerpt below). Let us note the following use of this technique for coherent image reconstruction:
This experiment was motivated by coherent imaging applications. Assume complex valued data is measured and the images of interest are the magnitude and phase of the complex data. Further assume that the magnitude and phase are sparse in some transform domain. In compressed sensing for imaging applications, instead of observing the complex valued image data, a subset of coefficients is observed in another domain, such as the Fourier domain.
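As mentioned above, here is a minimal sketch of the basic gradient pursuit iteration for the ordinary linear model, in the spirit of Blumensath and Davies; the nonlinear extension in the paper replaces the least-squares cost and its gradient with more general ones. The matrix, sparsity level, and iteration count are toy choices of mine, not the Sparsify toolbox code.

```python
import numpy as np

def gradient_pursuit(Phi, y, n_iter):
    """Greedy selection as in OMP, but with a cheap directional (gradient) step
    instead of a full least-squares solve at each iteration."""
    m, n = Phi.shape
    x = np.zeros(n)
    r = y.copy()
    support = []
    for _ in range(n_iter):
        g = Phi.T @ r                          # (negative) gradient of 0.5 * ||y - Phi x||^2
        i = int(np.argmax(np.abs(g)))
        if i not in support:
            support.append(i)
        d = np.zeros(n)
        d[support] = g[support]                # update direction restricted to the support
        Pd = Phi @ d
        a = (r @ Pd) / (Pd @ Pd)               # exact line search along d
        x += a * d
        r = y - Phi @ x
    return x, support

# Tiny usage example with a Gaussian matrix and a 4-sparse target.
rng = np.random.default_rng(2)
Phi = rng.standard_normal((50, 120)) / np.sqrt(50)
x0 = np.zeros(120)
x0[rng.choice(120, 4, replace=False)] = rng.standard_normal(4)
y = Phi @ x0
x_hat, S = gradient_pursuit(Phi, y, n_iter=12)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```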
Marco F. Duarte, Shriram Sarvotham, Dror Baron, Michael Wakin and Richard Baraniuk just released Performance Limits for Jointly Sparse Signals via Graphical Models (and the attendant Poster). The abstract reads:
The compressed sensing (CS) framework has been proposed for efficient acquisition of sparse and compressible signals through incoherent measurements. In our recent work, we introduced a new concept of joint sparsity of a signal ensemble. For several specific joint sparsity models, we demonstrated distributed CS schemes. This paper considers joint sparsity via graphical models that link the sparse underlying coefficient vector, signal entries, and measurements. Our converse and achievable bounds establish that the number of measurements required in the noiseless measurement setting is closely related to the dimensionality of the sparse coefficient vector. Single signal and joint (single-encoder) CS are special cases of joint sparsity, and their performance limits fit into our graphical model framework for distributed (multi-encoder) CS.

In A Direct Formulation for Sparse PCA Using Semidefinite Programming by Alexandre d'Aspremont, Laurent El Ghaoui, Michael I. Jordan, and Gert Lanckriet (DSPCA source code), I noted the following slides: in the first slide, one can find an unexpected way to compute the restricted isometry constant.
When looking at this formulation, I could not help trying to draw a connection between this transformation and a previous entry entitled Trace: What is it good for? - How Compressed Sensing relates to Dimensionality Reduction.
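Since the restricted isometry constant comes up both in that slide and in the reader comment at the end of this post, here is a brute-force way of computing it for a small, column-normalized matrix by enumerating supports. It is only feasible for toy sizes and is not the semidefinite-programming route of the slides, but it makes the definition concrete: delta_k is the smallest delta with (1 - delta)||x||^2 <= ||Ax||^2 <= (1 + delta)||x||^2 for all k-sparse x, which reduces to the extreme eigenvalues of the k-by-k Gram submatrices.

```python
import numpy as np
from itertools import combinations

def restricted_isometry_constant(A, k):
    """Exact delta_k by exhaustive enumeration of column supports (toy sizes only)."""
    delta = 0.0
    for S in combinations(range(A.shape[1]), k):
        G = A[:, S].T @ A[:, S]                 # Gram matrix of the selected columns
        w = np.linalg.eigvalsh(G)               # sorted eigenvalues
        delta = max(delta, w[-1] - 1.0, 1.0 - w[0])
    return delta

rng = np.random.default_rng(3)
A = rng.standard_normal((12, 24))
A /= np.linalg.norm(A, axis=0)                  # unit-norm columns
print("delta_2 =", restricted_isometry_constant(A, 2))
print("delta_3 =", restricted_isometry_constant(A, 3))
```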
In the big picture, manifold signal processing is not as popular a subject (red arrows) as the one that touches on reconstruction. Laurent Jacques and Christophe De Vleeschouwer contribute to our enlightenment by looking at the relationship between dictionaries and (smooth) manifolds in A Geometrical Study of Matching Pursuit Parametrization. The abstract reads:
This paper studies the effect of discretizing the parametrization of a dictionary used for Matching Pursuit decompositions of signals. Our approach relies on viewing the continuously parametrized dictionary as an embedded manifold in the signal space on which the tools of differential (Riemannian) geometry can be applied. The main contribution of this paper is twofold. First, we prove that if a discrete dictionary reaches a minimal density criterion, then the corresponding discrete MP (dMP) is equivalent in terms of convergence to a weakened hypothetical continuous MP. Interestingly, the corresponding weakness factor depends on a density measure of the discrete dictionary. Second, we show that the insertion of a simple geometric gradient ascent optimization on the atom dMP selection maintains the previous comparison but with a weakness factor at least two times closer to unity than without optimization. Finally, we present numerical experiments confirming our theoretical predictions for decomposition of signals and images on regular discretizations of dictionary parametrizations.
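To get a feel for the "discrete selection plus local refinement" idea, here is a toy sketch with a parametric dictionary of Gaussian atoms whose centers live on a coarse grid; the local continuous optimization below is a bounded 1-D search standing in for the paper's gradient ascent, and all sizes and widths are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

n, sigma = 256, 6.0
t = np.arange(n)

def atom(u):
    """Unit-norm Gaussian atom centered at the continuous parameter u."""
    g = np.exp(-(t - u) ** 2 / (2 * sigma ** 2))
    return g / np.linalg.norm(g)

def mp(signal, n_atoms, grid_step=8, refine=True):
    grid = np.arange(0, n, grid_step, dtype=float)   # coarse discretization of the parameter
    r = signal.copy()
    approx = np.zeros(n)
    for _ in range(n_atoms):
        # discrete MP selection on the coarse grid
        corrs = np.array([r @ atom(u) for u in grid])
        u = float(grid[np.argmax(np.abs(corrs))])
        if refine:
            # local refinement of the continuous parameter around the selected grid point
            # (a bounded 1-D search standing in for the paper's gradient ascent)
            lo, hi = max(u - grid_step, 0.0), min(u + grid_step, float(n - 1))
            res = minimize_scalar(lambda v: -(r @ atom(v)) ** 2,
                                  bounds=(lo, hi), method='bounded')
            u = float(res.x)
        g = atom(u)
        c = r @ g
        approx += c * g
        r = r - c * g
    return approx

rng = np.random.default_rng(4)
signal = sum(rng.standard_normal() * atom(u) for u in rng.uniform(0, n, 5))
for refine in (False, True):
    resid = signal - mp(signal, n_atoms=5, refine=refine)
    print("refine =", refine, " residual norm =", float(np.linalg.norm(resid)))
```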
Alexandre d'Aspremont also released Subsampling Algorithms for Semidefinite Programming. The abstract reads:
We derive two stochastic gradient algorithms for semidefinite optimization using randomization techniques. One is based on the robust stochastic approximation method and uses random sparsifications of the current iterate to both accelerate eigenvalue computations and reduce memory requirements. The other relies on gradient sampling to reduce the per iteration cost of smooth semidefinite optimization algorithms.

In the conclusion, here is an item of interest:
Overall, progress on these issues is likely to come from a better understanding of the measure concentration phenomenon on eigenvectors. At this point, a lot is known about concentration of eigenvalues of random matrices with independent coefficients but random matrix theory is somewhat silent on eigenvectors.

My limited understanding of this is that the sensitivity of eigenvalues to small changes in the matrix elements is directly related to how nearly collinear the eigenvectors are to each other. Therefore, aren't bounds on eigenvalues of random matrices providing some type of statement about the pseudospectra of these matrices? And if so, what kind of results does that imply for the pseudospectra and, by the same token, for the eigenvectors? Inquiring minds want to know.
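One standard way to make the eigenvector/eigenvalue-sensitivity link quantitative is through eigenvalue condition numbers: with unit-norm right and left eigenvectors x_i and y_i, the condition number of a simple eigenvalue lambda_i is 1/|y_i^H x_i|, which blows up exactly when the eigenvectors become nearly collinear, and to first order the epsilon-pseudospectrum bulges around lambda_i in a disk of radius roughly that condition number times epsilon. The small example below, on a toy matrix of my own choosing, illustrates the effect.

```python
import numpy as np
from scipy.linalg import eig

# A mildly nonnormal matrix: its two (right) eigenvectors are nearly collinear.
A = np.array([[1.0, 100.0],
              [0.0,   1.2]])

w, vl, vr = eig(A, left=True, right=True)          # unit-norm left and right eigenvectors
for i in range(len(w)):
    kappa = 1.0 / abs(vl[:, i].conj() @ vr[:, i])  # eigenvalue condition number
    print(f"lambda = {w[i].real:6.3f}   condition number = {kappa:8.1f}")

# Near-collinearity of the right eigenvectors.
print("|cos(angle between right eigenvectors)| =",
      float(abs(vr[:, 0].conj() @ vr[:, 1])))

# A perturbation of size 1e-3 moves these eigenvalues by far more than 1e-3.
E = 1e-3 * np.ones((2, 2))
print("eigenvalues of A + E:", np.linalg.eigvals(A + E))
```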
I also noted the following paper, as it tries to give a unifying framework for different types of cameras. As it stands, it is a useful paper when it comes to understanding how compressed sensing can provide a new way of reconstructing 4D light fields: Understanding camera trade-offs through a Bayesian analysis of light field projections by Anat Levin, William T. Freeman, and Fredo Durand. The abstract reads:
Computer vision has traditionally focused on extracting structure, such as depth, from images acquired using thin-lens or pinhole optics. The development of computational imaging is broadening this scope; a variety of unconventional cameras do not directly capture a traditional image anymore, but instead require the joint reconstruction of structure and image information. For example, recent coded aperture designs have been optimized to facilitate the joint reconstruction of depth and intensity. The breadth of imaging designs requires new tools to understand the tradeoffs implied by different strategies. This paper introduces a unified framework for analyzing computational imaging approaches. Each sensor element is modeled as an inner product over the 4D light field. The imaging task is then posed as Bayesian inference: given the observed noisy light field projections and a new prior on light field signals, estimate the original light field. Under common imaging conditions, we compare the performance of various camera designs using 2D light field simulations. This framework allows us to better understand the tradeoffs of each camera type and analyze their limitations.
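For readers who want a concrete, if drastically simplified, version of the "each sensor element is an inner product over the light field, then infer" viewpoint, here is a sketch that uses a 1-D signal with a smooth Gaussian prior as a stand-in for the light field, a few random linear projections as the sensor elements, and the resulting Gaussian posterior mean as the estimate; the paper's light field prior and 2D simulations are of course much richer than this.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, noise = 64, 20, 0.05

# Smooth Gaussian prior (squared-exponential covariance) as a toy light field prior.
t = np.arange(n)
Sigma = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 5.0 ** 2))
x = np.linalg.cholesky(Sigma + 1e-8 * np.eye(n)) @ rng.standard_normal(n)  # sample "light field"

# Each "sensor element" measures one (noisy) inner product with the light field.
A = rng.standard_normal((m, n)) / np.sqrt(n)
y = A @ x + noise * rng.standard_normal(m)

# Gaussian posterior mean (MAP) estimate of the light field from its projections:
# x_hat = Sigma A^T (A Sigma A^T + noise^2 I)^{-1} y.
x_hat = Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T + noise ** 2 * np.eye(m), y)
print("relative error of the posterior mean:",
      float(np.linalg.norm(x_hat - x) / np.linalg.norm(x)))
```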
Hi Igor,
I have trouble understanding the conclusion on the sparse PCA slide: what does it mean that "a finite dimensional matrix" satisfies the RIP property? Could you give an interpretation?