The folks at SLIM/UBC (Felix Herrmann, Yogi Erlangga and Tim Lin) are at it again: solving the Helmholtz equation. A good way of introducing oneself to the Compressive Sensing approach is to read the presentation entitled Compressive sampling meets seismic imaging, from which the two images are extracted. The abstract of the presentation reads:
Compressive sensing has led to fundamental new insights in the recovery of compressible signals from sub-Nyquist sampling. It is shown how jittered subsampling can be used to create favorable recovery conditions. Applications include mitigation of incomplete acquisitions and wavefield computations. While the former is a direct adaptation of compressive sampling, the latter application represents a new way of compressing wavefield extrapolation operators. Operators are not diagonalized but are compressively sampled, reducing the computational costs.
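The appeal of jittered subsampling is that, unlike fully random subsampling, it bounds the largest gap between consecutive samples while still breaking the coherent aliasing of a regular grid. A minimal sketch of the idea (my own illustration, not the authors' code) picks one random sample per bin:

```python
import numpy as np

def jittered_subsample(n, factor, rng):
    """Divide [0, n) into bins of `factor` points and keep one random
    point per bin. The randomness kills coherent aliasing, while the
    binning bounds the largest gap by 2*factor - 1."""
    starts = np.arange(0, n, factor)
    return starts + rng.integers(0, factor, size=starts.size)

rng = np.random.default_rng(0)
n, factor = 120, 4
idx = jittered_subsample(n, factor, rng)

gaps = np.diff(idx)
# the jitter guarantees no gap exceeds 2*factor - 1 samples
assert gaps.max() <= 2 * factor - 1
print(idx[:10])
```

The gap bound is what creates the "favorable recovery conditions" mentioned in the abstract: no stretch of the acquisition is left entirely unsampled.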
A more in-depth analysis can be found in Compressive simultaneous full-waveform simulation by the same authors. The abstract reads:
The fact that the numerical complexity of wavefield simulation is proportional to the size of the discretized model and acquisition geometry, and not to the complexity of the simulated wavefield, is the main impediment within seismic imaging. By turning simulation into a compressive sensing problem---where simulated data is recovered from a relatively small number of independent simultaneous sources---we remove this impediment by showing that compressively sampling a simulation is equivalent to compressively sampling the sources, followed by solving a reduced system. As in compressive sensing, this allows for a reduction in sampling rate and hence in simulation costs. We demonstrate this principle for the time-harmonic Helmholtz solver. The solution is computed by inverting the reduced system, followed by a recovery of the full wavefield with a sparsity promoting program. Depending on the wavefield's sparsity, this approach can lead to significant cost reductions, in particular when combined with the implicit preconditioned Helmholtz solver, which is known to converge even for decreasing mesh sizes and increasing angular frequencies. These properties make our scheme a viable alternative to explicit time-domain finite-differences.
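The key identity behind the cost reduction is that mixing the sources commutes with the solve: applying the Helmholtz inverse to a few random combinations of sources gives exactly the same result as mixing the full set of solved wavefields. A toy demonstration on a 1D finite-difference Helmholtz operator (my own sketch, with made-up grid and wavenumber parameters, not the paper's preconditioned solver):

```python
import numpy as np

# 1D time-harmonic Helmholtz operator (d^2/dx^2 + k^2), second-order
# finite differences, Dirichlet boundaries -- a toy stand-in for the
# paper's solver.
N, h, k_wave = 64, 1.0 / 64, 12.0
main = np.full(N, -2.0 / h**2 + k_wave**2)
off = np.full(N - 1, 1.0 / h**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

rng = np.random.default_rng(1)
S = np.eye(N)                      # one impulsive source per grid point
M = rng.standard_normal((8, N))    # 8 random simultaneous-source weights

# Full problem: N solves.  Reduced problem: only 8 solves.
U_full = np.linalg.solve(H, S)
U_red = np.linalg.solve(H, S @ M.T)

# Compressively sampling the sources commutes with the solve.
assert np.allclose(U_red, U_full @ M.T)
```

The paper then recovers the full wavefield from `U_red` with a sparsity-promoting program; the snippet above only illustrates why the reduced system contains the same information as compressive measurements of the full simulation.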
In a different area of engineering, one still needs to solve the Helmholtz equation for target imaging, as shown in Three-dimensional sparse-aperture moving-target imaging by Matthew Ferrara, Julie Jackson and Mark Stuff. The abstract reads:
If a target’s motion can be determined, the problem of reconstructing a 3D target image becomes a sparse aperture imaging problem. That is, the data lies on a random trajectory in k-space, which constitutes a sparse data collection that yields very low-resolution images if backprojection or other standard imaging techniques are used. This paper investigates two moving-target imaging algorithms: the first is a greedy algorithm based on the CLEAN technique, and the second is a version of Basis Pursuit Denoising. The two imaging algorithms are compared for a realistic moving-target motion history applied to an Xpatch-generated backhoe data set.
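For readers new to Basis Pursuit Denoising: it trades the ill-posed inversion of sparsely sampled data for an l1-regularized least-squares problem. A minimal solver sketch using iterative soft-thresholding (ISTA) on a random sensing matrix (my own illustration; the paper's solver and data are different):

```python
import numpy as np

def ista(A, b, lam=0.05, iters=1000):
    """Iterative soft-thresholding for the BPDN/lasso objective
    min_x 0.5*||Ax - b||^2 + lam*||x||_1  (a basic solver, chosen
    here for brevity; many faster algorithms exist)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * A.T @ (A @ x - b)     # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink
    return x

rng = np.random.default_rng(2)
m, n = 40, 100                               # fewer measurements than unknowns
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]       # a 3-sparse "scene"
b = A @ x_true

x_hat = ista(A, b)
# the recovered support matches the true scatterer locations
print(np.argsort(np.abs(x_hat))[-3:])
```

The greedy CLEAN-style alternative mentioned in the abstract instead peels off the strongest scatterer one at a time; BPDN recovers all of them jointly at the cost of solving an optimization problem.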
Matthias Seeger gave a talk on Large Scale Approximate Inference and Experimental Design for Sparse Linear Models. The video of the talk can be found here.
A while ago, I drew some comparison between this model of the primary visual cortex (A Feedforward Architecture Accounts for Rapid Categorization) and compressive sensing. The authors have just released the source code, which provides a framework for reproducing the main experimental result described in: Thomas Serre, Aude Oliva and Tomaso Poggio, "A feedforward architecture accounts for rapid categorization", Proceedings of the National Academy of Sciences, vol. 104, no. 15, 2007. The description and installation instructions are here. One can download the code here.