Friday, December 11, 2009

CS: MIA, More SpaRSA, Kalman Filter-CS, Compressed sensing and sparse recovery in exploration seismology, Solving Helmholtz

Next week will start with the Mathematics and Image Analysis 2009 (MIA'09) meeting. I may see some of you there. In the meantime, here is a series of papers found on the interwebs; enjoy. The first one analyzes the convergence of the SpaRSA algorithm; its abstract reads:
The convergence rate is analyzed for the SpaRSA algorithm (Sparse Reconstruction by Separable Approximation) for minimizing a sum $f(\mathbf{x}) + \psi(\mathbf{x})$ where $f$ is smooth and $\psi$ is convex, but possibly nonsmooth. It is shown that if $f$ is convex, then the error in the objective function at iteration $k$, for $k$ sufficiently large, is bounded by $a/(b+k)$ for suitable choices of $a$ and $b$. Moreover, if the objective function is strongly convex, then the convergence is $R$-linear. An improved version of the algorithm based on a cyclic version of the BB iteration and an adaptive line search is given. The performance of the algorithm is investigated using applications in the areas of signal processing and image reconstruction.
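
For readers who want to see what the iteration under analysis looks like in practice, here is a minimal Python sketch of a SpaRSA-style iteration for the l1-regularized least-squares case, i.e. f(x) = 0.5*||Ax - b||^2 and psi(x) = lam*||x||_1. A plain Barzilai-Borwein (BB) step with a crude safeguard stands in for the cyclic BB rule and adaptive line search studied in the paper; the function names and problem sizes are made up for the example, and this is not the authors' code.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (componentwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparsa_l1(A, b, lam, n_iter=200):
    # SpaRSA-style iteration for 0.5*||A x - b||^2 + lam*||x||_1: a gradient step
    # on the smooth part, shrinkage, and a BB choice of the inverse step size alpha,
    # crudely safeguarded instead of line-searched.
    x = np.zeros(A.shape[1])
    grad = A.T @ (A @ x - b)
    alpha = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(x - grad / alpha, lam / alpha)
        grad_new = A.T @ (A @ x_new - b)
        dx, dg = x_new - x, grad_new - grad
        if dx @ dx > 0:
            # BB rule: alpha ~ (dx' dg) / (dx' dx), kept in a safe range.
            alpha = min(max((dx @ dg) / (dx @ dx), 1e-4), 1e12)
        x, grad = x_new, grad_new
    return x

# Tiny synthetic example (sizes and lam are arbitrary).
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x0 = np.zeros(200)
x0[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
b = A @ x0 + 0.01 * rng.standard_normal(80)
x_hat = sparsa_l1(A, b, lam=0.05)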

KF-CS: Compressive Sensing on Kalman Filtered Residual by Namrata Vaswani. The abstract reads:
We consider the problem of recursively reconstructing time sequences of sparse signals (with unknown and time-varying sparsity patterns) from a limited number of linear incoherent measurements with additive noise. The signals are sparse in some transform domain referred to as the sparsity basis, and the sparsity pattern is assumed to change slowly with time. The idea of our proposed solution, KF CS-residual (KF-CS), is to replace compressed sensing (CS) on the observation by CS on the Kalman filtered (KF) observation residual computed using the previous estimate of the support. We obtain the conditions under which the KF-CS estimate stabilizes to within a small error of the genie-aided KF (the KF that knows the support at each time), with high probability. Simulations comparing KF-CS to simple CS and least squares CS-residual (LS-CS) are shown.
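
Here is a schematic Python sketch of the KF-CS idea: a Kalman filter run with process noise restricted to the current support estimate, followed by a sparse-recovery step on the filtered observation residual to detect changes in the support. The random-walk signal model, the ISTA-based CS step and all parameter values are assumptions made for illustration; the actual algorithm (in particular how the augmented support is handled afterwards) is more involved.

import numpy as np

def ista_l1(A, y, lam, n_iter=100):
    # Plain iterative soft thresholding for 0.5*||A z - y||^2 + lam*||z||_1,
    # used here as the CS step on the residual.
    L = np.linalg.norm(A, 2) ** 2
    z = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = z - A.T @ (A @ z - y) / L
        z = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)
    return z

def kf_cs_step(A, y, x_prev, P_prev, support, q=1.0, r=0.01, lam=0.1, add_thresh=0.2):
    m, n = A.shape
    # Kalman filter update, with random-walk process noise only on the current support.
    Q = np.zeros((n, n))
    Q[support, support] = q
    P_pred = P_prev + Q
    S = A @ P_pred @ A.T + r * np.eye(m)
    K = P_pred @ A.T @ np.linalg.inv(S)
    x_kf = x_prev + K @ (y - A @ x_prev)
    P = (np.eye(n) - K @ A) @ P_pred
    # CS on the filtered observation residual to detect new support elements.
    beta = ista_l1(A, y - A @ x_kf, lam)
    support = np.union1d(support, np.where(np.abs(beta) > add_thresh)[0]).astype(int)
    # (In the full algorithm the KF would now be rerun on the augmented support.)
    return x_kf, P, support
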
Angshul Majumdar let me know of this paper: On compressive sensing applied to radar by Joachim H.G. Ender. The abstract reads:

Compressive sensing (CS) techniques offer a framework for the detection and allocation of sparse signals with a reduced number of samples. Today, modern radar systems operate with high bandwidths - demanding high sample rates according to the Shannon-Nyquist theorem - and a huge number of single elements for phased array antennas. Often only a small number of target parameters is the final output, raising the question of whether CS could be a good means to reduce the data size, complexity, weight, power consumption and costs of radar systems. There is only a small number of publications addressing the application of CS to radar, leaving several open questions. This paper addresses some of these aspects as a further step toward CS radar by presenting generic system architectures and implementation considerations. It is not the aim of this paper to investigate numerically efficient algorithms, but to point to promising applications as well as arising problems. Three possible applications are considered: pulse compression, radar imaging, and air space surveillance with array antennas. Some simulation results are presented and enriched by the evaluation of real data acquired by an experimental radar system of Fraunhofer FHR.

No investigation of efficient algorithms? Hey, maybe I should be publishing papers from the "These technologies Do Not Exist" series? :) Thanks, Angshul!

The lectures on Compressed sensing and sparse recovery in exploration seismology are listed below. The abstract for the three lectures reads:
In this course, I will present how recent results from compressed sensing and sparse recovery apply to exploration seismology. During the first lecture, I will present the basic principles of compressive sensing; the importance of random jitter sampling and sparsifying transforms; and large-scale one-norm solvers. I will discuss the application of these techniques to missing trace interpolation. The second lecture will be devoted to coherent signal separation based on curvelet-domain matched filtering and Bayesian separation with sparsity promotion. Applications of these techniques to the primary-multiple wavefield-separation problem on real data will be discussed as well. The third lecture will be devoted to sparse recovery in seismic modeling and imaging, and includes the problem of preconditioning the imaging operators and the recovery from simultaneous-source-acquired data.
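
As a toy illustration of the missing-trace-interpolation theme of the first lecture, here is a small Python sketch: jittered random subsampling of a signal that is sparse in a transform domain (a DCT stands in for the curvelet transform used in practice), followed by a one-norm recovery with plain iterative soft thresholding. The sizes and the regularization parameter are arbitrary choices.

import numpy as np
from scipy.fft import dct, idct

n, cell = 256, 4
rng = np.random.default_rng(1)

# Jittered subsampling: one sample per cell of length 4, at a random offset.
idx = np.array([i * cell + rng.integers(cell) for i in range(n // cell)])

# Synthetic "trace" that is sparse in the DCT domain.
c_true = np.zeros(n)
c_true[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)
f_true = idct(c_true, norm="ortho")
y = f_true[idx]                      # the traces we actually "acquired"

# Iterative soft thresholding for 0.5*||R*IDCT(c) - y||^2 + lam*||c||_1.
lam, c = 0.01, np.zeros(n)
for _ in range(500):
    resid = np.zeros(n)
    resid[idx] = idct(c, norm="ortho")[idx] - y   # data misfit, lifted to the full grid
    c -= dct(resid, norm="ortho")                 # gradient step (unit step is safe here)
    c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

f_interp = idct(c, norm="ortho")     # interpolated trace on the full grid
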
Additional presentations on how to solve the Helmholtz equation can be found below:

Unified compressive sensing framework for simultaneous acquisition with primary estimation, by Tim Lin and Felix Herrmann. The abstract reads:
The central promise of simultaneous acquisition is a vastly improved crew efficiency during acquisition at the cost of additional post-processing to obtain conventional source-separated data volumes. Using recent theories from the field of compressive sensing, we present a way to systematically model the effects of simultaneous acquisition. Our formulation forms a new framework in the study of acquisition design and naturally leads to an inversion-based approach for the separation of shot records. Furthermore, we show how other inversion-based methods, such as a recently proposed method from van Groenestijn and Verschuur (2009) for primary estimation, can be processed together with the demultiplexing problem to achieve a better result compared to a separate treatment of these problems.
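
To make the modeling part concrete, here is a bare-bones Python sketch of a blending operator and its adjoint: individual shot records are mixed into far fewer supershots with random weights, and separation would then be posed as a sparsity-promoting inverse problem built on these two operators. The array sizes and the Gaussian weights are assumptions for illustration, not the acquisition design of the paper.

import numpy as np

rng = np.random.default_rng(2)
n_shots, n_traces, n_super = 64, 128, 16    # four times fewer supershots than shots

# Random blending weights: each supershot is a random combination of all shots.
W = rng.standard_normal((n_super, n_shots))

def blend(shots):
    # Forward (acquisition) operator: (n_shots, n_traces) -> (n_super, n_traces).
    return W @ shots

def blend_adjoint(supershots):
    # Adjoint operator, needed by any gradient-based separation solver.
    return W.T @ supershots

# Simulated conventional shot records and their blended counterpart.
shots = rng.standard_normal((n_shots, n_traces))
acquired = blend(shots)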

Compressive simultaneous full-waveform simulation by Felix Herrmann, Tim Lin and Yogi A. Erlangga. The abstract:
The fact that the computational complexity of wavefield simulation is proportional to the size of the discretized model and acquisition geometry, and not to the complexity of the simulated wavefield, is a major impediment within seismic imaging. By turning simulation into a compressive sensing problem--where simulated data is recovered from a relatively small number of independent simultaneous sources--we remove this impediment by showing that compressively sampling a simulation is equivalent to compressively sampling the sources, followed by solving a reduced system. As in compressive sensing, this allows for a reduction in sampling rate and hence in simulation costs. We demonstrate this principle for the time-harmonic Helmholtz solver. The solution is computed by inverting the reduced system, followed by a recovery of the full wavefield with a sparsity promoting program. Depending on the wavefield's sparsity, this approach can lead to a significant cost reduction, in particular when combined with the implicit preconditioned Helmholtz solver, which is known to converge even for decreasing mesh sizes and increasing angular frequencies. These properties make our scheme a viable alternative to explicit time-domain finite differences.
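
Here is a bare-bones Python illustration of the "compressively sampling the sources" idea, using a 1-D Helmholtz operator with Dirichlet boundaries: instead of one solve per point source, the system is solved for a handful of random source superpositions only. The sparsity-promoting recovery of the full wavefield described in the abstract is not shown, and the grid size, wavenumber and number of simultaneous sources are arbitrary choices.

import numpy as np

n, n_src, n_sim = 200, 50, 8          # grid points, point sources, simultaneous sources
h, k = 1.0 / n, 40.0                  # grid spacing and wavenumber

# 1-D Helmholtz operator u'' + k^2 u with homogeneous Dirichlet boundaries.
main = (-2.0 / h**2 + k**2) * np.ones(n)
off = (1.0 / h**2) * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Individual point sources.
src_pos = np.linspace(10, n - 10, n_src, dtype=int)
S = np.zeros((n, n_src))
S[src_pos, np.arange(n_src)] = 1.0

# Conventional simulation: one solve per source (n_src right-hand sides).
U_full = np.linalg.solve(H, S)

# Compressive simulation: solve only for n_sim random source superpositions.
rng = np.random.default_rng(3)
W = rng.standard_normal((n_src, n_sim)) / np.sqrt(n_sim)
U_sim = np.linalg.solve(H, S @ W)     # reduced set of right-hand sides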

Compressive imaging by wavefield inversion with group sparsity by Felix Herrmann. The abstract:
Migration relies on multi-dimensional correlations between source- and residual wavefields. These multi-dimensional correlations are computationally expensive because they involve operations with explicit and full matrices that contain both wavefields. By leveraging recent insights from compressive sampling, we present an alternative method where linear correlation-based imaging is replaced by imaging via multidimensional deconvolutions of compressively sampled wavefields. Even though this approach comes at the expense of having to solve a sparsity-promoting recovery program for the image, our wavefield inversion approach has the advantage of reducing the system size in accordance with the transform-domain sparsity of the image. Because seismic images also exhibit a focusing of the energy towards zero offset, the compressive-wavefield inversion itself is carried out using a recent extension of one-norm solver technology towards matrix-valued problems. These so-called hybrid $(1,\,2)$-norm solvers allow us to penalize pre-stack energy away from zero offset while exploiting joint sparsity amongst near-offset images. Contrary to earlier work to reduce modeling and imaging costs through random phase-encoded sources, our method compressively samples wavefields in model space. This approach has several advantages, amongst which are improved system-size reduction and more flexibility during subsequent inversions for subsurface properties.
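
For readers curious about the hybrid (1, 2)-norm machinery, here is a small Python sketch of the group soft-thresholding step that such mixed-norm solvers apply at each iteration: whole groups of coefficients (here the rows of a matrix, standing in for common-offset groups of image coefficients) are shrunk jointly in Euclidean norm rather than entry by entry. The group layout and threshold are assumptions for illustration.

import numpy as np

def group_soft_threshold(X, t):
    # Prox of t * sum_i ||X[i, :]||_2: shrink each row (group) jointly in Euclidean norm.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

# Rows whose joint energy falls below the threshold are zeroed as a whole group.
rng = np.random.default_rng(4)
X = rng.standard_normal((6, 5)) * np.array([[2.0], [0.1], [1.5], [0.05], [0.8], [0.02]])
X_shrunk = group_soft_threshold(X, t=0.5)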

Sub-Nyquist sampling and sparsity: getting more information from fewer samples by Felix Herrmann. The abstract reads:
Seismic exploration relies on the collection of massive data volumes that are subsequently mined for information during seismic processing. While this approach has been extremely successful in the past, the current trend of incessantly pushing for higher quality images in increasingly complicated regions of the Earth continues to reveal fundamental shortcomings in our workflows to handle massive high-dimensional data volumes. Two causes can be identified as the main culprits responsible for this barrier. First, there is the so-called "curse of dimensionality" exemplified by Nyquist's sampling criterion, which puts disproportionate strain on current acquisition and processing systems as the size and desired resolution of our survey areas continue to increase. Secondly, there is the recent "departure from Moore's law" that forces us to lower our expectations to compute ourselves out of this curse of dimensionality. In this paper, we offer a way out of this situation by a deliberate randomized subsampling combined with structure-exploiting transform-domain sparsity promotion. Our approach is successful because it reduces the size of seismic data volumes without loss of information. Because of this size reduction, both impediments are removed and we end up with a new technology where the costs of acquisition and processing are no longer dictated by the size of the acquisition but by the transform-domain sparsity of the end-product after processing.



Credit: NASA, Atlantis during STS-129.
