Tuesday, November 09, 2010

CS: Imaging heavens and earth




You do remember the PACS camera on board Herschel, for which some compressed sensing encodings have been tried (I still have not heard of the results). Anyway, that camera was used to image the encounter between the EPOXI spacecraft and comet Hartley 2. Talk about a sparse scene! The small dot is the EPOXI spacecraft.

You probably recall the Integral Pixel Camera; well, one of its authors, Nathan Jacobs, just came out with an instance of Imaging With Nature in: Using Cloud Shadows to Infer Scene Structure and Camera Calibration by Nathan Jacobs, Brian Bies, Robert Pless. The abstract reads:
We explore the use of clouds as a form of structured lighting to capture the 3D structure of outdoor scenes observed over time from a static camera. We derive two cues that relate 3D distances to changes in pixel intensity due to cloud shadows. The first cue is primarily spatial, works with low frame-rate time lapses, and supports estimating focal length and scene structure, up to a scale ambiguity. The second cue depends on cloud motion and has a more complex, but still linear, ambiguity. We describe a method that uses the spatial cue to estimate a depth map and a method that combines both cues. Results on time lapses of several outdoor scenes show that these cues enable estimating scene geometry and camera focal length.

My emphasis. I also noted from the paper something I was not aware of:
The methods we describe are a valuable addition to the emerging toolbox of automated outdoor-camera calibration techniques.
The shape from cloud site is here. If you want to see the code, send an email to Nathan!
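To get a feel for the spatial cue: in a static time lapse, cloud shadows dominate the intensity changes, and two scene points that are physically close tend to be shadowed at the same times, so the correlation between their intensity time series is informative about how far apart they are. Here is a minimal numpy sketch of mine of that intuition, assuming a frame stack of shape (T, H, W); the function name and the synthetic data are illustrative, this is not the authors' code.

import numpy as np

def temporal_correlation_cue(frames, ref_pixel):
    # frames: (T, H, W) time lapse from a static outdoor camera; cloud shadows
    # are assumed to dominate the temporal intensity variation.
    # Returns an (H, W) map of Pearson correlations with the reference pixel;
    # under the spatial cue, higher correlation suggests smaller scene distance.
    T, H, W = frames.shape
    X = frames.reshape(T, -1).astype(float)
    X -= X.mean(axis=0)                      # remove each pixel's mean
    X /= (X.std(axis=0) + 1e-8)              # normalize each pixel's variance
    r = X[:, ref_pixel[0] * W + ref_pixel[1]]
    return (X * r[:, None]).mean(axis=0).reshape(H, W)

# Toy usage with random frames (a real run would use an actual time lapse):
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 32, 32))
cue = temporal_correlation_cue(frames, ref_pixel=(16, 16))
print(cue.shape, cue[16, 16])   # the reference pixel correlates ~1 with itself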

The following papers and presentations come from Felix Herrmann's group. The most intriguing idea is the use of what they call "multiples" to probe deeper into the earth. It is a way of using diffusion to increase the mixing of the data and thereby produce "better" compressed measurements. It is no surprise that these effects were considered a nuisance until now, when compressive sensing provides a theoretical argument for why they should be used.
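A toy way of mine to see why extra mixing helps (not anything from the papers below): in compressive sensing the quality of a sampling scheme is often summarized by the coherence between the measurement vectors and the sparsity basis, and recovery guarantees degrade as that coherence grows. Point-sampling a scene that is sparse in the sampling domain is the worst case; sampling through a random mixing, diffusion-like operator spreads every scene element over every measurement and drives the coherence down.

import numpy as np

def basis_coherence(Phi, Psi):
    # mu(Phi, Psi) = sqrt(n) * max |<phi_i, psi_j>| for unit-norm measurement
    # rows phi_i and sparsity-basis columns psi_j; smaller is better for CS.
    n = Psi.shape[0]
    Phi = Phi / np.linalg.norm(Phi, axis=1, keepdims=True)
    return np.sqrt(n) * np.abs(Phi @ Psi).max()

rng = np.random.default_rng(1)
n, m = 256, 32
Psi = np.eye(n)                                          # toy model: sparse in space (spikes)

Phi_point = np.eye(n)[rng.choice(n, m, replace=False)]   # plain pointwise subsampling
Phi_mixed = rng.normal(size=(m, n))                      # sampling through a random mixing operator

print("coherence, pointwise sampling:", basis_coherence(Phi_point, Psi))  # sqrt(n) = 16, the worst case
print("coherence, through mixing    :", basis_coherence(Phi_mixed, Psi))  # around 4 here, much better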

Full-waveform inversion relies on the collection of large multi-experiment data volumes in combination with a sophisticated back-end to create high-fidelity inversion results. While improvements in acquisition and inversion have been extremely successful, the current trend of incessantly pushing for higher quality models in increasingly complicated regions of the Earth reveals fundamental shortcomings in our ability to handle increasing problem size numerically. Two main culprits can be identified. First, there is the so-called ``curse of dimensionality'' exemplified by Nyquist's sampling criterion, which puts disproportionate strain on current acquisition and processing systems as the size and desired resolution increases. Secondly, there is the recent ``departure from Moore's law'' that forces us to lower our expectations to compute ourselves out of this. In this paper, we address this situation by randomized dimensionality reduction, which we adapt from the field of compressive sensing. In this approach, we combine deliberate randomized subsampling with structure-exploiting transform-domain sparsity promotion. Our approach is successful because it reduces the size of seismic data volumes without loss of information. With this reduction, we compute Newton-like updates at the cost of roughly one gradient update for the fully-sampled wavefield.
Seismic imaging typically begins with the removal of multiple energy in the data, out of fear that it may introduce erroneous structure. However, seismic multiples have effectively seen more of the earth's structure, and if treated correctly can potentially supply more information to a seismic image compared to primaries. Past approaches to accomplish this leave ample room for improvement; they either require extensive modification to standard migration techniques, rely too much on prior information, require extensive pre-processing, or resort to full-waveform inversion. We take some valuable lessons from these efforts and present a new approach balanced in terms of ease of implementation, robustness, efficiency and well-posedness, involving a sparsity-promoting inversion procedure using standard Born migration and a data-driven multiple modeling approach based on the focal transform.

During this presentation, I will talk about how recent results from compressive sensing and sparse recovery can be used to solve problems in exploration seismology where incomplete sampling is ubiquitous. I will also talk about how these ideas apply to dimensionality reduction of full-waveform inversion by randomly phase encoded sources.
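For context on the "randomly phase encoded sources" mentioned above: the idea is to collapse many sequential shots into a handful of randomly weighted "supershots", so that one wave simulation effectively touches all sources at once and the inversion works on a much smaller data volume. A minimal sketch of just the encoding step, using random ±1 weights as the "phases"; the function name and the Rademacher choice are my own simplification, not the group's code.

import numpy as np

def encode_sources(sources, n_supershots, rng):
    # sources: (n_src, n_t) array, one row per sequential source signature.
    # Each supershot is a random +/-1 weighted superposition of all sources;
    # the same weights must also be applied to the recorded data so that
    # modelled and observed supershots stay consistent.
    n_src = sources.shape[0]
    W = rng.choice([-1.0, 1.0], size=(n_supershots, n_src))
    return W @ sources, W

rng = np.random.default_rng(0)
shots = rng.normal(size=(128, 500))        # 128 sequential sources, 500 time samples
supershots, W = encode_sources(shots, n_supershots=4, rng=rng)
print(supershots.shape)                    # (4, 500): ~32x fewer simulations per pass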


Randomized full-waveform inversion: a dimensionality-reduction approach. by Peyman P. Moghaddam and Felix J. Herrmann. The abstract reads:
Full-waveform inversion relies on the collection of large multi-experiment data volumes in combination with a sophisticated back-end to create high-fidelity inversion results. While improvements in acquisition and inversion have been extremely successful, the current trend of incessantly pushing for higher quality models in increasingly complicated regions of the Earth reveals fundamental shortcomings in our ability to handle increasing problem sizes numerically. Two main culprits can be identified. First, there is the so-called ``curse of dimensionality'' exemplified by Nyquist's sampling criterion, which puts disproportionate strain on current acquisition and processing systems as the size and desired resolution increases. Secondly, there is the recent ``departure from Moore's law'' that forces us to develop algorithms that are amenable to parallelization. In this paper, we discuss different strategies that address these issues via randomized dimensionality reduction.
Stabilized estimation of primaries via sparse inversion by Tim Lin, and Felix J. Herrmann. The abstract reads:

Recent works on surface-related multiple removal include a direct estimation method proposed by van Groenestijn and Verschuur (2009), where under a sparsity assumption the primary impulse response is determined directly from a data-driven wavefield inversion process called Estimation of Primaries by Sparse Inversion (EPSI). The authors have shown that this approach is superior to traditional estimation-subtraction processes such as SRME on shallow-bottom marine data, where by expanding the model to simultaneously invert for the near-offset traces, which are not directly available in most situations but are observable in the data multiples, a large improvement over Radon interpolation is demonstrated. One of the major roadblocks to the widespread adoption of EPSI is that one must have precise knowledge of a time-window that contains multiple-free primaries during each update. There is some anecdotal evidence that the inversion result is unstable under errors in the time-window length, a behaviour that runs contrary to the strengths of EPSI and diminishes its effectiveness for shallow-bottom marine data where multiples are closely spaced. Moreover, due to the nuances involved in regularizing the model impulse response in the inverse problem, the EPSI approach has a number of additional inversion parameters to choose and often does not lead to a stable solution under perturbations to these parameters. We show that the specific sparsity constraint on the EPSI updates leads to an inherently intractable problem, and that the time-window and other inversion variables arise as additional regularizations on the unknown towards a meaningful solution. We furthermore suggest a way to remove almost all of these parameters via an L0-to-L1 convexification, which stabilizes the inversion while preserving the crucial sparsity assumption in the primary impulse response model.
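The "L0-to-L1 convexification" at the end of this abstract is the standard move of replacing a combinatorial sparsity count with the convex one-norm; in iterative solvers it shows up as swapping hard thresholding for soft thresholding. A generic illustration of the two proximal steps (not the EPSI code itself):

import numpy as np

def hard_threshold(x, k):
    # L0-style step: keep the k largest-magnitude entries, zero out the rest.
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

def soft_threshold(x, lam):
    # L1-style (convexified) step: shrink every entry towards zero by lam;
    # small changes in x move the output continuously, which is what makes
    # the convexified problem better behaved under perturbations.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([3.0, -0.4, 1.5, 0.1, -2.2])
print(hard_threshold(x, k=2))    # [ 3.   0.   0.   0.  -2.2]
print(soft_threshold(x, 1.0))    # [ 2.  -0.   0.5  0.  -1.2]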
Randomized sampling strategies. by Felix J. Herrmann. The abstract reads:
Seismic exploration relies on the collection of massive data volumes that are subsequently mined for information during seismic processing. While this approach has been extremely successful in the past, the current trend towards higher quality images in increasingly complicated regions continues to reveal fundamental shortcomings in our workflows for high-dimensional data volumes. Two causes can be identified. First, there is the so-called ``curse of dimensionality'' exemplified by Nyquist's sampling criterion, which puts disproportionate strain on current acquisition and processing systems as the size and desired resolution of our survey areas continues to increase. Secondly, there is the recent ``departure from Moore's law'' that forces us to lower our expectations to compute ourselves out of this curse of dimensionality. In this paper, we offer a way out of this situation by a deliberate randomized subsampling combined with structure-exploiting transform-domain sparsity promotion. Our approach is successful because it reduces the size of seismic data volumes without loss of information. As such we end up with a new technology where the costs of acquisition and processing are no longer dictated by the size of the acquisition but by the transform-domain sparsity of the end-product.


Full-waveform inversion relies on the collection of large multi-experiment data volumes in combination with a sophisticated back-end to create high-fidelity inversion results. While improvements in acquisition and inversion have been extremely successful, the current trend of incessantly pushing for higher quality models in increasingly complicated regions of the Earth continues to reveal fundamental shortcomings in our ability to handle the ever increasing problem size numerically. Two causes can be identified as the main culprits responsible for this barrier. First, there is the so-called ``curse of dimensionality'' exemplified by Nyquist's sampling criterion, which puts disproportionate strain on current acquisition and processing systems as the size and desired resolution of our survey areas continues to increase. Secondly, there is the recent ``departure from Moore's law'' that forces us to lower our expectations to compute ourselves out of this. In this paper, we address this situation by randomized dimensionality reduction, which we adapt from the field of compressive sensing. In this approach, we combine deliberate randomized subsampling with structure-exploiting transform-domain sparsity promotion. Our approach is successful because it reduces the size of seismic data volumes without loss of information. With this reduction, we compute Newton-like updates at the cost of roughly one gradient update for the fully-sampled wavefield.

Sub-Nyquist sampling and sparsity: getting more information from fewer samples by Felix J. Herrmann. The abstract reads:
Many seismic exploration techniques rely on the collection of massive data volumes. While this approach has been extremely successful in the past, current efforts toward higher resolution images in increasingly complicated regions of the Earth continue to reveal fundamental shortcomings in our workflows. Chief amongst these is the so-called ``curse of dimensionality'' exemplified by Nyquist's sampling criterion, which disproportionately strains current acquisition and processing systems as the size and desired resolution of our survey areas continues to increase. In this presentation, we offer an alternative sampling method leveraging recent insights from compressive sensing towards seismic acquisition and processing of severely under-sampled data. The main outcome of this approach is a new technology where acquisition and processing related costs are no longer determined by overly stringent sampling criteria, such as Nyquist. At the heart of our approach lies randomized incoherent sampling that breaks subsampling related interferences by turning them into harmless noise, which we subsequently remove by promoting transform-domain sparsity. Now, costs no longer grow significantly with resolution and dimensionality of the survey area, but instead depend on transform-domain sparsity only. Our contribution is twofold. First, we demonstrate by means of carefully designed numerical experiments that compressive sensing can successfully be adapted to seismic exploration. Second, we show that accurate recovery can be accomplished for compressively sampled data volume sizes that exceed the size of conventional transform-domain data volumes by only a small factor. Because compressive sensing combines transformation and encoding by a single linear encoding step, this technology is directly applicable to acquisition and to dimensionality reduction during processing. In either case, sampling, storage, and processing costs scale with transform-domain sparsity.
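The common recipe in the abstracts above, deliberate randomized subsampling followed by transform-domain sparsity promotion, fits in a short 1-D toy example. Below is an illustrative ISTA loop that recovers a signal with a few nonzero DCT coefficients from a random subset of its samples; it is only meant to show the mechanism (subsampling artifacts behave like noise that soft thresholding removes) and does not mirror the group's actual codes or problem sizes. All names and sizes here are my own choices.

import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(3)
n, m, k = 512, 128, 10

# Ground truth: k nonzero DCT coefficients, i.e. sparsity in the transform domain.
s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = 5.0 * rng.normal(size=k)
x_true = idct(s_true, norm="ortho")

# Deliberate randomized subsampling: keep only m of the n samples.
rows = np.sort(rng.choice(n, m, replace=False))
y = x_true[rows]

# ISTA for  min_s  0.5 * || y - (idct(s))[rows] ||^2 + lam * ||s||_1
lam, s = 0.01, np.zeros(n)
for _ in range(500):
    r = np.zeros(n)
    r[rows] = y - idct(s, norm="ortho")[rows]        # residual, zero-filled elsewhere
    s = s + dct(r, norm="ortho")                     # gradient step (operator norm <= 1)
    s = np.sign(s) * np.maximum(np.abs(s) - lam, 0)  # sparsity promotion (soft threshold)

err = np.linalg.norm(idct(s, norm="ortho") - x_true) / np.linalg.norm(x_true)
print("relative recovery error:", err)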

Credit: ESA/Herschel/HssO Consortium
Credit: NASA/JPL-Caltech/UMD
