
Monday, July 05, 2010

CS: BP relief well, RIP constant in complex setting, Optimal Coded Sampling for Temporal Super-Resolution, Holographic Microscopy, a post-doc


In response to my plea for monitoring the Deepwater Horizon well during the drilling of the relief well, Felix Herrmann reacted to one of those entries by pointing to one of his recent papers: Randomized sampling and sparsity: getting more information from fewer samples. The abstract reads:
Many seismic exploration techniques rely on the collection of massive data volumes that are subsequently mined for information during processing. While this approach has been extremely successful in the past, current efforts toward higher-resolution images in increasingly complicated regions of the Earth continue to reveal fundamental shortcomings in our workflows. Chief amongst these is the so-called "curse of dimensionality" exemplified by Nyquist's sampling criterion, which disproportionately strains current acquisition and processing systems as the size and desired resolution of our survey areas continue to increase. In this paper, we offer an alternative sampling method leveraging recent insights from compressive sensing towards seismic acquisition and processing for data that are traditionally considered to be undersampled. The main outcome of this approach is a new technology where acquisition and processing related costs are no longer determined by overly stringent sampling criteria, such as Nyquist. At the heart of our approach lies randomized incoherent sampling that breaks subsampling related interferences by turning them into harmless noise, which we subsequently remove by promoting transform-domain sparsity. Now, costs no longer grow with resolution and dimensionality of the survey area, but instead depend on transform-domain sparsity only. Our contribution is twofold. First, we demonstrate by means of carefully designed numerical experiments that compressive sensing can successfully be adapted to seismic acquisition. Second, we show that accurate recovery can be accomplished for compressively sampled data volumes whose sizes exceed the size of conventional transform-domain data volumes by only a small factor. Because compressive sensing combines transformation and encoding by a single linear encoding step, this technology is directly applicable to acquisition and to dimensionality reduction during processing. In either case, sampling, storage, and processing costs scale with transform-domain sparsity. We illustrate this principle by means of a number of case studies.
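To get a quick feel for the mechanism described in the abstract, here is a small numerical sketch of mine (not Felix's code, and a 1-D toy rather than seismic data): a signal that is sparse in the Fourier domain should be recoverable from a random subset of its time samples by iterative soft thresholding, the subsampling artifacts behaving like noise that the sparsity-promoting iterations clean up.

```python
# Toy sketch (not the authors' code): recover a signal that is sparse in the
# Fourier domain from a small random subset of its time samples by iterative
# soft thresholding. Random subsampling turns aliasing into noise-like
# interference that the sparsity-promoting iterations remove.
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 512, 8, 128                        # length, sparsity, number of samples

# Signal with k active Fourier coefficients of magnitude between 1 and 2.
coeffs = np.zeros(n, dtype=complex)
support = rng.choice(n, k, replace=False)
coeffs[support] = (1 + rng.random(k)) * np.exp(2j * np.pi * rng.random(k))
signal = np.fft.ifft(coeffs) * np.sqrt(n)    # unitary synthesis

# Randomized incoherent acquisition: keep only m of the n time samples.
picks = np.sort(rng.choice(n, m, replace=False))
y = signal[picks]

# Iterative soft thresholding (ISTA) on the Fourier coefficients.
x = np.zeros(n, dtype=complex)
lam = 0.05
for _ in range(200):
    residual = np.zeros(n, dtype=complex)
    residual[picks] = y - (np.fft.ifft(x) * np.sqrt(n))[picks]
    x = x + np.fft.fft(residual) / np.sqrt(n)                      # gradient step
    x = np.exp(1j * np.angle(x)) * np.maximum(np.abs(x) - lam, 0)  # shrinkage

print("support recovered:", set(np.flatnonzero(np.abs(x) > 0.5)) == set(support))
print("relative error   :", np.linalg.norm(x - coeffs) / np.linalg.norm(coeffs))
```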

The attendant presentation Sub-Nyquist sampling and sparsity: getting more information from fewer samples was mentioned here earlier. Thanks Felix. To see how important sensing will be in the drilling of the relief well, you really want to watch this video from BP that explains it pretty well. I note their use of the wireline (a sensor-based polling capability) at a different time than when the drilling is taking place, i.e. little information is used from the drilling vibrations themselves. Again, the issue is really trying to figure out, from the sea floor, whether the well itself is deteriorating.

Waheed Bajwa just sent me the following:

Dear Igor:

... The reason I am writing to you is to share two things with you.

First, as a follower of your blog, I read with great interest your communication with Alex Dimakis on providing conditions that are checkable in polynomial time for compressed sensing related problems. In this regard, I wanted to share with you a recent preprint of ours (B., Calderbank, and Jafarpour) on model selection (sparsity pattern recovery) using one-step thresholding. The paper is an effort towards the goal of obtaining verifiable sufficient conditions that don't have the "square-root bottleneck" and I hope that you and the readers of your blog would provide some valuable feedback in this regard. The preprint is available on arXiv (http://arxiv.org/pdf/1006.0719).

The second thing I wanted to point out to you (and through you, the readers of your blog) is a common mistake that I have seen in a number of recent papers on compressed sensing regarding the RIP constant condition in the complex setting. Specifically, these papers state the condition \delta_{2k} < sqrt(2) - 1 for the complex case by citing E. Candes; the cited paper, however, never claims that \delta_{2k} < sqrt(2) - 1 also holds in the complex case. The polarization identity used in that paper is for the real case, and if one uses the complex variant of this identity (see (2.26) in http://www.math.princeton.edu/~wbajwa/pubs/bajwa_dissertation.pdf) then the condition comes out to be \delta_{2k} < 1/3. While this is a minute misrepresentation of the mathematical fact, it is a misrepresentation nonetheless and I hope that people will stop making this mistake in the future.

Thanks!

Best,
Waheed
Thanks Waheed for the clarification.
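For reference, the two polarization identities at play are, with the inner product taken linear in its first argument:

<x, y> = (1/4) ( ||x + y||^2 - ||x - y||^2 )            (real case)

<x, y> = (1/4) sum_{j=0}^{3} i^j ||x + i^j y||^2        (complex case)

The extra terms in the complex identity loosen the bound one gets on |<\Phi x, \Phi x'>| for disjointly supported sparse vectors, which, if I follow Waheed's point, is where the admissible constant shrinks from sqrt(2) - 1 to 1/3.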

The new version of that paper Waheed mentions is: Why Gabor Frames? Two Fundamental Measures of Coherence and Their Role in Model Selection by Waheed Bajwa, Robert Calderbank, and Sina Jafarpour. The abstract reads:
The problem of model selection arises in a number of contexts, such as subset selection in linear regression, estimation of structures in graphical models, and signal denoising. This paper studies non-asymptotic model selection for the general case of arbitrary (random or deterministic) design matrices and arbitrary nonzero entries of the signal. In this regard, it generalizes the notion of incoherence in the existing literature on model selection and introduces two fundamental measures of coherence—termed as the worst-case coherence and the average coherence—among the columns of a design matrix. It utilizes these two measures of coherence to provide an in-depth analysis of a simple, model-order agnostic one-step thresholding (OST) algorithm for model selection and proves that OST is feasible for exact as well as partial model selection as long as the design matrix obeys an easily verifiable property, which is termed as the coherence property. One of the key insights offered by the ensuing analysis in this regard is that OST can successfully carry out model selection even when methods based on convex optimization such as the lasso fail due to the rank deficiency of the submatrices of the design matrix. In addition, the paper establishes that if the design matrix has reasonably small worst-case and average coherence then OST performs near-optimally when either (i) the energy of any nonzero entry of the signal is close to the average signal energy per nonzero entry or (ii) the signal-to-noise ratio in the measurement system is not too high. Finally, two other key contributions of the paper are that (i) it provides bounds on the average coherence of Gaussian matrices and Gabor frames, and (ii) it extends the results on model selection using OST to low-complexity, model-order agnostic recovery of sparse signals with arbitrary nonzero entries. In particular, this part of the analysis in the paper implies that an Alltop Gabor frame together with OST can successfully carry out model selection and recovery of sparse signals irrespective of the phases of the nonzero entries even if the number of nonzero entries scales almost linearly with the number of rows of the Alltop Gabor frame.
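For readers who want a concrete picture of how simple the procedure is, here is a toy sketch of one-step thresholding (mine, not the authors'; in particular the threshold below is an illustrative choice for this toy setup, not the one derived from the coherence property in the paper):

```python
# Toy sketch of one-step thresholding (OST) for model selection: correlate the
# observations once with the columns of the design matrix and keep the columns
# whose correlation magnitude exceeds a threshold. No iterations, no convex
# program. The threshold below is an illustrative choice for this toy setup.
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 1024, 4096, 5                      # observations, variables, sparsity

X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)               # unit-norm columns
beta = np.zeros(p)
support = rng.choice(p, k, replace=False)
beta[support] = rng.choice([-1.0, 1.0], k) * (2.0 + rng.random(k))
y = X @ beta + 0.05 * rng.standard_normal(n)

z = X.T @ y                                  # the "one step": a single correlation
selected = np.flatnonzero(np.abs(z) > 1.2)   # hard threshold

print("true support:", np.sort(support))
print("selected    :", np.sort(selected))
```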
I recently found two papers of interest: Optimal Coded Sampling for Temporal Super-Resolution by Amit Agrawal, Mohit Gupta, Ashok Veeraraghavan and Srinivasa Narasimhan. The abstract reads:
Conventional low frame rate cameras result in blur and/or aliasing in images while capturing fast dynamic events. Multiple low speed cameras have been used previously with staggered sampling to increase the temporal resolution. However, previous approaches are inefficient: they either use small integration time for each camera, which does not provide a light benefit, or use large integration time in a way that requires solving a big ill-posed linear system. We propose coded sampling that addresses these issues: using N cameras it allows N times temporal super-resolution while allowing N/2 times more light compared to an equivalent high speed camera. In addition, it results in a well-posed linear system which can be solved independently for each frame, avoiding reconstruction artifacts and significantly reducing the computational time and memory. Our proposed sampling uses an optimal multiplexing code considering additive Gaussian noise to achieve the maximum possible SNR in the recovered video. We show how to implement coded sampling on off-the-shelf machine vision cameras. We also propose a new class of invertible codes that allow continuous blur in captured frames, leading to an easier hardware implementation.
and Compressed Sensing for Digital Holographic Microscopy by Marcio M. Marim, Michael Atlan, Elsa D. Angelini, Jean-Christophe Olivo-Marin. The abstract reads:
This paper describes an original microscopy imaging framework successfully employing Compressed Sensing for digital holography. Our approach combines a sparsity minimization algorithm to reconstruct the image and digital holography to perform quadrature-resolved random measurements of an optical field in a diffraction plane. Compressed Sensing is a recent theory establishing that near-exact recovery of an unknown sparse signal is possible from a small number of non-structured measurements. We demonstrate with practical experiments on holographic microscopy images of cerebral blood flow that our CS approach enables optimal reconstruction from a very limited number of measurements while being robust to high noise levels.
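Regarding the first paper, here is a toy numerical sketch of mine of the multiplexing idea (not the authors' implementation; the S-matrix below is simply one classical multiplexing code for additive Gaussian noise): N slow cameras each integrate a coded subset of the N sub-frames of an output frame, and the sub-frames are recovered frame by frame by inverting a small, well-conditioned code matrix, with a visible noise advantage over staggered short exposures.

```python
# Toy sketch (not the authors' implementation) of coded temporal multiplexing:
# N slow cameras each integrate a coded subset of the N sub-frames of one
# output frame; the sub-frames are recovered per frame by inverting a small,
# well-conditioned code matrix. The S-matrix is a classical multiplexing code
# under additive Gaussian noise.
import numpy as np
from scipy.linalg import hadamard

N = 7                                    # temporal super-resolution factor
H = hadamard(N + 1)                      # Sylvester Hadamard matrix, order 8
S = (1 - H[1:, 1:]) // 2                 # 7x7 binary S-matrix, 4 ones per row

rng = np.random.default_rng(2)
pixels = 1000
subframes = rng.random((N, pixels))      # "true" high-speed intensities
sigma = 0.02                             # additive read noise per captured frame

def capture_and_recover(code):
    y = code @ subframes + sigma * rng.standard_normal((N, pixels))
    return np.linalg.solve(code, y)      # small well-posed system per frame

coded = capture_and_recover(S.astype(float))   # coded long exposures
staggered = capture_and_recover(np.eye(N))     # short staggered exposures

for name, rec in [("coded (S-matrix)", coded), ("staggered", staggered)]:
    print(name, "RMSE:", np.sqrt(np.mean((rec - subframes) ** 2)))
```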
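And for the second paper, a toy numerical analogue of mine (it has nothing to do with the actual optical setup): a spatially sparse image is recovered from a random subset of its 2-D Fourier coefficients, a crude stand-in for quadrature-resolved measurements in a diffraction plane, again by iterative soft thresholding.

```python
# Toy numerical analogue (not the authors' optical setup): a spatially sparse
# image is recovered from ~25% of its 2-D Fourier coefficients, which play the
# role of complex-valued measurements in a diffraction plane, via iterative
# soft thresholding in the image domain.
import numpy as np

rng = np.random.default_rng(3)
n, k = 64, 40                                 # image side, number of bright pixels
img = np.zeros((n, n))
img.flat[rng.choice(n * n, k, replace=False)] = 1.0 + rng.random(k)

mask = rng.random((n, n)) < 0.25              # keep ~25% of Fourier coefficients
y = np.fft.fft2(img, norm="ortho") * mask     # "diffraction-plane" measurements

x = np.zeros((n, n))
lam = 0.01
for _ in range(300):
    grad = np.fft.ifft2((np.fft.fft2(x, norm="ortho") - y) * mask,
                        norm="ortho").real
    x = x - grad                                         # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)    # soft threshold

print("relative error:", np.linalg.norm(x - img) / np.linalg.norm(img))
```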

Finally, Jalal Fadili sent me the following Post-doc announcement:
Post-doc position: Adaptive penalized estimators with structured sparsity
Applications are invited for a one-year post-doc on adaptive penalized estimators with structured sparsity, starting from September 2010. This post-doc is funded by the ANR (the French science foundation) project NatImages on mathematical modeling of natural images and textures, see: http://www.morphologicaldiversity.org/natimages/
Contact
To apply, candidates should address a detailed CV and other relevant information (e.g. reference letters) by email or by post to the principal investigator. Questions about the subject or the position should be addressed to :
Principal investigator: Jalal Fadili,
GREYC UMR CNRS 6072
ENSICAEN, 6, Bd du Maréchal Juin
14050 Caen CEDEX, France
Email : Jalal.Fadili@greyc.ensicaen.fr
URL : http://www.greyc.ensicaen.fr/~jfadili
Co-investigator: Christophe Chesneau,
LMNO UMR CNRS 6139
Université de Caen Basse-Normandie, Campus II
14032 Caen CEDEX, France
Email : Christophe.Chesneau@math.unicaen.fr
URL : http://www.chesneau-stat.com
Project description
Structured sparsity goes beyond traditional sparsity by exploiting a priori information on the structure of the sought-after signal (e.g. group or block structure). This is for example the case for natural images, whose coefficients in an appropriate sparse representation domain tend to cluster, hence exhibiting a strong joint sparsity structure. In the very recent literature, estimators exploiting structured sparsity have been proposed and their performance and properties investigated. However, adaptivity in such estimators remains an unexplored area. The main focus of this post-doc will be structured sparsity for a wide variety of problems including, but not limited to, regression and inverse problems. Some specific milestones of this work will be:
• For structured sparsity, adaptive and automatic choice of the model hyperparameters, e.g. block size, position and even anisotropy for group-structured sparsity.
• Study the theoretical performance of such estimators and their guarantees for both fixed and random dictionaries.
• There will also be an important part on optimization schemes for such models, where the objectives are generally non-smooth.
Candidate profile and pre-requisites
Candidates must hold a PhD in applied mathematics, statistics, or signal and image processing. A strong background in one or several of the following areas is a must: functional statistics, applied harmonic analysis, non-smooth optimisation (convex and non-convex). Good programming skills are also required, typically in Matlab, R, C or C++.
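For readers new to structured sparsity, the central building block behind these group/block penalties is block soft-thresholding, i.e. the proximal operator of the group l2 penalty, which shrinks whole blocks of coefficients at once instead of individual entries. A minimal sketch (mine, not the project's):

```python
# Minimal illustration of block (group) soft-thresholding: the proximal
# operator of lam * sum_g ||x_g||_2, which zeroes or shrinks entire groups.
import numpy as np

def block_soft_threshold(x, groups, lam):
    """Shrink each group of coefficients toward zero by lam in l2 norm."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * x[g]
    return out

x = np.array([3.0, 2.5, 0.1, -0.2, 0.05, 4.0, -3.5, 0.3])
groups = [np.arange(0, 4), np.arange(4, 8)]     # two non-overlapping blocks
print(block_soft_threshold(x, groups, lam=1.0))
```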
