Monday, July 20, 2009

CS: MRI, Oil exploration, SLAM radio, Passive radar, OFDM, Downlink scheduling


I am doing some catch-up with the Rice Compressive Sensing repository; here are the papers I have not covered before:

Practical Nonconvex Compressive Sensing Reconstruction of Highly-Accelerated 3D Parallel MR Angiograms by Joshua Trzasko, Clifton Haider, Armando Manduca. The abstract reads:

In this work, a nonconvex Compressive Sensing model targeted at true 3D reconstructions of highly undersampled MR angiograms acquired with parallel imaging is proposed. When combined with the Max-CAPR acquisition sequence, it is demonstrated that high quality, non-view-shared, 3D images of the contrast-filled neurovasculature can be acquired (at acceleration factors exceeding the number of coils) in just over 2 seconds and reconstructed in as few as 14 minutes on a high-performance workstation.


Here is a somewhat important paper, as it examines ℓ0 convergence.

Relaxed Conditions for Sparse Signal Recovery with General Concave Priors by Joshua Trzasko, Armando Manduca. The abstract reads:

The emerging theory of Compressive or Compressed Sensing challenges the convention of modern digital signal processing by establishing that exact signal reconstruction is possible for many problems where the sampling rate falls well below the Nyquist limit. Following the landmark works of Candes et al. and Donoho on the performance of ℓ1-minimization models for signal reconstruction, several authors demonstrated that certain nonconvex reconstruction models consistently outperform the convex ℓ1-model in practice at very low sampling rates despite the fact that no global minimum can be theoretically guaranteed. Nevertheless, there has been little theoretical investigation into the performance of these nonconvex models. In this work, a notion of weak signal recoverability is introduced and the performance of nonconvex reconstruction models employing general concave metric priors is investigated under this model. The sufficient conditions for establishing weak signal recoverability are shown to substantially relax as the prior functional is parameterized to more closely resemble the targeted ℓ0-model, offering new insight into the empirical performance of this general class of reconstruction methods. Examples of relaxation trends are shown for several different prior models.
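To give a flavor of the nonconvex models discussed here, the smoothed ℓp prior (p &lt; 1) can be attacked with iteratively reweighted least squares in the spirit of Chartrand's work. This is a generic sketch of that idea with made-up toy dimensions, not the authors' code:

```python
import numpy as np

def irls_lp(A, b, p=0.5, iters=100):
    """Sketch of iteratively reweighted least squares for the nonconvex
    problem min ||x||_p^p subject to Ax = b, using an annealed smoothing
    term eps that keeps the concave prior differentiable at zero."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # minimum-l2 starting point
    eps = 1.0
    for _ in range(iters):
        w = (x**2 + eps) ** (1 - p / 2)        # W = diag(w), w_i ~ |x_i|^(2-p)
        AW = A * w                             # A @ W via broadcasting
        x = w * (A.T @ np.linalg.solve(AW @ A.T, b))  # weighted min-norm step
        eps = max(0.9 * eps, 1e-9)             # slowly tighten toward l_p
    return x

# toy demo: recover a 4-sparse vector from 50 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x0 = np.zeros(100)
x0[[7, 23, 51, 90]] = [1.5, -2.0, 0.8, 1.1]
x_hat = irls_lp(A, A @ x0)
```

The annealing schedule on eps is the usual heuristic that lets the iteration escape the poor local minima a fixed nonconvex objective would otherwise trap it in.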

Here is a new way of performing a SLAM-like method using radio waves. This is interesting, but I wish there were more detail on the strategies used by the emitter and the receiver. I wonder if we could combine this sparse sensing approach with a visual SLAM technique. Here it is: Compressive Cooperative Sensing and Mapping in Mobile Networks by Yasamin Mostofi and Pradeep Sen. The abstract reads:
In this paper we consider a mobile cooperative network that is tasked with building a map of the spatial variations of a parameter of interest, such as an obstacle map or an aerial map. We propose a new framework that allows the nodes to build a map of the parameter of interest with a small number of measurements. By using the recent results in the area of compressive sensing, we show how the nodes can exploit the sparse representation of the parameter of interest in the transform domain in order to build a map with minimal sensing. The proposed work allows the nodes to efficiently map the areas that are not sensed directly. To illustrate the performance of the proposed framework, we show how the nodes can build an aerial map or a map of obstacles with sparse sensing. We furthermore show how our proposed framework enables a novel non-invasive approach to mapping obstacles by using wireless channel measurements.


Felix Herrmann and his group at UBC produced a new series of papers on inverse problems as found in the oil industry: Compressive imaging by wavefield inversion with group sparsity by Felix Herrmann. The abstract reads:

Migration relies on multi-dimensional correlations between source- and residual wavefields. These multi-dimensional correlations are computationally expensive because they involve operations with explicit and full matrices that contain both wavefields. By leveraging recent insights from compressive sampling, we present an alternative method where linear correlation-based imaging is replaced by imaging via multidimensional deconvolutions of compressibly sampled wavefields. Even though this approach goes at the expense of having to solve a sparsity-promotion recovery program for the image, our wavefield inversion approach has the advantage of reducing the system size in accordance to transform-domain sparsity of the image. Because seismic images also exhibit a focusing of the energy towards zero offset, the compressive-wavefield inversion itself is carried out using a recent extension of one-norm solver technology towards matrix-valued problems. These so-called hybrid (1, 2)-norm solvers allow us to penalize pre-stack energy away from zero offset while exploiting joint sparsity amongst near-offset images. Contrary to earlier work to reduce modeling and imaging costs through random phase-encoded sources, our method compressively samples wavefields in model space. This approach has several advantages amongst which improved system-size reduction, and more flexibility during subsequent inversions for subsurface properties.


Compressive-wavefield simulations by Felix Herrmann, Yogi Erlangga and Tim Lin. The abstract reads:
Full-waveform inversion’s high demand on computational resources forms, along with the non-uniqueness problem, the major impediment withstanding its widespread use on industrial-size datasets. Turning modeling and inversion into a compressive sensing problem—where simulated data are recovered from a relatively small number of independent simultaneous sources—can effectively mitigate this high-cost impediment. The key is in showing that we can design a sub-sampling operator that commutes with the time-harmonic Helmholtz system. As in compressive sensing, this leads to a reduction in simulation cost. Moreover, this reduction is commensurate with the transform-domain sparsity of the solution, implying that computational costs are no longer determined by the size of the discretization but by transform-domain sparsity of the solution of the CS problem which forms our data. The combination of this sub-sampling strategy with our recent work on implicit solvers for the Helmholtz equation provides a viable alternative to full-waveform inversion schemes based on explicit finite-difference methods.


Sub-Nyquist sampling and sparsity: getting more information from fewer samples by Felix Herrmann. The abstract reads:
Seismic exploration relies on the collection of massive data volumes that are subsequently mined for information during seismic processing. While this approach has been extremely successful in the past, the current trend of incessantly pushing for higher quality images in increasingly complicated regions of the Earth continues to reveal fundamental shortcomings in our workflows to handle massive high-dimensional data volumes. Two causes can be identified as the main culprits responsible for this barrier. First, there is the so-called “curse of dimensionality” exemplified by Nyquist’s sampling criterion, which puts disproportionate strain on current acquisition and processing systems as the size and desired resolution of our survey areas continues to increase. Secondly, there is the recent “departure from Moore’s law” that forces us to lower our expectations to compute ourselves out of this curse of dimensionality. In this paper, we offer a way out of this situation by a deliberate randomized subsampling combined with structure-exploiting transform-domain sparsity promotion. Our approach is successful because it reduces the size of seismic data volumes without loss of information. Because of this size reduction both impediments are removed and we end up with a new technology where the costs of acquisition and processing are no longer dictated by the size of the acquisition but by the transform-domain sparsity of the end-product after processing.

Higher dimensional blue-noise sampling schemes for curvelet-based seismic data recovery by Gang Tang, Reza Shahidi, Felix Herrmann, Jianwei Ma. The abstract reads:
In combination with compressive sensing, a successful reconstruction scheme called Curvelet-based Recovery by Sparsity promoting Inversion (CRSI) has been developed, and has proven to be useful for seismic data processing. One of the most important issues for CRSI is the sampling scheme, which can greatly affect the quality of reconstruction. Unlike usual regular undersampling, stochastic sampling can convert aliases to easy-to-eliminate noise. Some stochastic sampling methods have been developed for CRSI, e.g. jittered sampling, however most have only been applied to 1D sampling along a line. Seismic datasets are usually higher dimensional and very large, thus it is desirable and often necessary to develop higher dimensional sampling methods to deal with these data. For dimensions higher than one, few results have been reported, except uniform random sampling, which does not perform well. In the present paper, we explore 2D sampling methodologies for curvelet-based reconstruction, possessing sampling spectra with blue noise characteristics, such as Poisson Disk sampling, Farthest Point Sampling, and the 2D extension of jittered sampling. These sampling methods are shown to lead to better recovery and results are compared to the other more traditional sampling protocols.
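As a concrete illustration of one of the schemes mentioned above, 2D jittered sampling simply places one sample uniformly at random inside every cell of a coarse grid: the cell structure bounds the largest gap between samples, which is what gives the sampling spectrum its blue-noise character. A minimal sketch of my own (not the authors' code):

```python
import numpy as np

def jittered_2d(nx, ny, cell, rng=None):
    """2D jittered undersampling: pick one point, uniformly at random,
    inside each cell x cell block of an nx-by-ny grid."""
    rng = np.random.default_rng() if rng is None else rng
    mask = np.zeros((nx, ny), dtype=bool)
    for i in range(0, nx, cell):
        for j in range(0, ny, cell):
            mask[i + rng.integers(min(cell, nx - i)),
                 j + rng.integers(min(cell, ny - j))] = True
    return mask

# a 1/16 undersampling mask for a 64 x 64 acquisition grid
mask = jittered_2d(64, 64, cell=4, rng=np.random.default_rng(0))
```

Unlike uniform random sampling, this construction cannot leave large unsampled holes, which is exactly the failure mode that hurts curvelet-based recovery.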


Unified compressive sensing framework for simultaneous acquisition with primary estimation by Tim Lin, Felix Herrmann. The abstract reads:
The central promise of simultaneous acquisition is a vastly improved crew efficiency during acquisition at the cost of additional post-processing to obtain conventional source-separated data volumes. Using recent theories from the field of compressive sensing, we present a way to systematically model the effects of simultaneous acquisition. Our formulation forms a new framework in the study of acquisition design and naturally leads to an inversion-based approach for the separation of shot records. Furthermore, we show how other inversion-based methods, such as a recently proposed method from van Groenestijn and Verschuur (2009) for primary estimation, can be processed together with the demultiplexing problem to achieve a better result compared to a separate treatment of these problems.


Sparse Channel Estimation for Multicarrier Underwater Acoustic Communications: From Subspace Methods to Compressed Sensing by Christian R. Berger, Shengli Zhou, Peter Willett. The abstract reads:
In this paper, we present various channel estimators that exploit the channel sparsity in a multicarrier underwater acoustic system, including subspace algorithms from the array processing literature, namely root-MUSIC and ESPRIT, and recent compressed sensing algorithms in the form of Orthogonal Matching Pursuit (OMP) and Basis Pursuit (BP). Numerical simulation and experimental data of an OFDM block-by-block receiver are used to evaluate the proposed algorithms in comparison to the conventional least-squares (LS) channel estimator. We observe that subspace methods can tolerate small to moderate Doppler effects, and outperform the LS approach when the channel is indeed sparse. On the other hand, compressed sensing algorithms uniformly outperform the LS and subspace methods. Coupled with a channel equalizer mitigating intercarrier interference, the compressed sensing algorithms can handle channels with significant Doppler spread.

Sparse Channel Estimation for OFDM: Over-Complete Dictionaries and Super-Resolution Methods by Christian R. Berger, Shengli Zhou, Peter Willett. The abstract reads:
Wireless multipath channels can often be characterized as sparse, i.e., the number of significant paths is small even when the channel delay spread is large. This can be taken advantage of when estimating the unknown channel frequency response using pilot assisted modulation. Other work has largely focused on the greedy orthogonal matching pursuit (OMP) algorithm, using a dictionary based on an equivalent finite impulse response filter to model the channel. This is not necessarily realistic, as the physical nature of the channel is continuous in time, while the equivalent filter taps are based on baseband sampling. In this paper, we consider sparse channel estimation using a continuous time path-based channel model. This can be linked to the direction finding problem from the array processing literature and solved using the well-known root-MUSIC and ESPRIT algorithms, which have no formal time resolution. In addition, we show that a dictionary with finer time resolution considerably improves the performance of OMP and the related Basis Pursuit (BP) algorithm.
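The over-complete-dictionary point is easy to demonstrate: if path delays are continuous, a dictionary of delay atoms sampled finer than the baseband tap spacing lets OMP land on the physical delay instead of smearing energy over neighboring taps. A toy sketch of my own, with hypothetical pilot and channel parameters (not the authors' setup):

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily pick the dictionary atom
    most correlated with the residual, re-fitting by least squares."""
    resid, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.conj().T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ coef
    return support, coef

# hypothetical setup: 64 pilot tones, delay grid oversampled 8x so that
# off-the-integer-grid path delays still fall on a dictionary atom
n_pilots, oversample = 64, 8
k = np.arange(n_pilots)[:, None]
taus = np.arange(0, 16, 1 / oversample)[None, :]   # candidate delays (samples)
A = np.exp(-2j * np.pi * k * taus / n_pilots)      # over-complete delay atoms
A /= np.linalg.norm(A, axis=0)

true_delays, gains = [2.25, 7.5], [1.0, 0.6]       # a 2-path sparse channel
y = sum(g * np.exp(-2j * np.pi * k[:, 0] * t / n_pilots)
        for g, t in zip(gains, true_delays))

support, _ = omp(A, y, sparsity=2)
est_delays = sorted(taus[0, i] for i in support)
```

With an integer-spaced dictionary the 2.25-sample path would leak across several taps; on the 8x-oversampled grid both delays are recovered exactly.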


Signal Extraction Using Compressed Sensing for Passive Radar with OFDM Signals by Christian R. Berger, Shengli Zhou, Peter Willett. The abstract reads:
Passive radar is a concept where possibly multiple non-cooperative illuminators are used in a multi-static setup. A freely available signal, like radio or television, is decoded and used to identify moving airborne targets based on their Doppler shift. New digital signals, like Digital Audio/Video Broadcast (DAB/DVB), are excellent candidates for this scheme, as they are widely available, can be easily decoded, and employ orthogonal frequency division multiplex (OFDM), a multicarrier transmission scheme based on channel equalization in the frequency domain using the Fast Fourier Transform (FFT). After successfully decoding the digital broadcast, the channel estimates can be used to estimate targets’ bi-static range and range-rate by separating different multi-path components by their delay and Doppler shift. While previous schemes have simply projected available measurements onto possible Doppler shifts, we employ Compressed Sensing, a type of sparse estimation. This way we can enhance separation between targets, and by-pass additional signal processing necessary to determine the actual target within a “blotch” of signal energy smeared across different delays and Doppler frequencies.


Compressed Sensing for OFDM/MIMO Radar by Christian R. Berger, Shengli Zhou, Peter Willett, Bruno Demissie, and Jorg Heckenbach. The abstract reads:
In passive radar, two main challenges are: mitigating the direct blast, since the illuminators broadcast continuously, and achieving a large enough integration gain to detect targets. While the first has to be solved in part in the analog part of the processing chain, due to the huge difference of signal strength between the direct blast and weak target reflections, the second is about combining enough signal efficiently, while not sacrificing too much performance. When combining this setup with digital multicarrier waveforms like orthogonal frequency division multiplex (OFDM) in digital audio/video broadcast (DAB/DVB), this problem can be seen to be a version of multiple-input multiple output (MIMO) radar. We start with an existing approach, based on efficient fast Fourier transform (FFT) operation to detect target signatures, and show how this approach is related to a standard matched filter approach based on a piece-wise constant approximation of the phase rotation caused by Doppler shift. We then suggest two more applicable algorithms, one based on subspace processing and one based on sparse estimation. We compare these various approaches based on a detailed simulation scenario with two closing targets and experimental data recorded from a DAB network in Germany.

Downlink Scheduling Using Compressed Sensing by Sibi Raj Bhaskaran, Linda Davis, Alex Grant, Stephen Hanly, Paul Tune. The abstract reads:

We propose a novel access technique for cellular downlink resource sharing. In particular, a distributed self selection procedure is combined with the technique of compressed sensing to identify a set of users who are getting simultaneous access to the downlink broadcast channel. The performance of the proposed method is analyzed, and its suitability as an alternate access mechanism is argued.
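The identification step maps nicely onto sparse recovery: only a few users self-select, so the base station sees a superposition of a few known signatures and must find which ones. A toy illustration of my own, with hypothetical numbers (not the authors' scheme):

```python
import numpy as np

# each of n_users gets a random signature; the few self-selected users
# transmit theirs simultaneously and the base station recovers the
# (sparse) active set from the superposition
rng = np.random.default_rng(1)
n_users, sig_len = 128, 64
S = rng.standard_normal((sig_len, n_users)) / np.sqrt(sig_len)

active = [5, 40, 99]                      # hypothetical self-selected users
y = S[:, active].sum(axis=1)              # superposed access requests

# greedy (OMP-style) recovery of the active set
found, resid = [], y.copy()
for _ in range(len(active)):
    found.append(int(np.argmax(np.abs(S.T @ resid))))
    coef, *_ = np.linalg.lstsq(S[:, found], y, rcond=None)
    resid = y - S[:, found] @ coef
```

The appeal is that 64 received samples suffice to sort out which of 128 users spoke up, because the activity vector is known to be sparse.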


That's all for today.
