Thursday, November 22, 2012

Approximate Message Passing and Compressive Sensing

Happy Thanksgiving, y'all!
The Approximate Message Passing algorithm, an instance of belief propagation in graphical models, has taken the field of compressive sensing by storm. This can be witnessed in the growing number of implementations listed at the end of the reconstruction solvers section of the compressive sensing big picture page. It is not surprising: the algorithm is very fast, and compressive sensing is really in need of fast solvers. With this in mind, and because of its simplicity, several folks are implementing it as close to the hardware as possible, on GPUs or as FPGA/VLSI designs [1,2], while others are using it to tackle problems that could barely be handled with previous-generation solvers [3,4,5]. All this is well and good, but reading the more applied papers [3,4], Felix reminds us of some of the caveats:
Evidently, this promise comes with the caveat that message-passing algorithms are specifically designed to solve sparse-recovery problems for Gaussian matrices while methods such as SPGL1 are versatile and solve BP for any A and b as long as one is willing to spend a sufficient number of iterations to bring the residual down.
I also wonder if AMP will gain any additional insights from advances beyond belief propagation?
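For readers who have not seen the algorithm written out, here is a minimal sketch of the AMP iteration for the sparse-recovery problem y = Ax + noise, with componentwise soft thresholding as the denoiser. This is only an illustration under the usual assumptions (an i.i.d. Gaussian-like A with roughly unit-norm columns); the threshold rule and iteration count below are crude, hand-picked choices of mine, not taken from any of the papers listed here.

import numpy as np

def soft_threshold(v, tau):
    # Componentwise soft thresholding, the denoiser of basic AMP.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp(A, y, n_iter=50):
    # Minimal AMP sketch for y = A x + noise.
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        # "Pseudo-data": current estimate plus the matched filter of the residual.
        pseudo = x + A.T @ z
        # Crude threshold: a multiple of the estimated effective noise level.
        tau = 2.5 * np.linalg.norm(z) / np.sqrt(m)
        x = soft_threshold(pseudo, tau)
        # Residual with the Onsager correction ("message term"); for soft
        # thresholding, (1/delta) * <eta'> reduces to (# nonzeros in x) / m.
        z = y - A @ x + (np.count_nonzero(x) / m) * z
    return x

# Toy usage: recover a 15-sparse vector of length 1000 from 250 measurements.
rng = np.random.default_rng(0)
n, m, k = 1000, 250, 15
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
print(np.linalg.norm(amp(A, y) - x_true) / np.linalg.norm(x_true))

Note that the per-iteration cost is dominated by the two matrix-vector products A x and A.T z, which is exactly what the implementation papers [1,2] below are concerned with.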
[1] Implementing the Approximate Message Passing (AMP) Algorithm on a GPU by Lukas Cavigelli, Pascal Alexander Hager. The abstract reads:
We consider the recovery of sparse signals from a limited number of noisy observations using the AMP algorithm. In this paper, we present two fast implementations of this algorithm that run on a CPU and on a GPU and which can either be used for arbitrary unstructured measurement matrices or take advantage of the structure of a DCT matrix to give an even faster implementation. Our results show that for small problem sizes, the CPU based implementation is the fastest, but for large problem sizes, a GPU based implementation has the highest throughput.
[2] VLSI Design of Approximate Message Passing for Signal Restoration and Compressive Sensing by Patrick Maechler, Christoph Studer, David E. Bellasi, Arian Maleki, Andreas Burg, Naresh R. Shanbhag, Richard G. Baraniuk. The abstract reads:
Sparse signal recovery finds use in a variety of practical applications, such as signal and image restoration and the recovery of signals acquired by compressive sensing. In this paper, we present two generic VLSI architectures that implement the approximate message passing (AMP) algorithm for sparse signal recovery. The first architecture, referred to as AMP-M, employs parallel multiply-accumulate units and is suitable for recovery problems based on unstructured (e.g., random) matrices. The second architecture, referred to as AMP-T, takes advantage of fast linear transforms, which arise in many real-world applications. To demonstrate the effectiveness of both architectures, we present corresponding VLSI and FPGA implementation results for an audio restoration application. We show that AMP-T is superior to AMP-M with respect to silicon area, throughput, and power consumption, whereas AMP-M offers more flexibility.
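The common thread in [1] and [2] is that each AMP iteration is dominated by the products A x and A.T z: with an unstructured (e.g., random) A these cost O(mn) operations and require storing the whole matrix, whereas a fast transform such as a subsampled DCT brings them down to O(n log n) with no stored matrix at all. Here is a hedged sketch of such an implicit operator pair, with a plain random row selection of my own choosing, not the sampling scheme of either paper:

import numpy as np
from scipy.fft import dct, idct

def make_subsampled_dct_operator(n, m, rng):
    # Keep m of the n orthonormal DCT rows at random; A is never formed explicitly.
    rows = rng.choice(n, size=m, replace=False)

    def A(x):
        # Forward operator: DCT of x followed by row selection.
        return dct(x, norm='ortho')[rows]

    def At(z):
        # Adjoint: zero-fill the selected rows, then inverse DCT.
        full = np.zeros(n)
        full[rows] = z
        return idct(full, norm='ortho')

    return A, At

# Usage: pass A(x) and At(z) to an AMP loop wherever it computes A @ x and A.T @ z.
A, At = make_subsampled_dct_operator(n=4096, m=1024, rng=np.random.default_rng(1))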
[3] Accelerated Large-Scale Inversion with Message Passing by Felix J. Herrmann. The summary reads:
To meet current-day challenges, exploration seismology increasingly relies on more and more sophisticated algorithms that require multiple paths through all data. This requirement leads to problems because the size of seismic data volumes is increasing exponentially, exposing bottlenecks in IO and computational capability. To overcome these bottlenecks, we follow recent trends in machine learning and compressive sensing by proposing a sparsity-promoting inversion technique that works on small randomized subsets of data only. We boost the performance of this algorithm significantly by modifying a state-of-the-art ℓ1-norm solver to benefit from message passing, which breaks the build-up of correlations between model iterates and the randomized linear forward model. We demonstrate the performance of this algorithm on a toy sparse-recovery problem and on a realistic reverse-time-migration example with random source encoding. The improvements in speed, memory use, and output quality are truly remarkable.
[4] Approximate Message Passing Meets Exploration Seismology by Felix J. Herrmann. The abstract reads:
Data collection, data processing, and imaging in exploration seismology increasingly hinge on large-scale sparsity-promoting solvers to remove artifacts caused by efforts to reduce costs. We show how the inclusion of a “message term” in the calculation of the residuals improves the convergence of these iterative solvers by breaking correlations that develop between the model iterate and the linear system that needs to be inverted. We compare this message-passing scheme to state-of-the-art solvers for problems in missing-trace interpolation and in dimensionality-reduced imaging with phase encoding.
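For concreteness, and as far as I understand the two abstracts above, the “message term” is the Onsager correction in the AMP residual update; in the notation of Donoho, Maleki and Montanari, with denoiser $\eta_t$, undersampling ratio $\delta = m/n$ and $\langle\cdot\rangle$ the average over components:

$$ z^{t} = y - A x^{t} + \frac{1}{\delta}\, z^{t-1} \left\langle \eta'_{t-1}\!\left(A^{*} z^{t-1} + x^{t-1}\right) \right\rangle. $$

Drop the last term and one is back to the residual of plain iterative soft thresholding; it is precisely this extra term that breaks the correlations between the model iterate and the randomized linear system that both abstracts mention.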
[5] Compressive Video Sampling with Approximate Message Passing Decoding by Jianwei Ma, Gerlind Plonka, M. Yousuff Hussaini. The abstract reads:
In this paper, we apply compressed sensing to video compression. Compressed sensing (CS) techniques exploit the observation that one needs much fewer random measurements than given by the Shannon-Nyquist sampling theory to recover an object if this object is compressible (i.e., sparse in the spatial domain or in a transform domain). In the CS framework, we can achieve sensing, compression and denoising simultaneously. We propose a fast and simple online encoding by application of pseudo-random downsampling of the two-dimensional fast Fourier transform to video frames. For off-line decoding, we apply a modification of the recently proposed approximate message passing (AMP) algorithm. The AMP method has been derived using the statistical concept of ‘state evolution’, and it has been shown to considerably accelerate the convergence rate in special CS-decoding applications. We shall prove that the AMP method can be rewritten as a forward-backward splitting algorithm. This new representation enables us to give conditions that ensure convergence of the AMP method and to modify the algorithm in order to achieve higher robustness. The success of reconstruction methods for video decoding also essentially depends on the chosen transform, where sparsity of the video signals is assumed. We propose to incorporate the 3D dual-tree complex wavelet transform that possesses sufficiently good properties of directional selectivity and shift invariance while being computationally less expensive and less redundant than other directional 3D wavelet transforms.
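The online encoder described here is simple enough to sketch in a few lines: take the 2D FFT of each frame and keep only a pseudo-random subset of the coefficients as measurements. The mask below is a plain uniform-random choice of mine and is only illustrative; the paper's actual sampling pattern, the AMP/forward-backward decoder, and the 3D dual-tree complex wavelet transform are not reproduced here.

import numpy as np

def encode_frame(frame, keep_ratio, rng):
    # Online CS encoder sketch: 2D FFT of the frame, then pseudo-random downsampling.
    F = np.fft.fft2(frame)
    mask = rng.random(F.shape) < keep_ratio   # keep roughly keep_ratio of the coefficients
    return F[mask], mask                      # measurements and the sampling pattern used

# Usage on a synthetic 64x64 frame, keeping about 25% of the Fourier coefficients.
rng = np.random.default_rng(2)
frame = rng.standard_normal((64, 64))
measurements, mask = encode_frame(frame, keep_ratio=0.25, rng=rng)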



