Happy Thanksgiving, y'all!
Evidently, this promise comes with the caveat that message-passing algorithms are specifically designed to solve sparse-recovery problems for Gaussian matrices, while methods such as SPGL1 are versatile and solve BP for any A and b, as long as one is willing to spend a sufficient number of iterations to bring the residual down.
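For reference, the basis pursuit (BP) problem both families of solvers address, and the noise-aware variant (BPDN) that SPGL1 also handles, are the convex programs

    \min_x \|x\|_1 \ \text{subject to}\ Ax = b,
    \qquad
    \min_x \|x\|_1 \ \text{subject to}\ \|Ax - b\|_2 \le \sigma.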
We consider the recovery of sparse signals from a limited number of noisy observations using the AMP algorithm. In this paper, we present two fast implementations of this algorithm that run on a CPU and on a GPU, and which can either handle arbitrary unstructured measurement matrices or exploit the structure of a DCT matrix for an even faster implementation. Our results show that for small problem sizes the CPU-based implementation is the fastest, but for large problem sizes a GPU-based implementation has the highest throughput.
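As a rough illustration of the iteration such implementations accelerate, here is a minimal NumPy sketch of AMP with soft thresholding for an unstructured matrix A. The threshold rule and the alpha tuning parameter are illustrative assumptions, not values taken from the paper.

    import numpy as np

    def soft_threshold(v, t):
        # entrywise soft-thresholding eta(v; t)
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def amp(A, y, iters=30, alpha=2.0):
        # Minimal AMP sketch for y ~ A @ x0 with x0 sparse.
        # A: (n, N) unstructured measurement matrix; alpha is an assumed
        # threshold multiplier, not a value from the paper.
        n, N = A.shape
        x = np.zeros(N)
        z = y.astype(float)
        for _ in range(iters):
            r = x + A.T @ z                                # pseudo-data
            tau = alpha * np.linalg.norm(z) / np.sqrt(n)   # assumed threshold rule
            x_new = soft_threshold(r, tau)
            onsager = (np.count_nonzero(x_new) / n) * z    # Onsager ("message") correction
            z = y - A @ x_new + onsager
            x = x_new
        return x

Replacing the dense products A @ x and A.T @ z with fast DCT/IDCT calls is what yields the structured, faster variant mentioned above.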
[2] VLSI Design of Approximate Message Passing for Signal Restoration and Compressive Sensing by P. Maechler, C. Studer, D. Bellasi, A. Maleki, A. Burg, N. Felber, H. Kaeslin, and R. G. Baraniuk. The abstract reads:
Sparse signal recovery finds use in a variety of practical applications, such as signal and image restoration and the recovery of signals acquired by compressive sensing. In this paper, we present two generic VLSI architectures that implement the approximate message passing (AMP) algorithm for sparse signal recovery. The first architecture, referred to as AMP-M, employs parallel multiply-accumulate units and is suitable for recovery problems based on unstructured (e.g., random) matrices. The second architecture, referred to as AMP-T, takes advantage of fast linear transforms, which arise in many real-world applications. To demonstrate the effectiveness of both architectures, we present corresponding VLSI and FPGA implementation results for an audio restoration application. We show that AMP-T is superior to AMP-M with respect to silicon area, throughput, and power consumption, whereas AMP-M offers more flexibility.
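To make the AMP-T idea concrete in software terms, here is a small sketch of a subsampled-DCT operator whose forward and adjoint products cost O(N log N) instead of the O(nN) of an explicit matrix. The class name and interface are hypothetical, chosen only for illustration.

    import numpy as np
    from scipy.fft import dct, idct

    class SubsampledDCT:
        # Forward: orthonormal DCT-II of x restricted to the rows in idx.
        # Adjoint: zero-pad the residual and apply the inverse transform.
        def __init__(self, N, idx):
            self.N, self.idx = N, np.asarray(idx)
        def matvec(self, x):
            return dct(x, norm='ortho')[self.idx]
        def rmatvec(self, z):
            full = np.zeros(self.N)
            full[self.idx] = z
            return idct(full, norm='ortho')

An AMP loop like the sketch above only needs matvec and rmatvec, so swapping the explicit matrix for such an operator is, loosely, the software analogue of moving from AMP-M to AMP-T.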
To meet current-day challenges, exploration seismology increasingly relies on more and more sophisticated algorithms that require multiple passes through all data. This requirement leads to problems because the size of seismic data volumes is increasing exponentially, exposing bottlenecks in I/O and computational capability. To overcome these bottlenecks, we follow recent trends in machine learning and compressive sensing by proposing a sparsity-promoting inversion technique that works on small randomized subsets of data only. We boost the performance of this algorithm significantly by modifying a state-of-the-art ℓ1-norm solver to benefit from message passing, which breaks the buildup of correlations between model iterates and the randomized linear forward model. We demonstrate the performance of this algorithm on a toy sparse-recovery problem and on a realistic reverse-time-migration example with random source encoding. The improvements in speed, memory use, and output quality are truly remarkable.
Data collection, data processing, and imaging in exploration seismology increasingly hinge on large-scale sparsity-promoting solvers to remove artifacts caused by efforts to reduce costs. We show how the inclusion of a “message term” in the calculation of the residuals improves the convergence of these iterative solvers by breaking correlations that develop between the model iterate and the linear system that needs to be inverted. We compare this message-passing scheme to state-of-the-art solvers for problems in missing-trace interpolation and in dimensionality-reduced imaging with phase encoding.
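In equation form, the modification both abstracts describe amounts to adding an Onsager-style correction to the usual residual. A sketch, assuming the standard soft-thresholding form of AMP (the papers' exact message term may differ):

    r_k = b - A x_k + \frac{\|x_k\|_0}{n}\, r_{k-1}

instead of the plain r_k = b - A x_k, where n is the number of rows of A. This extra term is what breaks the correlation between the model iterate and the randomized forward operator.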
In this paper, we apply compressed sensing to video compression. Compressed sensing (CS) techniques exploit the observation that one needs far fewer random measurements than required by the Shannon-Nyquist sampling theorem to recover an object if this object is compressible (i.e., sparse in the spatial domain or in a transform domain). In the CS framework, we can achieve sensing, compression and denoising simultaneously. We propose a fast and simple online encoding by application of pseudo-random downsampling of the two-dimensional fast Fourier transform to video frames. For offline decoding, we apply a modification of the recently proposed approximate message passing (AMP) algorithm. The AMP method has been derived using the statistical concept of 'state evolution', and it has been shown to considerably accelerate the convergence rate in special CS-decoding applications. We shall prove that the AMP method can be rewritten as a forward-backward splitting algorithm. This new representation enables us to give conditions that ensure convergence of the AMP method and to modify the algorithm in order to achieve higher robustness. The success of reconstruction methods for video decoding also essentially depends on the chosen transform, in which sparsity of the video signals is assumed. We propose to incorporate the 3D dual-tree complex wavelet transform, which possesses sufficiently good directional selectivity and shift invariance while being computationally less expensive and less redundant than other directional 3D wavelet transforms.
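For context on the forward-backward splitting connection, a standard forward-backward (ISTA-type) step for the ℓ1-regularized least-squares objective f(x) = ½‖Ax − b‖² + λ‖x‖₁ reads

    x^{k+1} = \mathrm{prox}_{\tau\lambda\|\cdot\|_1}\!\big(x^k - \tau A^{\top}(Ax^k - b)\big)
            = \eta\big(x^k - \tau A^{\top}(Ax^k - b);\, \tau\lambda\big),

where η is entrywise soft thresholding. The abstract's claim is that the AMP recursion can be rewritten in this form, which is what makes the convergence conditions and the robustness modification available.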