Thursday, April 02, 2015

AMP solvers and the Additive White Gaussian Noise Channel




Approximate message-passing decoder and capacity-achieving sparse superposition codes by Jean Barbier, Florent Krzakala
We study the approximate message-passing decoder for sparse superposition coding on the additive white Gaussian noise channel and extend our preliminary work. While this coding scheme asymptotically reaches the Shannon capacity, we show that our iterative decoder is limited by a phase transition similar to the one that occurs in LDPC codes. We present and study two solutions to this problem, both of which allow the decoder to reach the Shannon capacity: i) a non-constant power allocation and ii) the use of spatially coupled codes. We also present extensive simulations suggesting that spatial coupling is more robust and allows for better reconstruction at finite code lengths. Finally, we show empirically that the use of a fast Hadamard-based operator allows for an efficient reconstruction, both in terms of computational time and memory, as well as the ability to deal with large signals.
Here is an attendant video:
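For readers who want to see the mechanics, here is a minimal, self-contained sketch of an AMP decoder for sparse superposition codes. Everything here (the toy sizes, the flat power allocation, the variable names) is an illustrative assumption rather than either paper's actual settings; the section-wise softmax denoiser and the Onsager-corrected residual are the standard ingredients both papers build on.

```python
import numpy as np

# Hypothetical toy sizes for illustration only -- not the papers' settings.
L, B = 32, 16                      # L sections of size B, one nonzero each
N = L * B                          # length of the message vector beta
R = 1.0                            # rate in bits per channel use
n = int(L * np.log2(B) / R)        # codeword length
P, snr = 1.0, 15.0                 # total power and SNR = P / sigma^2
sigma2 = P / snr

rng = np.random.default_rng(0)

# Flat power allocation (a non-constant, e.g. exponentially decaying,
# allocation is one of the two fixes discussed in the abstract above).
c = np.sqrt(n * np.repeat(np.full(L, P / L), B))   # nonzero amplitude

# Random message: one active entry per section of size B.
beta0 = np.zeros(N)
for l in range(L):
    beta0[l * B + rng.integers(B)] = c[l * B]

# Gaussian design matrix and AWGN channel.
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, N))
y = A @ beta0 + rng.normal(0.0, np.sqrt(sigma2), size=n)

# AMP iterations: matched-filter step, section-wise softmax denoiser,
# and a residual carrying the Onsager correction term.
beta, z = np.zeros(N), y.copy()
for t in range(25):
    tau2 = z @ z / n               # effective noise variance estimate
    s = beta + A.T @ z             # pseudo-observation s ~ beta0 + N(0, tau2)
    u = (s * c / tau2).reshape(L, B)
    u -= u.max(axis=1, keepdims=True)          # numerical stability
    w = np.exp(u)
    w /= w.sum(axis=1, keepdims=True)          # posterior over each section
    beta = w.reshape(N) * c                    # posterior-mean estimate
    z = y - A @ beta + (z / tau2) * (P - beta @ beta / n)

# Hard decision: the largest entry of each section is declared active.
errors = np.mean(beta.reshape(L, B).argmax(1) != beta0.reshape(L, B).argmax(1))
print(f"section error rate: {errors:.3f}")
```

Replacing the dense Gaussian matrix with a fast Hadamard-based operator, as the paper does, would drop the per-iteration cost of the two matrix products from O(nN) to roughly O(N log N) and remove the need to store A at all.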
 
Another approach using AMP solvers was presented at the ITA 2014 meeting; that work recently came out as a preprint: Capacity-achieving Sparse Superposition Codes via Approximate Message Passing Decoding by Cynthia Rush, Adam Greig, Ramji Venkataramanan
Sparse superposition codes were recently introduced by Barron and Joseph for reliable communication over the AWGN channel at rates approaching the channel capacity. The codebook is defined in terms of a Gaussian design matrix, and codewords are sparse linear combinations of columns of the matrix. In this paper, we propose an approximate message passing decoder for sparse superposition codes, whose decoding complexity scales linearly with the size of the design matrix. The performance of the decoder is rigorously analyzed and it is shown to asymptotically achieve the AWGN capacity with an appropriate power allocation. We provide simulation results to demonstrate the performance of the decoder at finite block lengths, and investigate the effects of various power allocations on the decoding performance. 
The attendant talk is here.
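As a companion to the power-allocation discussion, here is a small sketch of the exponentially decaying allocation P_l ∝ 2^(-2Cl/L) introduced by Joseph and Barron, which this line of work builds on and compares against alternatives. The function name and parameter values are illustrative assumptions, not code from the paper.

```python
import numpy as np

def exponential_power_allocation(L, P, snr):
    """Joseph-Barron style allocation P_l ~ 2^(-2*C*l/L), normalized to P."""
    C = 0.5 * np.log2(1.0 + snr)           # AWGN capacity in bits/channel use
    raw = 2.0 ** (-2.0 * C * np.arange(1, L + 1) / L)
    return P * raw / raw.sum()

# Earlier sections get more power, so AMP can decode them first and use
# those decisions to help the later, lower-power sections.
P_l = exponential_power_allocation(L=64, P=1.0, snr=15.0)
print(P_l[:3], P_l[-3:], P_l.sum())        # decaying weights summing to P
```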
 
