Monday, July 25, 2016

Onsager-corrected deep learning for sparse linear inverse problems

Thomas let me know on Twitter that the Great Convergence continues. Today we find out how one can change the iterative process of AMP and then learn the coefficients of that process, as in deep learning. It looks like the learned AMP beats LISTA. Looking back at the few COLT presentations featured earlier (Saturday videos), one wonders how these solvers change the rules of thumb on model depth and size. To be continued...
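For background, Gregor and LeCun's LISTA unrolls the ISTA iteration into a fixed number of network layers and trains the matrices and thresholds of each layer by gradient descent. Here is a minimal NumPy sketch of such a forward pass (my own illustration, not code from either paper; the names B, S and lams are hypothetical stand-ins for the trainable weights):

```python
import numpy as np

def soft_threshold(x, lam):
    # eta(x; lam) = sign(x) * max(|x| - lam, 0)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def lista_forward(y, B, S, lams):
    """Forward pass of a LISTA-style unrolled network.

    B (N x M), S (N x N) and the per-layer thresholds in `lams`
    are the trainable parameters (names are illustrative).
    """
    x = soft_threshold(B @ y, lams[0])
    for lam in lams[1:]:
        x = soft_threshold(B @ y + S @ x, lam)
    return x
```

In plain ISTA these quantities would be fixed to B = A^T / L and S = I - A^T A / L for a step size 1/L; LISTA's speedup comes from letting training pick better values.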


Deep learning has gained great popularity due to its widespread success on many inference problems. We consider the application of deep learning to the sparse linear inverse problem encountered in compressive sensing, where one seeks to recover a sparse signal from a small number of noisy linear measurements. In this paper, we propose a novel neural-network architecture that decouples prediction errors across layers in the same way that the approximate message passing (AMP) algorithm decouples them across iterations: through Onsager correction. Numerical experiments suggest that our "learned AMP" network significantly improves upon Gregor and LeCun's "learned ISTA" network in both accuracy and complexity.
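To make the Onsager correction concrete, here is a minimal NumPy sketch of plain AMP with a soft-thresholding denoiser (again my own illustration under standard AMP conventions, not the authors' code; the threshold scaling alpha is a tuning knob here, and is among the quantities a learned-AMP network would train per layer):

```python
import numpy as np

def soft_threshold(x, lam):
    # eta(x; lam) = sign(x) * max(|x| - lam, 0)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def amp(y, A, n_iters=20, alpha=1.4):
    """Plain AMP for y = A x + noise, with x sparse and A of size M x N."""
    M, N = A.shape
    x = np.zeros(N)
    v = np.zeros(M)                       # corrected residual
    for _ in range(n_iters):
        b = np.count_nonzero(x) / M       # Onsager coefficient: average derivative of eta
        v = y - A @ x + b * v             # residual plus Onsager correction term
        lam = alpha * np.linalg.norm(v) / np.sqrt(M)   # threshold tracks residual energy
        x = soft_threshold(x + A.T @ v, lam)
    return x
```

Dropping the `b * v` term recovers an ISTA-like iteration; it is this correction that keeps the per-iteration errors decoupled (effectively Gaussian), which is the property the paper ports into the network's layers.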



