Monday, October 08, 2018

A Neural Architecture for Bayesian Compressive Sensing over the Simplex via Laplace Techniques



Steffen just sent me the following:

Dear Igor,

I'm a long-time reader of your blog and wanted to share our recent paper on a relation between compressed sensing and neural network architectures. The paper introduces a new network construction based on the Laplace transform that results in activations such as ReLU and gating/threshold functions. It would be great if you could distribute the link on nuit-blanche. 
The paper/preprint is here:
https://ieeexplore.ieee.org/document/8478823
https://www.netit.tu-berlin.de/fileadmin/fg314/limmer/LimSta18.pdf 
Many thanks and best regards,
Steffen 
Dipl.-Ing. Univ. Steffen Limmer
Room HFT-TA 412
Technische Universität Berlin
Institut für Telekommunikationssysteme
Fachgebiet Netzwerk-Informationstheorie
Einsteinufer 25, 10587 Berlin

Thanks Steffen! Here is the paper:


This paper presents a theoretical and conceptual framework for designing neural architectures for Bayesian compressive sensing of simplex-constrained sparse stochastic vectors. First, we recast the problem of MMSE estimation (with respect to a pre-defined uniform input distribution over the simplex) as the problem of computing the centroid of a polytope equal to the intersection of the simplex and an affine subspace determined by the compressive measurements. Then, we use multidimensional Laplace techniques to obtain a closed-form solution to this computation problem, and we show how to map this solution to a neural architecture comprising threshold functions, rectified linear (ReLU) and rectified polynomial (ReP) activation functions. In the proposed architecture, the number of layers equals the number of measurements, which allows for faster solutions in the low-measurement regime than integration by domain decomposition or Monte Carlo approximation. We also show by simulation that the proposed solution is robust to small model mismatches; furthermore, the proposed architecture yields superior approximations with fewer parameters than a standard ReLU architecture in a supervised learning setting.
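To make the estimation problem concrete, here is a minimal sketch of the Monte Carlo baseline the abstract mentions: under a uniform prior on the simplex, the MMSE estimate is the centroid of the polytope cut out by the measurements, which can be approximated by rejection sampling. This is an illustration of mine, not code from the paper; the function name mmse_centroid_mc, the tolerance eps, and the toy problem sizes are all hypothetical choices.

import numpy as np

rng = np.random.default_rng(0)

def mmse_centroid_mc(A, y, eps=0.05, n_samples=200_000):
    """Monte Carlo approximation of the MMSE estimate E[x | Ax = y] under a
    uniform prior on the probability simplex. Dirichlet(1, ..., 1) samples
    (uniform on the simplex) are kept if they satisfy the measurement
    constraint up to a tolerance eps; the mean of the accepted samples
    approximates the centroid of {x : x >= 0, sum(x) = 1, Ax = y}."""
    n = A.shape[1]
    x = rng.dirichlet(np.ones(n), size=n_samples)      # uniform draws on the simplex
    keep = np.linalg.norm(x @ A.T - y, axis=1) <= eps  # soft version of Ax = y
    if not keep.any():
        raise RuntimeError("no samples within tolerance; increase eps or n_samples")
    return x[keep].mean(axis=0)

# Toy problem: a sparse point on the 5-dimensional simplex observed through
# m = 2 compressive measurements.
n, m = 5, 2
A = rng.standard_normal((m, n))
x_true = np.array([0.7, 0.0, 0.3, 0.0, 0.0])
y = A @ x_true
print(np.round(mmse_centroid_mc(A, y), 3))

The point of the paper is that this sampling loop can be avoided: the Laplace-transform construction evaluates the same centroid in closed form with a network whose depth equals the number of measurements m, built from threshold, ReLU and ReP units.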

