Happy Thanksgiving y'all.
In Sunday Morning Insight: A Quick Panorama of Sensing from Direct Imaging to Machine Learning, we saw that the main difference between today's neural networks and traditional sensing revolves around the acquisition stage: it is linear in compressive sensing and nonlinear in neural networks. At some point the two fields will collide (for one thing, imaging CMOSes are already nonlinear devices, and there is the issue of quantization and 1-bit compressive sensing lurking beneath). One of the questions to ask is how a nonlinear acquisition stage permits some sort of recovery, which seems to be the question asked in the following paper (and there seems to be a clear connection to phase retrieval).
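To make the contrast concrete, here is a minimal sketch of the two acquisition regimes; the sensing matrix, signal sizes, and sparsity level are illustrative assumptions, not taken from any particular paper. The 1-bit case simply keeps the sign of each measurement, the extreme form of quantization mentioned above.

```python
# Linear vs. nonlinear (1-bit quantized) acquisition, a toy illustration.
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64                       # ambient dimension, number of measurements
x = np.zeros(n)
x[rng.choice(n, size=8, replace=False)] = rng.standard_normal(8)  # sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)                      # random sensing matrix

y_linear = A @ x                     # classical compressive sensing: linear measurements
y_1bit = np.sign(A @ x)              # 1-bit compressive sensing: only the sign survives

print(y_linear[:5])
print(y_1bit[:5])
```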
Without further ado, here is the paper: Signal Recovery from $\ell_p$ Pooling Representations by Joan Bruna, Arthur Szlam, Yann LeCun
In this work we compute lower Lipschitz bounds of $\ell_p$ pooling operators for $p=1, 2, \infty$ as well as $\ell_p$ pooling operators preceded by half-rectification layers. These give sufficient conditions for the design of invertible neural network layers. Numerical experiments on MNIST and image patches confirm that pooling layers can be inverted with phase recovery algorithms. Moreover, the regularity of the inverse pooling, controlled by the lower Lipschitz constant, is empirically verified with a nearest neighbor regression.
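For readers who want to play with the object whose lower Lipschitz bounds the paper studies, here is a minimal sketch of non-overlapping $\ell_p$ pooling, optionally preceded by half-rectification; the pool size, 1-D layout, and test input are assumptions made for illustration. Note that $\ell_p$ pooling of a block discards the signs (the "phase") of its entries while keeping their magnitudes, which is why inverting it is naturally a phase-recovery problem.

```python
# A sketch of l_p pooling for p = 1, 2, inf, with optional half-rectification.
import numpy as np

def lp_pool(x, pool_size, p, half_rectify=False):
    """Non-overlapping l_p pooling of a 1-D signal (len(x) % pool_size == 0)."""
    if half_rectify:
        x = np.maximum(x, 0.0)               # half-rectification (ReLU) layer
    blocks = x.reshape(-1, pool_size)        # group entries into pooling regions
    if np.isinf(p):
        return np.abs(blocks).max(axis=1)    # l_inf pooling: max magnitude per block
    return (np.abs(blocks) ** p).sum(axis=1) ** (1.0 / p)

x = np.random.default_rng(1).standard_normal(16)
for p in (1, 2, np.inf):
    print(p, lp_pool(x, pool_size=4, p=p))
```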
Credit Photo: NASA, MSL
Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.