Wednesday, September 03, 2014

Some thoughts on invertibility: Signal recovery from Pooling Representations, Determination of Nonlinear Genetic Architecture using Compressed Sensing

You probably recall this entry from back in November; the paper has since been augmented [1] and was one of the talks at ICML. Here is the video: Signal recovery from Pooling Representations by Joan Bruna Estrach, Arthur Szlam, Yann LeCun

In the paper [1], the authors essentially ask which signals can be reconstructed after having gone through a nonlinear transformation typical of those found in current neural network architectures. Some thoughts below:


In order to provide some context, we have to see this type of investigation as a piece within the larger thread of connecting what we know in signal processing with what we know in Machine Learning. Recently, in Parallel Paths for Deep Learning and Signal Processing?, I mentioned a potential connection between the two areas. The work of Joan Bruna Estrach, Arthur Szlam, and Yann LeCun resonates in several ways within this generic context:
  • They show how the nonlinearities of phase retrieval fit within their systematic, parametric study of the nonlinearities of neural networks. We know from compressive sensing that phase retrieval ought to exhibit sharp phase transitions. What else do we know? We know that Gerchberg-Saxton is not an ideal scheme (this is also what the authors found), but also that the sharp phase transitions of invertibility are intimately connected not just to the number of measurements but also to the sparsity or compressibility of the signal relative to its overall size. These universal sharp transitions delimit the region of parameter space between what is and what is not invertible (in the words of this paper). The following figure screams sharp transitions if you have been reading this blog long enough (see The Map Makers); a toy experiment in this direction is sketched right after it.

[Figure illustrating the sharp transitions discussed above]
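
If you want to see such a transition emerge for yourself, here is a small numerical sketch (mine, not the authors'; it assumes NumPy and uses a generic Gerchberg-Saxton-style alternation with a hard-thresholding step, not the specific algorithms benchmarked in [1]). It estimates how often a k-sparse signal is recovered from the magnitudes |Ax| as the number of measurements m varies:

```python
import numpy as np

def recover_from_magnitudes(A, mags, k, iters=400, seed=0):
    """Gerchberg-Saxton-style alternation plus hard thresholding:
    alternate between imposing the measured magnitudes and projecting
    the least-squares estimate onto k-sparse signals."""
    rng = np.random.default_rng(seed)
    phases = np.exp(2j * np.pi * rng.random(A.shape[0]))  # random start
    Ap = np.linalg.pinv(A)
    for _ in range(iters):
        x = Ap @ (mags * phases)               # least-squares estimate
        x[np.argsort(np.abs(x))[:-k]] = 0      # keep the k largest entries
        phases = np.exp(1j * np.angle(A @ x))  # re-estimate the phases
    return x

def success_rate(m, n, k, trials=20):
    """Fraction of trials where a k-sparse x is recovered from |Ax|,
    up to a global phase."""
    ok = 0
    for t in range(trials):
        rng = np.random.default_rng(100 + t)
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        xh = recover_from_magnitudes(A, np.abs(A @ x), k, seed=t)
        c = np.vdot(xh, x) / max(np.vdot(xh, xh).real, 1e-12)  # align phase
        ok += np.linalg.norm(c * xh - x) < 1e-2 * np.linalg.norm(x)
    return ok / trials

n = 64
for k in (2, 4, 8):
    print(f"k={k}:", [success_rate(m, n, k) for m in (16, 32, 64, 128)])
```

Printing the success rates against m for each k typically shows the jump from failure to success shifting with the sparsity level: the measurements-versus-compressibility coupling discussed above.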
Let us finally note that the neural network folks are not the only ones looking at nonlinearities. Recently, Steve Hsu and colleagues looked at GWAS and its potential connection with compressive sensing (... There will be a "before" and "after" this paper ...) and are now trying to assess how potential nonlinearities affect the sampling requirements of GWAS studies [5,6,7]. Related entries on Machine Learning and phase transitions can be found in [8,9,10].


Most blog entries on what we call nonlinear compressive sensing can be found under the nonlinear compressive sensing tag, which includes the recent [11].


Signal recovery from Pooling Representations by Joan Bruna Estrach, Arthur Szlam, Yann LeCun:
In this work we compute lower Lipschitz bounds of ℓp pooling operators for p=1,2,∞ as well as ℓp pooling operators preceded by half-rectification layers. These give sufficient conditions for the design of invertible neural network layers. Numerical experiments on MNIST and image patches confirm that pooling layers can be inverted with phase recovery algorithms. Moreover, the regularity of the inverse pooling, controlled by the lower Lipschitz constant, is empirically verified with a nearest neighbor regression.
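
As a reading aid, here is a minimal sketch of the pooling operators in question (my illustration, not the authors' code): non-overlapping ℓp pooling for p = 1, 2, ∞, optionally preceded by half-rectification.

```python
import numpy as np

def lp_pool(x, block, p, rectify=False):
    """Non-overlapping ℓp pooling: each output coordinate is the ℓp
    norm of one block of the input (p=np.inf is max pooling of
    absolute values). With rectify=True the input first passes
    through max(x, 0), i.e. a half-rectification layer."""
    if rectify:
        x = np.maximum(x, 0.0)
    blocks = x.reshape(-1, block)
    if np.isinf(p):
        return np.abs(blocks).max(axis=1)
    return ((np.abs(blocks) ** p).sum(axis=1)) ** (1.0 / p)

x = np.random.default_rng(0).standard_normal(16)
for p in (1, 2, np.inf):
    print(f"p={p}:", np.round(lp_pool(x, block=4, p=p), 3))
```

Note that ℓ2 pooling of a block of size two sends (a, b) to √(a² + b²), i.e., it keeps the modulus of the complex number a + ib and discards its phase, which is why phase recovery algorithms are the natural tool for inverting such layers.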

Determination of Nonlinear Genetic Architecture using Compressed Sensing by Steve Hsu and colleagues:
We introduce a statistical method that can reconstruct nonlinear genetic models (i.e., including epistasis, or gene-gene interactions) from phenotype-genotype (GWAS) data. The computational and data resource requirements are similar to those necessary for reconstruction of linear genetic models (or identification of gene-trait associations), assuming a condition of generalized sparsity, which limits the total number of gene-gene interactions. An example of a sparse nonlinear model is one in which a typical locus interacts with several or even many others, but only a small subset of all possible interactions exist. It seems plausible that most genetic architectures fall in this category. Our method uses a generalization of compressed sensing (L1-penalized regression) applied to nonlinear functions of the sensing matrix. We give theoretical arguments suggesting that the method is nearly optimal in performance, and demonstrate its effectiveness on broad classes of nonlinear genetic models using both real and simulated human genomes. 
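
To make the "nonlinear functions of the sensing matrix" idea concrete, here is a toy sketch (mine, not the authors' pipeline; it assumes NumPy and scikit-learn, with synthetic genotypes standing in for real GWAS data). The genotype matrix is expanded with all pairwise products, and a single L1-penalized regression then selects main effects and interactions jointly:

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_loci = 400, 30

# Toy genotypes: minor-allele counts (0, 1 or 2) at each locus.
G = rng.integers(0, 3, size=(n_samples, n_loci)).astype(float)

# Sparse nonlinear architecture: three main effects, two interactions.
y = 1.5 * G[:, 0] - 1.0 * G[:, 3] + 0.8 * G[:, 9]
for i, j in [(1, 7), (4, 12)]:
    y += 2.0 * G[:, i] * G[:, j]
y += 0.1 * rng.standard_normal(n_samples)  # environmental noise

# Expanded "sensing matrix": main effects plus all pairwise products.
pairs = list(combinations(range(n_loci), 2))
X = np.hstack([G] + [(G[:, i] * G[:, j])[:, None] for i, j in pairs])

# L1-penalized regression over the expanded matrix (alpha would be
# chosen by cross-validation in a real analysis).
coef = Lasso(alpha=0.05).fit(X, y).coef_
names = [f"g{i}" for i in range(n_loci)] + [f"g{i}*g{j}" for i, j in pairs]
hits = [(names[k], round(c, 2)) for k, c in enumerate(coef) if abs(c) > 0.1]
print("selected terms:", hits)
```

The expanded matrix here has 465 columns against 400 samples, i.e., the underdetermined regime where the generalized sparsity assumption is doing the work.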

Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
