
Monday, February 27, 2017

A Random Matrix Approach to Neural Networks




A Random Matrix Approach to Neural Networks by Cosme Louart, Zhenyu Liao, Romain Couillet

This article studies the Gram random matrix model $G = \frac{1}{T}\Sigma^{\mathsf T}\Sigma$, $\Sigma = \sigma(WX)$, classically found in random neural networks, where $X = [x_1, \ldots, x_T] \in \mathbb{R}^{p \times T}$ is a (data) matrix of bounded norm, $W \in \mathbb{R}^{n \times p}$ is a matrix of independent zero-mean unit-variance entries, and $\sigma : \mathbb{R} \to \mathbb{R}$ is a Lipschitz continuous (activation) function, with $\sigma(WX)$ understood entry-wise. We prove that, as $n, p, T$ grow large at the same rate, the resolvent $Q = (G + \gamma I_T)^{-1}$, for $\gamma > 0$, behaves similarly to the resolvents found in sample covariance matrix models, involving notably the moment $\Phi = \frac{T}{n}E[G]$, which in passing provides a deterministic equivalent for the empirical spectral measure of $G$. This result, established by means of concentration of measure arguments, enables the estimation of the asymptotic performance of single-layer random neural networks. This in turn provides practical insights into the underlying mechanisms at play in random neural networks, entailing several unexpected consequences, as well as a fast practical means to tune the network hyperparameters.
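As a quick illustration of the model described in the abstract, here is a minimal NumPy sketch; it is not the authors' code (their Python 3 implementation is linked below), and the dimensions, the Gaussian data, and the ReLU activation are all illustrative choices:

import numpy as np

# Illustrative dimensions: network width n, data dimension p, sample count T,
# taken of the same order, as in the asymptotic regime of the paper.
n, p, T = 512, 256, 1024
gamma = 1.0  # regularization parameter gamma > 0

rng = np.random.default_rng(0)
X = rng.standard_normal((p, T)) / np.sqrt(p)  # (data) matrix, scaled so its norm stays bounded
W = rng.standard_normal((n, p))               # independent zero-mean unit-variance entries

Sigma = np.maximum(W @ X, 0.0)            # sigma(WX) entry-wise; ReLU is one Lipschitz choice
G = Sigma.T @ Sigma / T                   # Gram matrix G = (1/T) Sigma^T Sigma
Q = np.linalg.inv(G + gamma * np.eye(T))  # resolvent Q = (G + gamma I_T)^{-1}

# (1/T) tr Q is the Stieltjes transform of the empirical spectral measure of G
# evaluated at -gamma, which is the quantity the deterministic equivalent approximates.
print("mean eigenvalue of G:", np.linalg.eigvalsh(G).mean())
print("(1/T) tr Q:", np.trace(Q) / T)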


Reproducibility: the Python 3 code used to produce the results of Section 4 is available at https://github.com/Zhenyu-LIAO/RMT4ELM

