
Tuesday, May 23, 2017

Global Guarantees for Enforcing Deep Generative Priors by Empirical Risk

Credit: NASA, h/t Sarah Horst



Vlad just sent me the following:

Hi Igor,

I'm writing regarding my recent paper with Paul Hand. It's about combining principles of compressed sensing with deep generative priors, an approach that has previously been shown empirically to require 10X fewer measurements than traditional CS in certain scenarios. As deep generative priors (such as those obtained via GANs) get better, this may improve the performance of CS and other inverse problems across the board.

In this paper we prove that the non-convex empirical risk objective for enforcing random deep generative priors, subject to compressive random linear observations of the activations of the last layer, has no spurious local minima, and that for a fixed depth these guarantees hold at order-optimal sample complexity.



Best,

-Vlad



Vladislav Voroninski
Thanks for the heads-up, Vlad !

We examine the theoretical properties of enforcing priors provided by generative deep neural networks via empirical risk minimization. In particular, we consider two models: one in which the task is to invert a generative neural network given access to its last layer, and another which entails recovering a latent code in the domain of a generative neural network from compressive linear observations of its last layer. We establish that in both cases, in suitable regimes of network layer sizes and under a randomness assumption on the network weights, the non-convex objective function given by empirical risk minimization does not have any spurious stationary points. That is, we establish that with high probability, at any point away from small neighborhoods around two scalar multiples of the desired solution, there is a descent direction. These results constitute the first theoretical guarantees that establish the favorable global geometry of these non-convex optimization problems, and they bridge the gap between the empirical success of deep learning and a rigorous understanding of non-linear inverse problems.
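
To make the setup concrete, here is a minimal numerical sketch of the second model (this is not code from the paper; the dimensions, weight scalings, step size, and iteration count are all illustrative assumptions). A two-layer ReLU generator G with Gaussian weights stands in for the random deep generative prior, y = A G(x_star) gives compressive linear observations of its last layer, and plain gradient descent is run on the empirical risk f(x) = 0.5 * ||A G(x) - y||^2:

import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the paper's regimes): latent k, hidden n1,
# output n2, and m < n2 compressive measurements.
k, n1, n2, m = 10, 50, 100, 60

# Random Gaussian weights, per the paper's randomness assumption on the
# network; the 1/sqrt(fan-out) scaling is an ad-hoc choice for this sketch.
W1 = rng.normal(size=(n1, k)) / np.sqrt(n1)
W2 = rng.normal(size=(n2, n1)) / np.sqrt(n2)
A = rng.normal(size=(m, n2)) / np.sqrt(m)  # compressive measurement matrix

def G(x):
    # Two-layer ReLU generative network.
    return np.maximum(W2 @ np.maximum(W1 @ x, 0.0), 0.0)

x_star = rng.normal(size=k)  # ground-truth latent code
y = A @ G(x_star)            # compressive linear observations of the last layer

def loss_and_grad(x):
    # Empirical risk f(x) = 0.5 * ||A G(x) - y||^2 and its (sub)gradient,
    # backpropagated through the 0/1 ReLU masks.
    h1 = W1 @ x
    a1 = np.maximum(h1, 0.0)
    h2 = W2 @ a1
    a2 = np.maximum(h2, 0.0)
    r = A @ a2 - y
    g2 = (A.T @ r) * (h2 > 0)
    g1 = (W2.T @ g2) * (h1 > 0)
    return 0.5 * float(r @ r), W1.T @ g1

x = rng.normal(size=k)  # random initialization
for _ in range(5000):
    f, g = loss_and_grad(x)
    x -= 0.02 * g  # fixed step size, chosen ad hoc for this toy problem

# The paper locates the two critical neighborhoods at x_star and a negative
# scalar multiple of it; distance to +/- x_star is used here as a rough proxy.
err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
print(f"final risk {f:.3e}, latent error (up to the two multiples) {err:.3e}")

The point of the landscape result is that, with high probability, an objective of this form has a descent direction everywhere outside small neighborhoods of two scalar multiples of x_star, so in a toy run like this one simple descent typically drives the risk to near zero and lands near one of those two points rather than in a spurious local minimum.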



