Tuesday, July 05, 2016

A Powerful Generative Model Using Random Weights for the Deep Image Representation

Here is a somewhat amazing paper using random projections. From the introduction, one can read:

 In this paper, we raise a fundamental issue that other researchers rarely address: Could we do deep visualization using untrained, random weight DNNs? This would allow us to separate the contribution of training from the contribution of the network structure. It might even give us a method to evaluate deep network architectures without spending days and significant computing resources in training networks so that we could compare them. Though Gatys et al. demonstrated that the VGG architecture with random weights failed in generating textures and resulted in white noise images in an experiment indicating the trained filters might be crucial for texture generation [8], we conjecture the success of deep visualization mainly originates from the intrinsic nonlinearity and complexity of the deep network hierarchical structure rather than from the training, and that the architecture itself may render the inversion invariant to the original image. Gatys et al.'s unsuccessful attempt at texture synthesis using the VGG architecture with random weights may be due to their inappropriate scale of the weighting factors. 
My emphasis:

To what extent is the success of deep visualization due to the training? Could we do deep visualization using untrained, random weight networks? To address this issue, we explore new and powerful generative models for three popular deep visualization tasks using untrained, random weight convolutional neural networks. First we invert representations in feature spaces and reconstruct images from white noise inputs. The reconstruction quality is statistically higher than that of the same method applied on well trained networks with the same architecture. Next we synthesize textures using scaled correlations of representations in multiple layers, and our results are almost indistinguishable from the original natural texture and the synthesized textures based on the trained network. Third, by recasting the content of an image in the style of various artworks, we create artistic images with high perceptual quality, highly competitive with the prior work of Gatys et al. on pretrained networks. To our knowledge this is the first demonstration of image representations using untrained deep neural networks. Our work provides a new and fascinating tool to study the representation of deep network architecture and sheds light on new understandings of deep visualization. 
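The paper's inversion experiments recover an image by optimizing a white-noise input so that its features under an untrained network match those of the target. The following minimal NumPy sketch (not the paper's method; all sizes and names are illustrative) shows the core intuition behind why this can work at all: a single ReLU layer with random Gaussian weights already preserves enough information about its input to be inverted exactly, supporting the conjecture that the architecture, rather than the training, makes feature inversion possible.

```python
import numpy as np

# Illustrative sketch: invert a random-weight ReLU feature map.
# With many more random filters than input dimensions, the active units
# (those with positive response) form an overdetermined linear system
# that pins down the input exactly.
rng = np.random.default_rng(0)
n, m = 32, 256                                 # input dim, number of random filters
W = rng.standard_normal((m, n)) / np.sqrt(n)   # untrained, random weights
x_star = rng.standard_normal(n)                # "image" to reconstruct
y = np.maximum(W @ x_star, 0.0)                # ReLU feature representation

# Each active unit (y_i > 0) contributes a linear equation w_i . x = y_i;
# on average about half of the m units are active, so m >> 2n suffices.
active = y > 0
x_rec, *_ = np.linalg.lstsq(W[active], y[active], rcond=None)

rel_err = np.linalg.norm(x_rec - x_star) / np.linalg.norm(x_star)
print(f"{active.sum()} active units, relative error {rel_err:.2e}")
```

In the paper's deeper, convolutional setting the inversion is done by gradient descent from white noise rather than by a closed-form solve, but the same information-preservation property of random nonlinear layers is what makes that optimization recover the image.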

