
Monday, August 24, 2015

Random Noise in the GoogLeNet Architecture

Back in 1996, sparse coding made a big splash in science and engineering because the elements of a learned dictionary looked very much like the wavelet functions that had been discovered a few years earlier. For the first time, there was a sense that an algorithm could produce some simple insight into how the machinery of the visual cortex works.

Roelof Pieters investigates, in his spare time, how random noise applied to different layers of current deep neural architectures produces different types of imagery. In the past few months, this process has come to be called DeepDreaming and has produced quite a few dog pictures. It is fascinating because, as Pierre Sermanet speculated yesterday, some of this imagery might constitute a good detection clue for degenerative diseases.

All that is well, but yesterday Roelof mentioned that applying random noise to a particular GoogLeNet layer (2012) produced something other than dogs. Here are a few examples of what he calls "Limited deep dreaming," which starts with random noise activating single units from a particular layer of GoogLeNet (the 3a output). All the other images are listed in this Flickr album.
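The procedure described above is essentially activation maximization: start from a random-noise image and take gradient-ascent steps so that one chosen unit fires as strongly as possible. A minimal sketch of the idea follows; it is an assumption-laden toy, with a fixed random linear filter standing in for a trained GoogLeNet unit (the real experiment backpropagates through the full network to the 3a output).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in weights for a single unit; in the real setting this would be
# one feature map of a trained GoogLeNet layer (an assumption for this toy).
unit = rng.normal(size=(8, 8))

def dream(steps=100, step_size=0.1):
    """Gradient-ascend a random-noise image to maximize the unit's activation."""
    img = rng.normal(size=(8, 8))        # start from random noise
    before = float(np.sum(unit * img))   # activation before ascent
    for _ in range(steps):
        # For a linear unit, the gradient of the activation with respect
        # to the image is just the unit's weights; a real network would
        # compute this gradient by backpropagation.
        img += step_size * unit
    after = float(np.sum(unit * img))    # activation after ascent
    return before, after

before, after = dream()
print(after > before)  # prints True: the ascent increased the unit's activation
```

The imagery in the post comes from exactly this loop run against a deep, trained network, where the gradient structure of the chosen layer shapes the noise into natural-looking textures.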


They certainly look very natural to me: sometimes they look like structures found in electron microscopy, and sometimes they look like the structures found in numerical simulations of our universe:



2 comments:

  1. It's not that paper; I am still expecting the authors of the paper I mentioned in the subway to post it on arXiv. But looking at how fast they are going, I am pretty sure that by the time it is published, the results will have been published by somebody else. It's a shame.

    Igor.
