Yann LeCun, the new Director of Facebook AI Research and a professor at New York University, did an Ask Me Anything (AMA) on Reddit yesterday. The questions and answers are all here.

Andrew Ng has accepted our invitation to give a remote presentation at the Paris Machine Learning Meetup on June 16th. He also just announced that he is now Baidu's Chief Scientist, working on AI:

```
I am joining Baidu as Chief Scientist to work on AI; will remain engaged with Coursera as Chairman. http://t.co/SiEgCtMGJT
```

— Andrew Ng (@AndrewYNg) May 16, 2014

We are trying something new at the meetup: Andrew's talk will be shared and synchronized across four European meetups. More on that later.

Both Yann's and Andrew's work has been featured here on Nuit Blanche a few times: Andrew for his work on dictionary learning and Yann for his work on neural networks. Dictionary learning is key to unsupervised learning in Machine Learning, while in compressive sensing it is key to finding the right sparsifying basis. Convolutional Neural Networks parallel the current slew of reconstruction algorithms used in compressive sensing [1,2]. This week's entries (Mitya Chklovskii's A Neuron as a Signal Processing Device and Chris Rozell's Compressive Sensing and Short Term Memory / Visual Nonclassical Receptive Field Effects) make me think we are seeing some sort of convergence between different communities [3].

[2] Sunday Morning Insight: Faster Than a Blink of an Eye

[3] From Direct Imaging to Machine Learning ... a rapid panorama (JIONC 2014)

**Join the CompressiveSensing subreddit or the Google+ Community and post there!**

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

## 5 comments:

I agree, I suspect that NN and CS are mostly the same field with different jargon, much like SVM is an efficient way to make perceptrons. No doubt we'll see more and more of this as people work out the reasons that the 'hacks' that make deep learning so good actually work, and find more direct methods to evaluate them. Something I've been wondering along these lines is whether IHT and DropOut are pretty much the same operation (both are trying to improve robustness by removing self-correlation). The big advantage that DNNs have today is the general optimisation framework that works across the multiple linear sub-problems (unfortunately, SGD really is the third best way to optimise anything...). CS hasn't really come up with an answer to this, which means that we tend to focus on efficient algorithms for what NNs would call a 'single layer'.
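For readers who want the IHT side of that comparison spelled out, here is a minimal NumPy sketch of iterative hard thresholding. The matrix sizes, sparsity level, and normalization are illustrative assumptions, not anything from the comment:

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

def iht_step(x, A, y, s):
    """One IHT iteration for min ||y - Ax||^2 with x s-sparse:
    a gradient step on the residual, followed by hard thresholding."""
    return hard_threshold(x + A.T @ (y - A @ x), s)

# Toy problem (sizes and sparsity level chosen only for illustration).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
A /= 1.1 * np.linalg.norm(A, 2)   # ||A|| < 1 keeps the iteration stable
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]
y = A @ x_true

x = np.zeros(50)
for _ in range(100):
    x = iht_step(x, A, y, s=2)
```

The thresholding step plays the role of the pointwise nonlinearity in a network layer; whether it relates to DropOut, as the commenter wonders, remains speculation.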

How many people are into it?

Why not use Google+ and Hangout?

Royi

What is your question?

Igor

njh,

From my non-specialist point of view, it eerily seems that one could compare/map each layer of a DNN to each iteration step of IHT or similar schemes. The questions on the reconstruction-solver side, about which nonlinear function and parameters to use at each iteration, look similar to the questions about setting the right coefficients for the DNN layers.

Igor.

That's a good point: splitting each optimization step across the layer updates should work, as long as each step is robust to the problem changing underneath.
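The layer-per-iteration mapping discussed above can be made concrete by "unrolling" IHT into a feed-forward pass, which is essentially the idea behind Gregor and LeCun's LISTA for learned sparse coding. The sketch below just runs fixed, untrained weights; all sizes are illustrative assumptions:

```python
import numpy as np

def hard_threshold(z, s):
    """The 'nonlinearity': keep the s largest-magnitude entries of z."""
    out = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-s:]
    out[keep] = z[keep]
    return out

def unrolled_iht(y, A, s, n_layers=50):
    """Run IHT as a feed-forward network: every 'layer' applies the same
    two linear maps (W = A^T, S = I - A^T A) and then the nonlinearity.
    Note W @ y + S @ x equals the usual IHT update x + A^T (y - A x)."""
    W = A.T
    S = np.eye(A.shape[1]) - A.T @ A
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = hard_threshold(W @ y + S @ x, s)
    return x

# Toy problem (illustrative sizes; ||A|| < 1 keeps the layers stable).
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
A /= 1.1 * np.linalg.norm(A, 2)
x_true = np.zeros(50)
x_true[[5, 30]] = [2.0, -1.0]
y = A @ x_true
x_hat = unrolled_iht(y, A, s=2)
```

Untying W and S per layer and training them by backpropagation turns this fixed solver into a learned one, which is exactly where the two communities appear to meet.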

Post a Comment