Wednesday, August 27, 2014

Parallel Paths for Deep Learning and Signal Processing?

This is a follow-up to this thread.

In machine learning, deep learning stands for neural networks with many layers whose purpose is to produce features that can eventually be used for classification. In image/signal processing, the generic idea is to acquire a signal and decompose it along a family of (generally) known signals (bases in known/identified spaces). Simple machine learning techniques can then rely on the elements of that decomposition (the features, in ML terms) to achieve the last step of a classification task. Is there a convergence between the two approaches?
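To make that signal-processing pipeline concrete, here is a minimal sketch (the DCT basis, the synthetic two-class data and the choice of classifier are all illustrative assumptions on my part, not anything prescribed above): decompose each signal on a known basis, keep the leading coefficients as features, and hand them to a simple classifier.

```python
import numpy as np
from scipy.fft import dct
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_samples = 128, 200
t = np.arange(n) / n

# Two synthetic classes: low- vs. higher-frequency sinusoids in noise.
y = rng.integers(0, 2, n_samples)
X = np.vstack([np.sin(2 * np.pi * (3 + 10 * label) * t)
               + 0.3 * rng.standard_normal(n) for label in y])

# "Decompose the signal along a family of known signals": DCT coefficients.
coeffs = dct(X, axis=1, norm='ortho')
features = coeffs[:, :32]      # the leading coefficients act as the ML features

# A simple machine learning technique handles the last classification step.
clf = LogisticRegression().fit(features[:150], y[:150])
print("held-out accuracy:", clf.score(features[150:], y[150:]))
```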

As Stephane Mallat [1] pointed out in this panel, an FFT/IFFT is already deep. In fact, any process that is decomposed through several matrix factorizations is also deep, as it requires several iterations of different factorizations (see Sparse Matrix Factorization: Simple rules for growing neural nets and Provable Bounds for Learning Some Deep Representations), with each factored matrix representing a layer.
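That observation can be checked directly: the N-point DFT matrix factors into log2(N) sparse butterfly matrices plus a bit-reversal permutation, each factor playing the role of one linear layer. A small NumPy sketch (written for this post, not taken from the papers above):

```python
import numpy as np

def butterfly_factors(N):
    """Return [B_log2N, ..., B_1, P] such that DFT_N = B_log2N @ ... @ B_1 @ P."""
    bits = int(np.log2(N))
    # Bit-reversal permutation matrix P.
    rev = [int(format(i, f'0{bits}b')[::-1], 2) for i in range(N)]
    P = np.eye(N)[rev]
    factors = []
    for s in range(1, bits + 1):
        m = 2 ** s
        B = np.zeros((N, N), dtype=complex)   # one sparse "layer" per stage
        w = np.exp(-2j * np.pi / m)
        for start in range(0, N, m):          # one butterfly block per chunk
            for j in range(m // 2):
                B[start + j, start + j] = 1
                B[start + j, start + j + m // 2] = w ** j
                B[start + j + m // 2, start + j] = 1
                B[start + j + m // 2, start + j + m // 2] = -w ** j
        factors.append(B)
    return factors[::-1] + [P]

N = 8
prod = np.linalg.multi_dot(butterfly_factors(N))
dft = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
print(np.allclose(prod, dft))   # True: log2(N) sparse factors reproduce the DFT
```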

But if we explore recent developments in Compressive Sensing, we know that most reconstruction solvers used to take a long time to converge, with convergence measured in hundreds if not thousands of iterations. If any iteration can be construed as a layer (as is the case in autoencoders), a very deep network composed of a thousand layers would clearly be a non-publishable offence. Aside from the extreme depth, some of these solvers rely on linear operations, whereas current neural networks implicitly use nonlinear functionals.
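To see why each iteration reads like a layer, here is a small sketch of ISTA, a classic sparse-recovery solver (my pick for illustration, not one singled out above): every pass is a matrix-vector operation followed by an elementwise soft threshold, i.e. the shape of one layer, repeated hundreds of times with the same "weights".

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):                # each pass = one would-be "layer"
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
x_hat = ista(A, A @ x_true, lam=0.05)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```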

Recently, much has changed in Compressive Sensing with the appearance of Approximate Message Passing (AMP) algorithms. They are theoretically sound and require only a few iterations (5 to 20) to reach convergence. Each of these iterations can be expressed as a nonlinear functional applied to a matrix-vector multiply, akin to each layer's computation in neural networks:
[Image from: The SwAMP Thing! Sparse Estimation with the Swept Approximated Message-Passing Algorithm - implementation -]
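To give a flavor of these updates, here is a bare-bones sketch of the AMP iteration for sparse recovery in its original Donoho/Maleki/Montanari form with a soft-threshold denoiser (the swept variant in the paper above schedules its updates differently; the threshold rule and test data below are illustrative assumptions):

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp(A, y, alpha=1.5, n_iter=20):
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):                          # each pass = one "layer"
        tau = alpha * np.linalg.norm(z) / np.sqrt(m) # adaptive threshold level
        pseudo = x + A.T @ z                         # matrix-vector multiply...
        x_new = soft_threshold(pseudo, tau)          # ...then pointwise nonlinearity
        onsager = z * np.count_nonzero(x_new) / m    # Onsager correction term
        z = y - A @ x_new + onsager
        x = x_new
    return x

rng = np.random.default_rng(1)
m, n = 250, 500
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 25, replace=False)] = rng.standard_normal(25)
x_hat = amp(A, A @ x_true)                           # only ~20 "layers" deep
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```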

One could argue that the first layer in Compressive Sensing does not parallel that in Neural Networks. It turns out that a few people [6] are working on what is called one-bit sensing [5], which is very close in spirit to neural networks.
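The parallel is easy to state: a one-bit measurement y = sign(Ax) has exactly the form of a neural-network layer with random weights A and a sign activation. A toy sketch (the correlation proxy below is just the simplest estimate one can form from the signs, not the recovery methods of [5, 6]):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 400
A = rng.standard_normal((m, n))     # random measurement matrix / layer "weights"
x = rng.standard_normal(n)

y = np.sign(A @ x)                  # one-bit measurements = one "layer" output
# The correlation A.T @ y already points toward x (up to scale):
x_proxy = A.T @ y / m
cos = x_proxy @ x / (np.linalg.norm(x_proxy) * np.linalg.norm(x))
print("cosine similarity with the true signal:", cos)
```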

In all, each of these approaches is building deep networks in its own way... using different vocabularies.





