
Wednesday, May 22, 2019

LightOn Workshop #3: The Future of Random Matrices (Friday May 24th, 2019)



We will have LightOn's third research workshop this coming Friday (Workshop #1 is here, Workshop #2 is here). You can register here. The program is currently as follows (we may have a fourth speaker, stay tuned).

The workshop will be streamed online; the YouTube link will be added to this post later.



13:30 Start

Title: Differentially Private Compressive Learning - Large-scale learning with the memory of a goldfish
Abstract: Inspired by compressive sensing, Compressive Statistical Learning allows drastic volume and dimension reduction when learning from large/distributed/streamed data collections. The principle is to exploit random projections to compute a low-dimensional (nonlinear) sketch (a vector of random empirical generalized moments), in essentially one pass over the training collection. Sketches of controlled size have been shown to capture the information relevant to certain learning tasks such as unsupervised clustering, Gaussian mixture modeling, or PCA. As a proof of concept, more than a thousand hours of speech recordings can be distilled to a sketch of only a few kilobytes, capturing enough information to estimate a Gaussian Mixture Model for speaker verification. The talk will highlight the main features of this framework, including statistical learning guarantees and differential privacy.
Joint work with Antoine Chatalic (IRISA, Rennes), Vincent Schellekens & Laurent Jacques (Univ Louvain, Belgium), Florimond Houssiau & Yves-Alexandre de Montjoye (Imperial College, London, UK), Nicolas Keriven (ENS Paris), Yann Traonmilin (Univ Bordeaux), and Gilles Blanchard (IHES).
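
As a rough illustration of the sketching idea in this abstract, here is a minimal NumPy example (mine, not the speakers' code): it computes a random Fourier-feature sketch, i.e. empirical generalized moments averaged in a single pass over the data. The frequency matrix Omega, the sketch size, and the toy dataset are assumptions chosen purely for illustration.

```python
# Minimal illustrative sketch of compressive learning (not the speakers' pipeline):
# average complex exponentials of random projections of the data.
import numpy as np

def compute_sketch(X, Omega):
    """X: (n, d) data matrix; Omega: (d, m) random frequency matrix.
    Returns an m-dimensional complex sketch (empirical generalized moments)."""
    Z = X @ Omega                       # (n, m) random projections
    return np.exp(1j * Z).mean(axis=0)  # one pass over the data

# Toy usage: 10,000 points in 20 dimensions, sketch of size 500.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))
Omega = rng.normal(size=(20, 500))
sketch = compute_sketch(X, Omega)       # downstream learning would use only this vector
```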

Title: Scaling-up Large Scale Kernel Learning

15:05 Coffee break

Title: Beyond backpropagation: alternative training methods for neural networks
Abstract: Backpropagation has long been the de facto choice for training neural networks. Modern paradigms are implicitly optimized for it, and numerous guidelines exist to ensure its proper use. Yet it is not without flaws: from preventing effective parallelisation of the backward pass to a lack of biological realism, issues abound. This has motivated the development of numerous alternative methods, most of which have failed to scale up past toy problems like MNIST or CIFAR-10.
In this talk, we explore some recently developed training algorithms, and try to explain why they have failed to match the gold standard that is backpropagation. In particular, we focus on feedback alignment methods, and demonstrate a path to a better understanding of their underlying mechanics.
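
For readers unfamiliar with feedback alignment, here is a minimal, self-contained NumPy sketch (an assumption of mine, not the speakers' material): a two-layer network on a toy target where the error is propagated back through a fixed random matrix B instead of the transpose of the forward weights.

```python
# Minimal illustrative feedback-alignment example (not the speakers' code).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 10, 32, 1, 0.01

W1 = rng.normal(scale=0.1, size=(n_in, n_hid))
W2 = rng.normal(scale=0.1, size=(n_hid, n_out))
B  = rng.normal(scale=0.1, size=(n_out, n_hid))   # fixed random feedback weights

X = rng.normal(size=(256, n_in))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy target

for step in range(200):
    # forward pass
    h = np.tanh(X @ W1)
    y_hat = h @ W2
    e = y_hat - y                       # output error (squared-loss gradient)

    # feedback alignment: the error travels back through B, not W2.T
    delta_h = (e @ B) * (1 - h**2)

    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ delta_h / len(X)
```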

16:10 End


More information will be featured on LightOn's blog. We are also on Twitter (@LightOnIO) and LinkedIn.



Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn, or the Advanced Matrix Factorization group on LinkedIn.

Liked this entry? Subscribe to Nuit Blanche's feed; there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv
