Wednesday, June 17, 2015

Paris Machine Learning Meetup #10 Season 2 Finale: "And so it begins": Deep Learning, Recovering Robots, Vowpal and Hadoop, Predicsis, Matlab, Bayesian test, Experiments on #ComputationalComedy & A.I.



Here is the streaming feed for tonight. The program is below: 

Video Feed

For this last regular meetup of the season (the season 2 finale), we were invited by the good folks at MathWorks and held the meetup at UPMC. The meetup started at 6:50 PM Paris time.

Here is the program with the attendant slides. Most talks were given in French (the slides are always in English), except where noted. For those who speak only English, three presentations were given in English; we scheduled them back to back as an "English session", which starts at about 58 minutes and 22 seconds into the video with Samim and finishes at roughly 2 hours and 05 minutes with Florence.


+ Franck Bardol, Igor Carron, Meetup Presentation 
(talk given in French)

+ Olivier Corradi, Snips.net lightning talk (at 7 minutes and 19 seconds in the video). Presentation slides
(talk given in French)
Snips is using artificial intelligence to make technology disappear. We're launching a unique lab to experiment with new technology - and we need your help to build it.
+ Heloise Nonne, Quantmetry, "Online learning, Vowpal Wabbit and Hadoop"
(talk given in French at 12 minutes and 19 seconds in the video)
Online learning has recently caught a lot of attention, following some competitions, and especially after Criteo released a very large dataset for a Kaggle contest.

Online learning makes it possible to process massive data: the learner consumes examples sequentially, using a small amount of memory and limited CPU resources. It is also particularly suited to handling time-evolving data, or to exploring a data set and testing many combinations of features.

Vowpal Wabbit has become quite popular: it is a handy, light, and efficient command-line tool for online learning on gigabytes of data, even on a standard laptop with standard memory. After a reminder of online learning principles, I'll present the advantages of Vowpal Wabbit and the way it can be parallelized on Hadoop in a distributed fashion. A minimal sketch of the online learning loop follows below.
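For readers new to online learning, here is a minimal sketch of the core loop in Python. This is not Vowpal Wabbit itself (VW is an optimized C++ command-line tool); the hash size, learning rate, and toy feature stream below are illustrative assumptions, not VW's defaults. The point is that the learner sees each example once, updates, and discards it, so memory stays constant however large the data grows.

```python
import math

# Illustrative assumptions, not VW's defaults: 2^20 hashed weights, fixed learning rate.
D = 2 ** 20          # fixed-size weight vector via the hashing trick
w = [0.0] * D        # memory use is constant, independent of data size
lr = 0.1

def features(raw_tokens):
    """Hash raw string tokens into the fixed index space."""
    return [(hash(tok) % D, 1.0) for tok in raw_tokens]

def predict(x):
    """Logistic prediction from the sparse hashed features."""
    z = sum(w[i] * v for i, v in x)
    z = max(min(z, 35.0), -35.0)          # guard against overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def update(x, y):
    """One stochastic gradient step on the logistic loss; the example is then discarded."""
    p = predict(x)
    for i, v in x:
        w[i] -= lr * (p - y) * v

# Stream over the data once: predict, then learn, one example at a time.
toy_stream = [(["color:red", "clicks:3"], 1), (["color:blue", "clicks:0"], 0)]
for raw, y in toy_stream:
    x = features(raw)
    print(f"pred={predict(x):.3f} label={y}")
    update(x, y)
```

Vowpal Wabbit implements this loop with adaptive learning rates and many refinements; on Hadoop, it runs one such learner per node and synchronizes the weights via an AllReduce operation, which is the standard scheme for parallelizing VW in a distributed setting.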
+ Amine El Helou, Laurence Vachon, MathWorks, “MATLAB for Data Science and Machine Learning”
(talk given in French at 35 minutes and 50 seconds in the video)
An integrated data analytics workflow: developing data-driven predictive models with MATLAB.
+ Samim Winiger, "Experiments on #ComputationalComedy and A.I." (remote from Berlin and in English; starts at 58 minutes and 22 seconds in the video)

Examples of work: Obama-RNN (machine-generated political speeches) and TED-RNN (machine-generated TED talks, "ideas worth generating"), with the attendant video.
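Those generators are character-level recurrent networks in the spirit of char-rnn. A full RNN is too long to inline here, so below is a deliberately crude stand-in, a character-level Markov chain in Python: it shows the same "learn character statistics from a corpus, then sample one character at a time" loop, minus the neural network. The corpus is a made-up placeholder.

```python
import random
from collections import defaultdict

# Placeholder corpus; the real generators were trained on speech and TED transcripts.
corpus = ("my fellow citizens, we gather today to speak of ideas, "
          "ideas worth generating, ideas worth sharing. ") * 20

def train(text, order=4):
    """Record which characters follow each length-`order` context."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=200):
    """Sample one character at a time, conditioned on the last `order` characters."""
    order, out = len(seed), seed
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:     # unseen context: stop early
            break
        out += random.choice(followers)
    return out

print(generate(train(corpus), corpus[:4]))
```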

+ Ruslan Salakhutdinov, University of Toronto, "Learning Multimodal Deep Models" (remote from Toronto and in English; starts at 1 hour, 06 minutes and 08 seconds in the video)

+ Florence Benezit-Gajic, PredicSis, "PredicSis: Prediction API"
(talk given in English, starts at 1 hour 47 minutes and 55 seconds in the video)
Not all predictive APIs are born equal. Come and take a look at the PredicSis API.
+ Jean-Baptiste Mouret, INRIA/UPMC, "Robots that can recover from damage in minutes". Attendant video: https://youtu.be/T-c17RKh3uE
(talk given in French, starts at 2 hours 06 minutes and 40 seconds in the video)
A major obstacle to the widespread adoption of robots in uncontrolled environments (i.e. outside of factories) is their fragility. In this talk, we describe a trial-and-error learning algorithm that allows robots to adapt to damage in less than two minutes, and thus will enable more robust, effective, autonomous robots.
http://chronos.isir.upmc.fr/~mouret/website/nature_press.xhtml#faq
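The algorithm in the talk (the "Intelligent Trial and Error" approach from the attendant Nature paper) pairs a behavior-performance map precomputed in simulation with Gaussian-process Bayesian optimization on the physical robot. The toy Python sketch below, with made-up numbers and no Gaussian process, only illustrates the shape of the adaptation loop: trust the simulated map, try the most promising gait, replace optimism with the measured result, and stop as soon as something works well enough.

```python
import random

# Made-up prior map: expected performance of 100 candidate gaits, as if
# precomputed in simulation for the intact robot.
behaviors = [f"gait_{i:02d}" for i in range(100)]
prior = {b: random.uniform(0.2, 1.0) for b in behaviors}

def try_on_robot(behavior):
    """Placeholder for a physical trial; damage degrades some gaits more than others."""
    penalty = 0.8 if hash(behavior) % 3 == 0 else 0.2
    return max(0.0, prior[behavior] - penalty + random.gauss(0, 0.05))

def adapt(threshold=0.6, max_trials=15):
    belief = dict(prior)                      # start by trusting the simulated map
    tried = set()
    for trial in range(1, max_trials + 1):
        untried = [b for b in belief if b not in tried]
        if not untried:
            break
        b = max(untried, key=belief.get)      # most promising remaining gait
        tried.add(b)
        perf = try_on_robot(b)
        print(f"trial {trial}: {b} -> {perf:.2f}")
        if perf >= threshold:                 # good enough: start walking
            return b
        belief[b] = perf                      # replace optimism with evidence
    return None

adapt()
```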

+ Christian Robert, Université Paris-Dauphine, "Testing as estimation: the demise of the Bayes factors"
(talk given in French, starts at 2 hours 26 minutes and 00 seconds in the video)
We consider a novel paradigm for Bayesian testing of hypotheses and Bayesian model comparison. Our alternative to the traditional construction of posterior probabilities that a given hypothesis is true, or that the data originates from a specific model, is to consider the models under comparison as components of a mixture model. We therefore replace the original testing problem with an estimation one that focuses on the probability weight of a given model within the mixture. We analyze the sensitivity of the resulting posterior distribution of the weights to various choices of prior modeling for those weights. We stress that a major appeal of this novel perspective is that generic improper priors are acceptable, without putting convergence in jeopardy. Among other features, this allows for a resolution of the Lindley-Jeffreys paradox. http://arxiv.org/abs/1412.2044
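In symbols (following the preprint), the comparison of two models M1: x ~ f1(x|θ1) and M2: x ~ f2(x|θ2) is recast as estimation of the weight of an encompassing mixture:

```latex
% Testing M_1 versus M_2 becomes estimating \alpha in the encompassing mixture
\[
  m_\alpha(x) \;=\; \alpha\, f_1(x \mid \theta_1) \;+\; (1 - \alpha)\, f_2(x \mid \theta_2),
  \qquad 0 \le \alpha \le 1,
\]
% with, e.g., a Beta(a_0, a_0) prior on the weight. The posterior distribution
% of \alpha, concentrating near 1 or near 0, then takes over the role played
% by the Bayes factor or the posterior model probability.
```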
 
 
