Wednesday, September 27, 2017

Videos: Montreal AI Symposium






So the Montreal AI Symposium is currently happening in Montreal. Here are the videos of the morning and afternoon sessions, following the program (and congrats to the organizers Hugo Larochelle, Joëlle Pineau, Adam Trischler, Nicolas Chapados, and Guillaume Chicoisne for putting these presentations online via the livestream):


Keynote — Artificial Intelligence Goes All-In: Computers Playing Poker
Michael Bowling, University of Alberta and DeepMind
9.50 – 10.10
Contributed talk — A Distributional Perspective on Reinforcement Learning
Marc G. Bellemare, Google Brain
10.10 – 10.30
Contributed talk — Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments
Ryan Lowe, McGill University, OpenAI; Yi Wu, UC Berkeley; Aviv Tamar, UC Berkeley; Jean Harb, McGill University, OpenAI; Pieter Abbeel, UC Berkeley, OpenAI; Igor Mordatch, OpenAI
11.00 – 11.20
Contributed talk — Team Sports Modelling
Norm Ferns, SPORTLOGiQ; Mehrsan Javan, SPORTLOGiQ
11.20 – 11.40
Contributed talk — FigureQA: An annotated figure dataset for visual reasoning
Samira Ebrahimi Kahou, Microsoft; Adam Atkinson, Microsoft; Vincent Michalski, University of Montreal; Akos Kadar, Microsoft; Adam Trischler, Microsoft; Yoshua Bengio, University of Montreal
11.40 – 12.00
Contributed talk — FiLM: Visual Reasoning with a General Conditioning Layer
Ethan Perez, MILA and Rice University; Harm de Vries, MILA; Florian Strub, Université Lille; Vincent Dumoulin, MILA; Aaron Courville, MILA and CIFAR
13.30 – 14.10
Keynote — Deep Learning for Self-Driving Cars
Raquel Urtasun, University of Toronto and Uber

14.10 – 14.30
Contributed talk — Deep 6-DOF Tracking
Mathieu Garon, Université Laval; Jean-François Lalonde, Université Laval

14.30 – 14.50
Contributed talk — Deep Learning for Character Animation
Daniel Holden, Ubisoft Montreal
15.20 – 15.40
Contributed talk — Assisting combinatorial chemistry in the search of highly bioactive peptides
Prudencio Tossou, Université Laval; Mario Marchand, Université Laval; François Laviolette, Université Laval

15.40 – 16.00
Contributed talk — Saving Newborn Lives at Birth through Machine Learning
Charles Onu, Ubenwa Intelligence Solutions Inc; Doina Precup, McGill University

16.00 – 16.20
Contributed talk — Meticulous Transparency — A Necessary Practice for Ethical AI
Abhishek Gupta; Dr. David Benrimoh

17.00 – 20.00

Poster Session + Happy Hour with Sponsors

Morning session



Afternoon session


h/t Hugo


Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, September 26, 2017

On Principal Components Regression, Random Projections, and Column Subsampling

Continuing on the Random Projection bit today:

Principal Components Regression (PCR) is a traditional tool for dimension reduction in linear regression that has been both criticized and defended. One concern about PCR is that obtaining the leading principal components tends to be computationally demanding for large data sets. While random projections do not possess the optimality properties of the leading principal subspace, they are computationally appealing and hence have become increasingly popular in recent years. In this paper, we present an analysis showing that for random projections satisfying a Johnson-Lindenstrauss embedding property, the prediction error in subsequent regression is close to that of PCR, at the expense of requiring a slightly larger number of random projections than principal components. Column sub-sampling constitutes an even cheaper way of randomized dimension reduction outside the class of Johnson-Lindenstrauss transforms. We provide numerical results based on synthetic and real data as well as basic theory revealing differences and commonalities in terms of statistical performance.
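To make the comparison concrete, here is a minimal sketch (not the paper's code; data, dimensions, and the reduced dimension k are arbitrary synthetic choices) contrasting PCR with least squares on a Gaussian Johnson-Lindenstrauss projection of the same design matrix:

```python
# Minimal illustration (not the paper's code): compare least squares on the k leading
# principal components of X with least squares on a k-dimensional Gaussian random
# projection of X. Data, dimensions, and k are arbitrary choices for the example.
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 500, 200, 20                     # samples, features, reduced dimension
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)

def fit_predict(Z, y):
    """Ordinary least squares on the reduced design Z; return in-sample predictions."""
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return Z @ w

# PCR: project X onto its k leading right singular vectors (principal directions).
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
Z_pcr = X @ Vt[:k].T

# Random projection: a k-dimensional Gaussian JL-style embedding of the features.
R = rng.standard_normal((p, k)) / np.sqrt(k)
Z_rp = X @ R

for name, Z in [("PCR", Z_pcr), ("random projection", Z_rp)]:
    err = np.linalg.norm(y - fit_predict(Z, y)) / np.linalg.norm(y)
    print(f"{name:>17}: relative in-sample error {err:.3f}")
```

Consistent with the abstract, in experiments of this kind the random projection typically needs a somewhat larger reduced dimension than PCR to reach a comparable prediction error.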



Lazy stochastic principal component analysis / Near Optimal Sketching of Low-Rank Tensor Regression

Random projections at work today, in PCA and in tensor regression!



Stochastic principal component analysis (SPCA) has become a popular dimensionality reduction strategy for large, high-dimensional datasets. We derive a simplified algorithm, called Lazy SPCA, which has reduced computational complexity and is better suited for large-scale distributed computation. We prove that SPCA and Lazy SPCA find the same approximations to the principal subspace, and that the pairwise distances between samples in the lower-dimensional space are invariant to whether SPCA is executed lazily or not. Empirical studies find downstream predictive performance to be identical for both methods, and superior to random projections, across a range of predictive models (linear regression, logistic lasso, and random forests). In our largest experiment with 4.6 million samples, Lazy SPCA reduced 43.7 hours of computation to 9.9 hours. Overall, Lazy SPCA relies exclusively on matrix multiplications, besides an operation on a small square matrix whose size depends only on the target dimensionality.
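For readers less familiar with this family of methods, here is a generic randomized-PCA sketch in the spirit of what the abstract describes: one large matrix product against a random test matrix, followed by cheap operations on small matrices. It is an illustration under my own assumptions (synthetic data, arbitrary oversampling), not the paper's Lazy SPCA algorithm.

```python
# Generic randomized PCA illustration (NOT the paper's Lazy SPCA): approximate the
# top-k principal directions of a centered data matrix X from a random sketch,
# keeping the heavy work to matrix multiplications plus small decompositions.
import numpy as np

def randomized_pca_scores(X, k, oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    ell = k + oversample                        # slightly oversampled sketch width
    Omega = rng.standard_normal((X.shape[1], ell))
    Y = X @ Omega                               # tall n x ell sketch (one big product)
    Q, _ = np.linalg.qr(Y)                      # orthonormal basis for range(Y)
    B = Q.T @ X                                 # small ell x p matrix
    _, _, Vt = np.linalg.svd(B, full_matrices=False)   # SVD of the small matrix
    return X @ Vt[:k].T                         # k-dimensional scores for each sample

rng = np.random.default_rng(1)
X = rng.standard_normal((10_000, 300))
X -= X.mean(axis=0)                             # center before PCA
print(randomized_pca_scores(X, k=25).shape)     # (10000, 25)
```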


Near Optimal Sketching of Low-Rank Tensor Regression by Jarvis Haupt, Xingguo Li, David P. Woodruff
We study the least squares regression problem $\min_{\Theta \in S_{\odot D,R}} \|A\Theta - b\|_2$, where $S_{\odot D,R}$ is the set of $\Theta$ for which $\Theta = \sum_{r=1}^{R} \theta^{(r)}_1 \circ \cdots \circ \theta^{(r)}_D$ for vectors $\theta^{(r)}_d \in \mathbb{R}^{p_d}$ for all $r \in [R]$ and $d \in [D]$, and $\circ$ denotes the outer product of vectors. That is, $\Theta$ is a low-dimensional, low-rank tensor. This is motivated by the fact that the number of parameters in $\Theta$ is only $R \cdot \sum_{d=1}^{D} p_d$, which is significantly smaller than the $\prod_{d=1}^{D} p_d$ number of parameters in ordinary least squares regression. We consider the above CP decomposition model of tensors $\Theta$, as well as the Tucker decomposition. For both models we show how to apply data dimensionality reduction techniques based on sparse random projections $\Phi \in \mathbb{R}^{m \times n}$, with $m \ll n$, to reduce the problem to a much smaller problem $\min_{\Theta} \|\Phi A \Theta - \Phi b\|_2$, for which if $\Theta'$ is a near-optimum to the smaller problem, then it is also a near-optimum to the original problem. We obtain significantly smaller dimension and sparsity in $\Phi$ than is possible for ordinary least squares regression, and we also provide a number of numerical simulations supporting our theory.
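The core mechanism — multiply both $A$ and $b$ by a sparse random projection $\Phi$ with $m \ll n$ and solve the much smaller least squares problem — can be illustrated on a plain (untensored) regression. The snippet below is only a toy sketch under that simplification, with made-up dimensions; the paper's actual contribution is the analysis for CP- and Tucker-structured $\Theta$ and the resulting dimension and sparsity bounds for $\Phi$.

```python
# Toy sketch-and-solve illustration (not the paper's algorithm): apply a sparse,
# CountSketch-style random projection Phi (m << n) to A and b, solve the smaller
# least squares problem, and check its residual on the ORIGINAL problem. The paper
# treats tensor-structured parameters (CP/Tucker); here the parameter is a plain vector.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n, p, m = 20_000, 50, 500                          # tall problem, small sketch
A = rng.standard_normal((n, p))
b = A @ rng.standard_normal(p) + 0.5 * rng.standard_normal(n)

# Phi has one random +/-1 entry per column, so Phi @ A costs roughly nnz(A) operations.
rows = rng.integers(0, m, size=n)
signs = rng.choice([-1.0, 1.0], size=n)
Phi = sparse.csr_matrix((signs, (rows, np.arange(n))), shape=(m, n))

x_opt, *_ = np.linalg.lstsq(A, b, rcond=None)                 # full problem
x_sketch, *_ = np.linalg.lstsq(Phi @ A, Phi @ b, rcond=None)  # sketched problem

residual = lambda x: np.linalg.norm(A @ x - b)
print(f"optimal residual:  {residual(x_opt):.2f}")
print(f"sketched residual: {residual(x_sketch):.2f}")         # close to optimal, with m << n
```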

