
Saturday, February 17, 2018

Saturday Morning Videos: IPAM Workshop on New Deep Learning Techniques


Yann LeCun mentioned it on his Twitter feed: the videos and slides of the IPAM workshop on New Deep Learning Techniques are out. Enjoy!

Samuel Bowman (New York University)
Toward natural language semantics in learned representations






Emily Fox (University of Washington)
Interpretable and Sparse Neural Network Time Series Models for Granger Causality Discovery






Ellie Pavlick (University of Pennsylvania)
Should we care about linguistics?






Leonidas Guibas (Stanford University)
Knowledge Transport Over Visual Data




Yann LeCun (New York University)
Public Lecture: Deep Learning and the Future of Artificial Intelligence






Alán Aspuru-Guzik (Harvard University)
Generative models for the inverse design of molecules and materials






Daniel Rueckert (Imperial College)
Deep learning in medical imaging: Techniques for image reconstruction, super-resolution and segmentation







Kyle Cranmer (New York University)
Deep Learning in the Physical Sciences







Stéphane Mallat (École Normale Supérieure)
Deep Generative Networks as Inverse Problems






Michael Elad (Technion - Israel Institute of Technology)
Sparse Modeling in Image Processing and Deep Learning







Yann LeCun (New York University)
Public Lecture: AI Breakthroughs & Obstacles to Progress, Mathematical and Otherwise






Xavier Bresson (Nanyang Technological University, Singapore)
Convolutional Neural Networks on Graphs







Federico Monti (Università della Svizzera italiana)
Deep Geometric Matrix Completion: a Geometric Deep Learning approach to Recommender Systems






Joan Bruna (New York University)
On Computational Hardness with Graph Neural Networks








Jure Leskovec (Stanford University)
Large-scale Graph Representation Learning







Arthur Szlam (Facebook)
Composable planning with attributes






Yann LeCun (New York University)
A Few (More) Approaches to Unsupervised Learning







Sanja Fidler (University of Toronto)
Teaching Machines with Humans in the Loop



Raquel Urtasun (University of Toronto)
Deep Learning for Self-Driving Cars




Pratik Chaudhari (University of California, Los Angeles (UCLA))
Unraveling the mysteries of stochastic gradient descent on deep networks







Stefano Soatto (University of California, Los Angeles (UCLA))
Emergence Theory of Deep Learning






Tom Goldstein (University of Maryland)
What do neural net loss functions look like?







Stanley Osher (University of California, Los Angeles (UCLA))
New Techniques in Optimization and Their Applications to Deep Learning and Related Inverse Problems







Michael Bronstein (USI Lugano, Switzerland)
Deep functional maps: intrinsic structured prediction for dense shape correspondence






Sainbayar Sukhbaatar (New York University)
Deep Architecture for Sets and Its Application to Multi-agent Communication






Zuowei Shen (National University of Singapore)
Deep Learning: Approximation of functions by composition







Wei Zhu (Duke University)
LDMnet: low dimensional manifold regularized neural networks













Tuesday, August 11, 2015

Videos and Slides: Statistical Machine Learning course at CMU, Ryan Tibshirani and Larry Wasserman, Spring 2015


Here are the videos and handouts of this spring's Statistical Machine Learning course at CMU, taught by Ryan Tibshirani and Larry Wasserman.

Class Assistant: Mallory Deptola. Teaching Assistants: Sashank Reddi, Jisu Kim, Hanzhang Hu, Shashank Srivastava.

Statistical Machine Learning is a second graduate-level course in advanced machine learning, assuming students have taken Machine Learning (10-715) and Intermediate Statistics (36-705). The term "statistical" in the title reflects the emphasis on statistical theory and methodology.

The course combines methodology with theoretical foundations and computational aspects. It treats both the "art" of designing good learning algorithms and the "science" of analyzing an algorithm's statistical properties and performance guarantees. Theorems are presented together with practical aspects of methodology and intuition to help students develop tools for selecting appropriate methods and approaches to problems in their own research.

VIDEOS:
  1. Video Lecture 1 Review Part 1
  2. Video Lecture 2 Review Part 2
  3. Video Lecture 3 Density Estimation
  4. Video Lecture 4 Density Estimation
  5. Video Lecture 5 Clustering
  6. Video Lecture 6 Clustering
  7. Video Lecture 7 Nonparametric Regression
  8. Video Lecture 8 Nonparametric Regression
  9. Video Lecture 9 Nonparametric Regression
  10. Video Lecture 10 Nonparametric Regression
  11. Video Lecture 11 Nonparametric Bayes
  12. Video Lecture 12 Sparsity
  13. Video Lecture 13 Sparsity
  14. Video Lecture 14 Graphical Models
  15. Video Lecture: review class Midterm Review
  16. Video Lecture 15 Graphical Models
  17. Video Lecture 16 Convexity
  18. Video Lecture 17 Convexity
  19. Video Lecture 18 Concentration of Measure
  20. Video Lecture 19 Concentration of Measure
  21. Video Lecture 20 Minimax
  22. Video Lecture 21 Minimax
  23. Video Lecture 22 Stein
  24. Video Lecture 23 Stein
  25. Video Lecture 24 Active Learning
  26. Video Lecture 25 The Truth

HANDOUTS: Syllabus

  1. Review Part 1
  2. Review Part 2
  3. Density Estimation
  4. Clustering
  5. Nonparametric Regression
  6. Bayes/Frequentist
  7. Nonparametric Bayes
  8. Sparsity
  9. PRACTICE PROBLEMS
  10. Graphical Models
  11. Convex Optimization
  12. Concentration of Measure
  13. Minimax Theory
  14. Stein's Unbiased Risk Estimate
  15. Active Learning
  16. The Truth
 
 

Monday, August 10, 2015

Video: Deep Learning by Geoff Hinton, University of Cambridge, June 29th, 2015

Back in 2007, many people felt that vision ought to be a feedforward process with little or no feedback. And while this might be true for the process of scene understanding, Geoff Hinton makes the case that learning that understanding, on the other hand, probably requires backpropagation, which in turn will require us to look more deeply into how the cortex really works. Without further ado:



the abstract:
I will describe an efficient, unsupervised learning procedure for a simple type of two-layer neural network called a Restricted Boltzmann Machine. I will then show how this algorithm can be used recursively to learn multiple layers of features without requiring any supervision. After this unsupervised “pre-training”, the features in all layers can be fine-tuned to be better at discriminating between classes by using the standard backpropagation procedure from the 1980s. Unsupervised pre-training greatly improves generalization to new data, especially when the number of labelled examples is small. Ten years ago, the pre-training approach initiated a revival of research on deep, feedforward neural networks. I will describe some of the major successes of deep networks for speech recognition, object recognition and machine translation and I will speculate about where this research is headed. The fact that backpropagation learning is now the method of choice for a wide variety of really difficult tasks means that neuroscientists may need to reconsider their well-worn arguments about why it cannot possibly be occurring in cortex. I shall conclude by undermining two of the commonest objections to the idea that cortex is actually backpropagating error derivatives through a hierarchy of cortical areas and I shall show that spike-time dependent plasticity is a signature of backpropagation.
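The recipe in the abstract (unsupervised layer-wise pre-training with RBMs, then backprop fine-tuning) is compact enough to sketch. Below is a minimal numpy illustration of one contrastive-divergence (CD-1) update for a single binary RBM; the layer sizes, learning rate, and stand-in data are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b_vis, b_hid, v0, lr=0.1):
    """One contrastive-divergence (CD-1) step for a binary RBM on a batch v0."""
    # Positive phase: hidden probabilities and a binary sample, given the data.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step back to a reconstruction of the visibles.
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)
    # Update: data statistics minus reconstruction statistics.
    n = v0.shape[0]
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n
    b_vis += lr * (v0 - p_v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)

# Toy usage: 784 visible units (e.g. MNIST pixels), 256 hidden units (assumed sizes).
W = 0.01 * rng.standard_normal((784, 256))
b_vis, b_hid = np.zeros(784), np.zeros(256)
batch = (rng.random((32, 784)) < 0.5).astype(float)  # stand-in binary data
cd1_update(W, b_vis, b_hid, batch)
```

Stacking then proceeds by training a second RBM on the first RBM's hidden probabilities, and so on, before supervised fine-tuning of the whole stack with backpropagation.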


Other formats are available from Cambridge:
MPEG-4 Video, 1280x720, 2.99 Mbit/s, 1.45 GB (View)
MPEG-4 Video, 640x360, 1.94 Mbit/s, 961.61 MB (View)
WebM, 640x360, 876.95 kbit/s, 423.92 MB (View)
iPod Video, 480x270, 522.46 kbit/s, 252.56 MB (View)
MP3, 44100 Hz, 250.09 kbit/s, 120.90 MB (Listen)
 

Saturday, May 23, 2015

Saturday Morning Videos: Slides and Videos from ICLR 2015

From the conference schedule
 
0900 0940 keynote Antoine Bordes (Facebook), Artificial Tasks for Artificial Intelligence (slides) Video1 Video2
0940 1000 oral Word Representations via Gaussian Embedding by Luke Vilnis and Andrew McCallum (University of Massachusetts Amherst) (slides) Video
1000 1020 oral Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN) by Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan Yuille (Baidu and UCLA) (slides) Video
1020 1050 coffee break

1050 1130 keynote David Silver (Google DeepMind), Deep Reinforcement Learning (slides) Video1 Video2
1130 1150 oral Deep Structured Output Learning for Unconstrained Text Recognition by Max Jaderberg, Karen Simonyan, Andrea Vedaldi, Andrew Zisserman (Oxford University and Google DeepMind) (slides) Video
1150 1210 oral Very Deep Convolutional Networks for Large-Scale Image Recognition by Karen Simonyan, Andrew Zisserman (Oxford) (slides) Video
1210 1230 oral Fast Convolutional Nets With fbfft: A GPU Performance Evaluation by Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, Yann LeCun (Facebook AI Research) (slides) Video
1230 1400 lunch On your own

1400 1700 posters Workshop Poster Session 1 – The Pavilion

1730 1900 dinner South Poolside – Sponsored by Google



May 8 0730 0900 breakfast South Poolside – Sponsored by Facebook

0900 1230 Oral Session – International Ballroom

0900 0940 keynote Terrence Sejnowski (Salk Institute), Beyond Representation Learning Video1 Video2
0940 1000 oral Reweighted Wake-Sleep (slides) Video
1000 1020 oral The local low-dimensionality of natural images (slides) Video
1020 1050 coffee break

1050 1130 keynote Percy Liang (Stanford), Learning Latent Programs for Question Answering (slides) Video1 Video2
1130 1150 oral Memory Networks (slides) Video
1150 1210 oral Object detectors emerge in Deep Scene CNNs (slides) Video
1210 1230 oral Qualitatively characterizing neural network optimization problems (slides) Video
1230 1400 lunch On your own

1400 1700 posters Workshop Poster Session 2 – The Pavilion

1730 1900 dinner South Poolside – Sponsored by IBM Watson



May 9 0730 0900 breakfast South Poolside – Sponsored by Qualcomm

0900 0940 keynote Hal Daumé III (U. Maryland), Algorithms that Learn to Think on their Feet (slides) Video
0940 1000 oral Neural Machine Translation by Jointly Learning to Align and Translate (slides) Video
1000 1030 coffee break


1030 1330 posters Conference Poster Session – The Pavilion (AISTATS attendees are invited to this poster session)

1330 1700 lunch and break On your own

1700 1800 ICLR/AISTATS Oral Session – International Ballroom

1700 1800 keynote Pierre Baldi (UC Irvine), The Ebb and Flow of Deep Learning: a Theory of Local Learning Video
1800 2000 ICLR/AISTATS reception Fresco's (near the pool)

 
 

Conference Oral Presentations

May 9 Conference Poster Session

Board Presentation
2 FitNets: Hints for Thin Deep Nets, Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio
3 Techniques for Learning Binary Stochastic Feedforward Neural Networks, Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh
4 Reweighted Wake-Sleep, Jorg Bornschein and Yoshua Bengio
5 Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs, Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan Yuille
7 Multiple Object Recognition with Visual Attention, Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu
8 Deep Narrow Boltzmann Machines are Universal Approximators, Guido Montufar
9 Transformation Properties of Learned Visual Representations, Taco Cohen and Max Welling
10 Joint RNN-Based Greedy Parsing and Word Composition, Joël Legrand and Ronan Collobert
11 Adam: A Method for Stochastic Optimization, Jimmy Ba and Diederik Kingma
13 Neural Machine Translation by Jointly Learning to Align and Translate, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio
15 Scheduled denoising autoencoders, Krzysztof Geras and Charles Sutton
16 Embedding Entities and Relations for Learning and Inference in Knowledge Bases, Bishan Yang, Scott Yih, Xiaodong He, Jianfeng Gao, and Li Deng
18 The local low-dimensionality of natural images, Olivier Henaff, Johannes Balle, Neil Rabinowitz, and Eero Simoncelli
20 Explaining and Harnessing Adversarial Examples, Ian Goodfellow, Jon Shlens, and Christian Szegedy
22 Modeling Compositionality with Multiplicative Recurrent Neural Networks, Ozan Irsoy and Claire Cardie
24 Very Deep Convolutional Networks for Large-Scale Image Recognition, Karen Simonyan and Andrew Zisserman
25 Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition, Vadim Lebedev, Yaroslav Ganin, Victor Lempitsky, Maksim Rakhuba, and Ivan Oseledets
27 Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN), Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan Yuille
28 Deep Structured Output Learning for Unconstrained Text Recognition, Max Jaderberg, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman
30 Zero-bias autoencoders and the benefits of co-adapting features, Kishore Konda, Roland Memisevic, and David Krueger
31 Automatic Discovery and Optimization of Parts for Image Classification, Sobhan Naderi Parizi, Andrea Vedaldi, Andrew Zisserman, and Pedro Felzenszwalb
33 Understanding Locally Competitive Networks, Rupesh Srivastava, Jonathan Masci, Faustino Gomez, and Juergen Schmidhuber
35 Leveraging Monolingual Data for Crosslingual Compositional Word Representations, Hubert Soyer, Pontus Stenetorp, and Akiko Aizawa
36 Move Evaluation in Go Using Deep Convolutional Neural Networks, Chris Maddison, Aja Huang, Ilya Sutskever, and David Silver
38 Fast Convolutional Nets With fbfft: A GPU Performance Evaluation, Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, and Yann LeCun
40 Word Representations via Gaussian Embedding, Luke Vilnis and Andrew McCallum
41 Qualitatively characterizing neural network optimization problems, Ian Goodfellow and Oriol Vinyals
42 Memory Networks, Jason Weston, Sumit Chopra, and Antoine Bordes
43 Generative Modeling of Convolutional Neural Networks, Jifeng Dai, Yang Lu, and Ying-Nian Wu
44 A Unified Perspective on Multi-Domain and Multi-Task Learning, Yongxin Yang and Timothy Hospedales
45 Object detectors emerge in Deep Scene CNNs, Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba

May 7 Workshop Poster Session

Board Presentation
2 Learning Non-deterministic Representations with Energy-based Ensembles, Maruan Al-Shedivat, Emre Neftci, and Gert Cauwenberghs
3 Diverse Embedding Neural Network Language Models, Kartik Audhkhasi, Abhinav Sethy, and Bhuvana Ramabhadran
4 Hot Swapping for Online Adaptation of Optimization Hyperparameters, Kevin Bache, Dennis Decoste, and Padhraic Smyth
5 Representation Learning for cold-start recommendation, Gabriella Contardo, Ludovic Denoyer, and Thierry Artieres
6 Training Convolutional Networks with Noisy Labels, Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus
7 Striving for Simplicity: The All Convolutional Net, Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox, and Martin Riedmiller
8 Learning linearly separable features for speech recognition using convolutional neural networks, Dimitri Palaz, Mathew Magimai Doss, and Ronan Collobert
9 Training Deep Neural Networks on Noisy Labels with Bootstrapping, Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich
10 On the Stability of Deep Networks, Raja Giryes, Guillermo Sapiro, and Alex Bronstein
11 Audio source separation with Discriminative Scattering Networks , Joan Bruna, Yann LeCun, and Pablo Sprechmann
13 Simple Image Description Generator via a Linear Phrase-Based Model, Pedro Pinheiro, Rémi Lebret, and Ronan Collobert
15 Stochastic Descent Analysis of Representation Learning Algorithms, Richard Golden
16 On Distinguishability Criteria for Estimating Generative Models, Ian Goodfellow
18 Embedding Word Similarity with Neural Machine Translation, Felix Hill, Kyunghyun Cho, Sebastien Jean, Coline Devin, and Yoshua Bengio
20 Deep metric learning using Triplet network, Elad Hoffer and Nir Ailon
22 Understanding Minimum Probability Flow for RBMs Under Various Kinds of Dynamics, Daniel Jiwoong Im, Ethan Buchman, and Graham Taylor
23 A Group Theoretic Perspective on Unsupervised Deep Learning, Arnab Paul and Suresh Venkatasubramanian
24 Learning Longer Memory in Recurrent Neural Networks, Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, and Marc'Aurelio Ranzato
25 Inducing Semantic Representation from Text by Jointly Predicting and Factorizing Relations, Ivan Titov and Ehsan Khoddam
27 NICE: Non-linear Independent Components Estimation, Laurent Dinh, David Krueger, and Yoshua Bengio
28 Discovering Hidden Factors of Variation in Deep Networks, Brian Cheung, Jesse Livezey, Arjun Bansal, and Bruno Olshausen
29 Tailoring Word Embeddings for Bilexical Predictions: An Experimental Comparison, Pranava Swaroop Madhyastha, Xavier Carreras, and Ariadna Quattoni
30 On Learning Vector Representations in Hierarchical Label Spaces, Jinseok Nam and Johannes Fürnkranz
31 In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning, Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro
33 Algorithmic Robustness for Semi-Supervised (ϵ, γ, τ)-Good Metric Learning, Maria-Irina Nicolae, Marc Sebban, Amaury Habrard, Éric Gaussier, and Massih-Reza Amini
35 Real-World Font Recognition Using Deep Network and Domain Adaptation, Zhangyang Wang, Jianchao Yang, Hailin Jin, Eli Shechtman, Aseem Agarwala, Jon Brandt, and Thomas Huang
36 Score Function Features for Discriminative Learning, Majid Janzamin, Hanie Sedghi, and Anima Anandkumar
38 Parallel training of DNNs with Natural Gradient and Parameter Averaging, Daniel Povey, Xiaohui Zhang, and Sanjeev Khudanpur
40 A Generative Model for Deep Convolutional Learning, Yunchen Pu, Xin Yuan, and Lawrence Carin
41 Random Forests Can Hash, Qiang Qiu, Guillermo Sapiro, and Alex Bronstein
42 Provable Methods for Training Neural Networks with Sparse Connectivity, Hanie Sedghi, and Anima Anandkumar
43 Visual Scene Representations: sufficiency, minimality, invariance and approximation with deep convolutional networks, Stefano Soatto and Alessandro Chiuso
44 Deep learning with Elastic Averaging SGD, Sixin Zhang, Anna Choromanska, and Yann LeCun
45 Example Selection For Dictionary Learning, Tomoki Tsuchida and Garrison Cottrell
46 Permutohedral Lattice CNNs, Martin Kiefel, Varun Jampani, and Peter Gehler
47 Unsupervised Domain Adaptation with Feature Embeddings, Yi Yang and Jacob Eisenstein
49 Weakly Supervised Multi-embeddings Learning of Acoustic Models, Gabriel Synnaeve and Emmanuel Dupoux

May 8 Workshop Poster Session

Board Presentation
2 Learning Activation Functions to Improve Deep Neural Networks, Forest Agostinelli, Matthew Hoffman, Peter Sadowski, and Pierre Baldi
3 Restricted Boltzmann Machine for Classification with Hierarchical Correlated Prior, Gang Chen and Sargur Srihari
4 Learning Deep Structured Models, Liang-Chieh Chen, Alexander Schwing, Alan Yuille, and Raquel Urtasun
5 N-gram-Based Low-Dimensional Representation for Document Classification, Rémi Lebret and Ronan Collobert
6 Low precision arithmetic for deep learning, Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David
7 Theano-based Large-Scale Visual Recognition with Multiple GPUs, Weiguang Ding, Ruoyan Wang, Fei Mao, and Graham Taylor
8 Improving zero-shot learning by mitigating the hubness problem, Georgiana Dinu and Marco Baroni
9 Incorporating Both Distributional and Relational Semantics in Word Representations, Daniel Fried and Kevin Duh
10 Variational Recurrent Auto-Encoders, Otto Fabius and Joost van Amersfoort
11 Learning Compact Convolutional Neural Networks with Nested Dropout, Chelsea Finn, Lisa Anne Hendricks, and Trevor Darrell
13 Compact Part-Based Image Representations: Extremal Competition and Overgeneralization, Marc Goessling and Yali Amit
15 Unsupervised Feature Learning from Temporal Data, Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen, and Yann LeCun
16 Classifier with Hierarchical Topographical Maps as Internal Representation, Pitoyo Hartono, Paul Hollensen, and Thomas Trappenberg
18 Entity-Augmented Distributional Semantics for Discourse Relations, Yangfeng Ji and Jacob Eisenstein
20 Flattened Convolutional Neural Networks for Feedforward Acceleration, Jonghoon Jin, Aysegul Dundar, and Eugenio Culurciello
22 Gradual Training Method for Denoising Auto Encoders, Alexander Kalmanovich and Gal Chechik
23 Deep Gaze I: Boosting Saliency Prediction with Feature Maps Trained on ImageNet, Matthias Kümmerer, Lucas Theis, and Matthias Bethge
24 Difference Target Propagation, Dong-Hyun Lee, Saizheng Zhang, Asja Fischer, Antoine Biard, and Yoshua Bengio
25 Predictive encoding of contextual relationships for perceptual inference, interpolation and prediction, Mingmin Zhao, Chengxu Zhuang, Yizhou Wang, and Tai Sing Lee
27 Purine: A Bi-Graph based deep learning framework, Min Lin, Shuo Li, Xuan Luo, and Shuicheng Yan
28 Pixel-wise Deep Learning for Contour Detection, Jyh-Jing Hwang and Tyng-Luh Liu
29 Ensemble of Generative and Discriminative Techniques for Sentiment Analysis of Movie Reviews, Grégoire Mesnil, Tomas Mikolov, Marc'Aurelio Ranzato, and Yoshua Bengio
30 Fast Label Embeddings for Extremely Large Output Spaces, Paul Mineiro and Nikos Karampatziakis
31 An Analysis of Unsupervised Pre-training in Light of Recent Advances, Tom Paine, Pooya Khorrami, Wei Han, and Thomas Huang
33 Fully Convolutional Multi-Class Multiple Instance Learning, Deepak Pathak, Evan Shelhamer, Jonathan Long, and Trevor Darrell
35 What Do Deep CNNs Learn About Objects?, Xingchao Peng, Baochen Sun, Karim Ali, and Kate Saenko
36 Representation using the Weyl Transform, Qiang Qiu, Andrew Thompson, Robert Calderbank, and Guillermo Sapiro
38 Denoising autoencoder with modulated lateral connections learns invariant representations of natural images, Antti Rasmus, Harri Valpola, and Tapani Raiko
40 Towards Deep Neural Network Architectures Robust to Adversarial Examples, Shixiang Gu and Luca Rigazio
41 Explorations on high dimensional landscapes, Levent Sagun, Ugur Guney, and Yann LeCun
42 Generative Class-conditional Autoencoders, Jan Rudy and Graham Taylor
43 Attention for Fine-Grained Categorization, Pierre Sermanet, Andrea Frome, and Esteban Real
44 A Baseline for Visual Instance Retrieval with Deep Convolutional Networks, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson
45 Visual Scene Representation: Scaling and Occlusion, Stefano Soatto, Jingming Dong, and Nikolaos Karianakis
46 Deep networks with large output spaces, Sudheendra Vijayanarasimhan, Jon Shlens, Jay Yagnik, and Rajat Monga
47 Efficient Exact Gradient Update for training Deep Networks with Very Large Sparse Targets, Pascal Vincent
49 Self-informed neural network structure learning, David Warde-Farley, Andrew Rabinovich, and Dragomir Anguelov
 
 
 
 
 
 

Saturday, April 18, 2015

Saturday Morning Video: Towards a Learning Theory of Causation - Implementation -

Here is the video:

We pose causal inference as the problem of learning to classify probability distributions. In particular, we assume access to a collection $\{(S_i, l_i)\}_{i=1}^n$, where each $S_i$ is a sample drawn from the probability distribution of $X_i \times Y_i$, and $l_i$ is a binary label indicating whether "$X_i \to Y_i$" or "$X_i \leftarrow Y_i$". Given these data, we build a causal inference rule in two steps. First, we featurize each $S_i$ using the kernel mean embedding associated with some characteristic kernel. Second, we train a binary classifier on such embeddings to distinguish between causal directions. We present generalization bounds showing the statistical consistency and learning rates of the proposed approach, and provide a simple implementation that achieves state-of-the-art cause-effect inference. Furthermore, we extend our ideas to infer causal relationships between more than two variables.
The code is here.
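For readers who want the shape of the method before opening the linked code: the two-step recipe (kernel mean embedding, then a binary classifier) can be sketched with random Fourier features approximating an RBF kernel. This is an illustrative sketch, not the authors' implementation; the feature dimension, bandwidth, and toy data-generating process are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
D, gamma = 200, 1.0  # random-feature dimension and RBF bandwidth (assumed values)

# Random Fourier features approximating an RBF kernel on (x, y) pairs in R^2.
Omega = rng.normal(scale=np.sqrt(2 * gamma), size=(2, D))
phase = rng.uniform(0, 2 * np.pi, size=D)

def mean_embedding(S):
    """Empirical kernel mean embedding of a sample S of shape (m, 2)."""
    return np.sqrt(2.0 / D) * np.cos(S @ Omega + phase).mean(axis=0)

# Hypothetical training set: pairs where X -> Y via Y = X^2 + noise (label 1),
# plus the column-swapped versions representing X <- Y (label 0).
samples, labels = [], []
for _ in range(200):
    x = rng.normal(size=(100, 1))
    y = x**2 + 0.1 * rng.normal(size=(100, 1))
    S = np.hstack([x, y])
    samples += [S, S[:, ::-1]]
    labels += [1, 0]

features = np.array([mean_embedding(S) for S in samples])
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```

A held-out sample S would then be classified with clf.predict(mean_embedding(S)[None, :]), the prediction being read as the inferred causal direction.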
 

Saturday, December 27, 2014

Video: Alexander Gerst’s Earth timelapses

 
Of specific interest is the segment starting at about 4 minutes and 38 seconds: the overflight of France takes about 8 seconds in the timelapse, while the same pass in real time would last about 2 minutes and 15 seconds.
 
 

Thursday, December 18, 2014

Video and Slides: Random Embeddings, Matrix-valued Kernels and Deep Learning


 

Random Embeddings, Matrix-valued Kernels and Deep Learning by Vikas Sindhwani

The recent dramatic success of Deep Neural Networks (DNNs) in many applications highlights the statistical benefits of marrying near-nonparametric models with large datasets, using efficient optimization algorithms running in distributed computing environments. In the 1990s, kernel methods became the toolset of choice for a wide variety of machine learning problems due to their theoretical appeal and algorithmic roots in convex optimization, replacing neural nets in many settings. So what changed between then and the modern deep learning revolution? Perhaps the advent of "big data", or perhaps the notion of "depth", or perhaps better DNN training algorithms, or all of the above. Or perhaps also that the development of kernel methods has somewhat lagged behind in terms of scalable training techniques, effective mechanisms for kernel learning and parallel implementations.
I will describe new efforts to resolve scalability challenges of kernel methods, for both scalar and multivariate prediction settings, allowing them to be trained on big data using a combination of randomized data embeddings, Quasi-Monte Carlo (QMC) acceleration, distributed convex optimization and input-output kernel learning. I will report that on classic speech recognition and computer vision datasets, randomized kernel methods and deep neural networks turn out to have essentially identical performance. Curiously, though, randomized kernel methods begin to look a bit like neural networks, but with a clear mathematical basis for their architecture. Conversely, invariant kernel learning and matrix-valued kernels may offer a new way to construct deeper architectures. This talk will describe recent research results and personal perspectives on points of synergy between these fields.
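To make "randomized data embeddings" and "QMC acceleration" concrete: the Rahimi-Recht construction approximates an RBF kernel with cosines of random projections, and the QMC variant draws those frequencies from a low-discrepancy sequence instead of i.i.d. Gaussians. A minimal sketch, with bandwidth, dimensions, and toy data chosen arbitrarily rather than taken from the talk:

```python
import numpy as np
from scipy.stats import norm, qmc
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
d, D, gamma = 5, 500, 0.5  # input dim, feature dim, RBF bandwidth (assumed values)

# QMC frequencies: push a low-discrepancy Halton sequence through the Gaussian
# inverse CDF, instead of drawing i.i.d. normals as plain Monte Carlo would.
halton = qmc.Halton(d=d, seed=0).random(D)       # (D, d) points in (0, 1)
Omega = norm.ppf(halton).T * np.sqrt(2 * gamma)  # (d, D) frequency matrix
phase = rng.uniform(0, 2 * np.pi, size=D)

def features(X):
    """Random Fourier features: z(x)^T z(x') approximates exp(-gamma*||x-x'||^2)."""
    return np.sqrt(2.0 / D) * np.cos(X @ Omega + phase)

# Toy regression: a linear model on the features approximates kernel ridge.
X = rng.normal(size=(1000, d))
y = np.sin(X).sum(axis=1) + 0.1 * rng.normal(size=1000)
model = Ridge(alpha=1e-3).fit(features(X), y)
print("train R^2:", model.score(features(X), y))
```

Fitting a linear model on D such features avoids forming the n-by-n kernel matrix entirely, which is the scalability point the talk develops.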






 
 
