ICLR 2017 starts today in Toulon and runs through the week. There will be a blog post for each half day, with direct links to the papers on OpenReview. The meeting is streamed live on Facebook at https://www.facebook.com/iclr.cc/ . If you want to say hi, I am around.
Monday April 24, 2017
Morning Session – Session Chair: Dhruv Batra
7.00 - 8.45 Registration
8.45 - 9.00 Opening Remarks
9.00 - 9.40 Invited talk 1: Eero Simoncelli
9.40 - 10.00 Contributed talk 1: End-to-end Optimized Image Compression
10.00 - 10.20 Contributed talk 2: Amortised MAP Inference for Image Super-resolution
10.20 - 10.30 Coffee Break
10.30 - 12.30 Poster Session 1
C1: Making Neural Programming Architectures Generalize via Recursion (slides, code, video)
C2: Learning Graphical State Transitions (code)
C3: Distributed Second-Order Optimization using Kronecker-Factored Approximations
C4: Normalizing the Normalizers: Comparing and Extending Network Normalization Schemes
C5: Neural Program Lattices
C6: Diet Networks: Thin Parameters for Fat Genomics
C7: Unsupervised Cross-Domain Image Generation (TensorFlow implementation)
C8: Towards Principled Methods for Training Generative Adversarial Networks
C9: Recurrent Mixture Density Network for Spatiotemporal Visual Attention
C10: Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer (PyTorch code)
C11: Pruning Filters for Efficient ConvNets
C12: Stick-Breaking Variational Autoencoders
C13: Identity Matters in Deep Learning
C14: On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
C15: Recurrent Hidden Semi-Markov Model
C16: Nonparametric Neural Networks
C17: Learning to Generate Samples from Noise through Infusion Training
C18: An Information-Theoretic Framework for Fast and Robust Unsupervised Learning via Neural Population Infomax
C19: Highway and Residual Networks learn Unrolled Iterative Estimation
C20: Soft Weight-Sharing for Neural Network Compression (Tutorial)
C21: Snapshot Ensembles: Train 1, Get M for Free (see the sketch after this list)
C22: Towards a Neural Statistician
C23: Learning Curve Prediction with Bayesian Neural Networks
C24: Learning End-to-End Goal-Oriented Dialog
C25: Multi-Agent Cooperation and the Emergence of (Natural) Language
C26: Efficient Vector Representation for Documents through Corruption (code)
C27: Improving Neural Language Models with a Continuous Cache
C28: Program Synthesis for Character Level Language Modeling
C29: Tracking the World State with Recurrent Entity Networks (TensorFlow implementation)
C30: Reinforcement Learning with Unsupervised Auxiliary Tasks (blog post, an implementation)
C31: Neural Architecture Search with Reinforcement Learning (slides, some implementation of appendix A)
C32: Sample Efficient Actor-Critic with Experience Replay
C33: Learning to Act by Predicting the Future
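Since the poster list above is titles only, one illustrative aside on C21, Snapshot Ensembles, whose title states the trick: train a single network with a cyclically annealed learning rate, save a checkpoint at the end of each cycle, and ensemble the M checkpoints at test time at no extra training cost. The Python sketch below is my own minimal rendering of that cyclic cosine schedule, not the authors' code; the hyperparameters (lr0, T, M) are illustrative placeholders, not the paper's settings.

```python
import numpy as np

def snapshot_lr(t, total_iters, num_cycles, lr0=0.1):
    # Cyclic cosine annealing: within each cycle the learning rate
    # decays from lr0 toward 0, then jumps back up at the restart.
    cycle_len = int(np.ceil(total_iters / num_cycles))
    t_in_cycle = t % cycle_len  # position within the current cycle
    return lr0 / 2.0 * (np.cos(np.pi * t_in_cycle / cycle_len) + 1.0)

# Sketch of the loop: one snapshot per cycle; at test time the M saved
# models' predictions are averaged to form the "free" ensemble.
T, M = 18000, 6  # illustrative: total SGD iterations, number of snapshots
snapshots = []
for t in range(T):
    lr = snapshot_lr(t, T, M)
    # ... one SGD step on a mini-batch using learning rate `lr` ...
    if (t + 1) % (T // M) == 0:
        snapshots.append("weights-after-cycle-%d" % (len(snapshots) + 1))  # save weights here
```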
Workshop Papers
W1: Extrapolation and learning equations (short blog post)
W2: Effectiveness of Transfer Learning in EHR data
W3: Intelligent synapses for multi-task and transfer learning
W4: Unsupervised and Efficient Neural Graph Model with Distributed Representations
W5: Accelerating SGD for Distributed Deep-Learning Using an Approximated Hessian Matrix
W6: Accelerating Eulerian Fluid Simulation With Convolutional Networks (FluidNet code)
W7: Forced to Learn: Discovering Disentangled Representations Without Exhaustive Labels
W8: Dataset Augmentation in Feature Space
W9: Learning Algorithms for Active Learning
W10: Reinterpreting Importance-Weighted Autoencoders
W11: Robustness to Adversarial Examples through an Ensemble of Specialists
W12: (empty)
W13: On Hyperparameter Optimization in Learning Systems
W14: Recurrent Normalization Propagation
W15: Joint Training of Ratings and Reviews with Recurrent Recommender Networks
W16: Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses
W17: Joint Embeddings of Scene Graphs and Images
W18: Unseen Style Transfer Based on a Conditional Fast Style Transfer Network