Here are four videos (and papers) from the Deep Reinforcement Learning workshop at NIPS.
Invited Talks
- Honglak Lee, video: Deep Reinforcement Learning with Predictions
- Juergen Schmidhuber, Reinforcement Learning of Programs in General Purpose Computers with Memory
- Michael Bowling
- Volodymyr Mnih, video: Faster Deep Reinforcement Learning
- Gerry Tesauro, Deep RL and Games Research at IBM
- Osaro, tech talk
- Sergey Levine, video: Deep Sensorimotor Learning for Robotic Control
- Yoshua Bengio
- Martin Riedmiller, video: Deep RL for Learning Machines
- Jan Koutnik, Compressed Neural Networks for Reinforcement Learning
Contributed Papers
- The importance of experience replay database composition in deep reinforcement learning, by Tim de Bruin, Jens Kober, Karl Tuyls, Robert Babuška
- Continuous deep-time neural reinforcement learning, by Davide Zambrano, Pieter R. Roelfsema and Sander M. Bohte
- Memory-based control with recurrent neural networks, by Nicolas Heess, Jonathan J Hunt, Timothy Lillicrap, David Silver
- How to discount deep reinforcement learning: towards new dynamic strategies, by Vincent François-Lavet, Raphael Fonteneau, Damien Ernst
- Strategic Dialogue Management via Deep Reinforcement Learning, by Heriberto Cuayáhuitl, Simon Keizer, Oliver Lemon
- Deep Reinforcement Learning in Parameterized Action Space, by Matthew Hausknecht, Peter Stone
- Guided Cost Learning: Inverse Optimal Control with Multilayer Neural Networks, by Chelsea Finn, Sergey Levine, Pieter Abbeel
- Learning Deep Control Policies for Autonomous Aerial Vehicles with MPC-Guided Policy Search, by Tianhao Zhang, Gregory Kahn, Sergey Levine, Pieter Abbeel
- Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning, by Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov
- Deep Inverse Reinforcement Learning, by Markus Wulfmeier, Peter Ondruska and Ingmar Posner
- ADAAPT: A Deep Architecture for Adaptive Policy Transfer from Multiple Sources, by Janarthanan Rajendran, P Prasanna, Balaraman Ravindran, Mitesh Khapra
- Q-Networks for Binary Vector Actions, by Naoto Yoshida
- The option-critic architecture, by Pierre-Luc Bacon and Doina Precup
- Learning Deep Neural Network Policies with Continuous Memory States, by Marvin Zhang, Zoe McCarthy, Chelsea Finn, Sergey Levine, Pieter Abbeel
- Deep Attention Recurrent Q-Network, by Ivan Sorokin, Alexey Seleznev, Mikhail Pavlov, Aleksandr Fedorov, Anastasiia Ignateva
- Generating Text with Deep Reinforcement Learning, by Hongyu Guo
- Deep Spatial Autoencoders for Visuomotor Learning, by Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, Pieter Abbeel
- Data-Efficient Learning of Feedback Policies from Image Pixels using Deep Dynamical Models, by John-Alexander M. Assael, Niklas Wahlström, Thomas B. Schön, Marc Peter Deisenroth
- One-Shot Learning of Manipulation Skills with Online Dynamics Adaptation and Neural Network Priors, by Justin Fu, Sergey Levine, Pieter Abbeel
- Learning Visual Models of Physics for Playing Billiards, by Katerina Fragkiadaki, Pulkit Agrawal, Sergey Levine, Jitendra Malik
- Conditional computation in neural networks for faster models, by Emmanuel Bengio, Joelle Pineau, Pierre-Luc Bacon, Doina Precup
- Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models, by Bradly C. Stadie, Sergey Levine, Pieter Abbeel
- Learning Simple Algorithms from Examples, by Wojciech Zaremba, Tomas Mikolov, Armand Joulin, Rob Fergus