The L4DC: Learning for Dynamics and Control conference took place on May 30 & 31, 2019 at MIT. From the conference website:
Over the next decade, the biggest generators of data are expected to be devices that sense and control the physical world.
This explosion of real-time data emerging from the physical world requires a rapprochement of areas such as machine learning, control theory, and optimization. While control theory has been firmly rooted in the tradition of model-based design, the availability and scale of data (both temporal and spatial) will require a rethinking of the foundations of our discipline. From a machine learning perspective, one of the main challenges going forward is to go beyond pattern recognition and address problems in data-driven control and optimization of dynamical processes.
Our overall goal is to create a new community of people who think rigorously across the disciplines, ask new questions, and develop the foundations of this new scientific area.
Here are the videos, followed by the posters presented at the conference:
Emo Todorov (University of Washington): “Acceleration-based methods for trajectory optimization through contacts”
Posters
- Murad Abu-Khalaf, Sertac Karaman & Daniela Rus, MIT, “Shared Linear Quadratic Regulation Control: A Reinforcement Learning Approach” PDF
- Aaron Ames & Andrew J. Taylor, Caltech, “A Control Lyapunov Function Approach to Episodic Learning” PDF
- Anuradha M. Annaswamy, MIT, “Stable and Fast Learning with Momentum and Adaptive Rates” PDF
- Thomas Beckers, Technical University of Munich, “Gaussian Process Based Identification and Control with Formal Guarantees” PDF
- Kostas Bekris, Rutgers University, “Closing the Reality Gap of Physics-Based Robot Simulation Through Task-Oriented Bayesian Optimization”
- Julian Berberich, University of Stuttgart, “Data-Driven Model Predictive Control with Stability and Robustness Guarantees” PDF
- Tom Bertalan, MIT, “On Learning Hamiltonian Systems from Data”
- Calin Belta & Xiao Li, Boston University, “A Formal Methods Approach to Reinforcement Learning For Robotic Control”
- Nicholas M. Boffi, Harvard University, and Jean-Jacques Slotine, MIT, “A continuous-time analysis of distributed stochastic gradient”
- Byron Boots, Georgia Institute of Technology, “An Online Learning Approach to Model Predictive Control”
- Octavia Camps, Northeastern University College of Engineering, “KW-DYAN: A Recurrent Dynamics-Based Network for Video Prediction” PDF
- Bugra Can, Rutgers University, “Accelerated Linear Convergence of Stochastic Momentum Methods in Wasserstein Distances”
- Pratik Chaudhari, Amazon Web Services, “P3O: Policy-on Policy-off Policy Optimization”
- Alessandro Chiuso, ETH Zurich, “CoRe: Control Oriented Learning – A Regularisation-Based Approach”
- Glen Chou, University of Michigan, “Learning Constraints from Demonstrations”
- Claus Danielson, Ankush Chakrabarty, Stefano Di Cairano, Mitsubishi Electric Research Laboratories, “Invariance for Safe Learning in Constraint-Enforcing Control”
- Adithya Devraj, University of Florida, “Stochastic Approximation and the Need for Speed” PDF
- Vikas Dhiman, UC San Diego, “Model-based Transfer Learning of Skills across Robots and Tools”
- Zhe Du, University of Michigan, “Online Robust Switched System Identification”
- Alec Farid, Princeton University, “PAC-Bayes Control: Learning Policies that Provably Generalize to Novel Environments” PDF
- Dylan Foster, MIT, “Model Selection for Contextual Bandits”
- Travis Gibson, Harvard Medical School, “Connections Between Adaptive Control and Machine Learning”
- Stephanie Gil, Arizona State University, “Generalized Rollout Algorithms for POMDP with Application to Sequential Repair Problems”
- Mert Gurbuzbalaban, Rutgers University, “Robust Accelerated Gradient Methods”
- Josiah Hanna, University of Texas, “Robot Learning in Simulation with Action Transformations”
- Hamed Hassani, University of Pennsylvania, “Distributed Scenarios in Submodular Optimization”
- Jonathan How & Dong-Ki Kim, MIT, “Knowledge Transfer via Learning to Teach in Cooperative Multiagent Reinforcement Learning”
- Ameya Jagtap, Brown University, “Time-Parallel and Fractional Physics-Informed Neural Networks for Solving Transient PDEs” PDF
- Yassir Jedra, KTH Royal Institute of Technology in Stockholm, “Sample Complexity Lower Bounds for Linear System Identification”
- Angjoo Kanazawa, UC Berkeley, “SFV: Reinforcement Learning of Physical Skills from Videos”
- Bachir El Khadir & Amir Ali Ahmadi, Princeton University, “Learning Dynamical Systems With Side Information”
- Reza Khodayi-mehr, Duke University, “Model-Based Learning of Turbulent Flows using Mobile Robots” PDF
- George Kissas & Yibo Yang, University of Pennsylvania, “Learning the Flow Map of Dynamical Systems with Self-Supervised Neural Runge-Kutta Networks”
- Alec Koppel, University of Pennsylvania, “Global Convergence of Policy Gradient Methods: A Nonconvex Optimization Perspective”
- Abdul Rahman Kreidieh, UC Berkeley, “Scalable Methods for the Control of Mixed Autonomous Systems”
- Nevena Lazic, Google, “POLITEX: Regret Bounds for Policy Iteration using Expert Prediction” PDF
- Armin Lederer, Technical University of Munich, “Stable Feedback Linearization and Optimal Control for Gaussian Processes” PDF
- Na Li, Harvard University, “The Role of Prediction in Online Control”
- Jason Liang, MIT, “Learning the Contextual Demand Curve in Repeated Auctions”
- Nikolai Matni, UC Berkeley, “Robust Guarantees for Perception-Based Control”
- Jared Miller, Yang Zheng, Biel Roig-Solvas, Mario Sznaier, Antonis Papachristodoulou, Northeastern University/Harvard University/University of Oxford, “Chordal Decomposition in Rank Minimized SDPs”
- Yannis Paschalidis, Boston University, “Distributionally Robust Learning and Applications to Predictive and Prescriptive Health Analytics” PDF
- Panagiotis Patrinos & Mathijs Schuurmans, KU Leuven, “Safe Learning-Based Control of Stochastic Jump Linear Systems: A Distributionally Robust Approach” PDF
- Amirhossein Reisizadeh, UC Santa Barbara, “Robust and Communication-Efficient Collaborative Learning” PDF
- Anders Rantzer, Lund University, “On the Non-Robustness of Certainty Equivalence Control”
- Lilian Ratliff, Sam Burden, Sam Coogan, Benjamin Chasnov & Tanner Fiez, University of Washington, “Certifiable Algorithms for Learning and Control in Multiagent Systems” PDF
- Alejandro Ribeiro, University of Pennsylvania, “Know Your Limits: Learning Feasible Specifications Using Counterfactual Optimization”
- Thomas Schön, Uppsala University, “Robust Exploration for Data-Driven Linear Quadratic Control”
- Artin Spiridonoff, Boston University, “Network Independence in Distributed Optimization” PDF
- Lili Su, MIT, “Distributed Learning and Estimation in the Presence of Byzantine Agents”
- Friedrich Solowjow & Sebastian Trimpe, Max Planck Institute for Intelligent Systems – Stuttgart, Germany, “Event-Triggered Learning”
- Karan Singh, Princeton University, “Online Control with Adversarial Disturbances”
- Madeleine Udell, Cornell University, “OBOE: Collaborative Filtering for Automated Machine Learning” PDF
- Min Wen, University of Pennsylvania, “Constrained Cross-Entropy Method for Safe Reinforcement Learning”
- Zhi Xu, MIT, “On Reinforcement Learning Using Monte Carlo Tree Search with Supervised Learning: Non-Asymptotic Analysis”
- Lin F. Yang, Princeton University, “Sample-Optimal Parametric Q-Learning with Linear Transition Models” PDF
- Yan Zhang, Duke University, “Distributed Off-Policy Actor-Critic Reinforcement Learning with Policy Consensus”
Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn or the Advanced Matrix Factorization group on LinkedIn
Liked this entry? Subscribe to Nuit Blanche's feed; there's more where that came from. You can also subscribe to Nuit Blanche by Email.
Other links:
Paris Machine Learning: Meetup.com||@Archives||LinkedIn||Facebook||@ParisMLGroup
About LightOn: Newsletter ||@LightOnIO|| on LinkedIn || on CrunchBase || our Blog