Showing posts with label MLVideos. Show all posts

Friday, November 01, 2019

Videos: IMA Computational Imaging Workshop, October 14 - 18, 2019

** Nuit Blanche is now on Twitter: @NuitBlog ** 



Stanley Chan, Jeff Fessler, Justin Haldar, Ulugbek Kamilov, Saiprasad Ravishankar, Rebecca Willett, and Brendt Wohlberg just organized a workshop at IMA on computational imaging. A short story, as this blog just passed 8 million page views: the broad understanding of compressed sensing was in large part, at least judging by the hits on this blog, due to an IMA meeting on the subject and the fact that people could watch the videos afterward. I hope this workshop follows the same path. Given the amount of ML in it, I wonder if it shouldn't have been called TheGreatConvergence meeting :-)


This workshop will serve as a venue for presenting and discussing recent advances and trends in the growing field of computational imaging, where computation is a major component of the imaging system. Research on all aspects of the computational imaging pipeline from data acquisition (including non-traditional sensing methods) to system modeling and optimization to image reconstruction, processing, and analytics will be discussed, with talks addressing theory, algorithms and mathematical techniques, and computational hardware approaches for imaging problems and applications including MRI, tomography, ultrasound, microscopy, optics, computational photography, radar, lidar, astronomical imaging, hybrid imaging modalities, and novel and extreme imaging systems. The expanding role of computational imaging in industrial imaging applications will also be explored.
Given the rapidly growing interest in data-driven, machine learning, and large-scale optimization based methods in computational imaging, the workshop will partly focus on some of the key recent and new theoretical, algorithmic, or hardware (for efficient/optimized computation) developments and challenges in these areas. Several talks will focus on analyzing, incorporating, or learning various models including sparse and low-rank models, kernel and nonlinear models, plug-and-play models, graphical, manifold, tensor, and deep convolutional or filterbank models in computational imaging problems. Research and discussion of methods and theory for new sensing techniques including data-driven sensing, task-driven imaging optimization, and online/real-time imaging optimization will be encouraged. Discussion sessions during the workshop will explore the theoretical and practical impact of various presented methods and brainstorm the main challenges and open problems.
The workshop aims to encourage close interactions between mathematical and applied computational imaging researchers and practitioners, bringing together experts in academia and industry working in computational imaging theory and applications, with a focus on data and system modeling, signal processing, machine learning, inverse problems, compressed sensing, data acquisition, image analysis, optimization, neuroscience, computation-driven hardware design, and related areas. It will facilitate substantive and cross-disciplinary interactions on cutting-edge computational imaging methods and systems.
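To make the sparse-modeling theme above a bit more concrete, here is a minimal sketch of ISTA (iterative shrinkage-thresholding), a classic sparse-reconstruction algorithm from the compressed sensing literature. Everything below (the measurement matrix, the 2-sparse signal, the step size and penalty) is made up for illustration; real imaging problems would use a physical forward model.

```python
import random

# Toy sparse reconstruction: recover a sparse x from underdetermined
# measurements y = A x by ISTA, minimizing
#   0.5 * ||A x - y||^2 + lam * ||x||_1
random.seed(0)
m, n = 20, 40                      # fewer measurements than unknowns
A = [[random.gauss(0, 1) / m ** 0.5 for _ in range(n)] for _ in range(m)]
x_true = [0.0] * n
x_true[3], x_true[17] = 1.0, -2.0  # a 2-sparse ground truth
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * |.|_1."""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def ista(A, y, lam=0.01, step=0.1, iters=1500):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]  # A x - y
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]         # A^T (A x - y)
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]          # gradient step + prox
    return x

x_hat = ista(A, y)
print("recovered entries 3 and 17:", x_hat[3], x_hat[17])
```

The plug-and-play methods mentioned in the abstract follow the same gradient-then-prox pattern, with the soft-thresholding step swapped for a learned denoiser.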




Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn  or the Advanced Matrix Factorization group on LinkedIn

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

Saturday, June 15, 2019

Saturday Morning Videos: AutoML Workshop at ICML 2019

** Nuit Blanche is now on Twitter: @NuitBlog **


Katharina Eggensperger, Matthias Feurer, Frank Hutter, and Joaquin Vanschoren organized the AutoML workshop at ICML, and videos of the event, which took place yesterday, are already available. Awesome! Here is the intro for the workshop:
Machine learning has achieved considerable successes in recent years, but this success often relies on human experts, who construct appropriate features, design learning architectures, set their hyperparameters, and develop new learning algorithms. Driven by the demand for off-the-shelf machine learning methods from an ever-growing community, the research area of AutoML targets the progressive automation of machine learning aiming to make effective methods available to everyone. The workshop targets a broad audience ranging from core machine learning researchers in different fields of ML connected to AutoML, such as neural architecture search, hyperparameter optimization, meta-learning, and learning to learn, to domain experts aiming to apply machine learning to new types of problems.

All the videos are here.

Bayesian optimization is a powerful and flexible tool for AutoML. While BayesOpt was first deployed for AutoML simply as a black-box optimizer, recent approaches perform grey-box optimization: they leverage capabilities and problem structure specific to AutoML such as freezing and thawing training, early stopping, treating cross-validation error minimization as multi-task learning, and warm starting from previously tuned models. We provide an overview of this area and describe recent advances for optimizing sampling-based acquisition functions that make grey-box BayesOpt significantly more efficient.
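For readers new to the area, the black-box variant that this abstract contrasts against can be sketched in a few lines: a Gaussian-process surrogate plus a lower-confidence-bound acquisition rule. The objective, kernel lengthscale, and candidate grid below are all illustrative choices, not anyone's actual AutoML setup.

```python
import math

# Minimal black-box Bayesian optimization on a 1-D toy objective,
# using a GP surrogate (RBF kernel) and a lower-confidence-bound rule.
def f(x):                        # the "expensive" objective to minimize
    return (x - 0.3) ** 2

def rbf(a, b, ls=0.2):
    return math.exp(-0.5 * ((a - b) / ls) ** 2)

def solve(K, y):
    """Solve K v = y by Gaussian elimination (K is SPD with jitter)."""
    n = len(y)
    M = [row[:] + [y[i]] for i, row in enumerate(K)]
    for c in range(n):
        for r in range(c + 1, n):
            fac = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= fac * M[c][k]
    v = [0.0] * n
    for r in range(n - 1, -1, -1):
        v[r] = (M[r][n] - sum(M[r][k] * v[k] for k in range(r + 1, n))) / M[r][r]
    return v

def gp_posterior(X, y, x):
    """GP posterior mean and standard deviation at x, given data (X, y)."""
    K = [[rbf(a, b) + (1e-6 if i == j else 0.0) for j, b in enumerate(X)]
         for i, a in enumerate(X)]
    ks = [rbf(a, x) for a in X]
    mean = sum(k * w for k, w in zip(ks, solve(K, y)))
    var = max(1.0 - sum(k * w for k, w in zip(ks, solve(K, ks))), 1e-12)
    return mean, math.sqrt(var)

X = [0.0, 0.5, 1.0]              # initial design
y = [f(x) for x in X]
grid = [i / 100 for i in range(101)]
for _ in range(12):              # fit surrogate, evaluate the LCB minimizer
    nxt = min(grid, key=lambda x: (lambda m, s: m - 2.0 * s)(*gp_posterior(X, y, x)))
    X.append(nxt)
    y.append(f(nxt))
best = min(y)
print("best value found:", best)
```

The grey-box refinements in the talk replace the single scalar `f(x)` per configuration with cheaper, structured observations (partial training curves, per-fold errors, warm starts).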
The mission of AutoML is to make ML available for non-ML experts and to accelerate research on ML. We have a very similar mission at fast.ai and have helped over 200,000 non-ML experts use state-of-the-art ML (via our research, software, & teaching), yet we do not use methods from the AutoML literature. I will share several insights we've learned through this work, with the hope that they may be helpful to AutoML researchers.



AutoML aims at automating the process of designing good machine learning pipelines to solve different kinds of problems. However, existing AutoML systems are mainly designed for isolated learning, training a static model on a single batch of data, while in many real-world applications data may arrive continuously in batches, possibly with concept drift. This raises a lifelong machine learning challenge for AutoML, as most existing AutoML systems cannot evolve over time to learn from streaming data and adapt to concept drift. In this paper, we propose a novel AutoML system for this new scenario: a boosting-tree-based AutoML system for lifelong machine learning, which won second place in the NeurIPS 2018 AutoML Challenge.
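The concept-drift problem described in this abstract can be illustrated with a tiny synthetic stream: a model fit once on the first batch degrades after a drift, while a model that monitors its batch error and refits recovers. The data, the constant-predictor "model", and the error threshold are all invented for illustration; the actual system uses boosted trees.

```python
import random

# Synthetic stream with concept drift: the target mean jumps at batch 10.
random.seed(1)

def make_batch(t):
    mu = 0.0 if t < 10 else 5.0
    return [random.gauss(mu, 1.0) for _ in range(200)]

static_model = None              # fit once on the first batch, never updated
adaptive_model = None            # refit whenever its error spikes
static_errs, adaptive_errs = [], []
for t in range(20):
    batch = make_batch(t)
    mean = sum(batch) / len(batch)
    if static_model is None:
        static_model = mean
    if adaptive_model is None:
        adaptive_model = mean
    # Both "models" are constant predictors; score them on the new batch.
    static_errs.append(sum((x - static_model) ** 2 for x in batch) / len(batch))
    err = sum((x - adaptive_model) ** 2 for x in batch) / len(batch)
    adaptive_errs.append(err)
    if err > 2.0:                # crude drift detector: error spike -> refit
        adaptive_model = mean

print("static error after drift:  ", static_errs[-1])
print("adaptive error after drift:", adaptive_errs[-1])
```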


In this talk I'll survey work by Google researchers over the past several years on the topic of AutoML, or learning-to-learn. The talk will touch on basic approaches, some successful applications of AutoML to a variety of domains, and sketch out some directions for future AutoML systems that can leverage massively multi-task learning systems for automatically solving new problems.


Recent advances in Neural Architecture Search (NAS) have produced state-of-the-art architectures on several tasks. NAS shifts the efforts of human experts from developing novel architectures directly to designing architecture search spaces and methods to explore them efficiently. The search space definition captures prior knowledge about the properties of the architectures and it is crucial for the complexity and the performance of the search algorithm. However, different search space definitions require restarting the learning process from scratch. We propose a novel agent based on the Transformer that supports joint training and efficient transfer of prior knowledge between multiple search spaces and tasks.
Neural architecture search (NAS) is a promising research direction that has the potential to replace expert-designed networks with learned, task-specific architectures. In order to help ground the empirical results in this field, we propose new NAS baselines that build off the following observations: (i) NAS is a specialized hyperparameter optimization problem; and (ii) random search is a competitive baseline for hyperparameter optimization. Leveraging these observations, we evaluate both random search with early-stopping and a novel random search with weight-sharing algorithm on two standard NAS benchmarks, PTB and CIFAR-10. Our results show that random search with early-stopping is a competitive NAS baseline, e.g., it performs at least as well as ENAS, a leading NAS method, on both benchmarks. Additionally, random search with weight-sharing outperforms random search with early-stopping, achieving a state-of-the-art NAS result on PTB and a highly competitive result on CIFAR-10. Finally, we explore the existing reproducibility issues of published NAS results.
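The "random search with early-stopping" baseline in this abstract can be sketched as successive halving: sample many configurations, evaluate all of them on a small budget, keep the best half, double the budget, and repeat. The synthetic "validation loss" below stands in for actually training a network; its shape and the budget schedule are illustrative assumptions.

```python
import random

# Random search with early stopping via successive halving.
random.seed(0)

def loss(lr, budget):
    # Synthetic validation loss: best near lr = 0.1, improving with budget,
    # plus a little evaluation noise. A real run would train a model here.
    return (lr - 0.1) ** 2 + 1.0 / budget + random.gauss(0, 0.001)

configs = [random.uniform(0.0, 1.0) for _ in range(16)]   # random search samples

survivors, budget = configs, 1
while len(survivors) > 1:
    # Evaluate every surviving config at the current budget, keep the best half.
    scored = sorted(survivors, key=lambda lr: loss(lr, budget))
    survivors = scored[: len(scored) // 2]
    budget *= 2                                           # promoted configs train longer
best = survivors[0]
print("selected lr:", best)
```

Weight-sharing goes further by amortizing training across configurations inside one supernetwork, so even the cheap evaluations become cheaper.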
The practical work of deploying a machine learning system is dominated by issues outside of training a model: data preparation, data cleaning, understanding the data set, debugging models, and so on. What does it mean to apply ML to this “grunt work” of machine learning and data science? I will describe first steps towards tools in these directions, based on the idea of semi-automating ML: using unsupervised learning to find patterns in the data that can be used to guide data analysts. I will also describe a new notebook system for pulling these tools together: if we augment Jupyter-style notebooks with data-flow and provenance information, this enables a new class of data-aware notebooks which are much more natural for data manipulation.
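The data-aware notebook idea in this abstract can be caricatured in a few lines: wrap each derived dataset in a node that remembers its parents, so provenance ("where did this table come from?") is queryable later. The `Node` API below is invented for illustration and is much simpler than what an actual notebook system would need.

```python
# Tiny provenance-tracking wrapper for derived datasets.
class Node:
    def __init__(self, name, data, parents=()):
        self.name, self.data, self.parents = name, data, list(parents)

    def apply(self, name, fn):
        """Derive a new dataset, recording this node as its parent."""
        return Node(name, fn(self.data), parents=[self])

    def lineage(self):
        """Names of all ancestors, nearest first."""
        out, stack = [], list(self.parents)
        while stack:
            p = stack.pop(0)
            out.append(p.name)
            stack.extend(p.parents)
        return out

raw = Node("raw", [" 3", "4 ", None, "5"])
clean = raw.apply("clean", lambda rows: [r.strip() for r in rows if r is not None])
nums = clean.apply("nums", lambda rows: [int(r) for r in rows])
print(nums.data)          # [3, 4, 5]
print(nums.lineage())     # ['clean', 'raw']
```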
Panel Discussion






Saturday, June 08, 2019

Saturday Morning Videos: L4DC: Learning for Dynamics and Control, May 30 & 31, 2019, MIT

** Nuit Blanche is now on Twitter: @NuitBlog **


The L4DC: Learning for Dynamics and Control conference took place on May 30 & 31, 2019 at MIT. From the website page:


Over the next decade, the biggest generators of data are expected to be devices that sense and control the physical world.
This explosion of real-time data emerging from the physical world requires a rapprochement of areas such as machine learning, control theory, and optimization. While control theory has been firmly rooted in the tradition of model-based design, the availability and scale of data (both temporal and spatial) will require a rethinking of the foundations of our discipline. From a machine learning perspective, one of the main challenges going forward is to go beyond pattern recognition and address problems in data-driven control and optimization of dynamical processes.
Our overall goal is to create a new community of people who think rigorously across disciplines, ask new questions, and develop the foundations of this new scientific area.
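A minimal example at this ML/control interface is system identification: estimating the parameters of a scalar linear system x[t+1] = a*x[t] + b*u[t] + noise from input/output data by ordinary least squares. The true parameters, noise level, and input signal below are made up for illustration.

```python
import random

# Identify a scalar linear system from data via least squares.
random.seed(2)
a_true, b_true = 0.8, 0.5
x, data = 0.0, []
for _ in range(500):
    u = random.gauss(0, 1)                        # persistently exciting input
    x_next = a_true * x + b_true * u + random.gauss(0, 0.01)
    data.append((x, u, x_next))
    x = x_next

# Normal equations for theta = (a, b) minimizing sum (x' - a x - b u)^2:
#   [sxx sxu] [a]   [sxy]
#   [sxu suu] [b] = [suy]
sxx = sum(x * x for x, u, _ in data)
sxu = sum(x * u for x, u, _ in data)
suu = sum(u * u for x, u, _ in data)
sxy = sum(x * y for x, u, y in data)
suy = sum(u * y for x, u, y in data)
det = sxx * suu - sxu * sxu
a_hat = (suu * sxy - sxu * suy) / det             # Cramer's rule, 2x2 system
b_hat = (sxx * suy - sxu * sxy) / det
print("a_hat, b_hat:", a_hat, b_hat)
```

Much of the work presented at L4DC asks what happens beyond this textbook case: finite-sample guarantees, safety constraints, and closing the loop between such estimates and the controller that generates the data.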

Here are the videos followed by the posters at the conference:
Emo Todorov (University of Washington): “Acceleration-based methods for trajectory optimization through contacts”
Posters 

  • Murad Abu-Khalaf, Sertac Karaman & Daniela Rus, MIT, “Shared Linear Quadratic Regulation Control: A Reinforcement Learning Approach” PDF
  • Aaron Ames & Andrew J. Taylor, CalTech, “A Control Lyapunov Function Approach to Episodic Learning” PDF
  • Anuradha M. Annaswamy, MIT, “Stable and Fast Learning with Momentum and Adaptive Rates” PDF
  • Thomas Beckers, Technical University of Munich, “Gaussian Process Based Identification and Control with Formal Guarantees” PDF
  • Kostas Bekris, Rutgers University, “Closing the Reality Gap of Physics-Based Robot Simulation Through Task-Oriented Bayesian Optimization”
  • Julian Berberich, University of Stuttgart, “Data-Driven Model Predictive Control with Stability and Robustness Guarantees” PDF
  • Tom Bertalan, MIT, “On Learning Hamiltonian Systems from Data”
  • Calin Belta & Xiao Li, Boston University, “A Formal Methods Approach to Reinforcement Learning For Robotic Control”
  • Nicholas M. Boffi, Harvard University, and Jean-Jacques Slotine, MIT, “A continuous-time analysis of distributed stochastic gradient”
  • Byron Boots, Georgia Institute of Technology, “An Online Learning Approach to Model Predictive Control”
  • Octavia Camps, Northeastern University College of Engineering, “KW-DYAN: A Recurrent Dynamics-Based Network for Video Prediction” PDF
  • Bugra Can, Rutgers University, “Accelerated Linear Convergence of Stochastic Momentum Methods in Wasserstein Distances”
  • Pratik Chaudhari, Amazon Web Services, “P3O: Policy-on Policy-off Policy Optimization”
  • Alessandro Chiuso, ETH Zuerich, “CoRe: Control Oriented Learning – A Regularisation-Based Approach”
  • Glen Chou, University of Michigan, “Learning Constraints from Demonstrations”
  • Claus Danielson, Ankush Chakrabarty, Stefano Di Cairano, Mitsubishi Electric Research Laboratories, “Invariance for Safe Learning in Constraint-Enforcing Control”
  • Adithya Devraj, University of Florida, “Stochastic Approximation and the Need for Speed” PDF
  • Vikas Dhiman, UC San Diego, “Model-based Transfer Learning of Skills across Robots and Tools”
  • Zhe Du, University of Michigan, “Online Robust Switched System Identification”
  • Alec Farid, Princeton University, “PAC-Bayes Control: Learning Policies that Provably Generalize to Novel Environments” PDF
  • Dylan Foster, MIT, “Model Selection for Contextual Bandits”
  • Travis Gibson, Harvard Medical School, “Connections Between Adaptive Control and Machine Learning”
  • Stephanie Gil, Arizona State University, “Generalized Rollout Algorithms for POMDP with Application to Sequential Repair Problems”
  • Mert Gurbuzbalaban, Rutgers University, “Robust Accelerated Gradient Methods”
  • Josiah Hanna, University of Texas, “Robot Learning in Simulation with Action Transformations”
  • Hamed Hassani, University of Pennsylvania, “Distributed Scenarios in Submodular Optimization”
  • Jonathan How, MIT, “Knowledge Transfer via Learning to Teach in Cooperative Multiagent Reinforcement Learning”
  • Ameya Jagtap, Brown University, “Time-Parallel and Fractional Physics-Informed Neural Networks for Solving Transient PDEs” PDF
  • Yassir Jedra, KTH Royal Institute of Technology in Stockholm, “Sample Complexity Lower Bounds for Linear System Identification”
  • Angjoo Kanazawa, UC Berkeley, “SFV: Reinforcement Learning of Physical Skills from Videos”
  • Bahir El Kadir & Amir Ali Ahmadi, Princeton University, “Learning Dynamical Systems With Side Information”
  • Reza Khodayi-mehr, Duke University, “Model-Based Learning of Turbulent Flows using Mobile Robots” PDF
  • Dong-Ki Kim, MIT, “Knowledge Transfer via Learning to Teach in Cooperative Multiagent Reinforcement Learning”
  • George Kissas & Yibo Yang, University of Pennsylvania, “Learning the Flow Map of Dynamical Systems with Self-Supervised Neural Runge-Kutta Networks”
  • Alec Koppel, University of Pennsylvania, “Global Convergence of Policy Gradient Methods: A Nonconvex Optimization Perspective”
  • Abdul Rahman Kreidieh, UC Berkeley, “Scalable methods for the control of mixed autonomous system”
  • Nevena Lazic, Google, “POLITEX: Regret Bounds for Policy Iteration using Expert Prediction” PDF
  • Armin Lederer, Technical University of Munich, “Stable Feedback Linearization and Optimal Control for Gaussian Processes” PDF
  • Na Li, Harvard University, “The Role of Prediction in Online Control”
  • Jason Liang, MIT, “Learning the Contextual Demand Curve in Repeated Auctions” 
  • Nikolai Matni, UC Berkeley, “Robust Guarantees for Perception-Based Control”
  • Jared Miller, Yang Zheng, Biel Roig-Solvas, Mario Sznaier, Antonis Papachristodoulou, Northeastern University/Harvard University/University of Oxford, “Chordal Decomposition in Rank Minimized SDPs”
  • Yannis Paschalidis, Boston University, “Distributionally Robust Learning and Applications to Predictive and Prescriptive Health Analytics” PDF
  • Panagiotis Patrinos & Mathijs Schuurmans, KU Leuven, “Safe Learning-Based Control of Stochastic Jump Linear Systems: A Distributionally Robust Approach” PDF
  • Amirhossein Reisizadeh, UC Santa Barbara, “Robust and Communication-Efficient Collaborative Learning” PDF
  • Anders Rantzer, Lund University, “On the Non-Robustness of Certainty Equivalence Control”
  • Lilian Ratliff, Sam Burden, Sam Coogan, Benjamin Chasnov & Tanner Fiez, University of Washington, “Certifiable Algorithms for Learning and Control in Multiagent Systems” PDF
  • Alejandro Ribeiro, University of Pennsylvania, “Know Your Limits: Learning Feasible Specifications Using Counterfactual Optimization”
  • Thomas Schön, Uppsala University, “Robust Exploration for Data-Driven Linear Quadratic Control”
  • Artin Spiridonof, Boston University, “Network Independence in Distributed Optimization” PDF
  • Lili Su, MIT, “Distributed Learning and Estimation in the Presence of Byzantine Agents”
  • Friedrich Solowjow & Sebastian Trimpe, Max Planck Institute for Intelligent Systems – Stuttgart, Germany, “Event-Triggered Learning”
  • Karan Singh, Princeton University, “Online Control with Adversarial Disturbances”
  • Madeleine Udell, Cornell University, “OBOE: Collaborative Filtering for Automated Machine Learning” PDF
  • Min Wen, University of Pennsylvania, “Constrained Cross-Entropy Method for Safe Reinforcement Learning”
  • Zhi Xu, MIT, “On Reinforcement Learning Using Monte Carlo Tree Search with Supervised Learning: Non-Asymptotic Analysis”
  • Lin F. Yang, Princeton University, “Sample-Optimal Parametric Q-Learning with Linear Transition Models” PDF
  • Yan Zhang, Duke University, “Distributed Off-Policy Actor-Critic Reinforcement Learning with Policy Consensus”




Saturday, June 01, 2019

Saturday Morning Videos: Deep Learning Boot Camp, May 28 – May 31, 2019, Simons Institute for the Theory of Computing,

** Nuit Blanche is now on Twitter: @NuitBlog **



Here are the videos from this past week's Deep Learning Boot Camp (May 28 – May 31, 2019), which took place at the Simons Institute for the Theory of Computing. Thank you to the organizing committee:
The Boot Camp is intended to acquaint program participants with the key themes of the program. It consists of four days of tutorial presentations.





Saturday, May 25, 2019

Saturday Morning Videos: Imaging and Machine Learning Workshop, @IHP Paris, April 1st – 5th , 2019

** Nuit Blanche is now on Twitter: @NuitBlog ** 



This is the third workshop, «Imaging and Machine Learning», within the Mathematics of Imaging series organized in Paris this semester (videos of Workshop 1 are here, videos of Workshop 2 are here).

Structured prediction via implicit embeddings - Alessandro Rudi - Workshop 3 - CEB T1 2019


A Kernel Perspective for Regularizing Deep Neural Networks - Julien Mairal - Workshop 3 - CEB T1 2019

Optimization meets machine learning for neuroimaging - Alexandre Gramfort - Workshop 3 - CEB T1 2019

Random Matrix Advances in Machine Learning - Romain Couillet - Workshop 3 - CEB T1 2019

Iterative regularization via dual diagonal descent - Silvia Villa - Workshop 3 - CEB T1 2019

Scalable hyperparameter transfer learning - Valerio Perrone - Workshop 3 - CEB T1 2019

Using structure to select features in high dimension. - Chloe-Agathe Azencott - Workshop 3 - CEB T1 2019

Predicting aesthetic appreciation of images. - Naila Murray - Workshop 3 - CEB T1 2019

Learning Representations for Information Obfuscation (...) - Guillermo Sapiro - Workshop 3 - CEB T1 2019

Convex unmixing and learning the effect of latent (...) - Guillaume Obozinski - Workshop 3 - CEB T1 2019

Revisiting non-linear PCA with progressively grown autoencoders. - José Lezama - Workshop 3 - CEB T1 2019

On the several ways to regularize optimal transport. - Marco Cuturi - Workshop 3 - CEB T1 2019

Combinatorial Solutions to Elastic Shape Matching. - Daniel Cremers - Workshop 3 - CEB T1 2019

Is Artificial Intelligence Logical or Geometrical? (L’intelligence artificielle est-elle logique ou géométrique ?) - Stéphane Mallat - Grand Public - CEB T1 2019

Rank optimality for the Burer-Monteiro factorization - Irène Waldspurger - Workshop 3 - CEB T1 2019

Bayesian inversion for tomography through machine learning. - Ozan Öktem - Workshop 3 - CEB T1 2019

Understanding geometric attributes with autoencoders. - Alasdair Newson - Workshop 3 - CEB T1 2019

Multigrain: a unified image embedding for classes (...) - Bertrand Thirion - Workshop 3 - CEB T1 2019

Deep Inversion, Autoencoders for Learned Regularization (...) - Christoph Brune - Workshop 3 - CEB T1 2019

Optimal machine learning with stochastic projections (...) - Lorenzo Rosasco - Workshop 3 - CEB T1 2019

Roto-Translation Covariant Convolutional Networks for (...) - Remco Duits - Workshop 3 - CEB T1 2019

Unsupervised domain adaptation with application to urban (...) - Patrick Pérez - Workshop 3 - CEB T1 2019

Designing multimodal deep architectures for Visual Question (...) - Matthieu Cord - Workshop 3 - CEB T1 2019

Towards demystifying over-parameterization in deep (...) - Mahdi Soltanolkotabi - Workshop 3 - CEB T1 2019

Nonnegative matrix factorisation with the beta-divergence (...) - Cédric Févotte - Workshop 3 - CEB T1 2019

Autoencoder Image Generation with Multiscale Sparse (...) - Stéphane Mallat - Workshop 3 - CEB T1 2019

Learning from permutations. - Jean-Philippe Vert - Workshop 3 - CEB T1 2019

Learned image reconstruction for high-resolution (...) - Marta Betcke - Workshop 3 - CEB T1 2019

Contextual Bandit: from Theory to Applications. - Claire Vernade - Workshop 3 - CEB T1 2019


On the Global Convergence of Gradient Descent for (...) - Francis Bach - Workshop 3 - CEB T1 2019



