From Ben Recht's talk, "The Algorithmic Frontiers of Atomic Norm Minimization: Relaxation, Discretization, and Greedy Pursuit"
At NIPS, there was a workshop titled "Greedy Algorithms, Frank-Wolfe and Friends - A Modern Perspective." From the description page:
Motivation: Greedy algorithms and related first-order optimization algorithms are at the core of many of the state-of-the-art sparse methods in machine learning, signal processing, harmonic analysis, statistics, and other seemingly unrelated areas with very different goals at first sight. Examples include matching pursuit, boosting, greedy methods for submodular optimization, structured prediction, and many more. In the field of optimization, the recent renewed interest in Frank-Wolfe/conditional gradient algorithms opens up an interesting perspective towards a unified understanding of these methods, with a big potential to translate the rich existing knowledge about the respective greedy methods between the different fields.

The scope of this workshop is to gather renowned experts working on those algorithms in machine learning, optimization, signal processing, statistics, and harmonic analysis, in order to engender a fruitful exchange of ideas and discussions and to push further the boundaries of scalable and efficient optimization for learning problems.

Goals: The goals of the workshop are threefold. First, to provide an accessible review and synthesis of not only recent results but also old-but-forgotten results describing the behavior and performance of the related greedy algorithms. Second, to cover recent advances on extensions of such algorithms relevant to machine learners, such as learning with atomic-norm regularization (Rao et al., 2012; Tewari et al., 2011; Harchaoui et al., 2012), submodular optimization (Bach, 2011), herding algorithms (Chen et al., 2010; Bach et al., 2012), or structured prediction (Lacoste-Julien et al., 2013). One important example of interest here is the study of lower bounds, as in (Lan, 2013; Jaggi, 2013), to better understand the limitations of such greedy algorithms and of sparse methods. Third, to provide a forum for open problems and to serve as a stepping stone for cutting-edge research on this family of scalable algorithms for big data problems.
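For readers unfamiliar with the conditional gradient method that the description keeps referring to, here is a minimal sketch (not from the workshop materials) of a plain Frank-Wolfe loop applied to least squares over an l1 ball, one of the simplest atomic-norm constraints. The function names (`frank_wolfe`, `l1_ball_lmo`), the random problem data, and the step-size choice are illustrative assumptions, not anything specified by the workshop or its speakers.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, n_iters=100):
    """Minimal Frank-Wolfe / conditional gradient loop (illustrative sketch).

    grad : callable returning the gradient of a smooth convex objective at x
    lmo  : linear minimization oracle returning argmin_{s in C} <s, g>
    x0   : feasible starting point in the constraint set C
    """
    x = x0
    for k in range(n_iters):
        g = grad(x)
        s = lmo(g)                   # greedy step: the single best atom for the linearized objective
        gamma = 2.0 / (k + 2.0)      # standard diminishing step size, giving O(1/k) convergence
        x = (1.0 - gamma) * x + gamma * s
    return x

# Hypothetical example: least squares over the l1 ball of radius tau,
# whose atoms are the signed, scaled coordinate vectors.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)
tau = 5.0

grad = lambda x: 2.0 * A.T @ (A @ x - b)

def l1_ball_lmo(g):
    # The l1-ball LMO picks the coordinate with the largest |gradient| entry.
    i = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[i] = -tau * np.sign(g[i])
    return s

x_hat = frank_wolfe(grad, l1_ball_lmo, np.zeros(50), n_iters=200)
print("objective:", np.sum((A @ x_hat - b) ** 2), " l1 norm:", np.sum(np.abs(x_hat)))
```

After k iterations the iterate is a convex combination of at most k atoms, which is why these greedy methods pair so naturally with sparse and atomic-norm constrained problems.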
Time | Speaker | Title | Links
---|---|---|---
7:30 - 7:40am | Organizers | Introduction and Poster Setup | [ Slides ]
7:40 - 8:20am | Robert M. Freund | Remarks on Frank-Wolfe and Structural Friends | [ Slides ]
8:20 - 9:00am | Ben Recht | The Algorithmic Frontiers of Atomic Norm Minimization: Relaxation, Discretization, and Greedy Pursuit | [ Slides ]
9:00 - 9:30am | | Coffee Break |
9:30 - 9:45am | Nikhil Rao, Parikshit Shah and Stephen Wright | Conditional Gradient with Enhancement and Truncation for Atomic Norm Regularization | [ Slides ]
9:45 - 9:55am | Hector Allende, Emanuele Frandi, Ricardo Nanculef and Claudio Sartori | Pairwise Away Steps for the Frank-Wolfe Algorithm |
9:55 - 10:05am | Simon Lacoste-Julien and Martin Jaggi | An Affine Invariant Linear Convergence Analysis for Frank-Wolfe Algorithms | [ Slides ]
10:05 - 10:15am | Vamsi Potluru, Jonathan Le Roux, Barak Pearlmutter, John Hershey and Matthew Brand | Coordinate Descent for mixed-norm NMF | [ Slides ]
10:15 - 10:30am | Robert M. Freund and Paul Grigas | New Analysis and Results for the Conditional Gradient Method | [ Slides ]
10:30am - 3:30pm | | Lunch Break |
3:30 - 3:45pm | Marguerite Frank | Honorary Guest |
3:45 - 4:25pm | Shai Shalev-Shwartz | Efficiently Training Sum-Product Neural Networks using Forward Greedy Selection | [ Slides ]
4:25 - 4:40pm | Xiaocheng Tang and Katya Scheinberg | Complexity of Inexact Proximal Newton Methods | [ Slides ]
 | Vladimir Temlyakov | From Greedy Approximation to Greedy Optimization | [ Slides ]
4:40 - 4:50pm | Jacob Steinhardt and Jonathan Huggins | A Greedy Framework for First-Order Optimization | [ Slides ]
4:50 - 5:00pm | Ahmed Farahat, Ali Ghodsi and Mohamed Kamel | A Fast Greedy Algorithm for Generalized Column Subset Selection | [ Slides ] [ Poster ]
5:00 - 5:30pm | | Coffee Break |
5:30 - 6:10pm | Francis Bach | Conditional Gradients Everywhere | [ Slides ]
6:10 - 6:25pm | David Belanger, Dan Sheldon and Andrew McCallum | Marginal Inference in MRFs using Frank-Wolfe | [ Slides ]