In this tutorial, we will start by discussing some of the basic ideas and techniques that allow a discrete optimization problem to be relaxed or immersed into a continuous optimization problem. Typically, the resulting continuous optimization problem is convex, implying strong properties of duality and efficiency. As part of the first lecture, we will review convexification ideas and strong duality in the context of cone optimization problems. The second and third lectures will focus on many approaches for going back from the continuous problem to the discrete problem, i.e., rounding a solution of the larger, continuous problem into a solution of the discrete problem. This is a very rich area, and we will survey a variety of techniques.
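As a concrete illustration of the relax-then-round pipeline (our own sketch, not material from the lecture), consider the classic LP relaxation of Vertex Cover with threshold rounding. The fractional optimum below is hardcoded for a triangle graph rather than computed with an LP solver, so this only demonstrates the rounding step and its 2-approximation guarantee.

```python
# Sketch: LP relaxation + threshold rounding for Vertex Cover on a triangle.
# The fractional LP optimum (x_v = 1/2 for every vertex, value 1.5) is a
# known closed form for odd cycles and is hardcoded here.

edges = [(0, 1), (1, 2), (0, 2)]           # triangle graph
x_frac = {0: 0.5, 1: 0.5, 2: 0.5}          # fractional LP optimum

# Threshold rounding: take every vertex with x_v >= 1/2.
cover = {v for v, xv in x_frac.items() if xv >= 0.5}

# Every edge is covered: its constraint x_u + x_v >= 1 forces
# max(x_u, x_v) >= 1/2, so at least one endpoint is selected.
assert all(u in cover or v in cover for (u, v) in edges)

# The rounded cover costs at most twice the LP value, hence at most 2 * OPT,
# since each selected vertex has x_v >= 1/2.
lp_value = sum(x_frac.values())
assert len(cover) <= 2 * lp_value
print(cover, lp_value)
```

Note that the rounded cover here has size 3 while the integral optimum is 2; the factor-2 guarantee is measured against the LP value 1.5, which lower-bounds OPT.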
In these introductory lectures we will present a unified introduction to Semidefinite Programming hierarchies and Sum of Squares techniques for continuous/discrete optimization, highlighting their many different and complementary interpretations (Lagrangian/algebraic duality, geometric embeddings, proof systems, probabilistic interpretations, etc.). We will concisely summarize the state of the art in both positive and negative results (e.g., planted clique, tensor completion, constraint satisfaction), as well as current algorithmic approaches.
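To make the Sum of Squares idea concrete, here is a classic textbook example (not taken from these lectures): the polynomial 2x⁴ + 2x³y − x²y² + 5y⁴ is globally nonnegative because it admits an explicit SOS decomposition, which the snippet spot-checks numerically at random points.

```python
import random

# A classic sum-of-squares certificate: for all real x, y,
#   2x^4 + 2x^3 y - x^2 y^2 + 5y^4
#     = 1/2 (2x^2 - 3y^2 + xy)^2 + 1/2 (y^2 + 3xy)^2  >= 0.
# The right-hand side makes nonnegativity evident; finding such a
# decomposition is exactly what an SOS/SDP solver automates.

def p(x, y):
    return 2*x**4 + 2*x**3*y - x**2*y**2 + 5*y**4

def sos(x, y):
    return 0.5 * (2*x**2 - 3*y**2 + x*y)**2 + 0.5 * (y**2 + 3*x*y)**2

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    # The two expressions agree (up to floating-point error) ...
    assert abs(p(x, y) - sos(x, y)) <= 1e-6 * max(1.0, abs(p(x, y)))
    # ... and the SOS form is manifestly nonnegative.
    assert sos(x, y) >= 0.0
print("SOS identity verified at 1000 random points")
```

In general, searching for such a decomposition reduces to finding a positive semidefinite Gram matrix, which is precisely the semidefinite program the hierarchies build on.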
Techniques and insights from convex optimization play an important role in the design of algorithms for discrete optimization, as evidenced by the success of convex relaxations in providing a framework for constructing and analyzing approximation algorithms. In the last decade, iterative methods from convex optimization have served as a springboard to a number of algorithmic advances for many discrete optimization tasks, including fundamental ones such as submodular optimization and maximum flow problems.
In this bootcamp series, we will provide a walk-through of the continuous approach to algorithm design in discrete optimization. We will consider the algorithmic challenges that arise in the choice of formulation and iterative algorithm by casting first-order methods as discretizations of continuous-time dynamics. We will see that the non-smooth nature of discrete optimization problems leads us to smooth our objectives via regularization, and we will discuss how different choices of regularizers arise in different settings. We will then put these principles to work in the contexts of submodular optimization and competitive analysis of online algorithms.
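As a minimal sketch of smoothing via regularization (our own illustration, not lecture material): the max function is non-smooth, but its entropic regularization, the "softmax" f_μ(x) = μ log Σᵢ exp(xᵢ/μ), is smooth and uniformly close, satisfying max(x) ≤ f_μ(x) ≤ max(x) + μ log n.

```python
import math

# Entropic smoothing of the non-smooth max function:
#   f_mu(x) = mu * log( sum_i exp(x_i / mu) ).
# Since the sum contains exp(0) = 1 after shifting by the max, and has at
# most n terms, we get  max(x) <= f_mu(x) <= max(x) + mu * log(n).

def smooth_max(xs, mu):
    m = max(xs)  # shift for numerical stability (avoids overflow in exp)
    return m + mu * math.log(sum(math.exp((v - m) / mu) for v in xs))

xs = [1.0, 3.0, 2.5]
for mu in (1.0, 0.1, 0.01):
    f = smooth_max(xs, mu)
    # The smooth surrogate brackets the true max within mu * log(n).
    assert max(xs) <= f <= max(xs) + mu * math.log(len(xs))
    print(mu, f)   # approaches max(xs) = 3.0 as mu -> 0
```

The trade-off is typical of this technique: smaller μ means a tighter approximation but a less smooth objective, which is exactly the tension first-order methods must balance.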
In this session, we will provide an overview of stochastic and robust optimization, with problems in statistics and machine learning providing a motivating focus. We will discuss connections between convex optimization and regret minimization, both with stochastic and deterministic feedback. We will view these problems through the minimax lens and show how algorithms emerge as approximate dynamic programming solutions. Finally, we will review robust optimization and explore the interactions between robustness and regularization.
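As a hedged sketch of the regret-minimization viewpoint (the loss sequence and learning rate below are our own illustrative choices), the snippet runs the standard multiplicative-weights (Hedge) algorithm on a toy two-expert problem and checks its regret against the familiar O(√(T log n)) bound.

```python
import math

# Multiplicative weights (Hedge) over n experts with losses in [0, 1]:
# maintain weights w_i, play the normalized mixture, and downweight each
# expert by exp(-eta * loss). With eta ~ sqrt(log n / T) the regret
# against the best fixed expert is O(sqrt(T log n)).

def hedge(losses, eta):
    """losses: list of per-round loss vectors in [0,1]^n. Returns total loss."""
    n = len(losses[0])
    w = [1.0] * n
    total = 0.0
    for loss in losses:
        s = sum(w)
        p = [wi / s for wi in w]                         # play the weighted mixture
        total += sum(pi * li for pi, li in zip(p, loss))
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, loss)]
    return total

T, n = 1000, 2
# A toy alternating loss sequence; expert 0 is slightly better in hindsight.
losses = [[0.0 if t % 2 == 0 else 1.0,
           1.0 if t % 2 == 0 else 0.1] for t in range(T)]
eta = math.sqrt(2 * math.log(n) / T)
alg = hedge(losses, eta)
best = min(sum(l[i] for l in losses) for i in range(n))
regret = alg - best
# Loose version of the standard Hedge regret bound.
assert regret <= 2 * math.sqrt(T * math.log(n))
print(regret)
```

The same algorithm, viewed through the minimax lens, is an approximate dynamic-programming solution to the repeated game against an adversarial loss sequence.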
In this tutorial we will cover recent progress on lower bounds on the size of linear programs and semidefinite programs for combinatorial optimization problems. We will present the definition and motivation of extension complexity and its relation to communication complexity. Then we will discuss the lower bound of Fiorini et al. for the TSP and the correlation polytope. We will also review SDP extension complexity along with related concepts such as the SDP rank of matrices. Finally, we will discuss lower bounds based on Fourier methods by CLRS and LRS for both the LP model and the SDP model.
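The bridge between polytopes and these lower bounds is the slack matrix: by Yannakakis' factorization theorem, the extension complexity of a polytope equals the nonnegative rank of its slack matrix. As a small illustration (assuming only the definition S[f][v] = b_f − a_f · v), the snippet constructs the slack matrix of the unit square.

```python
# Slack matrix of the unit square [0,1]^2.
# Vertices of the square, and facets written as (a, b) meaning a . v <= b:
vertices = [(0, 0), (1, 0), (1, 1), (0, 1)]
facets = [((-1, 0), 0),   # x >= 0
          ((0, -1), 0),   # y >= 0
          ((1, 0), 1),    # x <= 1
          ((0, 1), 1)]    # y <= 1

# Entry S[f][v] = b_f - a_f . v, the slack of facet f at vertex v.
S = [[b - (a[0]*v[0] + a[1]*v[1]) for v in vertices] for (a, b) in facets]

# Slack matrices are entrywise nonnegative, and every row has a zero
# (each facet contains at least one vertex of the polytope).
assert all(s >= 0 for row in S for s in row)
assert all(any(s == 0 for s in row) for row in S)
for row in S:
    print(row)
```

Lower-bounding the nonnegative rank of such matrices, e.g. for the correlation polytope, is exactly where the communication-complexity and Fourier-analytic arguments mentioned above enter.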
Interior point methods are one of the key approaches to solving linear programming formulations as well as other convex programs. They give rise to algorithms that are not only the fastest ones known from an asymptotic analysis point of view but also often superior in practice.
This series of four lectures will present fundamentals of interior point methods as well as touch on the key ideas and concepts behind some of the recent developments. The topics we plan to cover include: (1) background on linear programming optimality conditions, the central path and its neighborhoods, and Newton's method; (2) a complete analysis of a primal-dual path-following algorithm for linear programming and its relationship to a basic primal barrier method; (3) how to perturb the central path to speed up convergence of an interior point method; and (4) a variant of the interior point method that achieves running times that scale with the intrinsic dimension of the problem (rather than its possibly larger standard representation).
No previous experience with interior point methods is required.
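As a toy one-dimensional illustration of the central path (our own sketch, not the algorithm analyzed in the lectures): for min cx subject to x ≥ 0, the log-barrier problem min cx − μ ln x has the unique minimizer x(μ) = μ/c (set the derivative c − μ/x to zero), and x(μ) → 0, the true optimum, as μ → 0. A path-following method tracks x(μ) with Newton steps while shrinking μ.

```python
# Damped Newton's method on the barrier objective f(x) = c*x - mu*ln(x),
# whose derivatives are f'(x) = c - mu/x and f''(x) = mu/x**2.
c = 2.0

def central_path_point(mu, x0=1.0, iters=60):
    """Locate the central-path point x(mu) = mu/c by damped Newton steps."""
    x = x0
    for _ in range(iters):
        grad = c - mu / x
        hess = mu / x**2
        step = grad / hess
        # Damping: never step onto or past the boundary of the domain x > 0.
        x = x - step if x - step > 0 else x / 2
    return x

for mu in (1.0, 0.1, 0.01, 0.001):
    x = central_path_point(mu)
    assert abs(x - mu / c) < 1e-9   # Newton finds x(mu) = mu/c
    print(mu, x)                    # the path approaches the optimum x = 0
```

Real path-following methods interleave one or a few Newton steps with a geometric decrease of μ, staying inside a neighborhood of the central path; the full analysis of that scheme is topic (2) above.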