IMA Workshop on Convexity and Optimization: Theory and Applications, February 23-27, 2015
The organizers were:

- Nina Balcan, Carnegie Mellon University
- Henrik Christensen, Georgia Institute of Technology
- William Cook, University of Waterloo
- Satoru Iwata, University of Tokyo
- Prasad Tetali, Georgia Institute of Technology

Presentation of the workshop:
The workshop will consist of two parts. Day one will cover supply chain optimization, with the aim of bringing together leading researchers and developers from industry and academia to discuss challenges, opportunities, and new trends in logistics, material handling, optimization, machine learning, and related algorithms.
The second part of the workshop (lasting four days) will focus on discrete and continuous optimization, with a foray into machine learning. Submodular functions are discrete analogs of convex functions (and, in some contexts, of concave functions), arising in various fields of computer science and operations research. Since the seminal work of Jack Edmonds (1970), submodularity has been recognized as a common structure of many efficiently solvable combinatorial optimization problems. Algorithmic developments of the past decade include a combinatorial strongly polynomial algorithm for minimization, constant-factor approximation algorithms for maximization, and efficient methods for learning submodular functions. In addition, submodular functions find novel applications in combinatorial auctions, machine learning, and social networks. This workshop aims to provide a forum for researchers from a variety of backgrounds to exchange results, ideas, and problems on submodular optimization and its applications. Application domains are numerous, ranging from sensor placement for water management to navigation of mobile robots, as evidenced by the work of Carlos Guestrin, Andreas Krause, and several others.
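To make the submodularity discussion concrete, here is a minimal sketch (the toy coverage instance and all names in it are made up for illustration) of the classic greedy algorithm of Nemhauser, Wolsey, and Fisher, which achieves a (1 - 1/e)-approximation for maximizing a monotone submodular function under a cardinality constraint; this greedy step is the workhorse behind the sensor-placement applications mentioned above.

```python
# Greedy maximization of a monotone submodular function under a
# cardinality constraint (Nemhauser-Wolsey-Fisher, 1978).
# Toy instance: weighted coverage, a canonical submodular function.

def coverage(selected, sets, weights):
    """f(S) = total weight of elements covered by the chosen sets."""
    covered = set().union(*(sets[i] for i in selected))
    return sum(weights[e] for e in covered)

def greedy_max(sets, weights, k):
    """Pick k sets, each time adding the one with the largest marginal gain."""
    chosen = []
    for _ in range(k):
        base = coverage(chosen, sets, weights)
        best, best_gain = None, 0.0
        for i in range(len(sets)):
            if i in chosen:
                continue
            gain = coverage(chosen + [i], sets, weights) - base
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:          # no positive marginal gain left
            break
        chosen.append(best)
    return chosen

# Hypothetical sensor-placement toy: sets[i] = region covered by sensor i.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
weights = {e: 1.0 for e in range(1, 7)}
print(greedy_max(sets, weights, k=2))   # -> [0, 2], covering all of {1..6}
```

Weighted coverage is the textbook monotone submodular function: adding a set to a larger collection can never cover more new weight than adding it to a smaller one, which is exactly the diminishing-returns property the greedy analysis exploits.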
On the more general optimization front, the concept of robust optimization, as developed by Aharon Ben-Tal, Arkadi Nemirovski, and collaborators, offers the promise of providing sets of near-optimal solutions (rather than a potentially unique optimal solution) to problems arising from families of input instances. This somewhat classic topic deserves renewed attention because of its obvious appeal: robustness to uncertainty, or noise, in the input, typically arising from large data sets. The development of robust linear programs (LPs) and robust semidefinite programs (SDPs) is very much in its infancy from a theoretical computer science standpoint, and we expect a fruitful dialog between the various groups involved.
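As a small illustration of the robust-LP idea (a sketch under assumed box/interval uncertainty with nonnegative variables, not the full Ben-Tal/Nemirovski machinery): if each coefficient of a constraint row a is only known up to ±δ and x ≥ 0, protecting against the worst case simply replaces a by a + δ, so the robust counterpart is again an LP that any off-the-shelf solver handles.

```python
# Robust counterpart of a tiny LP under box (interval) uncertainty.
# Nominal problem:  max c^T x  s.t.  a^T x <= b,  x >= 0.
# If each a_i may deviate by up to delta_i, the worst case over the box
# yields  (a + delta)^T x <= b  (using x >= 0), which is still an LP.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])        # objective (to maximize)
a = np.array([1.0, 3.0])        # nominal constraint coefficients
delta = np.array([0.2, 0.5])    # assumed uncertainty half-widths
b = 6.0

nominal = linprog(-c, A_ub=[a], b_ub=[b], bounds=[(0, None)] * 2)
robust = linprog(-c, A_ub=[a + delta], b_ub=[b], bounds=[(0, None)] * 2)

print("nominal x:", nominal.x, "value:", -nominal.fun)
print("robust  x:", robust.x, "value:", -robust.fun)
# The robust solution gives up some objective value but remains feasible
# for every realization of a inside the uncertainty box.
```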
Optimization formulations and methods have been at the heart of many modern machine learning algorithms, which are extensively used across science and engineering to automatically extract essential knowledge from huge volumes of data. The increasing complexity, size, and variety of these applications have led to rich interactions between optimization and machine learning, and the workshop will highlight recent connections between the two fields.
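As a minimal example of that interaction (a toy NumPy sketch on synthetic data, not tied to any particular talk): stochastic gradient descent on a least-squares objective, the prototypical optimization routine sitting inside most large-scale learning systems.

```python
# Stochastic gradient descent on least squares: min_w ||X w - y||^2 / n.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)   # noisy linear observations

w = np.zeros(d)
step = 0.01
for t in range(5000):
    i = rng.integers(n)                      # sample one data point
    grad = 2 * (X[i] @ w - y[i]) * X[i]      # stochastic gradient
    w -= step * grad

print("parameter error:", np.linalg.norm(w - w_true))  # small after enough steps
```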
Abstracts and Talk Materials are here.
Here are the videos:
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.