The Summer School "Structured Regularization for High-Dimensional Data Analysis" is ongoing right now at IHP in Paris. Here are some of the videos from the second and third days (Tuesday and Wednesday), featuring Anders Hansen, Andrea Montanari, Francis Bach and Carlos Fernandez-Granda.
On foundational computation problems in l1 and TV regularization
A. Hansen - 3/4 - 20/06/2017

Computing the Non-computable via sparsity: On foundational computation barriers in l1 and TV regularization
A. Hansen - 4/4 - 20/06/2017
Abstract (Francis Bach): Submodular set-functions have many applications in combinatorial optimization, as they can be minimized and approximately maximized in polynomial time. A key element in many of the algorithms and analyses is the possibility of extending the submodular set-function to a convex function, which opens up tools from convex optimization. Submodularity goes beyond set-functions and has naturally been considered for problems with multiple labels or for functions defined on continuous domains, where it corresponds essentially to cross second-derivatives being nonpositive. In this talk, I will show that most results relating submodularity and convexity for set-functions can be extended to all submodular functions. In particular, (a) I will naturally define a continuous extension in a set of probability measures, (b) show that the extension is convex if and only if the original function is submodular, and (c) prove that the problem of minimizing a submodular function is equivalent to a typically non-smooth convex optimization problem. Most of these extensions from the set-function situation are obtained by drawing links with the theory of multi-marginal optimal transport, which also provides a new interpretation of existing results for set-functions. I will then provide practical algorithms to minimize generic submodular functions on discrete domains, with associated convergence rates, and an application to proximal operators for non-convex penalty functions. Preprint available here.
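To make points (b) and (c) concrete in the classical set-function case (not the continuous-domain generalization the talk is about), here is a minimal sketch of the Lovász extension computed by the greedy algorithm: it is convex exactly when the set function is submodular, and minimizing it over the box [0,1]^n by projected subgradient descent, then rounding to a superlevel set, minimizes the set function. The cut-plus-modular objective and all sizes, weights, and step sizes below are illustrative placeholders, not from the talk.

```python
import numpy as np

# Toy submodular function: a graph cut plus a modular term (illustrative only).
rng = np.random.default_rng(0)
n = 8
W = rng.uniform(0.0, 1.0, (n, n))
W = np.triu(W, 1)
W = W + W.T                              # symmetric edge weights, zero diagonal
c = rng.uniform(-1.0, 1.0, n)            # modular (linear) term

def F(S):
    """Submodular set function: cut(S) + sum_{i in S} c_i, with F(empty) = 0."""
    s = np.zeros(n, dtype=bool)
    s[np.array(sorted(S), dtype=int)] = True
    return W[s][:, ~s].sum() + c[s].sum()

def lovasz_subgradient(w):
    """Greedy algorithm: sort coordinates by decreasing w; the increments
    F(S_k) - F(S_{k-1}) along the resulting chain form a subgradient of the
    Lovász extension at w (and w @ g is its value; convex iff F is submodular)."""
    g = np.zeros(n)
    S, prev = [], 0.0
    for i in np.argsort(-w):
        S.append(i)
        val = F(S)
        g[i] = val - prev
        prev = val
    return g

# Minimizing F reduces to minimizing its convex extension over [0,1]^n,
# here by projected subgradient descent with a diminishing step.
w = np.full(n, 0.5)
for t in range(1, 2001):
    g = lovasz_subgradient(w)
    w = np.clip(w - g / np.sqrt(t), 0.0, 1.0)

# Round: the best superlevel set of w (approximately) minimizes F.
candidates = [frozenset()] + [frozenset(np.flatnonzero(w >= th)) for th in np.unique(w)]
best = min(candidates, key=F)
print("argmin F:", sorted(int(i) for i in best), "  F =", F(best))
```

For n this small the answer can be checked by brute force over all 2^n subsets; the point of the convex route is that it scales polynomially where enumeration does not.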
Abstract (Carlos Fernandez-Granda): In the 70s and 80s, geophysicists proposed using l1-norm regularization for the deconvolution problem in the context of reflection seismology. Since then, such methods have had a great impact in high-dimensional statistics and in signal-processing applications, but until recently their performance on the original deconvolution problem was not well understood theoretically. In this talk we provide an analysis of optimization-based methods for the deconvolution problem, including results on irregular sampling and sparse corruptions that highlight the modeling flexibility of these techniques.
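For the classical setup the abstract starts from, here is a minimal sketch of l1-regularized deconvolution of a sparse spike train, solved by ISTA (proximal gradient descent: a gradient step on the least-squares data fit followed by soft-thresholding). The Gaussian kernel, noise level, penalty weight lam, and iteration count are illustrative placeholders, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x_true = np.zeros(n)
spikes = rng.choice(n, size=8, replace=False)
x_true[spikes] = rng.standard_normal(8) * 2.0

t = np.arange(-10, 11)
kernel = np.exp(-t**2 / 8.0)              # smooth symmetric blurring kernel
y = np.convolve(x_true, kernel, mode="same") + 0.05 * rng.standard_normal(n)

def A(x):
    return np.convolve(x, kernel, mode="same")          # forward blur

def At(r):
    return np.convolve(r, kernel[::-1], mode="same")    # adjoint (up to boundary effects)

L = np.sum(np.abs(kernel)) ** 2           # Lipschitz bound: ||A||^2 <= ||kernel||_1^2
lam = 0.1                                  # illustrative penalty weight

def soft(v, tau):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# ISTA: minimize 0.5 * ||A x - y||^2 + lam * ||x||_1
x = np.zeros(n)
for _ in range(500):
    x = soft(x - At(A(x) - y) / L, lam / L)

print("true spikes:", np.count_nonzero(x_true),
      " recovered nonzeros:", np.count_nonzero(np.abs(x) > 1e-3))
```

In practice one would tune lam and use an accelerated variant such as FISTA or an off-the-shelf convex solver; the sketch only illustrates the optimization-based formulation the abstract refers to.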