
Saturday, June 24, 2017

Saturday Morning Videos from "Structured Regularization for High-Dimensional Data Analysis": Majorization-Minimization Subspace Algorithms for Large Scale Data Processing, and Regularization Methods for Large Scale Machine Learning



 Emilie Chouzenoux (Paris-Est): Majorization-Minimization Subspace Algorithms for Large Scale Data Processing
Abstract: Recent developments in data processing drive the need to solve optimization problems of increasingly large size, stretching traditional techniques to their limits. New optimization algorithms thus have to be designed, paying attention to computational complexity, scalability, and robustness. Majorization-Minimization (MM) approaches have become increasingly popular in both the signal/image processing and machine learning communities. Our talk will present new theoretical and practical results regarding the MM subspace algorithm [1], in which the update of each iterate is restricted to a subspace spanned by a few directions. We will first present the extension of this method to the online case, where only a stochastic approximation of the criterion is employed at each iteration [2], and we will analyse its convergence rate properties [3]. In the second part of the talk, a novel block-parallel MM subspace algorithm will be introduced, which can take advantage of the acceleration offered by multicore architectures [4]. Several examples from signal/image processing will be presented to illustrate the efficiency of these methods. (A minimal sketch of an MM memory-gradient update follows the references below.)
[1] E. Chouzenoux, A. Jezierska, J.-C. Pesquet and H. Talbot. A Majorize-Minimize Subspace Approach for l2-l0 Image Regularization. SIAM Journal on Imaging Sciences, Vol. 6, No. 1, pages 563-591, 2013.
[2] E. Chouzenoux and J.-C. Pesquet. A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation. Tech. Rep., 2016.
[3] E. Chouzenoux and J.-C. Pesquet. Convergence Rate Analysis of the Majorize-Minimize Subspace Algorithm. IEEE Signal Processing Letters, Vol. 23, No. 9, pages 1284-1288, September 2016.
[4] S. Cadoni, E. Chouzenoux, J.-C. Pesquet and C. Chaux. A Block Parallel Majorize-Minimize Memory Gradient Algorithm. In Proceedings of the 23rd IEEE International Conference on Image Processing (ICIP 2016), pages 3194-3198, Phoenix, Arizona, September 25-28, 2016.
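
Purely for illustration, here is a minimal NumPy sketch of one possible MM memory-gradient iteration for a penalized least-squares criterion. The penalty (a smoothed absolute value), the half-quadratic majorant, and the two-direction subspace used here are our own assumptions for the example; the papers above treat far more general criteria, majorants, and subspace constructions.

```python
import numpy as np

# Sketch of an MM memory-gradient (MM subspace) iteration for
#     F(x) = ||H x - y||^2 + lam * sum_i sqrt(x_i^2 + delta^2),
# where each update is restricted to a subspace spanned by the
# steepest-descent direction and the previous step ("memory gradient").

def mm_memory_gradient(H, y, lam=0.1, delta=1e-2, n_iter=100):
    m, n = H.shape
    x = np.zeros(n)
    x_prev = x.copy()
    for k in range(n_iter):
        grad = 2 * H.T @ (H @ x - y) + lam * x / np.sqrt(x**2 + delta**2)
        # Subspace directions: gradient, plus memory term once available.
        D = np.column_stack([-grad, x - x_prev]) if k > 0 else -grad[:, None]
        # Curvature of the separable half-quadratic majorant of the penalty.
        w = lam / np.sqrt(x**2 + delta**2)
        # Restrict the quadratic majorant to the subspace: B = D^T A D with
        # A = 2 H^T H + Diag(w), assembled without forming A explicitly.
        HD = H @ D
        B = 2 * HD.T @ HD + D.T @ (w[:, None] * D)
        # Minimize the majorant over the subspace coefficients u.
        u = np.linalg.solve(B + 1e-12 * np.eye(B.shape[0]), -D.T @ grad)
        x_prev, x = x, x + D @ u
    return x

# Toy usage on a random sparse inverse problem.
rng = np.random.default_rng(0)
H = rng.standard_normal((50, 100))
x_true = rng.standard_normal(100) * (rng.random(100) < 0.1)
y = H @ x_true + 0.01 * rng.standard_normal(50)
x_hat = mm_memory_gradient(H, y)
```

Because the majorant is minimized only over two coefficients per iteration, each update costs little more than a gradient evaluation, which is what makes the subspace restriction attractive at large scale.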


Lecture 1/4
Lecture 2/4


Lorenzo Rosasco (Genova and MIT): Regularization Methods for Large Scale Machine Learning

Abstract: Regularization techniques originally developed to solve linear inverse problems can be extended to derive nonparametric machine learning methods. These methods perform well in practice and can be shown to have optimal statistical guarantees; however, their computational requirements can prevent application to large-scale scenarios. In this talk, we will describe recent attempts to tackle this challenge. Our presentation will be divided into two parts. In the first part, we will discuss so-called iterative regularization, also known as early-stopping regularization; in particular, we will discuss accelerated and stochastic variants of this method and show how they make it possible to control the statistical and time complexities of the obtained solutions simultaneously. In the second part, we will discuss novel regularization schemes obtained by combining regularization with stochastic projections. These latter methods control not only the statistical and time complexities of the obtained solutions but also their memory requirements.
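
As a small illustration of the first part, the sketch below treats the gradient-descent iteration count as the regularization parameter for a least-squares problem, selecting the iterate with the lowest held-out validation error. This is the standard early-stopping recipe in its plainest form, under our own toy setup; it is not the specific accelerated/stochastic variants or the stochastic-projection schemes discussed in the talk.

```python
import numpy as np

# Early stopping as regularization: run plain gradient descent on the
# empirical least-squares risk and use the iteration number as the
# regularization parameter, keeping the iterate that minimizes the
# validation error.

def early_stopping_ls(X, y, X_val, y_val, max_iter=500):
    n, d = X.shape
    step = n / np.linalg.norm(X, 2) ** 2  # 1/L for the 1/n-scaled risk
    w = np.zeros(d)
    best_w, best_err = w.copy(), np.inf
    for t in range(max_iter):
        w -= step * X.T @ (X @ w - y) / n
        err = np.mean((X_val @ w - y_val) ** 2)
        if err < best_err:
            best_w, best_err = w.copy(), err
    return best_w  # iterate with the lowest validation error

# Toy usage: fit on 150 points, validate on 50.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
w_true = rng.standard_normal(50)
y = X @ w_true + 0.5 * rng.standard_normal(200)
w_hat = early_stopping_ls(X[:150], y[:150], X[150:], y[150:])
```

Running longer corresponds to weaker regularization, so the stopping time plays exactly the role a penalty parameter would, and the statistical and time complexities are controlled by the same knob.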




