
Friday, November 27, 2015

Slides and Videos: Randomized Numerical Linear Algebra reading group, Berkeley, Spring 2015

Laura Grigori, together with Jim Demmel, Ming Gu, and Michael Mahoney, co-organized the Randomized Numerical Linear Algebra reading group at Berkeley this past spring. Here are the slides and videos of the presentations that took place. From her page:
 


Archived video of the lectures may be seen here 
  • Feb 17 - M. Mahoney: Randomized Algorithms for Matrices and Data, slides
  • Feb 24 - J. Demmel: notes of the lecture, video of the lecture
  • Mar 3 - J. Demmel: second part of the lecture from Feb 24, notes of the lecture, video of the lecture
  • Mar 10 - M. Gu: Subspace iteration randomization and singular value problems, slides, video of the lecture, arxiv.org
  • Mar 17 - Eric Hallman: based on the paper Sketching as a Tool for Numerical Linear Algebra, Woodruff, video of the lecture
  • Mar 24 - spring break
  • Mar 31 - Becca Roelofs: based on the paper Blendenpik: Supercharging LAPACK's least-squares solver, Avron et al., video of the lecture
  • Apr 7 - Arturo Fernandez: slides based on the paper Relative-Error CUR Matrix Decompositions, Drineas et al., and Shivaram Venkataraman: High Performance Linear Algebra using Spark, video of the lecture (lecture starts at 3:59)
  • Apr 14 - Aydin Buluc: based on the paper Low Rank Approximation and Regression in Input Sparsity Time, Clarkson and Woodruff, and Chris Melgaard, video of the lecture
  • Apr 21 - Laura Grigori: CA rank revealing QR and LU factorizations, and Pieter Ghysels: Construction of hierarchically semi-separable matrices using randomized sampling and application in a sparse direct solver, video of the lecture (lecture starts at ~2:44)
  • Apr 28 - Yudong Chen: Fast projected gradient descent algorithms for low-rank estimation, video of the lecture (lecture starts at 4:59)

Abstract: Fitting a rank-r matrix to noisy data is in general NP-hard. A popular approach is convex relaxation via nuclear/trace norm minimization. This approach is shown to provide strong (often order-wise unimprovable) statistical guarantees in terms of error bounds and sample complexity. Computationally, while nuclear norm minimization can in principle be solved in polynomial time by semidefinite programming, its time complexity is often too high for large matrices. In this talk, we consider an alternative approach via projected gradient descent over the space of n-by-r matrices, which scales well to large instances. Moreover, we develop a unified framework characterizing the convergence of projected gradient descent for a broad range of non-convex low-rank estimation problems. Our results apply to the problems of matrix sensing, matrix completion, robust PCA, sparse PCA, densest subgraph detection and others, for which we match the best known statistical guarantees provided by convex relaxation methods.
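To make the "projected gradient descent over the space of n-by-r matrices" idea concrete, here is a minimal NumPy sketch of factored projected gradient descent for matrix completion. This is not Chen's algorithm or code: the function names, the row-norm-clipping projection, and all step-size/iteration parameters are illustrative assumptions.

```python
import numpy as np

def project_rows(U, max_row_norm):
    """Clip each row of U to norm at most max_row_norm
    (a simple incoherence-style projection; illustrative choice)."""
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    scale = np.minimum(1.0, max_row_norm / np.maximum(norms, 1e-12))
    return U * scale

def factored_pgd_completion(M_obs, mask, r, step=0.01, iters=1000, max_row_norm=10.0):
    """Projected gradient descent over n-by-r and m-by-r factors:
    minimize ||mask * (L @ R.T - M_obs)||_F^2 over L, R."""
    n, m = M_obs.shape
    rng = np.random.default_rng(0)
    L = rng.standard_normal((n, r)) / np.sqrt(r)
    R = rng.standard_normal((m, r)) / np.sqrt(r)
    for _ in range(iters):
        residual = mask * (L @ R.T - M_obs)     # gradient only involves observed entries
        grad_L = residual @ R
        grad_R = residual.T @ L
        L = project_rows(L - step * grad_L, max_row_norm)
        R = project_rows(R - step * grad_R, max_row_norm)
    return L, R

# Tiny synthetic example: rank-2 matrix with roughly half the entries observed.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
mask = (rng.random(M.shape) < 0.5).astype(float)
L, R = factored_pgd_completion(mask * M, mask, r=2)
print("relative error:", np.linalg.norm(L @ R.T - M) / np.linalg.norm(M))
```

The factored parameterization keeps every iterate at rank at most r without any SVD, which is the source of the scalability mentioned in the abstract; the projection step is where problem-specific constraints (incoherence, sparsity, etc.) would enter.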


