Thursday, May 15, 2014

OptShrink: An algorithm for improved low-rank signal matrix denoising by optimal, data-driven singular value shrinkage - implementation -

The truncated singular value decomposition (SVD) of the measurement matrix is the optimal solution to the _representation_ problem of how to best approximate a noisy measurement matrix using a low-rank matrix. Here, we consider the (unobservable) _denoising_ problem of how to best approximate a low-rank signal matrix buried in noise by optimal (re)weighting of the singular vectors of the measurement matrix. We exploit recent results from random matrix theory to exactly characterize the large matrix limit of the optimal weighting coefficients and show that they can be computed directly from data for a large class of noise models that includes the i.i.d. Gaussian noise case.
Our analysis brings into sharp focus the shrinkage-and-thresholding form of the optimal weights, the non-convex nature of the associated shrinkage function (on the singular values) and explains why matrix regularization via singular value thresholding with convex penalty functions (such as the nuclear norm) will always be suboptimal. We validate our theoretical predictions with numerical simulations, develop an implementable algorithm (OptShrink) that realizes the predicted performance gains and show how our methods can be used to improve estimation in the setting where the measured matrix has missing entries.
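To make the contrast in the abstract concrete, here is a minimal NumPy sketch of the convex baseline it discusses: singular value soft-thresholding, the proximal operator of the nuclear norm. Every retained singular value is shrunk by the same constant `tau`, which is exactly the rigidity the paper argues makes convex penalties suboptimal compared to a per-singular-value, data-driven shrinkage. (The function name and interface are illustrative, not from the paper.)

```python
import numpy as np

def svt(X, tau):
    """Singular value soft-thresholding (prox of the nuclear norm).

    This is the convex baseline the abstract calls suboptimal: every
    singular value is reduced by the same constant tau, regardless of
    its SNR, whereas the optimal shrinkage is non-convex and shrinks
    each singular value by a different, data-dependent amount.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # uniform shrink, then clip
    return (U * s_shrunk) @ Vt
```

For example, a matrix with singular values (3, 1) thresholded at tau = 2 comes back with singular values (1, 0): the weak component is killed, but the strong one is also pulled down by the full 2.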

I like the paragraphs on "Suboptimality of singular value thresholding" and "Better singular value shrinkage with non-convex potential functions?".

The attendant webpage hosting an implementation of OptShrink is here.

It starts with:

OptShrink - Low-Rank Signal Matrix Denoising via Optimal, Data-Driven Singular Value Shrinkage 
OptShrink is a simple, completely data-driven algorithm for denoising a low-rank signal matrix buried in noise. It takes as input the signal-plus-noise matrix and an estimate of the signal matrix rank, and returns the improved signal matrix estimate as output. It computes this estimate by shrinking the singular values of the truncated SVD (TSVD) in the manner prescribed by random matrix theory. It can be used in the missing data setting and for a large class of noise models, of which the i.i.d. Gaussian setting is a special case. 
There are no tuning parameters involved so it can be used in a black-box manner wherever improving low-rank matrix estimation is desirable. The algorithm outperforms the truncated SVD (TSVD) significantly in the low to moderate SNR regime and will never do worse than the TSVD. The theory also explains why it will always do better than singular value thresholding. 
But you do not have to take our word for it - you can download it below and try it for yourself! 
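The description above can be sketched in a few lines of NumPy. This is not the authors' reference implementation (that is on the webpage linked above); it is a sketch of the published recipe, assuming the standard form of the estimator: the trailing "noise-only" singular values are used to build an empirical D-transform, and each of the top r singular values σ_i is replaced by the weight w_i = -2 D(σ_i) / D'(σ_i).

```python
import numpy as np

def optshrink(X, r):
    """Sketch of the OptShrink low-rank denoiser.

    Keeps the top-r singular vectors of X but replaces each singular
    value sigma_i by the data-driven weight -2 D(sigma_i) / D'(sigma_i),
    where D is the D-transform estimated from the remaining
    ("noise-only") singular values. No tuning parameters.
    """
    n, m = X.shape
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_noise = s[r:]                       # noise-only singular values
    # Zero-pad so the second empirical transform uses the larger
    # dimension, as required for a rectangular matrix.
    p = max(n, m) - r
    s_pad = np.concatenate([s_noise, np.zeros(p - s_noise.size)])

    w = np.zeros(r)
    for i in range(r):
        z = s[i]
        # Empirical D-transform D(z) = d1(z) * d2(z) and its derivative.
        d1 = np.mean(z / (z**2 - s_noise**2))
        d2 = np.mean(z / (z**2 - s_pad**2))
        d1p = np.mean(-(z**2 + s_noise**2) / (z**2 - s_noise**2) ** 2)
        d2p = np.mean(-(z**2 + s_pad**2) / (z**2 - s_pad**2) ** 2)
        D, Dp = d1 * d2, d1p * d2 + d1 * d2p
        w[i] = -2.0 * D / Dp              # optimal shrinkage weight
    return (U[:, :r] * w) @ Vt[:r, :]
```

Note the contrast with soft-thresholding: nothing is subtracted uniformly; each weight is computed from the observed noise spectrum, and in the noiseless limit w_i reduces to σ_i, so the estimator never does worse than the TSVD.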

