Tuesday, October 20, 2015

Tensor vs Matrix Methods: Robust Tensor Decomposition under Block Sparse Perturbations


I wonder how these tensor techniques will fare when combined with a randomization à la GoDec. We know that this kind of randomization makes robust PCA fast in the matrix setting, so I would expect a similar speed-up for a tensor approach. Without further ado: Tensor vs Matrix Methods: Robust Tensor Decomposition under Block Sparse Perturbations by Anima Anandkumar, Prateek Jain, Yang Shi, U.N. Niranjan
Robust tensor CP decomposition involves decomposing a tensor into low rank and sparse components. We propose a novel non-convex iterative algorithm with guaranteed recovery. It alternates between low-rank CP decomposition through gradient ascent (a variant of the tensor power method), and hard thresholding of the residual. We prove convergence to the globally optimal solution under natural incoherence conditions on the low rank component, and a bounded level of sparse perturbations. We compare our method with natural baselines that apply robust matrix PCA either to the flattened tensor, or to the matrix slices of the tensor. Our method can provably handle a far greater level of perturbation when the sparse tensor is block-structured. This naturally occurs in many applications such as the activity detection task in videos. Our experiments validate these findings. Thus, we establish that tensor methods can tolerate a higher level of gross corruptions compared to matrix methods.
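To make the alternating idea in the abstract concrete, here is a minimal sketch (not the authors' exact algorithm, and only for the rank-1, 3-way case): fit a low-rank CP component on T - S with tensor power iterations, then hard-threshold the residual T - L to re-estimate the sparse part S. The threshold zeta and the iteration counts are illustrative assumptions, not values from the paper.

import numpy as np

def rank1_power(T, n_iters=50):
    """Estimate a rank-1 CP factor (lam, a, b, c) of a 3-way tensor T
    via tensor power iterations."""
    a = np.random.randn(T.shape[0]); a /= np.linalg.norm(a)
    b = np.random.randn(T.shape[1]); b /= np.linalg.norm(b)
    c = np.random.randn(T.shape[2]); c /= np.linalg.norm(c)
    for _ in range(n_iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)
    return lam, a, b, c

def robust_rank1_decomposition(T, zeta, n_outer=20):
    """Alternate between rank-1 CP fitting of T - S and hard thresholding
    of the residual T - L, returning the low-rank part L and sparse part S."""
    S = np.zeros_like(T)
    for _ in range(n_outer):
        lam, a, b, c = rank1_power(T - S)
        L = lam * np.einsum('i,j,k->ijk', a, b, c)
        R = T - L
        S = np.where(np.abs(R) > zeta, R, 0.0)  # keep only large residual entries
    return L, S

The baselines mentioned in the abstract would instead reshape T into a matrix (flattening, or taking slices) and run robust matrix PCA on that; the point of the paper is that operating directly on the tensor tolerates larger (block-)sparse corruptions.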
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

