
Tuesday, August 07, 2012

Direct Optimization of the Dictionary Learning Problem - implementation -

A novel way of solving the dictionary learning problem is proposed in this paper. It is based on a so-called direct optimization, as it avoids the usual technique of alternately optimizing the coefficients of a sparse decomposition and the dictionary atoms. The algorithm we advocate simply performs a joint proximal gradient descent step over the dictionary atoms and the coefficient matrix. As such, we denote the algorithm a one-step block-coordinate proximal gradient descent, and we show that it applies to a broader class of non-convex optimization problems than dictionary learning alone. After deriving the algorithm, we provide an in-depth discussion of how the stepsizes of the proximal gradient descent are chosen. In addition, we uncover the connection between our direct approach and the alternating optimization method for dictionary learning. The main advantage of our novel algorithm is that, as our simulation study suggests, it is far more efficient than alternating optimization algorithms.
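To make the idea concrete, here is a minimal NumPy sketch of such a one-step joint proximal gradient update, assuming the standard l1-regularized formulation min over (D, X) of 0.5*||Y - DX||_F^2 + lam*||X||_1 with unit-norm-bounded atoms. The spectral-norm stepsizes and the names (direct_dictionary_learning, soft_threshold, project_columns) are illustrative choices for this sketch, not the paper's derived stepsize rules.

```python
import numpy as np

def soft_threshold(X, t):
    """Proximal operator of t * ||X||_1 (entrywise soft-thresholding)."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def project_columns(D):
    """Project each dictionary atom onto the unit Euclidean ball."""
    norms = np.maximum(np.linalg.norm(D, axis=0), 1.0)
    return D / norms

def direct_dictionary_learning(Y, n_atoms, lam=0.1, n_iter=200, seed=0):
    """One-step joint proximal gradient descent on
    0.5 * ||Y - D X||_F^2 + lam * ||X||_1, s.t. ||d_j||_2 <= 1.
    Both blocks (D, X) are updated from the same current iterate,
    rather than by alternating full minimizations."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    D = project_columns(rng.standard_normal((m, n_atoms)))
    X = np.zeros((n_atoms, n))
    for _ in range(n_iter):
        R = D @ X - Y                # residual
        grad_D = R @ X.T             # gradient w.r.t. the dictionary
        grad_X = D.T @ R             # gradient w.r.t. the coefficients
        # Block-wise stepsizes from local Lipschitz estimates
        # (a heuristic stand-in for the paper's stepsize analysis).
        eta_D = 1.0 / (np.linalg.norm(X @ X.T, 2) + 1e-8)
        eta_X = 1.0 / (np.linalg.norm(D.T @ D, 2) + 1e-8)
        # Joint gradient step, then the prox of each block's penalty:
        # projection for the atoms, soft-thresholding for the codes.
        D = project_columns(D - eta_D * grad_D)
        X = soft_threshold(X - eta_X * grad_X, eta_X * lam)
    return D, X
```

The contrast with alternating methods is visible in the loop body: D and X both move from the same current iterate in a single sweep, instead of fully re-solving the sparse coding step before touching the dictionary, which is where the claimed efficiency gain over alternating optimization would come from.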


Image Credit: NASA/JPL-Caltech. This image was taken by Rear Hazcam: Right A (RHAZ_RIGHT_A) onboard NASA's Mars rover Curiosity on Sol 0 (2012-08-06 06:03:27 UTC).
