Sparse Learning for Large-scale and High-dimensional Data: A Randomized Convex-concave Optimization Approach by Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou
In this paper, we develop a randomized algorithm and theory for learning a sparse model from large-scale and high-dimensional data, which is usually formulated as an empirical risk minimization problem with a sparsity-inducing regularizer. Under the assumption that there exists an (approximately) sparse solution with high classification accuracy, we argue that the dual solution is also sparse or approximately sparse. The fact that both primal and dual solutions are sparse motivates us to develop a randomized approach for a general convex-concave optimization problem. Specifically, the proposed approach combines the strength of random projection with that of sparse learning: it utilizes random projection to reduce the dimensionality, and introduces $\ell_1$-norm regularization to alleviate the approximation error caused by random projection. Theoretical analysis shows that under favorable conditions, the randomized algorithm can accurately recover the optimal solutions to the convex-concave optimization problem (i.e., recover both the primal and dual solutions). Furthermore, the solutions returned by our algorithm are guaranteed to be approximately sparse.
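The abstract states the high-level recipe (random projection to reduce dimensionality, plus $\ell_1$ regularization to control the projection error) without pseudocode. Below is a minimal, hypothetical sketch of that general recipe in Python/NumPy, not the paper's actual algorithm: the paper works on a convex-concave saddle-point formulation with recovery guarantees that this toy loop does not reproduce. All names (`m`, `lam`, the ISTA solver) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (NOT the paper's algorithm): project high-dimensional
# data with a Gaussian random matrix, then solve an l1-regularized
# least-squares problem in the reduced space via proximal gradient (ISTA).

rng = np.random.default_rng(0)

n, d = 1000, 10000        # many samples, high ambient dimension
m = 200                   # projected dimension, m << d (illustrative)
lam = 0.1                 # l1 regularization strength (illustrative)

# Synthetic data with a sparse ground-truth model.
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:10] = 1.0
y = X @ w_true + 0.01 * rng.standard_normal(n)

# Gaussian random projection with Johnson-Lindenstrauss scaling.
R = rng.standard_normal((d, m)) / np.sqrt(m)
X_proj = X @ R            # n x m reduced data matrix

def soft_threshold(v, t):
    # Proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA on (1/2n)||X_proj w - y||^2 + lam * ||w||_1.
w = np.zeros(m)
step = n / np.linalg.norm(X_proj, 2) ** 2   # 1 / Lipschitz constant
for _ in range(500):
    grad = X_proj.T @ (X_proj @ w - y) / n
    w = soft_threshold(w - step * grad, step * lam)

# A crude map back to the original space; the paper's recovery of the
# primal/dual solutions is more refined than this naive lift.
w_lifted = R @ w
```

The $\ell_1$ term is what keeps the projected solution sparse and, per the paper's analysis, what compensates for the error introduced by the random projection.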
Image Credit: NASA/JPL-Caltech/Space Science Institute. W00094976.jpg was taken on November 15, 2015 and received on Earth November 16, 2015. The camera was pointing toward UNK, and the image was taken using the CB2 and CL2 filters.