Tuesday, September 10, 2013

Why does the L_1-norm induce sparsity?

We had a short discussion with Julien Mairal a long while ago, and his presentation on Optimization for Sparse Estimation and Structured Sparsity, featured at the IMA Short Course entitled "Applied Statistics and Machine Learning", reminded me of that conversation on how to easily make sense of the l_1 norm. The videos of his talk are here: part I and part II. Here are the a-ha slides of interest, which tell you that only a linear functional will steadily get you to 0:

[Slides from Julien Mairal's talk: among the usual penalty functions, only one with a linear (nonzero slope) behavior at the origin drives a coefficient all the way to exactly 0.]

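To make that point concrete, here is a minimal one-dimensional sketch (mine, not from the slides): the l_1-penalized least-squares problem is solved by soft-thresholding, which returns exactly zero whenever the observation falls below the threshold, whereas a squared penalty only rescales the observation and never produces an exact zero. The helper names prox_l1 and prox_l2 below are purely illustrative.

import numpy as np

# 1-D comparison: argmin_x 0.5*(x - y)^2 + lam*|x|    (l_1 penalty)
#            vs.  argmin_x 0.5*(x - y)^2 + lam*x^2    (squared penalty)

def prox_l1(y, lam):
    # Soft-thresholding: exactly 0 whenever |y| <= lam, because the
    # penalty keeps a constant (linear) slope all the way down to 0.
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def prox_l2(y, lam):
    # Ridge-style shrinkage: y is scaled toward 0 but never reaches it
    # (the quadratic penalty has zero slope at the origin).
    return y / (1.0 + 2.0 * lam)

y = np.linspace(-2.0, 2.0, 9)
lam = 0.5
print("y  :", y)
print("l1 :", prox_l1(y, lam))   # flat region of exact zeros around the origin
print("l2 :", prox_l2(y, lam))   # shrunk, but no exact zeros

Running this with lam = 0.5 shows a whole interval of inputs mapped to exactly 0 in the l_1 case, versus mere shrinkage in the squared case, which is the sparsity-inducing behavior the slides illustrate.
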

