I was eventually able to watch Miki Elad's presentation at MIA2012. While that presentation is not on his website yet, the closest I could find is: K-SVD Dictionary-Learning for Analysis Sparse Models. I had read these papers on analysis versus synthesis before, and while I am still not quite clear on what is really being said (it needs to sink in first), there is, I think, a very interesting matrix decomposition here that I had not seen before:
Namely: \Omega X = A, where \Omega and A are the unknowns, X is known, and A is made of sparse column vectors. While talking to Miki, we wondered: does this type of matrix decomposition exist, or is it needed, in other fields of investigation (besides analysis dictionary learning for sparse models)? Recall that the current, now mainstream, dictionary learning solvers (featured in the Matrix Factorization Jungle Page) solve the following problem:
(Synthesis) Dictionary Learning: A = DX, with A known and D and X unknown; solve for sparse X
which is to be contrasted with the new decomposition:
(Analysis) Dictionary Learning: A = \Omega X, with X known and \Omega and A unknown; solve for sparse A
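To make the contrast concrete, here is a minimal numpy sketch of the two problems as toy alternating-minimization loops. Everything in it (the hard-thresholding sparsification, the pseudoinverse refits, the function names) is my own illustrative choice, not the K-SVD machinery from the paper:

```python
import numpy as np

def _keep_k_largest(M, k):
    """Hard-threshold each column of M, keeping its k largest-magnitude entries."""
    thresh = -np.sort(-np.abs(M), axis=0)[k - 1]   # k-th largest magnitude per column
    M = M.copy()
    M[np.abs(M) < thresh] = 0.0
    return M

def synthesis_dl(A, n_atoms, k, n_iter=50, seed=0):
    """Toy synthesis dictionary learning: A ~ D X with each column of X k-sparse.
    Alternates a crude sparse-coding step with a least-squares dictionary refit."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((A.shape[0], n_atoms))
    for _ in range(n_iter):
        D /= np.linalg.norm(D, axis=0) + 1e-12          # unit-norm atoms (columns)
        X = _keep_k_largest(np.linalg.pinv(D) @ A, k)   # sparse-code the known data A
        D = A @ np.linalg.pinv(X)                       # refit the dictionary
    return D, X

def analysis_dl(X, n_rows, k, n_iter=50, seed=0):
    """Toy analysis operator learning: find Omega so that A = Omega X has
    k-sparse columns. Alternates thresholding of A with an Omega refit."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n_rows, X.shape[0]))
    for _ in range(n_iter):
        Omega /= np.linalg.norm(Omega, axis=1, keepdims=True) + 1e-12  # unit-norm rows
        A = _keep_k_largest(Omega @ X, k)               # sparsify the analysis coefficients
        Omega = A @ np.linalg.pinv(X)                   # refit the operator to Omega X ~ A
    return Omega, A

# usage sketch: X_train holds one training signal per column, e.g. in R^16
# Omega, A = analysis_dl(X_train, n_rows=24, k=4)
```

Note how the roles flip: in the synthesis loop the sparse matrix is the hidden code X, while in the analysis loop it is the output A = \Omega X that gets sparsified directly.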
Of the results that surprised me, the first one was pretty telling:
In short, the operator \Omega found by this dictionary learning decomposition seems to recover a TV-like operator! (The columns of X were made of patches.)
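As a toy sanity check on why a TV-like answer is plausible (my own example, not from the talk): a first-order finite-difference operator, the 1-D building block of TV, maps piecewise-constant signals to exactly sparse analysis vectors, which is precisely what the decomposition rewards.

```python
import numpy as np

# First-order finite-difference operator on R^n: the 1-D building block of TV.
n = 16
Omega_tv = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # row i computes x[i+1] - x[i]

# A piecewise-constant "patch": three flat pieces, hence two jumps.
x = np.concatenate([np.full(6, 1.0), np.full(5, -2.0), np.full(5, 0.5)])

a = Omega_tv @ x
print(np.count_nonzero(a))   # -> 2: one nonzero per jump, i.e. the column of A is very sparse
```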