Tuesday, December 04, 2012

Additional insights on the Q&A with Ben Adcock and Anders Hansen

Hi Igor,

Thanks for turning our email correspondence into a blog post - it's great that you've taken the time to understand what we're doing.

A couple of comments:

(i) In the second paragraph you mention the Gibbs phenomenon. However, this is not the fundamental issue. More precisely, the issue arises from the discretization involved in replacing the continuous wavelet and Fourier transforms by their discrete versions, which can be thought of as 'data mismatch'. Although the Gibbs phenomenon is one consequence of this discretization, what matters more for compressed sensing is the loss of sparsity, as Anders mentioned.
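As a side note, the Gibbs phenomenon mentioned above is easy to see numerically. The sketch below (a generic textbook illustration, not taken from Ben and Anders's work) reconstructs a square wave from a truncated Fourier series in pure Python: no matter how many terms are kept, the reconstruction overshoots near the jump by roughly 9% of the jump height.

```python
import math

def square_wave_partial_sum(x, n_terms):
    """Truncated Fourier series of the square wave sign(sin x):
    (4/pi) * sum of sin((2k+1)x)/(2k+1) over the first n_terms odd harmonics."""
    return (4.0 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

# The true signal takes values in {-1, +1}, but every truncated
# reconstruction overshoots near the jump at x = 0 -- the Gibbs
# phenomenon. The overshoot does not shrink as more terms are added;
# its peak tends to about 1.179 (roughly 9% of the jump of 2).
xs = [i * 0.0005 for i in range(1, 4000)]  # fine grid on (0, 2)
for n in (25, 100, 400):
    peak = max(square_wave_partial_sum(x, n) for x in xs)
    print(n, round(peak, 4))
```

The persistent overshoot is one visible symptom of the continuous-to-discrete mismatch; as the letter notes, though, the more damaging effect for compressed sensing is that the discretized coefficients are no longer sparse.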

(ii) Regarding the TV norm and Justin Romberg's approach: your sentence "Except of course in Ben and Anders's approach we don't need the heuristics of the TV-norm" might be construed to mean that we're advocates of l^1 over TV. This is not the case: so far we've studied l^1 because it's more fundamental and widespread in CS (and also easier to analyze than TV). However, TV minimization will certainly be more effective for some problems, and the same phenomenon (asymptotic incoherence) should explain empirical results for schemes such as Justin Romberg's.
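For readers unfamiliar with why TV minimization can outperform plain l^1 on some signals, here is a toy illustration (mine, not from the correspondence): a piecewise-constant signal is dense as a vector of samples, but its finite differences, which is exactly the quantity the total-variation norm penalizes, are extremely sparse.

```python
# Toy example: TV-style sparsity. A piecewise-constant signal has no
# zero samples at all, yet its first-order differences are nonzero
# only at the jumps -- the structure TV minimization exploits.

def finite_differences(x):
    """First-order differences x[i+1] - x[i]."""
    return [b - a for a, b in zip(x, x[1:])]

# Three constant pieces on 120 samples: values 2.0, -1.0, 0.5.
signal = [2.0] * 40 + [-1.0] * 40 + [0.5] * 40

nnz_signal = sum(1 for v in signal if v != 0.0)
nnz_diff = sum(1 for v in finite_differences(signal) if v != 0.0)

print(nnz_signal)  # 120: not sparse in the standard basis
print(nnz_diff)    # 2: only the two jumps survive differencing

# The (discrete) TV norm is the l^1 norm of these differences:
tv_norm = sum(abs(v) for v in finite_differences(signal))
print(tv_norm)     # |-1 - 2| + |0.5 - (-1)| = 4.5
```

So "l^1 versus TV" is really a question of which representation of the signal is sparse, which is consistent with the point above that the same incoherence phenomenon should apply to both.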

Best wishes, and thanks again,

Thanks, Ben!
