Dear Igor,

How are you? I would like to bring your attention to our new paper "Improving Noise Robustness in Subspace-based Joint Sparse Recovery", which was recently accepted in IEEE Trans. on Signal Processing. This is an extension of our previous joint sparse recovery work (CS-MUSIC: http://bisp.kaist.ac.kr/papers/KimLeeYe-CSMUSIC.pdf). One of the main contributions of this work is to show that the noise robustness of subspace-based greedy algorithms can be significantly improved by allowing sequential subspace estimation and support filtering, even when the number of snapshots is insufficient. Hope you can enjoy it!

Best,
-Jong
We mentioned CS-MUSIC and its implementation before; here is the new paper: Improving Noise Robustness in Subspace-based Joint Sparse Recovery by Jong Min Kim, Ok Kyun Lee, and Jong Chul Ye. The abstract reads:
In a multiple measurement vector problem (MMV), where multiple signals share a common sparse support and are sampled by a common sensing matrix, we can expect joint sparsity to enable a further reduction in the number of required measurements. While a diversity gain from joint sparsity had been demonstrated earlier in the case of a convex relaxation method using an l1/l2 mixed norm penalty, only recently was it shown that a similar diversity gain can be achieved by greedy algorithms if we combine greedy steps with a MUSIC-like subspace criterion. However, the main limitation of these hybrid algorithms is that they often require a large number of snapshots or a high signal-to-noise ratio (SNR) for accurate subspace as well as partial support estimation. One of the main contributions of this work is to show that the noise robustness of these algorithms can be significantly improved by allowing sequential subspace estimation and support filtering, even when the number of snapshots is insufficient. Numerical simulations show that a novel sequential compressive MUSIC (sequential CS-MUSIC) that combines the sequential subspace estimation and support filtering steps significantly outperforms the existing greedy algorithms and is quite comparable with computationally expensive state-of-the-art algorithms.
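To make the MMV setup and the MUSIC-like subspace criterion concrete, here is a minimal NumPy sketch. It is an illustration of the generic subspace idea behind these methods, not the authors' sequential CS-MUSIC algorithm: all dimensions, the noise level, and the variable names are assumptions chosen for the toy example. A row-sparse signal is measured by a common sensing matrix, the signal subspace is estimated from the SVD of the measurements, and atoms are scored by how much of each column of the sensing matrix lies in that subspace.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MMV problem: m measurements, n-dimensional signal, k active rows,
# and several snapshots sharing the same support (joint sparsity).
m, n, k, snapshots = 20, 60, 5, 10

A = rng.standard_normal((m, n)) / np.sqrt(m)   # common sensing matrix
support = rng.choice(n, size=k, replace=False)  # shared sparse support
X = np.zeros((n, snapshots))
X[support] = rng.standard_normal((k, snapshots))
Y = A @ X + 0.01 * rng.standard_normal((m, snapshots))  # noisy measurements

# Estimate the k-dimensional signal subspace from the measurement matrix.
U, _, _ = np.linalg.svd(Y, full_matrices=False)
Us = U[:, :k]

# MUSIC-like criterion: atoms on the true support lie (almost) entirely in
# the signal subspace, so their normalized projection norm is close to 1.
scores = np.linalg.norm(Us.T @ A, axis=0) / np.linalg.norm(A, axis=0)
est_support = np.argsort(scores)[-k:]

print(sorted(support.tolist()))
print(sorted(est_support.tolist()))
```

In this noiseless-enough, full-rank regime the top-k scores recover the support; the paper's point is precisely that when the number of snapshots is too small for rank(Y) to reach k, a one-shot subspace estimate like this degrades, motivating the sequential estimation and support-filtering steps.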
Thanks Jong!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.