Here is a new implementation:
A Fast Non-Gaussian Bayesian Matching Pursuit Method for Sparse Reconstruction by Mudassir Masood and Tareq Y. Al-Naffouri. The abstract reads:
A fast matching pursuit method using a Bayesian approach is introduced for sparse signal recovery. This method, referred to as nGpFBMP, performs Bayesian estimates of sparse signals even when the signal prior is non-Gaussian or unknown. It is agnostic on signal statistics and utilizes a priori statistics of additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data if not available. nGpFBMP utilizes a greedy approach and order-recursive updates of its metrics to find the most dominant sparse supports to determine the approximate minimum mean square error (MMSE) estimate of the sparse signal. Simulation results demonstrate the power and robustness of our proposed estimator.
An implementation is available here and will be featured on the compressive sensing big picture page.
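To give a rough feel for the kind of procedure the abstract describes (greedy support search, order-recursive/sparsity-aware metric, approximate MMSE combination of the most dominant supports), here is a hedged Python sketch. It is not the authors' nGpFBMP; the function name, metric, and sizes are invented for illustration only.

```python
import numpy as np

def greedy_bayes_mp_sketch(Phi, y, sigma2, p, max_support=None, n_dominant=5):
    """Illustrative sketch only: a generic greedy support search with a
    sparsity-penalized metric and a weighted combination of the most
    dominant supports. It is NOT the authors' nGpFBMP algorithm."""
    M, N = Phi.shape
    if max_support is None:
        max_support = int(np.ceil(N * p)) + 2      # overshoot the expected sparsity a little
    support, stages = [], []
    r = y.copy()
    for _ in range(max_support):
        scores = np.abs(Phi.T @ r)                 # greedy step: most correlated column
        scores[support] = -np.inf
        support.append(int(np.argmax(scores)))
        Phi_s = Phi[:, support]
        x_s, *_ = np.linalg.lstsq(Phi_s, y, rcond=None)   # conditional least-squares estimate
        r = y - Phi_s @ x_s
        k = len(support)
        # crude "dominance" metric: Gaussian residual log-likelihood plus a
        # Bernoulli(p) log-prior on the support size
        metric = (-r @ r / (2 * sigma2)
                  + k * np.log(p) + (N - k) * np.log(1 - p))
        stages.append((metric, list(support), x_s.copy()))
    # approximate-MMSE flavor: average the estimates of the few most dominant
    # supports, weighted by their (normalized) exponentiated metrics
    stages.sort(key=lambda t: t[0], reverse=True)
    top = stages[:n_dominant]
    w = np.exp(np.array([m for m, _, _ in top]) - top[0][0])
    w /= w.sum()
    x_hat = np.zeros(N)
    for wi, (_, s, xs) in zip(w, top):
        x_full = np.zeros(N)
        x_full[s] = xs
        x_hat += wi * x_full
    return x_hat
```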
I am skeptical that their method for determining the sparsity fraction p is sensible and robust. Note that they initialize p very close to the true value in their numerical experiments.
Hello anonymous,
While p_est is 0.003, in figures 3 and 4 they compute with a real p that is ten times that: 0.03. For those particular computations (at p = 0.03), do you mean to say that the initial guess is too close to the real p?
Cheers,
Igor.
It could be, I'm not sure.
My real problem is that the method seems to have a vicious circularity. The matching pursuit is run until the support size slightly exceeds N*p_est, and then a least-squares projection is used to obtain x_est. Then p_est is re-estimated using the number of nonzero components of x_est... which was derived under the assumption of a support that just so happens to be slightly bigger than N*p_est. If there is any noise in the signal, all of those components are going to be nonzero, and you're going to get back the same p_est you put in (except slightly higher, due to the "slightly exceeds").
So unless there is some special thresholding step, which really should be mentioned in the paper, I don't see how this is supposed to work.
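To make that circularity concern concrete, here is a toy Python reproduction of the loop described above. It uses a plain greedy-correlation/least-squares stand-in for the matching pursuit rather than the authors' code, and all sizes are made up.

```python
import numpy as np

# Toy reproduction of the circularity described above; not the authors' code.
rng = np.random.default_rng(0)
N, M, p_true, sigma = 1000, 300, 0.03, 0.01
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
idx = rng.choice(N, int(N * p_true), replace=False)
x[idx] = rng.standard_normal(idx.size)
y = Phi @ x + sigma * rng.standard_normal(M)

p_est = 0.003                                   # deliberately far from p_true
for it in range(5):
    k = int(np.ceil(N * p_est)) + 2             # support size "slightly exceeds" N * p_est
    support, r = [], y.copy()
    for _ in range(k):                          # greedy correlation picks as a stand-in
        c = np.abs(Phi.T @ r)
        c[support] = -np.inf
        support.append(int(np.argmax(c)))
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ x_s
    x_est = np.zeros(N)
    x_est[support] = x_s                        # least-squares projection on the final support
    # re-estimating p from the nonzeros of x_est: with noise, every selected
    # coefficient is nonzero, so this just returns (roughly) k / N again
    p_est = np.count_nonzero(x_est) / N
    print(it, k, p_est)                         # p_est creeps up by about 2/N per pass
```

Under these assumptions the new p_est is essentially k/N every time, which is the behavior the comment predicts.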
Dear Anonymous,
I guess your point is that, by overshooting the number of nonzero supports by a certain amount and using least squares to figure out the next likely candidates for the support, least squares is likely, in the noisy case, to fail to correctly identify the real support of the signal?
If that is your argument, then indeed I would agree with you. We know that least squares is not extraordinarily capable when the system is grossly underdetermined, but on the other hand, can we say the same thing when the system is "slightly" underdetermined (with noise)?
Since the code is available, could you give us a script where the solver fails hopelessly?
Cheers,
Igor.
If you run the experiment_recover script, replacing nGpFBMP with their nGpFBMP_refine, and display the p values at each iteration, you'll see that the p value always just slowly inches up until it stops for some reason. It doesn't matter how p is initialized; even if it's initialized to the true value, it always does this.
Why it stops I'm not sure; matching pursuit should always be able to find additional support to match, just among the random noise. But it definitely never gets closer to the true value.
The likelihood of hitting upon a solution that is sparser than the specified support in least squares is zero. So least squares cannot possibly give a sparse solution that would provide actual information towards adjusting p intelligently.
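A quick synthetic check of that claim (hypothetical sizes, plain NumPy, not the authors' script):

```python
import numpy as np

# With noise, the least-squares solution restricted to a support of size k has,
# with probability one, exactly k nonzero coefficients, so counting nonzeros
# just returns the support size you chose.
rng = np.random.default_rng(1)
M, N, k = 300, 1000, 40
Phi = rng.standard_normal((M, N))
y = Phi[:, :5] @ rng.standard_normal(5) + 0.01 * rng.standard_normal(M)  # 5 true atoms + noise
S = np.arange(k)                       # any candidate support of size k, spurious atoms included
x_ls, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
print(np.count_nonzero(x_ls))          # prints 40, so nnz(x_ls)/N is just k/N
```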
My guess is that getting the value of p correct is not too important to the quality of their result. Bayesian inference should be fairly insensitive to the hyperparameter values anyway.
OK, I found that in some cases p can go down.
It's because they record the nu value at each stage of the matching pursuit, then go back and pick the stage that achieved the best nu value (see the end of the nGpFBMP script). Unlike in normal matching pursuit, where sparsity is not penalized, an earlier stage can achieve the best nu value.
This is not mentioned in the paper at all, and is critical to the success of the algorithm. I'm still not convinced it's robust but it's at least plausible now. Interesting.
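For concreteness, here is a toy Python illustration of that selection step. The metric below is a generic sparsity-penalized, log-posterior-like quantity and is not claimed to be the exact nu computed in nGpFBMP.

```python
import numpy as np

def best_stage(residual_norms2, sigma2, p, N):
    """Toy illustration of the stage-selection idea described above, not the
    exact nu of the paper: a sparsity-penalized metric is evaluated for every
    matching-pursuit stage k, and the best-scoring stage is kept. Because of
    the k*log(p) penalty, an earlier (smaller) support can win even though
    the residual norm keeps shrinking."""
    ks = np.arange(1, len(residual_norms2) + 1)
    nu = (-np.asarray(residual_norms2) / (2 * sigma2)
          + ks * np.log(p) + (N - ks) * np.log(1 - p))
    return int(np.argmax(nu))

# Residuals shrink monotonically, but the gain after the third stage is tiny,
# so the sparsity penalty makes stage 3 (index 2) the winner.
print(best_stage([5.0, 1.0, 0.02, 0.018, 0.017], sigma2=0.01, p=0.01, N=1000))
```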
The fact that it is not part of the paper but crucial to the actual convergence seems to be a very interesting element.
This paper isn't particularly Bayesian. It's more like sparsity-penalized matching pursuit, which has been around for a while and has been extended to structured sparsity too.
Dear Anonymous,
Can you provide some context for saying "it is not particularly Bayesian"? I am sure I and others might get a deeper understanding of these issues from your explanation.
Cheers,
Igor.
I take that back. It's Bayesian, it's just approximate, which is fine. I just seem to remember papers by Baraniuk or somebody where additive sparsity penalties were used in conjunction with matching pursuit. Just a thought about connections with other things.
Anyway, my attitude about this paper is pretty positive if the authors include the bit we discussed, which should be quite easy.