This is the risk of an exercise like this: blogging and trying things at the same time. I think I screwed up. Thanks to a reader who mentioned he had gone through a similar exercise before, I went back and checked whether my implementation of the Subspace Pursuit algorithm and that of Wei Dai and Olgica Milenkovic gave the same results. They don't. The algorithm provided by Wei Dai quite often converges with very high accuracy, whereas my implementation works well only with a larger number of measurements. In other words, the two are not the same. The idea of boosting the Subspace Pursuit algorithm was born out of the hope that one could improve its result, and indeed in MMA12 the boosting works well for a "deficient" subspace pursuit algorithm (mine). When switching from that deficient algorithm to that of Wei Dai and Olgica Milenkovic, the boosting fails because the solution obtained by Subspace Pursuit is already optimal.
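For readers who want to compare against their own code, here is a minimal sketch of the Subspace Pursuit iteration as described by Dai and Milenkovic (expand the support by the K largest residual correlations, prune back to K, stop when the residual no longer shrinks). This is my illustrative reading of the algorithm, not the reference MATLAB code distributed by Wei Dai, and all names here are my own:

```python
import numpy as np

def subspace_pursuit(Phi, y, K, max_iter=20):
    """Sketch of Subspace Pursuit: recover a K-sparse x from y = Phi @ x."""
    m, n = Phi.shape
    # Initial support: indices of the K largest correlations with y
    T = np.argsort(np.abs(Phi.T @ y))[-K:]
    coef, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
    r = y - Phi[:, T] @ coef
    for _ in range(max_iter):
        # Expand: merge current support with K largest residual correlations
        cand = np.union1d(T, np.argsort(np.abs(Phi.T @ r))[-K:])
        coef_c, *_ = np.linalg.lstsq(Phi[:, cand], y, rcond=None)
        # Prune: keep the K entries of largest magnitude
        T_new = cand[np.argsort(np.abs(coef_c))[-K:]]
        coef_new, *_ = np.linalg.lstsq(Phi[:, T_new], y, rcond=None)
        r_new = y - Phi[:, T_new] @ coef_new
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            break  # residual stopped shrinking: keep the previous support
        T, coef, r = T_new, coef_new, r_new
    x_hat = np.zeros(n)
    x_hat[T] = coef
    return x_hat
```

On a well-conditioned Gaussian measurement matrix with enough measurements, this sketch typically recovers the sparse vector exactly, which is consistent with the reference implementation leaving little room for boosting.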
The take-away lesson from this is: re-weighted Lp can always help suboptimal guesses :-) and I am not quite sure the approach cannot be salvaged. I'll put a warning on the associated entries.
Please join/comment on the Google+ Community (1307), the CompressiveSensing subreddit (622), the LinkedIn Compressive Sensing group (3125) or the Advanced Matrix Factorization Group (990)
Reference pages include The Big Picture in Compressive Sensing (and learning Compressive Sensing), the Advanced Matrix Factorization Jungle Page and the Reproducible Research page
Other: Paris Machine Learning Meetup Archives (1608), Highly Technical Reference Pages - Aggregators.