In previous work, numerical experiments showed that lp minimization with 0 < p < 1 recovers sparse signals from fewer linear measurements than does l1 minimization. It was also shown that a weaker restricted isometry property is sufficient to guarantee perfect recovery in the lp case. In this work, we generalize this result to an lp variant of the restricted isometry property, and then determine how many random Gaussian measurements are sufficient for the condition to hold with high probability. The resulting sufficient condition is met by fewer measurements for smaller p.
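For reference, the nonconvex problem under discussion can be stated as follows (standard compressed-sensing notation, assumed here for illustration, with measurement matrix Phi and data b):

```latex
\min_{x \in \mathbb{R}^n} \; \|x\|_p^p = \sum_{i=1}^n |x_i|^p
\quad \text{subject to} \quad \Phi x = b, \qquad 0 < p < 1,
```

which reduces to the usual l1 basis-pursuit problem at p = 1 and counts nonzeros in the limit as p goes to 0.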
Besides the impressive result on the weaker RIP, one of the experimentally noticeable items featured in the article is that, with the reweighted Lp algorithm, perfect recovery below a certain number of measurements is not possible once p drops below 0.8. That value, p = 0.8, is also the coefficient used for efficient recovery of natural images in "Deconvolution using natural image priors" by Anat Levin, Rob Fergus, Fredo Durand, and Bill Freeman, for reconstruction from coded aperture measurements. Naaaaah... it's just a coincidence....
Hi,
Just wanted to point out that this reweighted Lp idea to handle Lp
regularizers, with p < 1, was previously proposed in our paper
M. Figueiredo, R. Nowak, "A Bound Optimization Approach to Wavelet-Based Image Deconvolution", IEEE ICIP 2005, available from my home page, www.lx.it.pt/~mtf/
It turns out that it can be derived from a bound optimization (or majorization-minimization) perspective, by using an L1 upper bound for the Lp regularizer.
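To make that majorization-minimization step concrete, here is a minimal sketch (my own illustrative code, not the paper's; the operator A, data b, and regularization weight lam are assumptions). The concave penalty |t|^p is bounded above by its tangent at the current iterate, which turns each outer step into a weighted-l1 problem, solved here by a plain ISTA inner loop:

```python
import numpy as np

def reweighted_l1(A, b, lam, p=0.7, outer=10, inner=100, eps=1e-8):
    """MM sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_p^p with p < 1.
    Each outer step majorizes |t|^p by its tangent at x_k, giving a
    weighted-l1 surrogate that ISTA solves approximately (illustrative only)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # least-squares start: weights blow up at x = 0
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the data-term gradient
    for _ in range(outer):
        # Tangent bound: |t|^p <= |x_k|^p + p*|x_k|^(p-1) * (|t| - |x_k|),
        # so the surrogate penalty is sum_i w_i*|t_i| with w_i = p*|x_i|^(p-1).
        w = p * (np.abs(x) + eps) ** (p - 1)
        for _ in range(inner):
            z = x - A.T @ (A @ x - b) / L     # gradient step on the quadratic term
            x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # weighted soft-threshold
    return x
```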
As also reported in that paper, we found p=0.7 to be a good value for image deconvolution; this is not far from the p=0.8 mentioned in this post.
Mario.
Thank you Mario.
Igor.
The idea of L^p regularization with p < 1 is not at all new, and the particular approach of iterative reweighted algorithms for L^p minimization with p < 1 is at least as old as the FOCUSS algorithm from 1997. New are the theoretical results, the ability to obtain exact reconstructions with many fewer measurements, and the (as yet unproven) claim to compute *global* minima with simple algorithms.
I don't see the significance of p=0.8; to me, smaller p is better, though the number of measurements stops decreasing appreciably for small p, say below p=0.5. The rate of convergence continues to improve, however.
Rick,
I agree with you that this idea of using smaller p is not new.
Irrespective of the rate of convergence of the reconstruction scheme (which is important from your standpoint), the fact that the number of measurements stops decreasing for small p seems to point to some universal minimum amount of information needed. And your algorithm/tool provides a nice way to probe this for different types of statistics (natural image statistics, others...)
This is important because it may help in understanding how some minimal feedforward models (no reconstructions) of the visual cortex work, and how much connectivity is needed to provide full or near-full information to the brain (http://nuit-blanche.blogspot.com/2007/04/compressed-sensing-in-primary-visual.html).
In autism, for instance, there are clues pointing to a lack of density of wiring, and autistic patients are known to have issues with the visual understanding of scenes.
I realize this is totally outside the mathematics you are involved with, but I believe the fact you highlighted is worth mentioning outside the specifics of your paper.
Igor.
I agree that, in general, there is nothing special about this range of p values (0.7-0.8); it just happens to yield good results in wavelet-based image restoration problems (which are of course different from compressed sensing).
Concerning reweighted l2 (with the additional epsilon), as done by Chartrand and Yin in the paper "Iteratively reweighted algorithms for compressive sensing", it is indeed novel, and the results are very interesting. What I wanted to say (maybe I wasn't clear) was that you can also handle lp regularizers, with p < 1, using reweighted l1, as in the paper by Candès, Wakin, and Boyd, "Enhancing sparsity by reweighted l1 minimization", and before them in our 2005 paper, M. Figueiredo, R. Nowak, "A Bound Optimization Approach to Wavelet-Based Image Deconvolution".
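For completeness, here is a minimal sketch of the reweighted-l2 idea discussed above (my own illustrative code, not Chartrand and Yin's; in particular, their update rule for the smoothing parameter epsilon is more careful than the simple geometric shrinkage used here). Each iteration solves a weighted least-norm problem in closed form:

```python
import numpy as np

def irls_lp(A, b, p=0.5, iters=50, eps=1.0):
    """IRLS sketch for min ||x||_p^p subject to Ax = b, with p < 1.
    Weights w_i = (x_i^2 + eps)^(p/2 - 1) smooth the penalty so that
    sum_i w_i * x_i^2 approximates ||x||_p^p for small eps."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # minimum-l2 starting point
    for _ in range(iters):
        q = (x**2 + eps) ** (1.0 - p / 2.0)        # q_i = 1/w_i
        # Weighted least-norm solution: x = Q A^T (A Q A^T)^{-1} b, with Q = diag(q)
        AQ = A * q                                  # same as A @ np.diag(q)
        x = q * (A.T @ np.linalg.solve(AQ @ A.T, b))
        eps = max(eps / 10.0, 1e-12)                # gradually reduce the smoothing
    return x
```

As eps shrinks, the weighted quadratic hugs the lp penalty more tightly, which is loosely consistent with Rick's remark above that smaller p keeps improving convergence even after the measurement count bottoms out.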