We have some communications from two people today. First, Jared Tanner adds some additional insight on a question about the RIP. Then, Ray Maleh wanted me to write a small piece on what is known with regard to the RIP and the recoverability of greedy algorithms, but since I am not a specialist, I yielded the floor to him on the subject. Please note that the underlying reason for making a result known on this blog stems from the inability of current journals to publish fast enough in a field that is growing as fast as compressive sensing. I guess the blog provides an avenue for making some results known more quickly to this community.
Thanks Jared and Ray, I always look forward to contributions that provide some context on our current state of knowledge. With no further babbling, first here is Jared's contribution:
Dear Igor,
I read today a question/comment about the RIP and its lack of scale invariance. It was remarked that the important quantity is the ratio, which is scale invariant; this was the interpretation drawn from Foucart and Lai's work. However, this is not the case. For most algorithms the best RIP-based proof involves a condition that depends on some function of the upper and lower RIP constants, often of different degrees. For details on this see:
http://ecos.maths.ed.ac.uk/papers/BCTT_Greedy.pdf
and
http://ecos.maths.ed.ac.uk/papers/BlTh_SSRIC.pdf
In this second letter, you will read on pages 5 and 6 that, to the best of our knowledge, the best known RIP condition for $\ell_1$ is $$11\,\delta^{\mathrm{lower}}_{12k} + \delta^{\mathrm{upper}}_{11k} < 10,$$ and not the more recent result of Foucart and Lai.
There is a lot of confusion about the RIP. It would be advisable for people to look at the bounds we provided in: http://ecos.maths.ed.ac.uk/papers/RIP_BT.pdf
so that they have a better idea of how the RIP constants behave, at least for Gaussian matrices.
All the best,
Jared
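As a rough illustration of Jared's last point, here is a minimal Monte Carlo sketch (my own addition in Python/NumPy, not from the papers above) that samples random $k$-column submatrices of a Gaussian $\Phi$ and records the extreme eigenvalues of their Gram matrices. The sampled extremes only give empirical lower bounds on the true asymmetric lower/upper RIP constants, which would require checking every support of size $k$.

```python
import numpy as np

def sample_rip_constants(M, N, k, trials=2000, seed=0):
    """Monte Carlo lower bounds on the asymmetric (lower/upper) RIP constants
    of an M x N Gaussian matrix; the exact constants need all k-subsets."""
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # unit expected column norm
    lower, upper = 0.0, 0.0
    for _ in range(trials):
        S = rng.choice(N, size=k, replace=False)
        eigs = np.linalg.eigvalsh(Phi[:, S].T @ Phi[:, S])  # spectrum of k x k Gram matrix
        lower = max(lower, 1.0 - eigs[0])    # worst lower RIP seen: 1 - lambda_min
        upper = max(upper, eigs[-1] - 1.0)   # worst upper RIP seen: lambda_max - 1
    return lower, upper

if __name__ == "__main__":
    low, up = sample_rip_constants(M=200, N=1000, k=10)
    print(f"sampled lower RIP constant ~ {low:.3f}, upper ~ {up:.3f}")
```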
and then Ray's contribution:
Orthogonal Matching Pursuit (OMP) has long been considered a versatile and robust heuristic for solving compressive sensing problems. Until recently, the only known theoretical results governing OMP's performance depended on the notion of dictionary coherence (see Tropp '04 and Gilbert et al. '03). It was not until last year that it was first shown that OMP can recover a $K$-sparse vector $x$ from measurements $y = \Phi x$ if the measurement matrix $\Phi$ satisfies a restricted isometry property (RIP).
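For readers who want to see the algorithm itself, here is a minimal sketch of the OMP iteration (my own illustrative Python/NumPy code, not taken from any of the papers mentioned here): at each step, select the column of $\Phi$ most correlated with the current residual, then re-fit by least squares on the selected support.

```python
import numpy as np

def omp(Phi, y, K):
    """Minimal OMP: greedily recover a K-sparse x from y = Phi @ x."""
    M, N = Phi.shape
    residual = y.astype(float).copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(K):
        # greedy selection: column most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # orthogonal projection: least squares on the chosen columns
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x_hat = np.zeros(N)
    x_hat[support] = coeffs
    return x_hat

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M, N, K = 64, 256, 5
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)
    x = np.zeros(N)
    x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    print("max recovery error:", np.max(np.abs(omp(Phi, Phi @ x, K) - x)))
```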
In August 2009, Mark Davenport and Michael Wakin released a pre-print of the paper "Analysis of Orthogonal Matching Pursuit using the Restricted Isometry Property," in which they show that if the measurement matrix $\Phi$ has a restricted isometry constant $\delta_K$ satisfying $$\delta_K < \frac{1}{3\sqrt{K}},$$ then OMP will recover any $K$-sparse signal from its measurements. They further conjecture the following \emph{unachievable} lower bound:
$$\delta_K < \frac{1}{\sqrt{K}}.$$
In January 2010, Entao Liu and V. N. Temlyakov released the preprint "Orthogonal Super Greedy Algorithm and Applications in Compressed Sensing," where they improve this result by showing that OMP can recover any $K$-sparse signal using a measurement matrix satisfying $$\delta_{K+1} < \frac{1}{(1+\sqrt{2})\sqrt{K}}.$$ They mention that achieving a smaller restricted isometry bound is an interesting open problem.
In August 2009, while a Ph.D. student at the University of Michigan, Ray Maleh defended his dissertation "Efficient Sparse Approximation Methods for Medical Imaging." In Chapter 2 of his work, he shows that OMP can recover any K-sparse signal provided that
$$\delta_{K+1} < \frac{1}{1+\sqrt{K}}.$$
This is an improvement over the results of Davenport et al. and Liu et al. Furthermore, his result very closely approaches the unachievable lower bound $\delta_K < 1/\sqrt{K}$. In addition, Maleh proves RIP-based performance error bounds that characterize OMP's ability to recover non-sparse vectors, possibly in the presence of measurement noise. While still forthcoming as a journal paper, his dissertation can be found at:
The author can be reached at rmaleh@gmail.com.
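To put these sufficient conditions side by side, here is a small numerical comparison (my own addition, plain Python/NumPy). A larger threshold is a weaker, hence better, requirement on $\Phi$, and all three sit below the conjectured unachievable bound $1/\sqrt{K}$.

```python
import numpy as np

# Side-by-side values of the sufficient RIP thresholds quoted above
# (a larger threshold is a weaker, hence better, condition).
for K in (4, 16, 64, 256):
    dw = 1.0 / (3.0 * np.sqrt(K))                    # Davenport & Wakin
    lt = 1.0 / ((1.0 + np.sqrt(2.0)) * np.sqrt(K))   # Liu & Temlyakov
    maleh = 1.0 / (1.0 + np.sqrt(K))                 # Maleh
    bound = 1.0 / np.sqrt(K)                         # conjectured unachievable bound
    print(f"K={K:4d}  DW={dw:.4f}  LT={lt:.4f}  Maleh={maleh:.4f}  bound={bound:.4f}")
```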
Credit: JAXA. The last view of Hayabusa before it crashed. Via The Planetary Society blog.
Thank you to Jared Tanner for these important references.
I think it is useful to point out that, even if Foucart and Lai's results do not provide the best achievable bounds compared to the more recent ones obtained by Blanchard and Thompson, they are not wrong (for BP), just weaker.
Moreover, the inequality $$b\,\delta^{\mathrm{lower}}_{(b+1)k} + \delta^{\mathrm{upper}}_{bk} \le b - 1$$ (e.g., with $b = 11$) is of course still invariant under scaling of the matrix (i.e., $A \to cA$), since it is equivalent to $$\frac{1 + \delta^{\mathrm{upper}}_{bk}}{1 - \delta^{\mathrm{lower}}_{(b+1)k}} \le b.$$
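To spell out the rearrangement behind this equivalence (using the same notation as above):
$$b\,\delta^{\mathrm{lower}}_{(b+1)k} + \delta^{\mathrm{upper}}_{bk} \le b - 1 \;\Longleftrightarrow\; 1 + \delta^{\mathrm{upper}}_{bk} \le b\left(1 - \delta^{\mathrm{lower}}_{(b+1)k}\right) \;\Longleftrightarrow\; \frac{1 + \delta^{\mathrm{upper}}_{bk}}{1 - \delta^{\mathrm{lower}}_{(b+1)k}} \le b,$$
and the last ratio does not change under $A \to cA$, because the restricted upper bound $1 + \delta^{\mathrm{upper}}$ and lower bound $1 - \delta^{\mathrm{lower}}$ both pick up the same factor $c^2$.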
It is true nevertheless that the insightful relations provided in the other referenced Blanchard et al. paper for different reconstruction methods (IHT, ROMP, ...) are not scale invariant, except perhaps for CoSaMP.
Best,
Laurent
When talking about the RIP, it is very important to see the relationship between the dimensions of the matrix (say $M \times N$) and the sparsity of the signal (say $K$). If the RIP constant satisfies $\delta = O(1/\sqrt{K})$, then the number of measurements scales as $M \sim K^2\,\mathrm{poly}(\log N)$, and in this range there is little difference (if any) between coherence-based and RIP-based results.
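To spell out this scaling: one standard estimate for Gaussian matrices is that $\delta_K \le \delta$ holds with high probability once $M \gtrsim C\,\delta^{-2} K \log(N/K)$, so requiring $\delta = c/\sqrt{K}$ gives $$M \gtrsim \frac{C}{c^{2}}\, K^{2} \log(N/K),$$ i.e., a number of measurements quadratic in the sparsity, as stated above.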