Tuesday, June 15, 2010

CS: RIP redux, Around the blog in 80 hours, Subspace Evolution and Transfer (SET) for Low-Rank Matrix Completion

Yesterday's presentation drew two comments. One from Laurent Jacques:

Thank you to Jared Tanner for these important references.

I think it is useful to point out that, even if Foucart and Lai's results do not provide the best achievable bounds compared to the recent ones obtained by Blanchard and Thompson, they are not wrong (for BP), just weaker.

Moreover, the inequality

b L_{(b+1)k} + U_{bk} < b - 1

(e.g. with b = 11), where L_k and U_k denote the lower and upper asymmetric RIP constants of order k, is of course still invariant under scaling of the matrix (i.e., A -> cA), since it is equivalent to

(1 + U_{bk}) / (1 - L_{(b+1)k}) < b

It is true nevertheless that the insightful relations provided in the other referenced Blanchard et al. paper for different reconstruction methods (IHT, ROMP, ...) are not scale invariant, except perhaps for CoSaMP.

Best,
Laurent
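
For the record, the equivalence Laurent points to is one line of algebra. Writing L_k and U_k for the lower and upper asymmetric RIP constants, with the usual normalization (1 - L_k) ||x||^2 <= ||A x||^2 <= (1 + U_k) ||x||^2 for all k-sparse x (an assumption on notation on my part, not something spelled out in the comment):

\begin{align*}
b\,L_{(b+1)k} + U_{bk} < b - 1
  &\iff 1 + U_{bk} < b\left(1 - L_{(b+1)k}\right)\\
  &\iff \frac{1 + U_{bk}}{1 - L_{(b+1)k}} < b .
\end{align*}

The ratio form makes the scale invariance plain: replacing A by cA multiplies both 1 + U_k and 1 - L_k by c^2, so the ratio, and hence the condition, is unchanged.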


and the other from Salman:

When talking about RIP, it is very important to see the relationship between the dimensions of the matrix (say M x N) and the sparsity of the signal (say K). If the RIP constant \delta = O(1/sqrt(K)), then the number of measurements scales as M ~ K^2 poly(log N). And in this range there is little difference (if any) between coherence-based and RIP-based results.
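
To put a rough number on that scaling, here is a back-of-the-envelope sketch (the constant C below is arbitrary, and the familiar Gaussian-matrix estimate M >= C K log(N/K) / \delta^2 is used as a stand-in, not a sharp bound):

import math

# Plug the requirement delta_K ~ 1/sqrt(K) into the rough Gaussian RIP
# estimate M >= C * K * log(N/K) / delta^2. C is an arbitrary constant;
# the point is only the K^2 * log(N/K) growth of the measurement count.
C = 10.0
N = 10**6
for K in (10, 50, 100):
    delta = 1.0 / math.sqrt(K)
    M = C * K * math.log(N / K) / delta**2
    print(f"K={K:4d}  target delta={delta:.3f}  implied M ~ {M:12,.0f}")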


I also had to apologize to Ray for butchering the end of his message. You see, Blogger, the service that runs this blog, has a real problem with the < and > signs.

Other blog entries relevant to some of the subjects covered here include:

Djalil Chafai
Terry Tao
Gonzalo Vazquez Vilar
John Langford

Jason Moore has a take on the failure of GWAS; Seth has another.

Finally, there is a preprint on arXiv, Subspace Evolution and Transfer (SET) for Low-Rank Matrix Completion:

We describe a new algorithm, termed subspace evolution and transfer (SET), for solving low-rank matrix completion problems. The algorithm takes as its input a subset of entries of a low-rank matrix, and outputs one low-rank matrix consistent with the given observations. The completion task is accomplished by searching for a column space on the Grassmann manifold that matches the incomplete observations. The SET algorithm consists of two parts -- subspace evolution and subspace transfer. In the evolution part, we use a gradient descent method on the Grassmann manifold to refine our estimate of the column space. Since the gradient descent algorithm is not guaranteed to converge, due to the existence of barriers along the search path, we design a new mechanism for detecting barriers and transferring the estimated column space across the barriers. This mechanism constitutes the core of the transfer step of the algorithm. The SET algorithm exhibits excellent empirical performance for both high and low sampling rate regimes.
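
The evolution step described above is gradient descent of the data misfit over column spaces, i.e. over points on the Grassmann manifold. As an illustration of that ingredient only, here is a minimal sketch (toy code, not the authors' SET implementation: it uses a QR retraction and a simple backtracking line search, and it has no barrier detection or subspace transfer, so it can stall in exactly the situations the transfer step is meant to handle):

import numpy as np

def fit_coeffs(U, M_obs, mask):
    # For a fixed column space U (m x r, orthonormal), solve a small
    # least-squares problem per column, restricted to the observed entries.
    r, n = U.shape[1], M_obs.shape[1]
    W = np.zeros((r, n))
    for j in range(n):
        rows = mask[:, j]
        W[:, j], *_ = np.linalg.lstsq(U[rows], M_obs[rows, j], rcond=None)
    return W

def objective(U, M_obs, mask):
    W = fit_coeffs(U, M_obs, mask)
    R = mask * (U @ W - M_obs)          # residual on the observed entries only
    return 0.5 * np.sum(R**2), W, R

def grassmann_descent(M_obs, mask, r, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    m = M_obs.shape[0]
    U, _ = np.linalg.qr(rng.standard_normal((m, r)))   # random initial column space
    for _ in range(iters):
        f, W, R = objective(U, M_obs, mask)
        # Riemannian gradient: the Euclidean gradient R W^T projected onto
        # the tangent space of the Grassmann manifold at U.
        G = R @ W.T
        G -= U @ (U.T @ G)
        g2 = np.sum(G**2)
        if g2 < 1e-20:
            break
        # Backtracking line search along -G, with a QR retraction back onto
        # the manifold after each trial step.
        t = 1.0
        while t > 1e-12:
            U_try, _ = np.linalg.qr(U - t * G)
            f_try, _, _ = objective(U_try, M_obs, mask)
            if f_try <= f - 1e-4 * t * g2:
                break
            t *= 0.5
        U = U_try
    return U @ fit_coeffs(U, M_obs, mask)

# Toy usage: a 60 x 50 rank-2 matrix with roughly half of its entries observed.
rng = np.random.default_rng(1)
m, n, r = 60, 50, 2
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.5
M_hat = grassmann_descent(M * mask, mask, r)
print("relative error:", np.linalg.norm(M_hat - M) / np.linalg.norm(M))

If the descent does not get trapped, the printed relative error should be small; when it does get trapped, that is precisely the "barrier" scenario motivating the transfer mechanism of SET.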

