Monday, March 23, 2009

CS: Matrix Completion and Compressive Sensing, SVT, IHT, GI, Astronomical Data Analysis and Sparsity, ML conf.

Seen in the comments section of the entry on matrix completion on Terry Tao's blog (which we covered before):

Greg Kuperberg remarked:
The papers describe this matrix completion problem and algorithm as analogous to compressed sensing. But could compressed sensing be interpreted as a special case of matrix completion, namely for circulant matrices? (Or maybe generalized circulant, for some other abelian or even non-abelian group.) ... The point is that first, a circulant matrix is an expanded form of its first row, which is an arbitrary vector; and second, that you obtain the singular values of the circulant matrix by taking the Fourier transform of that vector. I don’t know enough details of compressed sensing to say this for sure, but this construction leads me to ask whether matrix completion is not just analogous to compressed sensing, but rather a generalization.
To which Terry Tao replied:
...Hmm, that’s a good observation. The compressed sensing problem of recovering a function with few Fourier coefficients from a sparse sample is indeed formally equivalent to recovering a circulant matrix with low rank from a sparse sample, and the recovery algorithm used in the former (L^1 minimisation of the Fourier transform) is a special case of the recovery algorithm used in the latter (minimisation of nuclear norm).
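This circulant observation is easy to check numerically: a circulant matrix is normal and diagonalized by the discrete Fourier transform, so its singular values are the magnitudes of the Fourier coefficients of its defining vector, and its nuclear norm is the ell^1 norm of that Fourier transform. A quick sketch in Python/NumPy (my own illustration, not from either blog):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(8)

# Build the circulant matrix whose rows are the cyclic shifts of v.
C = np.array([np.roll(v, i) for i in range(len(v))])

sv = np.linalg.svd(C, compute_uv=False)   # singular values of the circulant matrix
fv = np.abs(np.fft.fft(v))                # magnitudes of the Fourier coefficients of v

print(np.allclose(np.sort(sv), np.sort(fv)))   # True: same multiset of values
print(np.isclose(sv.sum(), fv.sum()))          # True: nuclear norm = ell^1 norm of the FFT
```

So recovering a signal with a sparse spectrum from a few samples is, as Tao notes, the same problem as completing a low-rank circulant matrix from a sparse set of its entries.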
Then Greg Kuperberg follows up with:
There are many special cases of the matrix completion problem that could be called compressed sensing. As I said more briefly in the parentheses, suppose that the rows and columns of a matrix are indexed by elements of a group, and suppose that the entry M_{ab} only depends on a^{-1}b. Then this is a generalized kind of circulant matrix, particularly if the group is non-commutative, and the recovery algorithm is another kind of compressed sensing.

For this reason, matrix completion could just as well be called non-commutative compressed sensing. This is surely a useful viewpoint. Also this phrase “nuclear norm” makes this slightly less clear, since that norm is of course just the spectral ell^1 norm.

The next question of course is what theorems about compressed sensing are subsumed by the matrix completion version. If a result has been subsumed, that’s interesting for one reason; and if it hasn’t been, it’s also interesting because you can ask why not.


Taking things in a different direction, when I just skim the moment estimates in the matrix completion paper, it reminds me of the moment version of the non-commutative central limit theorem. (I don’t mean the free probability version, although that one is also a moment story, but rather the standard quantum probability version.) This is known as the Giri-von-Waldenfels theorem. I think that it’s an interesting problem to work out non-moment versions of this result, i.e., versions that more resemble convergence in distribution. A key problem is to define what convergence in distribution should mean in the quantum case. (Again, in the case of free probability it’s easier, because the limiting distribution is bounded and thus determined by its moments.) In my arXiv paper “A tracial quantum central limit theorem”, I obtained a partial result in this direction.

Interesting discussion! Since we are on the subject of Matrix Completion, the Singular Value Thresholding website is at:

The introduction reads:

Singular Value Thresholding (SVT) is an algorithm to minimize the nuclear norm of a matrix, subject to certain types of constraints. It has been successfully used in many matrix-completion problems (for more on the matrix completion problem, see Exact matrix completion via convex optimization by E.J. Candes and B. Recht). The SVT algorithm is described in the paper A singular value thresholding algorithm for matrix completion by J-F. Cai, E.J. Candes and Z. Shen.

The version of SVT available at this website is restricted to linear equality constraints of the form A(X) = b.

However, the user can easily modify it to handle inequality constraints. SVT can also work with more general constraints (e.g. any Lipschitz-continuous convex function), but for simplicity this is not implemented in the code below.

The SVT algorithm solves the following problem:

(P3)   minimize   tau ||X||_* + (1/2) ||X||_F^2   subject to   A(X) = b,

where the first norm is the nuclear norm (the sum of the singular values) and the second norm is the Frobenius norm. Problem (P3), for large values of tau, is closely related to problem (P2):

(P2)   minimize   ||X||_*   subject to   A(X) = b.

Problem (P2) is often used as a convex proxy for problem (P1), since (P1) is not convex:

(P1)   minimize   rank(X)   subject to   A(X) = b.
The code is available from the site.
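For readers who just want the flavor of the algorithm, here is a minimal sketch of the SVT iteration for matrix completion (a toy version of my own, not the authors' code): each step shrinks the singular values of a dual variable by tau, then takes a gradient step on the observed entries. The parameter choices tau = 5n and delta = 1.2/p (with p the fraction of observed entries) follow the heuristics suggested in the Cai-Candes-Shen paper.

```python
import numpy as np

def svt_complete(M_obs, mask, tau, delta, iters=500):
    """Toy SVT for matrix completion: each iteration soft-thresholds the
    singular values of Y at level tau, then updates Y on the observed entries."""
    Y = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt     # singular value shrinkage
        Y += delta * mask * (M_obs - X)             # gradient step on observed entries
    return X

# Small rank-2 test matrix with 80% of its entries observed.
rng = np.random.default_rng(1)
n = 20
M = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))
mask = rng.random((n, n)) < 0.8
X = svt_complete(mask * M, mask, tau=5 * n, delta=1.2 / 0.8, iters=500)
err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

On this small, heavily sampled rank-2 example the iteration recovers M to within a few percent; as the entry below on solver parameters suggests, poor choices of tau and delta can stall or destabilize it.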

While we are on the subject of mathematics and compressed sensing, I nearly missed this most recent presentation/paper by Yves Meyer (with Basarab Matei as co-author) at the College de France, entitled Simple Quasicrystals are Sets of Stable Sampling, a continuation of his very interesting A variant on the compressed sensing of Emmanuel Candes (I talked about it here, here and here in reference to MRI). I note that there does not seem to be a positivity restriction on the function.

In a different area, here is an issue of general interest: what parameters are needed to make sure your reconstruction solver converges to the right solution? There does not appear to be an obvious choice for the Bregman solvers, as pointed out by Michael Friedlander when he noted an issue with the parameter choice in this entry. More recently, an unknown blogger seems to have had similar issues with a Bregman algorithm as well. So I am not overly surprised to see two papers at SPARS'09 (lists of papers are here and here) dealing with a similar issue for the IHT algorithm.
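To see why parameter choice matters for IHT, here is a toy version of the iteration (my own sketch, not from the SPARS'09 papers): x &lt;- H_s(x + mu A^T(y - A x)), where H_s keeps the s largest-magnitude entries. The step size mu is the sensitive parameter; tying it to the operator norm of A via mu = 1/||A||_2^2 is a safe (if conservative) choice, and the normalized IHT of Blumensath and Davies picks it adaptively.

```python
import numpy as np

def iht(A, y, s, mu, iters=300):
    """Iterative hard thresholding: gradient step on ||y - A x||^2,
    then keep only the s largest-magnitude entries."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + mu * A.T @ (y - A @ x)
        keep = np.argsort(np.abs(x))[-s:]   # indices of the s largest entries
        x_new = np.zeros_like(x)
        x_new[keep] = x[keep]
        x = x_new
    return x

rng = np.random.default_rng(2)
m, n, s = 128, 256, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true

mu = 1.0 / np.linalg.norm(A, 2) ** 2           # step size tied to the spectral norm of A
x_hat = iht(A, y, s, mu, iters=300)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

With this step size the loop recovers x_true on this well-conditioned noiseless example; set mu much larger (say mu = 1 here, where ||A||_2^2 is well above 1) and the same loop typically diverges.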

Following up on a previous post, I went back to the Technology Review Arxiv blog and found a comment by KFC (the author of the blog), who cut and pasted an e-mail from Ori Katz, one of the investigators at Weizmann performing the ghost imaging study. I have seen those reconstructions with GPSR and they are indeed very nice. As I said in the previous post, I will feature the preprint when it comes out.

Found on Arxiv:

From some of the folks involved in one of the first uses of compressive sensing in a space mission coding scheme (Herschel), here is Astronomical Data Analysis and Sparsity: from Wavelets to Compressed Sensing by Jean-Luc Starck and Jerome Bobin. The abstract reads:

Wavelets have been used extensively for several years now in astronomy for many purposes, ranging from data filtering and deconvolution, to star and galaxy detection or cosmic ray removal. More recent sparse representations such as ridgelets or curvelets have also been proposed for the detection of anisotropic features such as cosmic strings in the cosmic microwave background. We review in this paper a range of methods based on sparsity that have been proposed for astronomical data analysis. We also discuss the impact of Compressed Sensing, the new sampling theory, in astronomy for collecting the data, transferring them to the earth, or reconstructing an image from incomplete measurements.

Finally, there is a Call for participation for a Workshop on Sparse Methods for Music Audio that will be held in conjunction with the 26th International Conference on Machine Learning in Montreal, Quebec, June 14 - 18, 2009.

Credit photo: NASA. Photograph of Paris taken by an astronaut on board the International Space Station on January 7th, 2008. See also astronaut photography of Earth.
