
Friday, March 27, 2009

CS: Matrix completion, Compressive Sensing Tutorial Videos, Non-convex algorithms

The image on the side shows Uranus as seen from the Hubble telescope after stars were used in a calibration exercise. Each star image provides a Point Spread Function for the camera. Would it be worthwhile to perform the same imaging calibration for the Cassini wide-angle camera, now that we know it has dust on one of its filters? If one could aim at the right star, the camera could provide some superresolution.
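For readers wondering how such a calibration helps: an isolated star is effectively a point source, so its image is the PSF itself, and other frames can then be deconvolved against it. Here is a minimal Python sketch of that idea with synthetic data; the names and numbers are placeholders of mine, not actual Hubble or Cassini pipeline code.

    import numpy as np
    from scipy.signal import fftconvolve

    def estimate_psf(star_cutout):
        # A point source imaged through the optics is the PSF itself:
        # subtract the sky background and normalize to unit sum.
        psf = np.clip(star_cutout - np.median(star_cutout), 0, None)
        return psf / psf.sum()

    def richardson_lucy(image, psf, n_iter=30):
        # Classic Richardson-Lucy deconvolution with multiplicative updates.
        estimate = np.full(image.shape, image.mean())
        psf_mirror = psf[::-1, ::-1]
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / np.maximum(blurred, 1e-12)
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate

    # Synthetic stand-ins for a star exposure and a blurred scene.
    rng = np.random.default_rng(0)
    g = np.exp(-np.linspace(-2, 2, 15) ** 2)
    true_psf = np.outer(g, g) / np.outer(g, g).sum()
    star_cutout = true_psf + 1e-4 * rng.random((15, 15))
    scene = (rng.random((64, 64)) < 0.01).astype(float)
    blurred = fftconvolve(scene, true_psf, mode="same")
    sharpened = richardson_lucy(blurred, estimate_psf(star_cutout))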


Continuing an earlier discussion on matrix completion, Terence Tao wrote to Greg Kuperberg:

Dear Greg,

The noncommutative circulant matrices are interesting, although they begin to fall outside the scope of matrix completion, as it becomes quite hard for such matrices to be low rank (the noncommutative Fourier transform, in addition to being sparse, has to avoid all the high-dimensional irreducible representations). The proofs of the matrix completion results mimic the compressed sensing results in many places (and now I have a better understanding as to why, thanks to your observation), but they are more complicated and give worse bounds. For instance, applying the result of this paper directly to the problem of recovering a function with only r Fourier coefficients from as few samples as possible, we need nr log^6 n or so random samples, whereas our original result in that case gave r log n. (The factor of n discrepancy is due to the fact that in the circulant case, measuring one matrix entry automatically gives you another n-1 matrix entries for free, so it should be ignored.)
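To see concretely what is meant by getting n-1 entries for free in the circulant case: every entry of a circulant matrix is a copy of one of the n symbols of its first column, so sampling one entry reveals an entire wrap-around diagonal. A quick numerical check using scipy's circulant (my own sketch, not part of the quoted exchange):

    import numpy as np
    from scipy.linalg import circulant

    # A circulant matrix satisfies M[i, j] = c[(i - j) % n], so observing a
    # single entry M[i, j] reveals the whole wrap-around diagonal i - j = k:
    # the n-1 extra entries that come "for free".
    n = 8
    c = np.arange(n)
    M = circulant(c)
    i, j = 3, 5
    k = (i - j) % n
    rows = np.arange(n)
    cols = (rows - k) % n
    assert np.all(M[rows, cols] == M[i, j])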

At an early stage in the project with Emmanuel we tried using various non-commutative tools, notably the non-commutative Khintchine inequality, but the high moment expressions we ended up facing were just too entangled together in a non-commutative way for such tools to simplify them. It seems that the moment method is actually one of the more robust methods for doing things like controlling operator norms of products of random matrices, though the combinatorial tasks needed to execute the method can be quite sizeable.
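In plainer terms, the moment method controls the operator norm through trace moments: tr((A A^T)^k) = sum_i s_i(A)^(2k), so the (2k)-th root of the trace moment upper-bounds the operator norm and converges to it as k grows. A small numerical illustration of that identity (an illustration only, not the proof technique itself):

    import numpy as np

    # tr((A A^T)^k) = sum_i s_i(A)^(2k), so its (2k)-th root upper-bounds
    # and converges to the operator norm ||A|| as k grows.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 200)) / np.sqrt(200)
    AAT = A @ A.T
    op_norm = np.linalg.norm(A, 2)
    for k in (1, 2, 4, 8):
        moment = np.trace(np.linalg.matrix_power(AAT, k))
        print(k, moment ** (1 / (2 * k)), "vs", op_norm)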

Greg Kuperberg replied:

Yeah, I sort of figured that the log factors would be worse. Of course, it raises the question of what's better about circulant matrices: the actual convergence/completion, or merely the methods for obtaining bounds.

As for non-commutative circulant matrices, it is true that there is a relevant negative result concerning the structure of finite groups. Namely, if all irreps have dimension at most k, then the group G has an abelian subgroup of index at most f(k). However, I think that f may grow quickly enough that there is a middle ground, where the rank is not all that much larger than sparseness and yet G is not all that close to abelian.

Then too, when you have a non-commutative circulant matrix M, or indeed in any circumstance where M a priori commutes with the action of some group G, the spectrum of M is colored by the irreps of G. In this case you have many natural variations of the nuclear norm, obtained by assigning different weights to the irreps. Should each irrep be weighted by its dimension, as you would obtain by direct restriction of the ordinary nuclear norm? Or should it be weighted by 1, or by something in between? Well, maybe it's as simple as this: the sparseness promise is described with a weighting, and the convex relaxation should simply be chosen with the same weighting.
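To make the weighting idea concrete in the simplest case: for the cyclic group Z_n all irreps are 1-dimensional characters, the DFT diagonalizes any circulant M, and the nuclear norm becomes a sum of |c_hat[k]| over the irreps; the variations Greg describes amount to putting weights on that sum. A small sketch (mine, with uniform weights recovering the ordinary nuclear norm; the function name is made up):

    import numpy as np
    from scipy.linalg import circulant

    def weighted_nuclear_norm_circulant(c, weights=None):
        # Spectrum of a circulant M, indexed by the characters (irreps) of Z_n.
        c_hat = np.fft.fft(c)
        w = np.ones(len(c)) if weights is None else np.asarray(weights)
        return np.sum(w * np.abs(c_hat))

    c = np.array([3.0, 1.0, 0.0, 1.0])
    M = circulant(c)
    # Uniform weights recover the ordinary nuclear norm (sum of singular values).
    assert np.isclose(weighted_nuclear_norm_circulant(c),
                      np.linalg.svd(M, compute_uv=False).sum())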


While we are on the subject of matrix completion, version 3 of Matrix Completion from a Few Entries by Raghunandan H. Keshavan, Andrea Montanari, and Sewoong Oh is now available. The abstract reads:
Let M be a random (alpha n) x n matrix of rank r << n^(1/2), and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(rn) observed entries with relative root mean square error RMSE <= C(rn/|E|)^0.5. Further, if r = O(1), M can be reconstructed exactly from |E| = O(n log(n)) entries. These results apply beyond random matrices to general low-rank incoherent matrices. This settles (in the case of bounded rank) a question left open by Candes and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is O(|E| r log(n)), which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek on the spectrum of sparse random matrices.
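For intuition, the first step behind this kind of guarantee is spectral: rescale the zero-filled observed matrix by mn/|E|, so that its expectation matches M, and take a rank-r truncated SVD. Below is a minimal sketch of that step only; the paper's actual OptSpace algorithm also trims over-represented rows and columns and refines the estimate by gradient descent on the Grassmann manifold, both of which are omitted here.

    import numpy as np

    def spectral_complete(M_obs, mask, r):
        # M_obs has unobserved entries set to 0; mask is the 0/1 sampling pattern.
        m, n = M_obs.shape
        scale = m * n / mask.sum()  # so that E[scale * P_E(M)] = M
        U, s, Vt = np.linalg.svd(scale * M_obs, full_matrices=False)
        return (U[:, :r] * s[:r]) @ Vt[:r, :]

    # Tiny demo on a random rank-2 matrix with 30% of the entries observed.
    rng = np.random.default_rng(2)
    n, r = 200, 2
    M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
    mask = rng.random((n, n)) < 0.3
    M_hat = spectral_complete(M * mask, mask, r)
    print("relative RMSE:", np.sqrt(np.mean((M_hat - M) ** 2) / np.mean(M ** 2)))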

Svetlana Avramov-Zamurovic has some new videos of the Compressive Sensing tutorial she is giving at USNA.


I have added them to the Compressive Sensing Video page.

Angshul Majumdar has contributed to MATLAB Central a new software suite entitled Non Convex Algorithms for Group Sparse Optimization, which features a reweighted L2,p algorithm.
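For readers who want the flavor of such a scheme without downloading the package: reweighted L2,p methods typically alternate between computing one weight per group from the current iterate and solving a weighted least-squares problem. The sketch below is a generic iteratively reweighted least squares (IRLS) rendition of that idea, not Angshul Majumdar's actual MATLAB code; the groups, p, smoothing parameter, and iteration count are illustrative choices of mine.

    import numpy as np

    def irls_group(A, b, groups, p=0.5, n_iter=50, eps=1.0):
        # Approximately solve min ||x||_{2,p} subject to Ax = b by iteratively
        # reweighted least squares: one weight per group, then a weighted
        # least-norm solve with the closed form x = W^-1 A^T (A W^-1 A^T)^-1 b.
        n = A.shape[1]
        x = np.linalg.lstsq(A, b, rcond=None)[0]
        for _ in range(n_iter):
            w = np.empty(n)
            for g in groups:
                w[g] = (x[g] @ x[g] + eps) ** (p / 2 - 1)
            winv = 1.0 / w
            x = winv * (A.T @ np.linalg.solve((A * winv) @ A.T, b))
            eps = max(eps / 10.0, 1e-8)  # anneal the smoothing
        return x

    # Demo: recover a signal supported on 2 of 20 groups of size 5.
    rng = np.random.default_rng(3)
    groups = [np.arange(5 * g, 5 * g + 5) for g in range(20)]
    x_true = np.zeros(100)
    x_true[groups[1]] = 1.0
    x_true[groups[7]] = -2.0
    A = rng.standard_normal((40, 100))
    x_hat = irls_group(A, A @ x_true, groups)
    print("recovery error:", np.linalg.norm(x_hat - x_true))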

2 comments:

  1. This comment has been removed by the author.

  2. The link to the slides for Linear Programming is wrong.
    BTW, I couldn't download the Linear Programming video, while the other two are OK.

    Thanks.
