On the heels of compressed sensing, a remarkable new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of its entries, and comes up in many areas of science and engineering including collaborative filtering, machine learning, control, remote sensing, and computer vision to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. A typical result is that one can recover an unknown n × n matrix of low rank r from just about nr log² n noisy samples with an error which is proportional to the noise level. We present numerical results which complement our quantitative analysis and show that, in practice, nuclear norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout.
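For those who want to get a feel for how this works, here is a toy Matlab sketch of nuclear-norm-style completion using a soft-thresholded SVD iteration (in the spirit of solvers like SVT or Soft-Impute; this is my own illustration, not the authors' code, and the sizes, sampling rate, noise level, and threshold are arbitrary choices):
clear;
rand('state',0); randn('state',0);
n=200; r=5;                        % problem size and rank (arbitrary)
M=randn(n,r)*randn(r,n);           % unknown low-rank matrix
Omega=rand(n)<0.3;                 % reveal about 30% of the entries
Mobs=(M+0.01*randn(n)).*Omega;     % noisy observed entries
lambda=1;                          % soft-threshold level, hand-tuned
Xhat=zeros(n);
for k=1:200
  % impute the unobserved entries with the current estimate, then
  % soft-threshold the singular values (the proximal step for the nuclear norm)
  [U,S,V]=svd(Mobs+(~Omega).*Xhat,'econ');
  Xhat=U*max(S-lambda,0)*V';
end
norm(Xhat-M,'fro')/norm(M,'fro')   % relative recovery error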
The Arxiv blog is now one of Technology Review's blogs. One of its entries features a write-up on a summary article on Compressive Sensing. At the end of the blog entry, it says that Wim (an anonymous commenter) and I have "shown" the Ghost Imaging effect to be just compressed sensing and not some spooky quantum mechanical effect. This is a debate that is currently being argued, as can be seen in Morton Rubin's Comment on Ghost imaging with a single detector [arXiv:0812.2633v2]. I am no specialist of the underlying physics, but as stated in the comments here and in further discussions with the group at Weizmann, it looks like the nonlinear solvers used in CS could help, since the solver used in the original paper is linear (I discussed this a while back on this blog; search for "if it walks like a duck..."). Update: The folks at Weizmann have shown me some very nice new results in that direction and will soon publish a comparison between the nonlinear and linear solvers. However, I'll let Yaron Bromberg, one of the investigators at the Weizmann Ultrafast Optics Group, explain the more salient point:
Actually, we would appreciate it if in your response you could make it clear that in our original paper we performed a linear reconstruction, without any L_1 minimization. Ghost imaging works well with a linear reconstruction, and therefore compressed sensing has nothing to do with the debate over whether ghost imaging is a classical or a quantum phenomenon. CS does allow us to obtain 'ghost images' with a significantly improved SNR and with far fewer realizations, and we are very excited about that. Somehow we feel that this point is not that clear in today's Arxiv blog entry. Please feel free to mention in your response that we are currently working on the next paper.
I am impatient to see the new paper and to see how the QM physics relates to the incoherent acquisition of the signal (which is what CS is really about). May we live in interesting times...
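To make the distinction concrete, here is a toy Matlab sketch of the two reconstructions on a synthetic 1-D example. Gaussian random patterns stand in for the speckle fields, and a plain iterative soft-thresholding (ISTA) loop stands in for a real l_1 solver; all sizes and parameters are made up for illustration:
clear;
rand('state',0); randn('state',0);
n=256; m=64;                       % pixels, number of realizations
T=zeros(n,1); p=randperm(n); T(p(1:8))=1;   % sparse 1-D "object"
A=randn(m,n);                      % one random pattern per realization
B=A*T;                             % bucket detector values
% linear (correlation) reconstruction, as in conventional ghost imaging
G=A'*(B-mean(B))/m;
% l_1 reconstruction via plain iterative soft-thresholding (ISTA)
L=norm(A)^2;                       % Lipschitz constant of the gradient
lambda=0.01*norm(A'*B,inf);        % small penalty, hand-picked
x=zeros(n,1);
for k=1:1000
  g=x-A'*(A*x-B)/L;                % gradient step on the data fit
  x=sign(g).*max(abs(g)-lambda/L,0);   % soft-threshold
end
figure,plot(1:n,G/max(abs(G)),1:n,x/max(abs(x)))
legend('linear (correlation)','l_1 (ISTA)')
The point, as Yaron says, is that the linear correlation estimate already recovers the object; the l_1 solve is there to improve the SNR and reduce the number of realizations.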
In yesterday's long post, I mentioned the linear reconstruction solver listed in Non-Iterative Computation of Sparsifiable Solutions to Underdetermined Kronecker Product Linear Systems of Equations by Andrew Yagle. Some astute readers may have noticed that the Matlab code listed did not parse. Andy mentioned to me that there was a Matlab-to-TeX conversion issue. The actual .m file is:
clear;
rand('state',0);                 % fix the seed for reproducibility
X=ceil(rand(256,256)-0.99965);   % sparse 256x256 test image
H=rand(22,256);                  % 22x256 measurement matrix
[U S V]=svd(H);
Y=H*(H*X)';                      % 22x22 compressed data
Q=V(:,1:22);
%Y(:)=kron(H,H)*X(:). 484x65536 matrix x vector too big.
%GOAL: Get sparse X (length=65536) from Y (length=484).
W=inv(S(:,1:22))*U';
YY=reshape(kron(W,W)*Y(:),22,22);
I=find(abs(Q*null(YY)) ~= 0)'
J=find(abs(Q*null(YY')) ~= 0)'
Z(256,256)=0;
Z(I,:)=1;
Z(:,J)=1;                        %Lines with nonzero X
figure,imagesc(X),colormap(gray),title('SPARSE IMAGE')
figure,imagesc(Y),colormap(gray),title('DATA IMAGE')
figure,imagesc(Z+3*X),colormap(gray),title('INDICATOR')
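For those wondering about the kron(W,W) line: while kron(H,H) is a 484x65536 matrix and too big to form (hence the comment in the code), kron(W,W) is only 484x484 since W is 22x22; equivalently, kron(W,W)*Y(:) is just vec(W*Y*W'), so the whole reduction stays at the 22x22 scale.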
In other news, I just found the following paper, which deals with a version of the uncertainty principle and is somewhat amusing in light of the quantum mechanical debate above :-) : On uncertainty principles in the finite dimensional setting by Saifallah Ghobber and Philippe Jaming. The abstract reads:
The aim of this paper is to prove an uncertainty principle for the representation of a vector in two bases. Our result extends previously known qualitative uncertainty principles into quantitative estimates. We then show how to transfer this result to the discrete version of the Short Time Fourier Transform. An application to trigonometric polynomials is also given.
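As a quick numerical reminder of what such finite dimensional uncertainty principles say, here is the classic qualitative bound for the DFT (Donoho-Stark), |supp(x)| * |supp(Fx)| >= n, of the kind the paper's quantitative estimates extend; the Dirac comb attains equality (my toy example, not taken from the paper):
n=64;                            % a perfect square
x=zeros(n,1);
x(1:sqrt(n):n)=1;                % Dirac comb with 8 spikes
xf=fft(x);                       % its DFT is again a comb
Nt=nnz(abs(x)>1e-9);             % time-side support size
Nf=nnz(abs(xf)>1e-9);            % frequency-side support size
[Nt Nf Nt*Nf n]                  % prints 8 8 64 64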
Of potential interest is: Some links between uncertainty principles and combinatorics.