You have been bothered by the seemingly bad results of your hardware; you think it's your own damn fault, that you have not been careful enough with the calibration process, or maybe you did not provide the right parameters to the reconstruction solver, or maybe it's the wrong solver, or you may be on the wrong side of the Donoho-Tanner transition, but you don't want to be saying to yourself that this investment in this whole compressive sensing thing has been for nothing, that the theory is somehow missing something, you know, not everything is that sparse after all, blahblahblahblahblahblahblahblahblah.... Anders Hansen thinks he has found the problem and how to get around it. Suffice it to say, it seems to boil down to how careful you are with the dictionary, and the issue seems to stem from our dirty habit of not caring about infinity. Here is his very readable paper: Generalized Sampling and Infinite Dimensional Compressed Sensing by Anders Hansen. The abstract reads:
We present an abstract framework that allows for Compressed Sensing in infinite dimensions. We show that the combination of the recently developed Generalized Sampling Theorem (that is a generalization of the classical Shannon Sampling Theorem and allows for reconstructions in arbitrary bases) and infinite-dimensional compressed sensing not only allows for reconstructions in arbitrary bases, but also allows for substantial undersampling. As a corollary of our framework we solve an open problem in finite-dimensional compressed sensing, namely, we demonstrate that the results shown by Candès, Romberg and Tao in [10] on the discrete Fourier transform extend to arbitrary unitary matrices.
From the paper, one can read:
Remark 2.3. The well informed reader may object and suggest that if we have a signal g that is sparse in the Haar basis, why are we not "measuring" g with noiselets [14] (e.g. obtaining inner products ⟨φ_j, g⟩ for j ∈ ℕ where the φ_j's are noiselets) which would give us a finite-dimensional recovery problem à la (2.4). In that case n · µ²(U) = 1 in Theorem 2.2 (perfect incoherence) and thus we would have a very well suited recovery problem. The only problem is that the luxury to choose the measurements is very rare in applications. In particular, in the example that we just considered (and which replicates an MRI situation) we are given f = Fg, the Fourier transform of g. And thus the measurements are ⟨ϕ_j, g⟩ for j ∈ ℤ and ϕ_j(x) = e^{−2πijεx}. This is due to the physics behind the MRI equipment and can therefore not be altered (in particular, measurements with noiselets are impossible). Thus, we must have a model for which the sampling vectors, say {ϕ_j}_{j∈ℕ}, are fixed!
Maybe I should give an example along the lines of the one featured in "Compressed Sensing: How to wow your friends".
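In the meantime, here is a minimal sketch of my own (not taken from Hansen's paper) of the finite-dimensional setting the remark alludes to: a sparse vector observed through a few randomly chosen rows of the unitary DFT matrix, and recovered greedily with orthogonal matching pursuit. All sizes, the seed, and the choice of OMP as solver are my own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 40, 5  # ambient dimension, measurements, sparsity (arbitrary choices)

# k-sparse ground-truth signal
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

# Measurement matrix: m random rows of the unitary DFT
# (the "fixed sampling vectors" situation: we only get Fourier samples)
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
A = F[rng.choice(n, size=m, replace=False)]
y = A @ x

# Orthogonal matching pursuit: pick the column most correlated
# with the residual, then re-fit by least squares on the support
S = []
r = y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(A.conj().T @ r)))
    S.append(j)
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    r = y - A[:, S] @ coef

x_hat = np.zeros(n, dtype=complex)
x_hat[S] = coef
err = np.linalg.norm(x_hat.real - x)
print("recovery error:", err)  # should be tiny with high probability
```

With m = 40 Fourier samples of a 5-sparse vector of length 128, OMP finds the support essentially every time; the point of Hansen's remark is that in an MRI-like setup you are stuck with these Fourier sampling vectors and cannot swap them for something perfectly incoherent like noiselets.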
Let me note that I would welcome some type of connection between what Anders Hansen shows and this paper: A variant on the compressed sensing of Emmanuel Candes by Yves Meyer, Basarab Matei (.ps original file).
2 comments:
An example would be very helpful.
I'll try to make one for the Monday Morning Algorithm this coming Monday.
Cheers,
Igor.