Following yesterday's entry on CoSaMP: Iterative signal recovery from incomplete and inaccurate samples, Joel Tropp sent me an e-mail about an issue I totally missed and misreported:
"... Thanks for noting CoSaMP in your blog. There's another key point that you didn't discuss, so I wanted to mention it.
There are about three trillion different variations on the OMP algorithm, none of which is especially interesting per se. In fact, I wouldn't be surprised if CoSaMP needs to be tweaked before it performs well in practice. (Different halting criteria, extra least-squares steps, etc.)
There's a truly fundamental algorithmic idea that everyone keeps missing: the least-squares problems MUST be solved iteratively. For large-scale problems, you simply cannot use direct methods (e.g., Gram-Schmidt orthogonalization) for the reasons we outlined in the paper. Once you recognize the importance of iterative methods, you realize that the sampling matrix really ought to support a fast multiply. As a result, with a partial Fourier sampling matrix, you can get CoSaMP to run in time O(N*log(N)). This kind of efficiency is essential if you want to do compressive sampling in practice. The O(m*N) time bound for a Gaussian matrix is unacceptable.
(By the way, a second consequence of using iterative least-squares is that you have to revise the analysis to reflect the fact that iterative methods result in an approximation error.)
The other important point is that you need to prune the approximation to make sure it stays sparse. This idea, however, is not new: it is an important step in HHS pursuit. Indeed, most of the ideas that make CoSaMP work can ultimately be traced back to HHS pursuit and its precursors..."
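To make Joel's point concrete, here is a minimal MATLAB sketch (the sizes and variable names are mine, purely illustrative) of a partial Fourier sampling operator applied as a fast multiply rather than as a stored matrix:

N = 2^16; m = 2^12;                     % hypothetical problem sizes
omega = randperm(N, m);                 % random subset of DFT rows
x = zeros(N, 1);
x(randperm(N, 50)) = randn(50, 1);      % a 50-sparse test signal

% Forward multiply y = F_omega*x in O(N*log(N)): one FFT, keep m entries.
Fx = fft(x) / sqrt(N);                  % orthonormal DFT of x
y = Fx(omega);

% Adjoint multiply F_omega'*y, as needed by CG or LSQR inside the
% iterative least-squares steps: zero-fill, then one inverse FFT.
z = zeros(N, 1);
z(omega) = y;
xadj = ifft(z) * sqrt(N);

% A dense m-by-N Gaussian matrix, by contrast, costs O(m*N) flops per
% apply and m*N = 2^28 stored entries at these sizes.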
The Fourier transform can be applied as a fast matrix-vector multiply, but so can other transforms. This point also brings to light the potential for other transforms with a fast matrix-vector multiply to be part of new compressive sensing measurement schemes, such as:
- Wavelet transforms (choose whichever one from this list; a sketch follows below), and
- Fast Multipole Methods or Fast Multipole-like methods.
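As a concrete instance of the wavelet option, here is a minimal MATLAB sketch (the Haar wavelet and the function name are my choices, purely illustrative) of a "partial wavelet" measurement: a fast O(N) orthonormal Haar analysis followed by a random subset of coefficients, used as y = haar_measure(x, randperm(N, m)):

function y = haar_measure(x, omega)
% Orthonormal Haar analysis via a fast O(N) recursion, then keep only
% the coefficients indexed by omega. Assumes numel(x) is a power of two.
w = x(:);
n = numel(w);
while n > 1
    a = (w(1:2:n) + w(2:2:n)) / sqrt(2);   % scaling (average) part
    d = (w(1:2:n) - w(2:2:n)) / sqrt(2);   % wavelet (detail) part
    w(1:n/2) = a;                          % recurse on the averages
    w(n/2+1:n) = d;                        % details stay in place
    n = n / 2;
end
y = w(omega);                              % the compressive measurements
end

Whether such a partial wavelet matrix satisfies the restricted isometry property is precisely the open question raised below.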
Let us note that some of these fast transforms have appeared on the measurement side of compressive sensing before. Multipole methods, for instance, have been used in the computational sensing experiment on linear scattering analysis by Lawrence Carin, Dehong Liu, Wenbin Lin, and Bin Guo in Compressive Sensing for Multi-Static Scattering Analysis.
An extract reads:
The MLFMM (Multi-Level Fast Multipole Method) code was now run in a CS mode. Specifically, rather than assuming plane-wave excitation, the excitation v in (15) was composed of a linear combination of plane waves, with the weights on this superposition drawn iid from a Gaussian random variable, with zero mean and unit variance.
Sampling different columns of a Fourier transform could be seen as equivalent to taking a random linear combination within a Fast Multipole transform. That probably needs to be looked into. Let us note that the authors did not try a different randomization, such as choosing only certain columns of that transform, as in the partial Fourier transform; a small sketch of the two randomizations follows.
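Here is a minimal MATLAB sketch of the two randomizations (the sizes are mine, and the DFT matrix stands in for the plane-wave dictionary, purely for illustration):

N = 256; m = 32;                  % hypothetical sizes
F = dftmtx(N) / sqrt(N);          % orthonormal DFT; columns ~ plane waves

% (a) What the MLFMM experiment does: each measurement is a dense iid
%     Gaussian combination of all plane waves.
G = randn(m, N);
Ag = G * F;

% (b) The untried alternative: each measurement uses a single randomly
%     chosen plane wave, i.e. a partial Fourier matrix.
sel = randperm(N, m);
Ap = F(sel, :);                   % random subset of rows (equivalently
                                  % columns, since dftmtx is symmetric)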
Fast matrix-vector multiplies also point to other algorithms that are hierarchical in nature, such as the multiscale random projections identified by Marco Duarte, Mark Davenport, Michael Wakin, Jason Laska, Dharmpal Takhar, Kevin Kelly, and Richard Baraniuk [1]. The hierarchical nature of the measurement matrix would also ensure that, at every stage, the result of the matrix-vector multiply is sparse.
The big question then becomes: are there any transforms, such as a partial Fast Multipole transform, a partial wavelet transform, or the multiscale random projections, that yield measurement matrices fulfilling the Restricted Isometry Property with a small constant? And how do they fare with a greedy algorithm such as CoSaMP?
While the partial Fourier transform might point to applications in optics, Fast Multipole Methods might point to applications in numerous engineering inverse problems and related hardware.
References:
[1] Marco Duarte, Mark Davenport, Michael Wakin, Jason Laska, Dharmpal Takhar, Kevin Kelly, and Richard Baraniuk, Multiscale random projections for compressive classification.
Credit Photo: NASA/JPL, Dust devil on Mars as seen by the Spirit rover on February 26, 2007.
Hi,
While reading about and implementing CS basics, I want to ask about the restricted isometry property.
1) If a signal is S-sparse, why do we require, in order to have perfect reconstruction, that the measurement matrix (Gaussian, etc.) satisfy the isometry property for 2S- or 3S-sparse signals?
2) I have checked through simulations that for a Gaussian measurement matrix the isometry property is only satisfied if the signal has very few nonzeros, whereas with a Fourier measurement matrix the isometry property is satisfied for larger values of the sparsity S. Does this actually happen?
Thanks
Samra
Samra,
Quick answer: some of these results have been shown in some of Candes's papers; check out the first papers in:
http://www.google.com/search?&q=isometry+site%3Ahttp%3A%2F%2Fwww.acm.caltech.edu%2F%7Eemmanuel
Igor.
Thanks Igor,
1) I have in fact first looked at the paper by Candes, but I could not understand why we look at isometry constants corresponding to twice or three times the actual sparsity of a signal. I mean, why don't we simply look at the isometry constant corresponding to S-sparse signals (not higher)?
2) In the Candes paper on compressed sensing, I have constructed a Gaussian measurement matrix as explained on page 9 and run simulations to compute the corresponding isometry constants, which turn out to be much higher (0.8-0.9).
Thanks
Samra
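Here is a minimal MATLAB sketch of the simulation described above (the sizes are my assumptions, for illustration only): estimate delta_S for a Gaussian matrix by sampling random supports and recording the worst eigenvalue deviation; the result is only a lower bound on the true constant.

N = 512; m = 128; S = 20; trials = 2000;    % assumed sizes
F = randn(m, N) / sqrt(m);                  % iid N(0,1/m) entries
delta = 0;
for t = 1:trials
    T = randperm(N, S);                     % a random size-S support
    e = eig(F(:,T)' * F(:,T));              % Gram submatrix spectrum
    delta = max([delta; abs(e - 1)]);       % worst deviation from 1
end
fprintf('estimated delta_%d >= %.3f\n', S, delta);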
Samra,
The beginning of an answer:
http://www.acm.caltech.edu/~wakin/papers/randProjManifolds-03june2007.pdf
"....Whitney’s Embedding Theorem [29] actually suggests for certain K-dimensional manifolds that the number of measurements need be no larger than 2K +1 to ensure an embedding. However, as it offers no guarantee of stability, the practical recovery of a signal on the manifold could be complicated. Interestingly, the number 2K also arises in CS as the number of random measurements required to embed the set of all K-sparse signals (but again with no
guarantee on the conditioning of the recovery problem); see [3]..."
[3] is this reference:
http://www.dsp.ece.rice.edu/cs/DCS112005.pdf
and it explains that the reason we need more than K measurements has to do with the fact that we are solving an l_1 problem (for which more than 2K measurements are needed) instead of an l_0 problem (for which on the order of K measurements would be OK). Somehow, there is an inefficiency in using basis pursuit as opposed to the combinatorial approach. The factor of two between 2K and K could be the reason why the isometry constant is stated at 2S in the S-sparse case.
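For what it's worth, the standard uniqueness argument behind the factor of two goes like this (my sketch, not a quote from either paper): if x1 and x2 are both S-sparse and F*x1 = F*x2, then their difference h = x1 - x2 is at most 2S-sparse and F*h = 0, so the RIP of order 2S gives

(1-delta_2S)*norm(h)^2 <= norm(F*h)^2 = 0,

which forces h = 0 whenever delta_2S < 1: no two distinct S-sparse signals can share the same measurements. Constants at 3S and 4S appear similarly in the l_1 recovery proofs (e.g., a condition of the form delta_3S + 3*delta_4S < 2 in Decoding by Linear Programming).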
I know it does not specifically answer your question, but I seem to have read something specific on this 2S-3S restricted isometry constant in one of Candes's papers; I can't find it for the moment.
Maybe you could ask Candes directly?
Igor.
Hi,
If I want to check the Restricted Isometry Property for a measurement matrix F, it should satisfy, for all S-sparse vectors x, the following relationship:
(1-delta)*norm(x)^2 <= norm(F*x)^2 <= (1+delta)*norm(x)^2
In the paper of Candes, Decoding by Linear Programming, it is written that if the above condition is satisfied, all eigenvalues of F*F (where F* is the Hermitian transpose of F) should lie between (1-delta) and (1+delta).
I have a question regarding the eigenvalues of F*F. I have found that there may exist an F which satisfies the above Restricted Isometry Property for all S-sparse vectors, but the eigenvalues of F*F still may not lie between (1-delta) and (1+delta).
Can anybody clarify this?
Thanks
Samra
Samra,
Do you have an example?
Igor.
In a simple example:
length of the sparse vector: N = 5
number of measured samples: M = 4
sparsity: S = 2
Creating a partial Fourier matrix (without randomizing the rows) and normalizing its columns:
F = dftmtx(N);
F = normalize(F(1:M,:), 'norm');   % scale each column to unit 2-norm
I have the sparse vector x:
x = [0 0 0 0.6820 0.5253].';
The RIP condition
(1-delta)*norm(x)^2 <= norm(F*x)^2 <= (1+delta)*norm(x)^2
is satisfied for delta = 0.11 (< 1.0).
But the eigenvalues of the matrix F*F come out as [0.75, 1.25], which do not lie between [1-delta, 1+delta].
Hence, even if the RIP condition is satisfied for some sparse vector x, this does not imply the bounds on the eigenvalues.
Thanks
Samra
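For completeness, a minimal MATLAB sketch using the sizes from the example above: in Decoding by Linear Programming, the eigenvalue condition quantifies over every column subset T with |T| <= S, i.e. the eigenvalues of each S-by-S Gram submatrix F_T'*F_T must lie in [1-delta, 1+delta]; it is not a condition on the full N-by-N Gram matrix F'*F, which is rank-deficient whenever M < N. Checking a single sparse vector x, as above, only yields a lower bound on delta_S. The sketch computes the actual constant by enumerating all supports:

N = 5; M = 4; S = 2;
F = dftmtx(N);
F = normalize(F(1:M,:), 'norm');        % unit-norm columns, as above
delta = 0;
supports = nchoosek(1:N, S);            % every support of size S
for k = 1:size(supports, 1)
    T = supports(k, :);
    G = F(:,T)' * F(:,T);               % S-by-S Hermitian Gram submatrix
    delta = max([delta; abs(eig(G) - 1)]);
end
fprintf('delta_%d = %.4f\n', S, delta); % worst case over all supports

x = [0 0 0 0.6820 0.5253].';            % the single test vector above
fprintf('||F*x||^2/||x||^2 = %.4f\n', norm(F*x)^2 / norm(x)^2);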