I have tried the Sparse Measurement Matrices of Radu Berinde and Piotr Indyk on a restricted set of examples. They seem to work as well as random Gaussian matrices. I have implemented a function that generates these matrices here. I used both l1-Magic and the re-weighted Lp reconstruction techniques, and both yield similar results. However, none of the solvers currently takes advantage of the fact that these matrices are sparse.
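The post links to the generation code rather than reproducing it, so here is a minimal sketch of how such a sparse binary matrix can be built with Matlab's sparse command. The dimensions m and n and the column weight d are illustrative assumptions, and the actual Berinde-Indyk construction (the adjacency matrix of a bipartite expander) may differ in detail.

% Sketch: sparse binary measurement matrix with d ones per column,
% placed at d distinct random rows.  All sizes are assumed values.
m = 240;                                      % number of measurements
n = 1024;                                     % signal length
d = 8;                                        % ones per column
rows = zeros(d*n, 1);
cols = zeros(d*n, 1);
for j = 1:n
    p = randperm(m);                          % d distinct row indices for column j
    rows((j-1)*d+1 : j*d) = p(1:d);
    cols((j-1)*d+1 : j*d) = j;
end
A = sparse(rows, cols, ones(d*n, 1), m, n);   % only d*n nonzeros are stored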
For the re-weighted Lp technique, one can speed up the reconstruction fairly easily using Matlab's sparse function, since the full algorithm takes about 10 lines to implement (see the sketch after this paragraph). But we also know that the re-weighted Lp scheme is slow, so the general question is, besides l1-Magic (for which Radu Berinde and Piotr Indyk have provided an implementation): how can any of the current reconstruction solvers listed below take advantage of the sparsity of the measurement matrices?
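To make the 10-line remark concrete, here is a sketch of an iteratively re-weighted least-squares loop for Lp minimization in which A stays sparse in every product. The value of p, the smoothing schedule, the test signal and the iteration count are my own illustrative choices, not the author's code; A is the sparse matrix built above.

% Sketch: re-weighted Lp reconstruction (IRLS form) keeping A sparse.
% p, the smoothing schedule and the test signal are assumed values.
n     = size(A, 2);
xtrue = zeros(n, 1);
xtrue(randperm(n, 10)) = randn(10, 1);    % 10-sparse test signal
b     = A * xtrue;                        % measurements
p     = 0.5;
epsl  = 1;
x     = A' * ((A * A') \ b);              % least-squares starting point
for k = 1:50
    w    = (x.^2 + epsl).^(p/2 - 1);      % coefficient-wise weights
    Q    = spdiags(1 ./ w, 0, n, n);      % diagonal weight matrix, kept sparse
    x    = Q * (A' * ((A * Q * A') \ b)); % weighted least-squares update
    epsl = max(epsl / 10, 1e-8);          % anneal the smoothing parameter
end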
[Update: Duh, the answer is simple: instead of defining the matrix explicitly, one defines a way to compute Ax and A'x through a function handle. Most of these software packages can handle this. It goes to show I should not be posting late at night.]
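For completeness, this is what the function-handle route looks like; the l1eq_pd call is based on my recollection of l1-Magic's interface and should be checked against the package's own examples.

% Sketch: pass function handles instead of an explicit matrix, so the
% solver's products cost on the order of nnz(A) operations.
Afun  = @(z) A * z;                       % computes A*x
Atfun = @(z) A' * z;                      % computes A'*y
x0    = Atfun(b);                         % cheap starting point
xhat  = l1eq_pd(x0, Afun, Atfun, b, 1e-3);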
Source:
Rice Compressed Sensing repository
3 comments:
Hi,
Actually, it is as easy as creating a sparse matrix, using Matlab's "sparse" command. Then all the matrix multiplications will take as many operations as there are non-zero elements. The solvers in l1magic only do matrix and transpose multiplications.
But yes, you can use the function handle feature to define your own (possibly faster) multiplication routine. This also puts l1magic routines into "large scale mode", supporting much bigger vector dimensions.
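As a quick, illustrative check of the operation-count claim (using the sparse matrix A built earlier; timings are machine-dependent):

% Compare a sparse multiply, which scales with nnz(A), against the same
% multiply with a dense copy, which scales with m*n.
Afull = full(A);
z = randn(size(A, 2), 1);
tic; for k = 1:1000, y1 = A     * z; end; t_sparse = toc;
tic; for k = 1:1000, y2 = Afull * z; end; t_full   = toc;
fprintf('nnz(A) = %d of %d entries; sparse: %.3f s, full: %.3f s\n', ...
        nnz(A), numel(A), t_sparse, t_full);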
Hello Radu,
Thanks for the comment. Do you have a personal web page by any chance?
Igor.
Radu,
Never mind, I found it.
Igor.