[ The Compressed Sampling blog is here ]
Moving forward from the recent Synthesis and L1 entries, I will now review some of the new work on compressed sensing that does not generally fall within the traditional subjects of compressive sampling.
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization by Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo. This is the most intriguing paper I have seen so far because it promises to build a bridge between compressed sensing and other areas such as operator/matrix compression: examples include minimum-order linear system realization, low-rank matrix completion, low-dimensional Euclidean embedding problems, and image compression. They make the case that the L1 reconstruction capability of compressed sensing has also been observed in other areas, and that compressed sensing is now bringing the theoretical tools that explain why these heuristics actually work. The objects of interest are now matrices, and the issue at hand is not cardinality (sparsity) but rank. The Hilbert space norm that was Euclidean in CS becomes the Frobenius norm, and the sparsity-inducing norm is no longer L1 but the nuclear norm. The convex optimization tool used to investigate these phenomena in CS is linear programming, whereas in the context of this article it becomes semidefinite programming. Lots of interesting and fascinating potential applications are listed at the end of the paper. The figure on the right shows the same phase transition observed in CS, but now the x-axis and y-axis represent some measure of the rank of the matrix being tested for recovery. Fascinating. Maybe SeDuMi should be part of the packages for CS from now on? (A small sketch of the matrix-completion flavor of this result follows below.) I cannot wait to see how this parallel will play out with Action Respecting Embeddings and Nonlinear Dimension Reduction with Side Information, since we were interested in this at some point. But right now it seems it is really a question of skillfully using the Matlab reshape function. As it turns out, CS may indeed have some influence on Machine Learning, but in a way I did not anticipate. The Maximum Margin Matrix Factorization algorithm by Jason Rennie and Nathan Srebro may make big headway in enabling the nuclear norm approach developed in this paper. And yes, Pierre, in that sense the resolution of the Netflix prize may use some of the concepts of CS. Furthermore, while reconstruction is indeed very interesting and satisfying, I am awaiting what their counterparts in detection will produce (Richard Baraniuk and Michael Wakin in Random Projections of Smooth Manifolds).
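To make the parallel concrete, here is a minimal numerical sketch of low-rank matrix completion by nuclear norm minimization. The use of cvxpy (with its default SDP-capable solver) is my assumption for illustration; the paper itself is set in the SeDuMi/semidefinite-programming world, and the sizes and sampling rate below are arbitrary.

# Minimal sketch of matrix completion by nuclear norm minimization,
# in the spirit of Recht/Fazel/Parrilo. cvxpy and the problem sizes
# are illustrative assumptions, not the paper's own code.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 target

# Observe a random half of the entries.
mask = rng.random((n, n)) < 0.5
obs = [(i, j) for i in range(n) for j in range(n) if mask[i, j]]

# Among all matrices agreeing with the observed entries, pick the one
# with minimal nuclear norm (the matrix analogue of the L1 norm).
X = cp.Variable((n, n))
constraints = [X[i, j] == M[i, j] for (i, j) in obs]
cp.Problem(cp.Minimize(cp.norm(X, "nuc")), constraints).solve()

print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))

On small instances like this one, the solver typically returns the rank-2 matrix essentially exactly; the phase transition in the paper's figure quantifies when that stops working.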
- One of the hot issues in CS is how to quantify the amount of sparsity one can get away with when using random or deterministic projections. As we all know, Compressed Sensing does not require random projections to work; the bases just need to be incoherent. But when you do use them, what is the least sparse decomposition you can get away with? Or, given a random matrix, how can one figure out whether it is useful for CS without breaking down? Terry Tao explains what the challenge is here. Anatoly Eydelzon, a doctoral candidate at Rice, will present his thesis on the subject on August 30th (Deterministic Conditions for Solution Recovery in Compressive Sensing and Error Correction). A cheap empirical check along these lines is sketched below.
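Verifying that a given matrix is good for CS is exactly the hard part: checking the restricted isometry property directly is combinatorial. A cheap, computable proxy is the mutual coherence, which yields the (pessimistic) guarantee that k-sparse signals are recovered whenever k < (1 + 1/mu)/2. A minimal sketch, with an arbitrary Gaussian matrix standing in for the measurement ensemble:

# Mutual coherence as a computable proxy for RIP-style guarantees.
# The coherence bound k < (1 + 1/mu)/2 is known to be far more
# pessimistic than what random matrices actually achieve, which is
# precisely the gap Tao's discussion is about.
import numpy as np

rng = np.random.default_rng(0)
K, N = 64, 256
A = rng.standard_normal((K, N))
A /= np.linalg.norm(A, axis=0)       # unit-norm columns

G = np.abs(A.T @ A)                  # absolute Gram matrix
np.fill_diagonal(G, 0.0)
mu = G.max()                         # mutual coherence
print("coherence:", mu, "-> guaranteed sparsity k <", (1 + 1 / mu) / 2)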
- Another way to go about this is to attack the issue with a deterministic algorithm. This is what Mark Iwen is doing in his paper: A Deterministic Sub-linear Time Sparse Fourier Algorithm via Non-adaptive Compressed Sensing Methods. First, it is nice to see another CS paper that explicitly refers to group testing, in the form of the Chinese Remainder Theorem. Mark Iwen modifies an algorithm developed by Graham Cormode and Muthu Muthukrishnan. One of the interesting things about new papers is how Compressed Sensing gets defined, which is always a little different from before. For instance, he gives a very compact definition of CS that I cannot recall being so clearly expressed before:
The general CS setup is as follows: Let A be an N-length signal/vector with complex-valued entries and Ψ be a full rank N × N change of basis matrix. Furthermore, suppose that Ψ^T A is sparse (i.e., only k ≪ N entries of Ψ^T A are significant/large in magnitude). CS methods deal with generating a K × N measurement matrix, M, with the smallest number of rows possible (i.e., K minimized) so that the k significant entries of Ψ^T A can be well approximated by the K-element vector result of

M · Ψ^T A.    (1)

Note that CS is inherently algorithmic since a procedure for recovering Ψ^T A's largest k entries from the result of Equation 1 must be specified.
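A toy instance of that setup, with Ψ taken to be the identity (so A itself is k-sparse), a random Gaussian M, and the recovery procedure being L1 minimization recast as a linear program. All of these choices are illustrative assumptions on my part, not Iwen's deterministic construction:

# Toy CS instance matching the quoted setup: Psi = identity, M Gaussian.
# Recovery: min ||x||_1 s.t. Mx = y, as an LP with x = u - v, u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, K, k = 128, 40, 4
A = np.zeros(N)
A[rng.choice(N, k, replace=False)] = rng.standard_normal(k)  # k-sparse signal

M = rng.standard_normal((K, N)) / np.sqrt(K)  # K x N measurement matrix
y = M @ A                                     # the K measurements of Eq. (1)

c = np.ones(2 * N)                            # minimize sum(u) + sum(v)
res = linprog(c, A_eq=np.hstack([M, -M]), b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
print("max recovery error:", np.abs(x_hat - A).max())

With K = 40 Gaussian measurements of a 4-sparse, length-128 signal, the LP typically recovers A exactly, which is the L1 phenomenon the whole field is built on.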
At the end of the paper, it also becomes clear how CS is going to affect linear algebra: Compressed Sensing (CS) methods provide algorithms for approximating the result of any large matrix multiplication as long as it is known in advance that the result will be sparse/compressible.
This is very reminiscent of the Fast Multipole Method, which was initially thought of as a mere improvement to some electrostatic and gravitational computations but eventually yielded major advances in many other areas. Mark used the approach of Cormode and Muthukrishnan (Combinatorial Algorithms for Compressed Sensing, G. Cormode and S. Muthukrishnan, SIROCCO 2006; DIMACS TR 2005-25). A toy illustration of the matrix-multiplication claim follows below.
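If B·x is known to be sparse, one can precompute the small matrix M·B once, and each product B·x is then recovered from the K-entry sketch (M·B)·x. The recovery step below uses plain orthogonal matching pursuit as my stand-in for the combinatorial algorithm of Cormode and Muthukrishnan, and the construction of B is contrived so that the product is exactly sparse:

# Toy illustration: approximate a large matrix-vector product B @ x,
# known in advance to be sparse, from the short sketch (M @ B) @ x.
# OMP here is a naive stand-in, not the Cormode-Muthukrishnan algorithm.
import numpy as np

rng = np.random.default_rng(1)
N, K, k = 128, 40, 4

# Build B and x such that B @ x is exactly k-sparse (contrived for the demo).
x = rng.standard_normal(N)
target = np.zeros(N)
target[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
B = np.outer(target, x) / (x @ x)            # so that B @ x == target

M = rng.standard_normal((K, N)) / np.sqrt(K)
MB = M @ B                                    # precomputed once: K x N
sketch = MB @ x                               # per query: only K numbers

# Greedy recovery (orthogonal matching pursuit) of the sparse product.
residual, support, coef = sketch.copy(), [], np.array([])
for _ in range(k):
    support.append(int(np.argmax(np.abs(M.T @ residual))))
    coef, *_ = np.linalg.lstsq(M[:, support], sketch, rcond=None)
    residual = sketch - M[:, support] @ coef
approx = np.zeros(N)
approx[support] = coef
print("max error vs B @ x:", np.abs(approx - B @ x).max())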
- I could not review the paper by Ashwin Wagadarikar, Renu John, Rebecca Willett, and David Brady, "Single disperser design for compressive, single-snapshot spectral imaging," SPIE Optics and Photonics, 2007 [http://www.ee.duke.edu/~willett/papers/WagadarikarSPIE2007.pdf], because the pdf file seems to be corrupted.
- On a very local note, Ron DeVore will give a course on Compressed Sensing in the Math Department at Texas A&M University this fall: Math 689, Section 602, in Blocker. http://www.math.tamu.edu/teaching/course_offerings/nextlongsemester.html
- If you know of a specific class or presentation on the subject of Compressed Sensing and related areas, please send me an e-mail; I'll try to map those and make the list available.
- [Update, August 20: Muthu Muthukrishnan pointed to a variant of the JL transform by Jiri Matousek (here). Does this new transform satisfy the RIP? A small empirical check is sketched below.]
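For what it is worth, here is a small empirical check of distance preservation under a sparse random-sign projection. This is a generic construction in the spirit of such JL variants, not necessarily Matousek's exact one, and empirically preserving a few distances says nothing definitive about the RIP question:

# Sparse random-sign JL-style projection: entries are +-1/sqrt(qK) with
# probability q, zero otherwise, so squared lengths are preserved in
# expectation. An illustrative construction, not Matousek's exact one.
import numpy as np

rng = np.random.default_rng(0)
N, K, q = 512, 64, 0.1                   # ambient dim, target dim, sparsity

signs = rng.choice([-1.0, 1.0], size=(K, N))
mask = rng.random((K, N)) < q
P = signs * mask / np.sqrt(q * K)

X = rng.standard_normal((10, N))         # a few test points
for i in range(3):
    d_true = np.linalg.norm(X[i] - X[i + 1])
    d_proj = np.linalg.norm(P @ (X[i] - X[i + 1]))
    print(f"pair {i}: distortion = {d_proj / d_true:.3f}")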
2 comments:
The paper has not been posted because the proceedings for the conference have not been published yet.
Thank you. Can you let me know when it is?
Igor.