
Friday, February 08, 2008

Compressed Sensing: The Framework

[Update: This entry is now superseded by this more complete web page trying to build the Big Picture on Compressed Sensing (also here).]

Sometimes I lose track of this, but Compressed Sensing is really about acquiring a sparse signal with the help of an incoherent basis. While much of the emphasis is rightfully on reconstruction work, we are beginning to see the emergence of other tools that complete this framework. Two other stages are emerging (a small code sketch of the underlying acquisition model follows this list):
  • the search for bases or dictionaries in which a set of signals is sparse, and
  • specific incoherent measurement tools.
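To make the framework concrete, here is a minimal sketch of the acquisition model in Python/NumPy. Everything in it (the sizes, the Gaussian choice for Phi, the variable names) is an illustrative assumption of mine, not a specific scheme: a K-sparse signal of length N is acquired through M < N non-adaptive linear measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 5             # signal length, number of measurements, sparsity

# A signal that is K-sparse in the canonical basis
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# An incoherent (here: random Gaussian) measurement matrix, M < N
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Acquisition: M linear, non-adaptive measurements instead of N samples
y = Phi @ x
print(y.shape)                   # (64,)
```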
Algorithms already implemented that find sparse dictionaries are presented in:
Knowledge of a specific signal domain makes it possible to build these, hopefully small, dictionaries. The next step will almost certainly bring about techniques that find elements within a manifold, as opposed to a full set of functions: some sort of Data Driven Dictionary.
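To give a feel for what such dictionary-finding algorithms do, here is a toy sketch in the spirit of the Method of Optimal Directions (MOD): alternate a crude sparse-coding step (hard thresholding here, where real codes would use a proper pursuit) with a least-squares dictionary update. All names are made up; this is my own minimal illustration, not one of the codes alluded to above.

```python
import numpy as np

def mod_dictionary(Y, n_atoms, sparsity, n_iter=20, seed=0):
    """Toy MOD-style dictionary learning: alternate a crude sparse-coding
    step (hard thresholding) with a least-squares dictionary update."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # Sparse coding: keep the `sparsity` largest coefficients per signal
        X = D.T @ Y
        thresh = -np.sort(-np.abs(X), axis=0)[sparsity - 1]
        X[np.abs(X) < thresh] = 0.0
        # Dictionary update: least squares, then renormalize the atoms
        D = Y @ np.linalg.pinv(X)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D

# Example: learn 32 atoms from 500 random training signals of dimension 16
Y = np.random.default_rng(1).standard_normal((16, 500))
D = mod_dictionary(Y, n_atoms=32, sparsity=3)
print(D.shape)   # (16, 32)
```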

Non Adaptive Measurement codes/matrices (the first three are pasted from Terry Tao's site):

  • Random Fourier Ensemble: The signal is a discrete function f on Z/NZ, and the measurements are the Fourier coefficients at a randomly selected set Omega of frequencies of size M (A is an M x N matrix).

  • Gaussian ensemble: A is an M x N matrix whose M·N entries are i.i.d. Gaussian variables.

  • Bernoulli ensemble: A is an M x N matrix whose M·N entries are i.i.d. Bernoulli (±1) variables.

  • Other recent implementations achieving specific goals (sparsity, ...).

Obviously, with the advent of Data Driven Dictionaries, we will have domain-specific measurement matrix methods. Also, I am sure I am missing some implementations from the sketching world, and I guess I should also list the adaptive schemes.
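For concreteness, here is a minimal sketch, mine and not any of the implementations above, of how the three standard ensembles can be generated with NumPy; the sizes and the 1/sqrt(M) normalization are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 64, 256   # M measurements of an N-sample signal, M < N

# Random Fourier ensemble: M randomly chosen rows of the N x N DFT matrix
Omega = rng.choice(N, size=M, replace=False)
F = np.fft.fft(np.eye(N)) / np.sqrt(N)       # unitary DFT matrix
A_fourier = F[Omega, :]

# Gaussian ensemble: i.i.d. Gaussian entries, scaled by 1/sqrt(M)
A_gauss = rng.standard_normal((M, N)) / np.sqrt(M)

# Bernoulli ensemble: i.i.d. +-1 entries, scaled by 1/sqrt(M)
A_bernoulli = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)

for A in (A_fourier, A_gauss, A_bernoulli):
    print(A.shape)   # each is M x N = 64 x 256
```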

Reconstruction codes: Matching Pursuit/Greedy, Basis Pursuit/Linear Programming, Bayesian.
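As a small illustration of the greedy family, here is a toy Orthogonal Matching Pursuit: at each step, pick the column of A most correlated with the residual, then re-fit by least squares on the selected columns. This is a minimal sketch of mine, not one of the codes collected at the Rice repository:

```python
import numpy as np

def omp(A, y, sparsity):
    """Toy Orthogonal Matching Pursuit: greedily recover a sparse x from y = A x."""
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares re-fit of y on the selected columns
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

# Sanity check on a Gaussian ensemble
rng = np.random.default_rng(0)
M, N, K = 64, 256, 5
A = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
print(np.allclose(omp(A, A @ x, K), x, atol=1e-6))   # True with high probability
```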
Eventually, the end point is the birth of specific hardware implementations of these schemes ([L1], [L2], [L3], [L4], [L5], [L6], [7], [8]). The most salient feature of the random or noiselet measurements is that one can foresee a hardware implementation without having to know the decomposing sparse basis, since there is a high probability that the two will be incoherent with each other.

[Update: some of the homemade codes are here]



Credit: I have cut and pasted sections of this entry directly from the Rice repository and Terry Tao's site. Photo credit: NASA Mars rover Opportunity on Sol 1432.

    4 comments:

    1. I have put a comment on Tao's blog concerning the use of more advanced models for CS recovery.

    2. I know, I put in a comment on adaptive methods and it seems that it did not get approval :-)

      Thanks for the pointer though.

      Igor.

    3. Hi,

      interesting post,

      Actually, I am looking for a piece of code to play with in order to sparsely represent the difference between two classes of images. Perhaps NMF methods are a good start. Although most of them, including "Efficient sparse coding", can potentially be applied to supervised classification, they are usually used to sparsely represent one image ensemble regardless of label. In other words, the sparse representation does not exhibit the difference between the two groups optimally and sparsely; it represents the whole ensemble sparsely, which does not necessarily mean it yields better classification between the two groups. On the other hand, methods like Sparse LDA produce sparse features, but they are computationally expensive for structured/high-dimensional data like images.
      Now, I want to play with this:
      http://www.cs.helsinki.fi/u/phoyer/software.html
      or this:
      http://www.csie.ntu.edu.tw/~cjlin/nmf/index.html

      Do you have any suggestions?

      Thanks,
      Kayhan

    4. Kayhan,

      I am not forgetting your previous request, but I think I am answering it bit by bit in the entries I am putting forth.

      Anyway, I did not know about the second reference; both seem OK. I think Hoyer has also looked at tensor factorization, which opens the door to sparse decompositions in more than two dimensions and might be of interest when looking at data of dimension higher than 2. Please also note that Gabriel Peyre has found that a sparsity constraint does not help much in getting a good dictionary. I'd say it's a good thing to try these codes and eventually build some wisdom about what to use and what not to use.
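      In the meantime, here is a toy sketch of what a sparsity-penalized NMF can look like: the usual multiplicative updates with an extra L1 penalty (lam) on H pushing coefficients toward zero. This is only my minimal illustration, with made-up names, not Hoyer's code:

      ```python
      import numpy as np

      def sparse_nmf(V, rank, n_iter=200, lam=0.1, seed=0):
          """Toy NMF with an L1 sparsity penalty on H, via multiplicative updates:
          minimize ||V - W H||_F^2 + lam * sum(H), with W, H >= 0."""
          rng = np.random.default_rng(seed)
          n, m = V.shape
          W = rng.random((n, rank)) + 1e-4
          H = rng.random((rank, m)) + 1e-4
          eps = 1e-12
          for _ in range(n_iter):
              # Penalized update for H: the +lam term shrinks entries toward zero
              H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
              # Standard multiplicative update for W
              W *= (V @ H.T) / (W @ H @ H.T + eps)
          return W, H

      # Example on a random nonnegative matrix: relative error and sparsity of H
      V = np.abs(np.random.default_rng(1).standard_normal((50, 80)))
      W, H = sparse_nmf(V, rank=5)
      print(np.linalg.norm(V - W @ H) / np.linalg.norm(V), (H < 1e-3).mean())
      ```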

      I realize this is not a very pointed answer, but it also means there is a lot of low-hanging fruit.


      Igor.
