
Wednesday, September 26, 2012

From Compressive Sensing to Machine Learning

If you recall, the history of compressive sensing goes like this (I know it is a little more complex, but this is the gist): the Donoho-Tanner phase transition for sparse recovery is slowly being eroded through additional knowledge of the measurement matrix, the signal, or both. This was the story until a year ago:

[Figures: phase transition diagrams, not reproduced here]
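As an aside, for readers who want a feel for how such a diagram is produced, here is a minimal sketch (my own illustration with arbitrary parameters, not the code behind the figures above): it estimates the empirical success rate at a single (delta, rho) point of the Donoho-Tanner diagram, using Gaussian measurement matrices and a plain OMP decoder.

    import numpy as np

    def omp(A, y, k):
        # Plain Orthogonal Matching Pursuit: greedily select k columns of A.
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    def success_rate(n=200, delta=0.5, rho=0.2, trials=50, tol=1e-5):
        # Donoho-Tanner coordinates: delta = m/n (undersampling), rho = k/m (sparsity).
        m = int(delta * n)
        k = int(rho * m)
        hits = 0
        for _ in range(trials):
            A = np.random.randn(m, n) / np.sqrt(m)
            x = np.zeros(n)
            x[np.random.choice(n, k, replace=False)] = np.random.randn(k)
            x_hat = omp(A, A @ x, k)
            hits += np.sum((x_hat - x) ** 2) / np.sum(x ** 2) < tol
        return hits / trials

    print(success_rate())         # well below the transition: mostly successes
    print(success_rate(rho=0.9))  # far above the transition: mostly failures

Sweeping delta and rho over a grid and plotting the success rate yields this kind of diagram; the curves trace where the rate drops from 1 to 0.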
In the same way I called those results stunning, the following figures are equally stunning:
[Figure 1 from [1]: phase transitions of the BSBL algorithms for different intra-block correlation levels]
They show that, unlike previous results, exploiting prior knowledge of the structure of the unknown vector yields results similar to those obtained by designing a specific measurement matrix. This is new, and it opens the door to a closer connection between compressive sensing and machine learning. The results above are from [1]. They are very interesting, but something was bothering me about them, so I asked Zhilin about it:
Dear Zhilin, 
I am surprised by the graphs of figure 1 of version 2 of this paper (Extension of SBL Algorithms for the Recovery of Block Sparse Signals with Intra-Block Correlation by Zhilin Zhang, Bhaskar D. Rao) [NFM: that graph was not in version 1]. I am surprised not because they reach the rho = 1 limit, but because there is no difference between the no-correlation and the near-full-correlation cases. While the no-correlation result brings a surprise (because you now do as well as Krzakala et al.), I would have expected a much bigger surprise for the 95% correlation case, where the phase transition would be **above** rho = 1. What do you think?....
Cheers,
Igor.

The idea here is that with structured sparsity, you get to know more about the inter-dependencies between the elements of the signal. Hence one ought to see results showing that reconstruction is possible beyond the rho = 1 limit, i.e., with fewer measurements than nonzero elements. Zhilin kindly responded with:
...But no matter what the intra-block correlation is, once rho is above 1, the success rate of the BSBL algorithms is almost 0 (a success is defined as a normalized MSE less than 10^{-5}). But if a success is defined as an MSE less than 10^{-3} or a similar value, then the success rate is large even for rho much higher than 1, as shown in my work on fetal ECG recovery and EEG recovery....
[my emphasis]
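To make this concrete, here is a minimal sketch (my own illustration; the AR(1) model below is a common way of generating intra-block correlation, and the stand-in "recovered" signal is mine, not the output of a BSBL solver): it builds a block-sparse signal with intra-block correlation r and shows how the very same estimate fails the 10^{-5} criterion while passing the 10^{-3} one.

    import numpy as np

    def block_sparse_signal(n=200, block_size=10, active_blocks=4, r=0.95, seed=0):
        # Block-sparse vector: each nonzero block is an AR(1) process with
        # intra-block correlation coefficient r (r = 0 means no correlation).
        rng = np.random.default_rng(seed)
        x = np.zeros(n)
        for b in rng.choice(n // block_size, active_blocks, replace=False):
            v = np.empty(block_size)
            v[0] = rng.standard_normal()
            for t in range(1, block_size):
                v[t] = r * v[t - 1] + np.sqrt(1 - r ** 2) * rng.standard_normal()
            x[b * block_size:(b + 1) * block_size] = v
        return x

    def is_success(x_true, x_hat, tol):
        # Figure of merit: normalized MSE below a threshold tol.
        return np.sum((x_hat - x_true) ** 2) / np.sum(x_true ** 2) < tol

    x = block_sparse_signal()
    x_hat = x + 0.003 * np.random.default_rng(1).standard_normal(x.size)  # stand-in estimate
    print(is_success(x, x_hat, tol=1e-5))  # False under the strict criterion
    print(is_success(x, x_hat, tol=1e-3))  # True under the looser criterion

The estimate itself would normally come from a BSBL solver; the point is simply that the success verdict, and hence the empirical phase transition, moves with the threshold.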

In short, if the figure of merit changes, the reconstruction phase transition also changes dramatically. And then comes nonlinear compressive sensing using techniques from machine learning. How much more of an improvement can we get? Let me guess ...

[Figure from [2], not reproduced here]
... what about ten times! (from [2]), and more [3].


References:

[1] Zhilin Zhang, Bhaskar D. Rao, "Extension of SBL Algorithms for the Recovery of Block Sparse Signals with Intra-Block Correlation."

2 comments:

  1. Hi Igor,

    I wanted to point out that the paper
    http://arxiv.org/abs/1111.1041
also describes an algorithm that achieves rho = 1
    for block sparse signals and random sensing matrices
    (see sections 3.2, 3.3 and 6.5).

    A

  2. Andrea,

    This glaring oversight has been fixed:

    http://nuit-blanche.blogspot.com/2012/10/pushing-boundaries-in-compressive.html

    Thank you for the insight.

    Cheers,

    Igor.
