It may come under different headings: manifold signal processing, structured sparsity and norms, model-based sensing, dictionary learning, and even cosparsity. The idea is that most signals are really low-dimensional without having to be sparse.

What can we learn from Zhilin's Block Sparse Bayesian Learning algorithm and its results on a non-sparse but structured signal?

- Yes, it uses a random Bernoulli-like measurement matrix for signal acquisition, and that is compressive sensing
- Yes, it reconstructs the original signal using structured sparsity, as opposed to vanilla sparsity
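To make the acquisition step concrete, here is a minimal sketch of compressed sensing with a random Bernoulli matrix. The dimensions and the toy block-sparse signal are made up for illustration; this is not the paper's ECG data:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 512, 128  # signal length, number of measurements (m << n)

# Random Bernoulli (+1/-1) measurement matrix, scaled so columns have unit norm
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

# A block-sparse signal: a few contiguous active blocks, dense within each block
x = np.zeros(n)
x[40:60] = rng.standard_normal(20)
x[300:330] = rng.standard_normal(30)

# Compressed acquisition: the sensor stores m numbers instead of n
y = Phi @ x
```

The sensor-side cost is just one matrix-vector product (and Bernoulli entries mean only additions and subtractions), which is what makes this attractive for lightweight hardware.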

But does this approach help in performing classification or estimation directly on the compressed data (which is really what we are after)? We want lightweight sensors that not only gather data optimally but also perform (rough) classification near the sensor ... at very low cost.

The answer is yes, because the successful reconstruction is proof that the information encoded through the Bernoulli-like matrices was good enough in the first place. Structured signals can now fit more obviously within compressive sensing: we no longer need to lean as heavily on the RIP, the NSP, or exhaustive dictionary learning with all sorts of guarantees. As the figure above shows (from [1]), we used to not have that comfort: we were unsure that the compressed measurements really captured the largest components of the signal and their equally important dependencies. With this peace of mind, we can now run our favorite machine learning algorithm directly on the compressed data. Strangely enough, at that point, with some caveats, we may not care about reconstruction anymore.
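As a toy illustration of that last point, the sketch below classifies synthetic structured signals directly in the compressed domain with a simple nearest-centroid rule. The two classes, their supports, the dimensions, and the classifier are all assumptions for illustration, not anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 256, 32  # ambient dimension, compressed dimension

# Random Bernoulli measurement matrix
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

def sample(cls, k=100):
    """Synthetic structured signals: each class is active on its own block."""
    X = np.zeros((k, n))
    if cls == 0:
        X[:, 20:50] = rng.standard_normal((k, 30)) + 2.0
    else:
        X[:, 150:180] = rng.standard_normal((k, 30)) + 2.0
    return X

Xtr = np.vstack([sample(0), sample(1)])
ytr = np.array([0] * 100 + [1] * 100)

# Compress, then learn a nearest-centroid classifier -- no reconstruction step
Ztr = Xtr @ Phi.T
centroids = np.stack([Ztr[ytr == c].mean(axis=0) for c in (0, 1)])

Xte = np.vstack([sample(0, 20), sample(1, 20)])
yte = np.array([0] * 20 + [1] * 20)
Zte = Xte @ Phi.T
pred = np.argmin(np.linalg.norm(Zte[:, None, :] - centroids[None], axis=2), axis=1)
acc = (pred == yte).mean()
```

Because random projections approximately preserve distances between well-separated classes, the compressed-domain classifier works despite never seeing the original signals.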

[1] Zhilin Zhang, Tzyy-Ping Jung, Scott Makeig, Bhaskar D. Rao, Low-Energy Wireless Body-Area Networks for Fetal ECG Telemonitoring via the Framework of Block Sparse Bayesian Learning.

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle, and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.

## 7 comments:

Interestingly, random projections have been used for classifying ECG signals in this recent paper:

http://infoscience.epfl.ch/record/173978/files/0000625.pdf?version=1

So it does seem like good classification results can be obtained directly on the raw measurements of some simple sensors. I have no idea how computationally complex the classifier used here is, but it sure is a sign of things to come.

I agree!

Igor.

Hello Igor,

Regarding the comment about the NSP and RIP, do you have some other properties in mind that should be fulfilled before feeding the compressed data to an ML algorithm?

Imama

Imama,

The rule should be how many rows are enough to get a good reconstruction for the full training set. For the practitioner, all the other rules provide only upper bounds. Did I answer your question?

Igor.
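This rule of thumb can be checked empirically: sweep the number of rows and see where reconstruction on your training signals becomes reliable. Below is a minimal sketch using a plain k-sparse toy signal and iterative hard thresholding as a stand-in solver (the paper's choice would be BSBL; all dimensions here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 256, 8

# Ground-truth k-sparse toy signal
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

def iht(y, Phi, k, iters=300):
    """Iterative hard thresholding: gradient step, then keep the k largest entries."""
    L = np.linalg.norm(Phi, ord=2) ** 2  # Lipschitz constant of the gradient
    xh = np.zeros(Phi.shape[1])
    for _ in range(iters):
        xh = xh + (Phi.T @ (y - Phi @ xh)) / L
        xh[np.argsort(np.abs(xh))[:-k]] = 0.0  # hard threshold to k-sparse
    return xh

# Sweep the number of rows and record the relative reconstruction error
for m in (16, 32, 64, 128):
    Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
    err = np.linalg.norm(iht(Phi @ x, Phi, k) - x) / np.linalg.norm(x)
    print(m, round(err, 3))
```

The transition from large to negligible error as m grows is exactly the practitioner's "how many rows are enough" answer for this signal class.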

Yes, pretty much. But should there be a condition on the structure of the rows?

There ought to be one. Unfortunately, most such conditions are NP- or BPP-hard and therefore time-consuming to check.

Eventually, for the practitioner, it all comes down to trying it out on a training set and seeing if the reconstruction solver does the job.

Igor.

Yes, true. Thanks for your response.

Imama
