Friday, October 23, 2009

CS: Learning with Compressible Priors, Spatially-Localized Compressed Sensing and Routing in Multi-Hop Sensor Networks?, Learning low dim manifolds

If you recall, one of the reasons Compressive Sensing is bound to touch on many subjects and fields of engineering (see the Sparsity in Everything series of posts) lies in the fact that most occurrences in Nature follow some type of power law. Volkan Cevher provides some insight into signal sparsity and the probability distributions from which these signals are sampled in his upcoming NIPS paper entitled Learning with Compressible Priors. The abstract reads:

We describe a set of probability distributions, dubbed compressible priors, whose independent and identically distributed (iid) realizations result in p-compressible signals. A signal x ∈ R^N is called p-compressible with magnitude R if its sorted coefficients exhibit a power-law decay as |x|_(i) ≲ R i^(-d), where the decay rate d is equal to 1/p. p-compressible signals live close to K-sparse signals (K << N) in the l_r-norm (r > p) since their best K-sparse approximation error decreases as O(R K^(1/r - 1/p)). We show that the membership of generalized Pareto, Student's t, log-normal, Fréchet, and log-logistic distributions to the set of compressible priors depends only on the distribution parameters and is independent of N. In contrast, we demonstrate that the membership of the generalized Gaussian distribution (GGD) depends both on the signal dimension and the GGD parameters: the expected decay rate of N-sample iid realizations from the GGD with the shape parameter q is given by 1/[q log(N/q)]. As stylized examples, we show via experiments that the wavelet coefficients of natural images are 1.67-compressible whereas their pixel gradients are 0.95 log(N/0.95)-compressible, on average. We also leverage the connections between compressible priors and sparse signals to develop new iterative re-weighted sparse signal recovery algorithms that outperform the standard l_1-norm minimization. Finally, we describe how to learn the hyperparameters of compressible priors in underdetermined regression problems.
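
Out of curiosity, the decay-rate claim is easy to check numerically. Here is a minimal Python sketch of mine (not from the paper) that draws iid samples from a Student's t prior, with an assumed degrees-of-freedom value, and fits the power-law decay of the sorted magnitudes:

```python
import numpy as np

# Toy check of the abstract's claim (my own sketch, not the paper's code):
# iid draws from a heavy-tailed prior should yield sorted magnitudes that
# decay as |x|_(i) ~ R * i^(-d). The Student's t parameter below is an
# assumed toy value.
rng = np.random.default_rng(0)
N = 10_000
dof = 2.0                                  # Student's t degrees of freedom

x = rng.standard_t(dof, size=N)
mags = np.sort(np.abs(x))[::-1]            # |x|_(1) >= |x|_(2) >= ... >= |x|_(N)

# Fit the decay rate d by least squares on log|x|_(i) versus log i,
# skipping the noisy extremes of the order statistics.
i = np.arange(1, N + 1)
sl = slice(10, N // 2)
slope, _ = np.polyfit(np.log(i[sl]), np.log(mags[sl]), 1)
print(f"estimated decay rate d = {-slope:.2f}, so p = 1/d = {-1 / slope:.2f}")
```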

Volkan Cevher also makes available RANDSC, a small piece of code that generates compressible signals from a specified distribution. If we could now make a connection between these distributions and the l_q (q < 1) minimization techniques used to recover signals, it would be great [Oops, let me take that back: Volkan points to section 5.2, entitled "Iterative l_s-decoding for iid scale mixtures of GGD", duh].
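
In the same spirit as that section, here is a generic iteratively re-weighted least-squares (IRLS) sketch for l_q minimization with q < 1; all the parameters below are toy values of mine, and this is not the paper's l_s-decoder itself:

```python
import numpy as np

# A generic iteratively re-weighted least-squares (IRLS) decoder for the
# underdetermined system y = Phi @ x, approximating l_q minimization with
# q < 1. A textbook-style sketch under toy parameters of my choosing,
# not the l_s-decoder of section 5.2.
rng = np.random.default_rng(3)
N, M, K, q = 200, 60, 8, 0.7

Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = Phi @ x

xhat = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)   # minimum l_2-norm start
eps = 1.0
for _ in range(50):
    # Weighted least-norm step: xhat = D Phi^T (Phi D Phi^T)^{-1} y
    # with D = diag((xhat_i^2 + eps)^(1 - q/2)).
    D = (xhat ** 2 + eps) ** (1 - q / 2)
    xhat = D * (Phi.T @ np.linalg.solve((Phi * D) @ Phi.T, y))
    eps = max(eps / 2, 1e-8)                     # anneal the smoothing

print("relative error:", np.linalg.norm(x - xhat) / np.linalg.norm(x))
```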

Also found via another blog: Spatially-Localized Compressed Sensing and Routing in Multi-Hop Sensor Networks? by Sungwon Lee, Sundeep Pattem, Maheswaran Sathiamoorthy, Bhaskar Krishnamachari, and Antonio Ortega. The abstract reads:

We propose energy-efficient compressed sensing for wireless sensor networks using spatially-localized sparse projections. To keep the transmission cost for each measurement low, we obtain measurements from clusters of adjacent sensors. With localized projections, we show that joint reconstruction provides significantly better reconstruction than independent reconstruction. We also propose a metric of energy overlap between clusters and basis functions that allows us to characterize the gains of joint reconstruction for different basis functions. Compared with state-of-the-art compressed sensing techniques for sensor networks, our experimental results demonstrate significant gains in reconstruction accuracy and transmission cost.
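
To make the localized-projection idea concrete, here is a toy Python construction of mine (not the authors' code): each measurement mixes readings from only one cluster of adjacent sensors, yielding a block-diagonal sensing matrix, and recovery uses plain iterative hard thresholding rather than the paper's joint reconstruction:

```python
import numpy as np

# Toy spatially-localized sparse projections (my construction, not the
# authors'): each measurement row touches only one cluster of adjacent
# sensors, so the sensing matrix is block-diagonal and each measurement
# is cheap to gather and route.
rng = np.random.default_rng(1)
N, C, m_per, K = 256, 8, 12, 10      # sensors, clusters, meas./cluster, sparsity
w = N // C                           # sensors per cluster

Phi = np.zeros((C * m_per, N))
for c in range(C):
    Phi[c * m_per:(c + 1) * m_per, c * w:(c + 1) * w] = rng.standard_normal((m_per, w))

# A K-sparse toy field and its clustered measurements.
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = Phi @ x

# Plain iterative hard thresholding as a stand-in for the paper's
# joint reconstruction.
xhat = np.zeros(N)
step = 1.0 / np.linalg.norm(Phi, 2) ** 2
for _ in range(300):
    xhat = xhat + step * Phi.T @ (y - Phi @ xhat)
    xhat[np.argsort(np.abs(xhat))[:-K]] = 0.0    # keep the K largest entries

print("relative error:", np.linalg.norm(x - xhat) / np.linalg.norm(x))
```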

Finally, rebounding on yesterday's statement on Machine Learning and Compressive Sensing, here is a view of some of these subjects in ML and manifolds, featured in the recent talk by Yoav Freund entitled Learning low dimensional manifolds, presented at Google (by the way, what's up with Google engineers who don't know their mics are on?)




What is interesting is his use of manifolds for system calibration, at 52 minutes into the video, where he describes a 3-dimensional manifold living in a 23-dimensional dataset. Yoav's project described in the video is the Automatic Cameraman.
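
For readers who want to play with the idea, here is a synthetic scikit-learn sketch standing in for (but not reproducing) that setup: a 3-dimensional latent manifold is embedded in 23 dimensions through an assumed random smooth map, then recovered with Isomap:

```python
import numpy as np
from sklearn.manifold import Isomap

# Synthetic stand-in (mine) for the setting in the talk: a 3-dimensional
# latent manifold observed through a smooth random map into 23 dimensions,
# then recovered with a standard manifold-learning method.
rng = np.random.default_rng(2)
n, d_latent, d_ambient = 2000, 3, 23

t = rng.uniform(-1.0, 1.0, size=(n, d_latent))        # latent coordinates
A = rng.standard_normal((d_latent, d_ambient))
X = np.tanh(t @ A) + 0.01 * rng.standard_normal((n, d_ambient))

embedding = Isomap(n_neighbors=10, n_components=3).fit_transform(X)
print(embedding.shape)                                 # (2000, 3)
```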
