Learning Sparse Data with Near-optimal Speed and Efficiency from a Variety of Measurement Processes by Sidharth Jaggi
Compressive sensing, in which a k-sparse signal in an ambient real space of much larger dimension n is to be reconstructed via linear measurements, is a canonical example of a sparse recovery problem. There are others: network tomography (in which the structure of a network is to be probed via measurements that are linear but constrained by the (unknown) structure of the network), group testing (in which the sparse signal is binary and the measurements are non-linear (disjunctive)), and phase retrieval (in which a real sparse signal is to be reconstructed via amplitude measurements). For each of these we provide algorithms that are information-theoretically near-optimal (up to at most poly-log(n) factors) in both computational and sample complexity, and that can be made robust to noise. "Structured sparse measurements" play a key role in this framework, but the framework also needs to be tweaked for the idiosyncrasies of each measurement process.
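To make the contrast between these measurement processes concrete, here is a minimal sketch (in Python with NumPy; the dimensions, sparsity level, and pooling design below are arbitrary illustrative choices, not parameters from the talk) of the three measurement models the abstract mentions applied to the same sparse support: linear compressive-sensing measurements, disjunctive group-testing measurements, and amplitude-only phase-retrieval measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 1000, 100, 5  # ambient dimension, number of measurements, sparsity

# A k-sparse real signal x in R^n
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Compressive sensing: real-valued linear measurements y = A x
A = rng.standard_normal((m, n)) / np.sqrt(m)
y_cs = A @ x

# Group testing: the signal is binary, and each test pools a random
# subset of items; a test reads positive iff it contains at least one
# "defective" item -- a non-linear (disjunctive / logical OR) measurement.
x_bin = np.zeros(n, dtype=bool)
x_bin[support] = True
pools = rng.random((m, n)) < (1.0 / k)  # Bernoulli pooling design
y_gt = (pools & x_bin).any(axis=1)

# Phase retrieval: only the amplitudes of the linear measurements survive.
y_pr = np.abs(A @ x)

print(y_cs[:5])  # real-valued linear measurements
print(y_gt[:5])  # boolean disjunctive measurements
print(y_pr[:5])  # magnitude-only measurements
```

The recovery algorithms differ per model, but the sketch shows why: the group-testing and phase-retrieval observations discard information (sign and magnitude, respectively) that the linear model keeps, which is one source of the per-process idiosyncrasies the abstract refers to.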