In compressive sensing, the earliest results used randomization as a way to compress signals, but randomization is in fact a deeper phenomenon. This week, we saw in Extreme Compressive Sampling for Covariance Estimation that sparsity was not central to the dimension reduction argument. Here is another paper that further enlightens us on this very specific issue. From the paper:
At the heart of our analysis is a theorem that shows that matrices that preserve the Euclidean norm of sparse vectors (a.k.a. RIP matrices), when multiplied by a random sign pattern, preserve the Euclidean norm of any set. Roughly stated, linear transforms that provide low distortion embedding of sparse vectors also allow low distortion embedding of any set! We believe that our result provides a rigorous justification for replacing “slow” Gaussian matrices with “fast” and computationally friendly matrices in many scientific and engineering disciplines. Indeed, in a companion paper [18] we utilize our results in this paper to develop sharp rates of convergence for various optimization problems involving such matrices.
Isometric sketching of any set via the Restricted Isometry Property by Samet Oymak, Benjamin Recht, Mahdi Soltanolkotabi
In this paper we show that, for the purposes of dimensionality reduction, a certain class of structured random matrices behaves similarly to random Gaussian matrices. This class includes several matrices for which the matrix-vector multiply can be computed in log-linear time, providing efficient dimensionality reduction of general sets. In particular, we show that using such matrices any set in high dimensions can be embedded into lower dimensions with near optimal distortion. We obtain our results by connecting dimensionality reduction of any set to dimensionality reduction of sparse vectors via a chaining argument.
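To make the "fast and computationally friendly" construction concrete, here is a minimal numerical sketch (my own illustration, not code from the paper or its companion): a randomly subsampled Walsh-Hadamard transform, an archetypal RIP matrix, composed with a random sign pattern, exactly the recipe described in the quote above. The helper names (fwht, embed) and the dimensions are my own choices; the point is that the matrix-vector multiply costs O(n log n) while the Euclidean norms of arbitrary, non-sparse vectors are still approximately preserved.

```python
# Illustrative sketch only, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def fwht(x):
    """Fast Walsh-Hadamard transform, O(n log n); len(x) must be a power of 2."""
    x = x.copy()
    n, h = len(x), 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x / np.sqrt(n)  # orthonormal scaling

n, m = 1024, 128                        # ambient and sketch dimensions
signs = rng.choice([-1.0, 1.0], n)      # random sign pattern, as in the quote above
rows = rng.choice(n, m, replace=False)  # random row subsampling of the RIP matrix

def embed(x):
    """Sketch x -> sqrt(n/m) * (subsampled Hadamard) * (random signs) * x."""
    return np.sqrt(n / m) * fwht(signs * x)[rows]

# Empirical distortion on an arbitrary (dense, non-sparse) cloud of points.
X = rng.standard_normal((50, n))
ratios = [np.linalg.norm(embed(x)) / np.linalg.norm(x) for x in X]
print(f"norm ratios in [{min(ratios):.3f}, {max(ratios):.3f}]")
```

Running this typically prints ratios clustered around 1: the 1024-dimensional points survive the drop to 128 dimensions with low distortion, the behavior the theorem guarantees well beyond sparse vectors.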
Credit: ESA/Rosetta/NAVCAM, CC BY-SA IGO 3.0. Satellite: Rosetta. Depicts: Comet 67P/Churyumov-Gerasimenko. Date: 28 May 2015.
1 comment:
The way this is worded, "any set" seems ambitious. We know that when the number of measurements is less than the signal length, there will be subspaces mapped to the same measurements. How will we "embed" a signal when another signal may get mapped to the same measurements?