If you thought some of the techniques used to reconstruct signals took a long time, you've never tried "infinite-dimensional convex optimization" :-). This is the subject of today's first paper: Compressive Sampling in Infinite Dimensions by Anders C. Hansen. The abstract reads:
We generalize the theory of Compressive Sampling in Cn to infinite dimensional Hilbert spaces. The typical O(log(n)) estimates (where n is the dimension of the space) are manipulated to fit an infinite dimensional framework.
Looking at the other interests of the author, I wonder aloud whether there is a connection between the computation of pseudospectra and the RIP/null space conditions. For more information on computing pseudospectra of rectangular matrices, one can check Eigenvalues and Pseudospectra of Rectangular Matrices by Thomas Wright and L. Nick Trefethen. You'd think there is a connection and that Theorem 2 would help (since multiplying a matrix with a class of sparse vectors really amounts to reducing the number of columns of that matrix). There is also a connection between DOA estimation and pseudospectra, and DOA has been the subject of several papers mentioned here before.
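To make the rectangular-pseudospectra idea concrete, here is a minimal sketch of computing an epsilon-pseudospectrum on a grid, using the smallest-singular-value characterization (the toy matrix, grid, and helper name are my own illustrative choices, not from the paper):

```python
# Sketch: epsilon-pseudospectrum of a rectangular matrix on a grid,
# via the smallest singular value of A - z*I_rect, where I_rect is
# the rectangular identity (ones on the diagonal, zeros elsewhere).
# The matrix A and grid below are arbitrary illustrative choices.
import numpy as np

def pseudospectrum(A, re, im):
    """Return sigma_min(A - z * I_rect) over the grid re x im."""
    m, n = A.shape
    I_rect = np.eye(m, n)
    sig = np.empty((len(im), len(re)))
    for j, y in enumerate(im):
        for i, x in enumerate(re):
            z = x + 1j * y
            # the smallest singular value measures how close A - z*I_rect
            # is to losing full column rank
            sig[j, i] = np.linalg.svd(A - z * I_rect, compute_uv=False)[-1]
    return sig

# toy 4x2 matrix; level sets of `sig` at epsilon trace out Lambda_eps(A)
A = np.array([[2.0, 0.0], [0.0, 3.0], [0.1, 0.2], [0.0, 0.1]])
grid = np.linspace(0.0, 4.0, 20)
sig = pseudospectrum(A, grid, grid)
```

Dropping columns of a sensing matrix (as in the sparse-vector remark above) just shrinks n in this computation, which is why one might hope the rectangular theory carries over.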
Volkan Cevher let me know of a new paper, Distributed Bearing Estimation via Matrix Completion by Andrew Waters and Volkan Cevher. The abstract reads:
We consider bearing estimation of multiple narrow-band plane waves impinging on an array of sensors. For this problem, bearing estimation algorithms such as minimum variance distortionless response (MVDR), multiple signal classification, and maximum likelihood generally require the array covariance matrix as sufficient statistics. Interestingly, the rank of the array covariance matrix is approximately equal to the number of the sources, which is typically much smaller than the number of sensors in many practical scenarios. In these scenarios, the covariance matrix is low-rank and can be estimated via matrix completion from only a small subset of its entries. We propose a distributed matrix completion framework to drastically reduce the inter-sensor communication in a network while still achieving near-optimal bearing estimation accuracy. Using recent results in noisy matrix completion, we provide sampling bounds and show how the additive noise at the sensor observations affects the reconstruction performance. We demonstrate via simulations that our approach sports desirable tradeoffs between communication costs and bearing estimation accuracy.

Finally, today's arXiv new addition: Optimal incorporation of sparsity information by weighted $L_1$ optimization by Toshiyuki Tanaka and Jack Raymond. The abstract reads:
Compressed sensing of sparse sources can be improved by incorporating prior knowledge of the source. In this paper we demonstrate a method for optimal selection of weights in weighted $L_1$ norm minimization for a noiseless reconstruction model, and show the improvements in compression that can be achieved.
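In the noiseless setting the abstract describes, weighted $L_1$ minimization can be posed as an ordinary linear program. Here is a minimal sketch (the weights, problem sizes, and support prior below are illustrative assumptions of mine, not the paper's optimal weight selection):

```python
# Sketch: noiseless weighted L1 reconstruction as a linear program,
#   min sum_i w_i |x_i|  subject to  A x = y,
# using the standard split t >= |x| with stacked variables [x; t].
# Sizes, weights, and the support prior are illustrative only.
import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, y, w):
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])           # objective: w . t
    # inequalities  x - t <= 0  and  -x - t <= 0  encode t >= |x|
    A_ub = np.block([[np.eye(n), -np.eye(n)],
                     [-np.eye(n), -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])        # equality: A x = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

rng = np.random.default_rng(0)
A = rng.standard_normal((15, 40))
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [1.5, -2.0, 0.7]
y = A @ x_true
# crude stand-in for prior knowledge: down-weight indices believed
# to be in the support, so the penalty there is cheaper
w = np.ones(40)
w[[3, 17, 29]] = 0.1
x_hat = weighted_l1(A, y, w)
```

Uniform weights recover plain $L_1$ minimization; the paper's contribution, as I read the abstract, is how to choose the weights optimally from the source prior rather than by the crude down-weighting used here.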