Wednesday, February 20, 2008

Compressed Sensing: More on Deterministic Subsampling.


The current crop of deterministic schemes devised to perform compressed sensing measurements or subsample below Nyquist is listed here. To that list we have to add the upcoming paper of Yves Meyer and Basarab Matei. Improving on a previous algorithm, Mark Iwen and Craig Spencer have just released a new preprint entitled Improved Bounds for a Deterministic Sublinear-Time Sparse Fourier Algorithm. The abstract reads:
This paper improves on the best-known runtime and measurement bounds for a recently proposed Deterministic sublinear-time Sparse Fourier Transform algorithm (hereafter called DSFT). In [1], [2], it is shown that DSFT can exactly reconstruct the Fourier transform (FT) of an N-bandwidth signal f, consisting of B ≪ N non-zero frequencies, using O(B² · polylog(N)) time and O(B² · polylog(N)) f-samples. DSFT works by taking advantage of natural aliasing phenomena to hash a frequency sparse signal's FT information modulo O(B · polylog(N)) pairwise coprime numbers via O(B · polylog(N)) small Discrete Fourier Transforms. Number theoretic arguments then guarantee the original DFT frequencies/coefficients can be recovered via the Chinese Remainder Theorem. DSFT's usage of primes makes its runtime and signal sample requirements highly dependent on the sizes of sums and products of small primes. Our new bounds utilize analytic number theoretic techniques to generate improved (asymptotic) bounds for DSFT. As a result, we provide better bounds for the sampling complexity/number of low-rate analog-to-digital converters (ADCs) required to deterministically recover frequency-sparse wideband signals via DSFT in signal processing applications [3], [4].
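
To get a feel for the aliasing trick at the heart of DSFT, here is a small Python sketch (mine, not the authors' code) of the simplest one-tone case: subsampling at a set of pairwise coprime rates p folds an unknown frequency f0 down to f0 mod p, and the Chinese Remainder Theorem then pieces f0 back together from those residues. The bandwidth N, the moduli, and the frequency below are made-up toy values.

import numpy as np
from math import prod

def aliased_dft_peak(f, p):
    # Take p equally spaced samples of f over one period; for a single tone
    # at frequency f0, the dominant bin of the length-p DFT sits at f0 mod p
    # (natural aliasing).
    samples = f(np.arange(p) / p)
    spectrum = np.fft.fft(samples)
    return int(np.argmax(np.abs(spectrum)))

def crt(residues, moduli):
    # Chinese Remainder Theorem: recover x mod prod(moduli) from x mod p_i.
    M = prod(moduli)
    x = 0
    for r, p in zip(residues, moduli):
        Mi = M // p
        x += r * Mi * pow(Mi, -1, p)   # pow(Mi, -1, p) is the inverse of Mi mod p
    return x % M

N = 1000                               # bandwidth: the true frequency lies in [0, N)
f0 = 787                               # unknown frequency (1-sparse toy spectrum)
signal = lambda t: np.exp(2j * np.pi * f0 * t)

moduli = [7, 11, 13]                   # pairwise coprime, 7 * 11 * 13 = 1001 > N
residues = [aliased_dft_peak(signal, p) for p in moduli]
print(crt(residues, moduli))           # recovers 787 from only 7 + 11 + 13 samples

The full algorithm handles B-sparse spectra, not just one tone, which is where the O(B · polylog(N)) coprime moduli and the careful prime-size accounting of the paper come in.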

This is an improvement over the previous algorithm presented here by Mark Iwen in A Deterministic Sub-linear Time Sparse Fourier Algorithm via Non-adaptive Compressed Sensing Methods.

Also in line with probabilistic schemes for helping your average LAPACK library (see Random Projection Lapack?), Mark Iwen and Craig Spencer are making available A Note on Compressed Sensing and Rapid Matrix Multiplication, for the case where you know the result of a matrix multiplication will be a sparse matrix. That is not a frequent occurrence, but it is a step in the right direction. The abstract reads:
This paper discusses how compressed sensing methods can be used to (approximately) multiply two square matrices quickly if the product is known to be sparse. Let A and B denote square N × N matrices, ε > 0, and c > 0. If A · B is compressible in each column, we can obtain a near-optimal best cN^0.29462-element-per-column approximation to A · B using O(N^(2+ε)) operations. More specifically, if each column of the product A · B has at most cN^0.29462 non-zero elements, then we can calculate the product A · B exactly by using O(N^(2+ε)) operations.
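
The underlying idea is easy to sketch: if each column of C = A · B is k-sparse, one can apply an m × N measurement matrix S on the left, compute (S · A) · B in O(m · N²) operations instead of N³, and then decode each sparse column of C from its m measurements. The toy Python below is a hedged illustration only: it uses a random Gaussian S and Orthogonal Matching Pursuit as the decoder, whereas the paper's construction is deterministic, and all the sizes are made-up.

import numpy as np

def omp(S, y, k):
    # Orthogonal Matching Pursuit: recover a k-sparse x from y = S @ x.
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(S.T @ residual))))
        coef, *_ = np.linalg.lstsq(S[:, support], y, rcond=None)
        residual = y - S[:, support] @ coef
    x = np.zeros(S.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
N, k, m = 200, 3, 40                    # matrix size, column sparsity, measurements

# Build A, B so that every column of C = A @ B is exactly k-sparse.
C = np.zeros((N, N))
for j in range(N):
    C[rng.choice(N, k, replace=False), j] = rng.standard_normal(k)
A = rng.standard_normal((N, N))
B = np.linalg.solve(A, C)               # then A @ B == C by construction

S = rng.standard_normal((m, N)) / np.sqrt(m)   # toy measurement matrix
Y = (S @ A) @ B                         # m * N^2 work instead of N^3

C_hat = np.column_stack([omp(S, Y[:, j], k) for j in range(N)])
print(np.allclose(C_hat, C, atol=1e-6)) # True: the sparse product is recovered

The point of the exponent in the abstract is that the number of measurements, and hence the decoding cost per column, can be kept small enough that the whole pipeline stays at O(N^(2+ε)) operations.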



Credit photo: NASA/JPL-Caltech/Cornell University, View from Spirit's Overwintering Position, photo taken Jan. 29, 2008.
