In CoSaMP or Subspace Pursuit, one must first guess the sparsity of the signal before reconstructing it. Thong Do, Lu Gan, Nam Nguyen and Trac Tran propose a new greedy scheme called SAMP that removes this requirement by estimating the sparsity adaptively within the algorithm. The algorithm is proposed in Sparsity adaptive matching pursuit algorithm for practical compressed sensing. The abstract reads:
This paper presents a novel iterative greedy reconstruction algorithm for practical compressed sensing (CS), called sparsity adaptive matching pursuit (SAMP). Compared with other state-of-the-art greedy algorithms, the most innovative feature of SAMP is its capability of signal reconstruction without prior information on the sparsity. This makes it a promising candidate for many practical applications where the number of non-zero (significant) coefficients of a signal is not available. The proposed algorithm adopts a flavor similar to the EM algorithm, alternately estimating the sparsity and the true support set of the target signals. In fact, SAMP provides a generalized greedy reconstruction framework in which orthogonal matching pursuit and subspace pursuit can be viewed as special cases. This connection also gives an intuitive justification of the trade-offs between computational complexity and reconstruction performance. While SAMP offers theoretical guarantees comparable to those of the best optimization-based approaches, simulation results show that it outperforms many existing iterative algorithms, especially for compressible signals.
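For readers who want to experiment, here is a minimal numpy sketch of the stage-wise idea as I read it from the abstract: maintain a support estimate of size L, refine it subspace-pursuit-style, and grow L by a step size whenever the residual stops shrinking. This is not the authors' exact pseudocode; the step size, stopping rule, and variable names are illustrative.

import numpy as np

def samp(y, A, step=1, tol=1e-6, max_iter=None):
    # Sparsity adaptive matching pursuit, sketched from the paper's
    # description; parameters are illustrative, not the authors' values.
    m, n = A.shape
    max_iter = max_iter or m
    L = step                      # current sparsity estimate (stage size)
    F = np.array([], dtype=int)   # current support estimate
    r = y.copy()                  # residual
    for _ in range(max_iter):
        # preliminary test: merge the L atoms most correlated with r into F
        Sk = np.argsort(-np.abs(A.T @ r))[:L]
        C = np.union1d(F, Sk)
        # final test: least squares on the candidate set, keep the L largest
        xC, *_ = np.linalg.lstsq(A[:, C], y, rcond=None)
        Fnew = C[np.argsort(-np.abs(xC))[:L]]
        xF, *_ = np.linalg.lstsq(A[:, Fnew], y, rcond=None)
        rnew = y - A[:, Fnew] @ xF
        if np.linalg.norm(rnew) >= np.linalg.norm(r):
            L += step             # stage switch: grow the sparsity estimate
        else:
            F, r = Fnew, rnew     # same stage: refine the support
        if np.linalg.norm(r) < tol:
            break
    x = np.zeros(n)
    if F.size:
        x[F], *_ = np.linalg.lstsq(A[:, F], y, rcond=None)
    return x

With step equal to the true sparsity this behaves like subspace pursuit on a single stage, while a step of 1 grows the support one atom at a time in an OMP-like fashion, which is consistent with the paper's claim that both are special cases of the framework.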
Expander graphs have recently been proposed in [1], [2], [11], [21], [22] to construct efficient compressed sensing algorithms. In particular, in [1], [2] it has been shown that any n-dimensional vector that is k-sparse (with k much less than n) can be fully recovered using O(k log(n/k)) measurements and only O(k log n) simple recovery iterations. In this paper we improve upon this result by considering expander graphs with expansion coefficient beyond 3/4 and show that, with the same number of measurements, only O(k) recovery iterations are required, which is a significant improvement when n is large. In fact, full recovery can be done in at most 2k very simple iterations, and the number of iterations can be made arbitrarily close to k. We also show that the recovery algorithm can be implemented very efficiently using a simple binary search tree. We further show that, by tolerating a small penalty on the number of measurements but not on the number of recovery iterations, one can use the efficient construction of a family of expander graphs to come up with explicit measurement matrices for this method. Finally, we compare our result with other recently developed expander-graph-based methods and argue that ours compares favorably both in terms of the number of required measurements and in terms of the recovery time complexity.
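Although the paper's efficient implementation relies on a binary search tree, the core iteration is easy to sketch. Below is a naive numpy version of a gap-based recovery loop over a 0/1 expander adjacency matrix, in the spirit of the O(k)-iteration algorithms described above: each iteration finds a coordinate whose neighboring measurements mostly agree on the same nonzero discrepancy and flips it by that amount. The majority threshold d/2 and the exhaustive search are my own simplifications, not the paper's exact procedure.

import numpy as np

def expander_recover(y, A, d, max_iter=1000):
    # A: m x n 0/1 biadjacency matrix of a left d-regular bipartite
    # expander (assumed); exact arithmetic is assumed throughout.
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(max_iter):
        gap = y - A @ x                    # per-measurement discrepancy
        if not gap.any():
            break                          # all measurements explained
        updated = False
        for i in range(n):
            g = gap[A[:, i] == 1]          # gaps seen by node i's neighbors
            vals, counts = np.unique(g, return_counts=True)
            j = np.argmax(counts)
            # flip x_i when most of its d neighbors agree on the same
            # nonzero gap value
            if vals[j] != 0 and counts[j] > d / 2:
                x[i] += vals[j]
                updated = True
                break
        if not updated:
            break
    return x

The inner scan over all n coordinates is presumably what the binary search tree removes, by indexing candidate flips so that the next one can be located quickly instead of by a full pass.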
See also Anna Gilbert's presentation at the L1 meeting at Texas A&M.
The issue of making the connection between sparse and dense measurement matrices was also examined by Dapo Omidiran and Martin J. Wainwright in High-dimensional subset recovery in noise: Sparsified measurements without loss of statistical efficiency. The abstract reads:
We consider the problem of estimating the support of a vector β ∈ Rp based on observations contaminated by noise. A significant body of work has studied the behavior of ℓ1-relaxations when applied to measurement matrices drawn from standard dense ensembles (e.g., Gaussian, Bernoulli). In this paper, we analyze sparsified measurement ensembles and consider the trade-off between measurement sparsity, as measured by the fraction γ of non-zero entries, and statistical efficiency, as measured by the minimal number of observations n required for exact support recovery with probability converging to one. Our main result is to prove that it is possible to let γ → 0 at some rate, yielding measurement matrices with a vanishing fraction of non-zeros per row while retaining the same statistical efficiency as dense ensembles. A variety of simulation results confirm the sharpness of our theoretical predictions.
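To make the γ trade-off concrete, here is a small numpy sketch of such a sparsified ensemble: each entry is zeroed out independently with probability 1 − γ, and the survivors are rescaled so that second moments match the dense Gaussian case. The function name and scaling convention are my own, chosen for illustration.

import numpy as np

def sparsified_gaussian(n_obs, p, gamma, rng=None):
    # Gaussian ensemble with an expected fraction gamma of non-zero
    # entries per row; rescaling keeps E[X_ij^2] = 1 as in the dense case.
    rng = rng or np.random.default_rng()
    mask = rng.random((n_obs, p)) < gamma
    return rng.standard_normal((n_obs, p)) * mask / np.sqrt(gamma)

Feeding such matrices to any ℓ1-based support recovery routine lets one empirically trace the sparsity/efficiency trade-off the paper analyzes.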
If one is looking at more adaptive measurement matrices, the sparsity of the measurement matrix really depends on the dictionary used to decode signals, as shown by Julio Martin Duarte-Carvajalino and Guillermo Sapiro in Learning to sense sparse signals: Simultaneous sensing matrix and sparsifying dictionary optimization. The abstract reads:
Sparse signal representation, analysis, and sensing have received a lot of attention in recent years from the signal processing, optimization, and learning communities. On the one hand, the learning of overcomplete dictionaries that facilitate a sparse representation of the image as a linear combination of a few atoms from such a dictionary leads to state-of-the-art results in image and video restoration and image classification. On the other hand, the framework of compressed sensing (CS) has shown that sparse signals can be recovered from far fewer samples than required by the classical Shannon-Nyquist theorem. The goal of this paper is to present a framework that unifies the learning of overcomplete dictionaries for sparse image representation with the concepts of signal recovery from very few samples put forward by the CS theory. The samples used in CS correspond to linear projections defined by a sampling projection matrix. It has been shown that, for example, a non-adaptive random sampling matrix satisfies the fundamental theoretical requirements of CS, enjoying the additional benefit of universality. On the other hand, a projection sensing matrix that is optimally designed for a certain signal class can further improve the reconstruction accuracy or further reduce the necessary number of samples. In this work we introduce a framework for the joint design and optimization, from a set of training images, of the overcomplete non-parametric dictionary and the sensing matrix. We show that this joint optimization outperforms both the use of random sensing matrices and of matrices that are optimized independently of the learning of the dictionary. The presentation of the framework and its efficient numerical optimization is complemented with numerous examples on classical image datasets.
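The paper alternates a dictionary update with a sensing-matrix update; as a rough illustration of the sensing-matrix half only, one can drive the Gram matrix of the effective dictionary ΦD toward the identity by plain gradient descent. The objective, step size, and iteration count below are illustrative stand-ins, not the authors' method (which, among other things, uses a more principled update and couples it with dictionary learning).

import numpy as np

def design_sensing_matrix(D, m, steps=500, lr=1e-3, rng=None):
    # Minimize ||(Phi D)^T (Phi D) - I||_F^2 over Phi, i.e. decorrelate
    # the columns of the effective dictionary Phi @ D.
    rng = rng or np.random.default_rng()
    n, K = D.shape
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    I = np.eye(K)
    for _ in range(steps):
        B = Phi @ D                               # effective dictionary
        Phi -= lr * 4 * B @ (B.T @ B - I) @ D.T   # gradient step
    return Phi

Coupling a step like this with a dictionary-learning update (e.g. K-SVD) on training images gives the flavor of the joint optimization the abstract describes.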
Two application papers follow. One is by Vladimir Stankovic, Lina Stankovic, and Samuel Cheng on Compressive video sampling. The abstract reads:
Compressive sampling is a novel framework that exploits sparsity of a signal in a transform domain to perform sampling below the Nyquist rate. In this paper, we apply compressive sampling to significantly reduce the sampling rate of video. A practical system is developed that first splits each video frame into non-overlapping blocks of equal size. Compressive sampling is then performed on sparse blocks, determined by predicting sparsity based on previous reference frames which are sampled conventionally. The blocks identified as sparse are reconstructed using the orthogonal matching pursuit algorithm, whereas the remaining blocks are sampled fully. Thus, the acquisition complexity and sampling time are reduced, while exploiting the local sparsity, within the DCT domain, of a video stream. Our simulation results indicate up to 50% saving in acquisition for Y-components of video with very small performance loss compared to traditional sampling.
V. Stankovic, L. Stankovic, and S. Cheng, "Compressive Sampling of Binary Images," in Proc. 2008 Congress on Image and Signal Processing, Sanya, Hainan, China, May 2008.
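As a rough sketch of the block-classification step described in the abstract above, one might predict a block's sparsity from the co-located block of a conventionally sampled reference frame by counting its significant DCT coefficients. The threshold and k_max below are illustrative parameters, not values from the paper.

import numpy as np
from scipy.fft import dctn

def block_is_sparse(ref_block, k_max, thresh=1e-2):
    # Count DCT coefficients above a fraction of the peak magnitude;
    # few significant coefficients -> treat the co-located block as
    # sparse and sample it compressively, else sample it fully.
    c = dctn(ref_block, norm='ortho')
    k = np.sum(np.abs(c) > thresh * np.abs(c).max())
    return k <= k_max

Blocks flagged as sparse would then be measured with a compressive matrix and reconstructed with orthogonal matching pursuit, as in the paper, while the remaining blocks are sampled conventionally.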
The other is in the area of MRI, where Tolga Cukur, Michael Lustig, and Dwight Nishimura present Improving non-contrast-enhanced steady-state free precession angiography with compressed sensing. The abstract reads:
Flow-independent angiography offers the ability to produce vessel images without contrast agents. These angiograms can be acquired with magnetization-prepared three-dimensional balanced steady-state free precession sequences, where the phase encodes are interleaved and the preparation is repeated prior to each interleave. However, frequent repetition of the preparation significantly decreases the scan efficiency. The number of excitations can instead be reduced with compressed sensing, which exploits the sparsity of the angiograms. Hence, the phase encodes can be undersampled to save scan time without significantly degrading image quality. These savings can be allotted to preparing the magnetization more often or, alternatively, to improving resolution. The enhanced resolution and contrast achieved with the proposed method are demonstrated with lower leg angiograms. Depiction of the vasculature is significantly improved with the increased resolution in the phase-encode plane and higher blood-to-background contrast.

Image credit: NASA/JPL-Caltech/University of Arizona/Texas A&M, Baby Bear, Sol 14, image from the Phoenix lander, Mark Lemmon's Surface Stereo Imager site.
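For background on the reconstruction step this abstract relies on: the standard compressed-sensing MRI recovery problem (in the style of Lustig et al.'s SparseMRI work; given here as the generic formulation, not necessarily the exact one used in this paper) is

minimize ||Ψx||_1   subject to   ||F_u x − y||_2 ≤ ε,

where x is the image, Ψ a sparsifying transform, F_u the Fourier operator restricted to the acquired (undersampled) phase encodes, y the measured k-space data, and ε a noise tolerance. Undersampling the phase encodes shrinks y, which is exactly where the scan-time savings described above come from.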