First, here is a revised version of Introduction to the non-asymptotic analysis of random matrices by Roman Vershynin.
Then, we have:
Deterministic Bounds for Restricted Isometry of Compressed Sensing Matrices by Shriram Sarvotham, Richard Baraniuk. The abstract reads:
Compressed Sensing (CS) is an emerging field that enables reconstruction of a sparse signal x \in R^n that has only k \ll n non-zero coefficients from a small number m \ll n of linear projections. The projections are obtained by multiplying x by a matrix \Phi \in R^{m \times n} - called a CS matrix - where k < m \ll n. In this work, we ask the following question: given the triplet {k, m, n} that defines the CS problem size, what are the deterministic limits on the performance of the best CS matrix in R^{m \times n}? We select Restricted Isometry as the performance metric. We derive two deterministic converse bounds and one deterministic achievable bound on the Restricted Isometry for matrices in R^{m \times n} in terms of n, m and k. The first converse bound (structural bound) is derived by exploiting the intricate relationships between the singular values of sub-matrices and the complete matrix. The second converse bound (packing bound) and the achievable bound (covering bound) are derived by recognizing the equivalence of CS matrices to codes on Grassmannian spaces. Simulations reveal that random Gaussian \Phi provide far from optimal performance. The derivation of the three bounds offers several new geometric insights that relate optimal CS matrices to equi-angular tight frames, codes on Grassmannian spaces, and the Generalized Pythagorean Theorem (GPT).
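As a quick illustration of the quantity these bounds constrain, here is a minimal sketch (mine, not the authors') that lower-bounds the restricted isometry constant \delta_k of a Gaussian \Phi by Monte Carlo sampling of k-column sub-matrices and checking their extreme singular values, the same sub-matrix/complete-matrix relationship the structural bound exploits:

```python
# A minimal sketch (not from the paper): empirically probe delta_k of a
# Gaussian CS matrix by sampling random k-column submatrices and
# inspecting their extreme singular values.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 24, 4
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # entry variance 1/m

delta_est = 0.0
for _ in range(2000):  # Monte Carlo over k-subsets; exhaustive search is C(n,k)
    S = rng.choice(n, size=k, replace=False)
    s = np.linalg.svd(Phi[:, S], compute_uv=False)
    # eigenvalues of Phi_S' Phi_S must lie in [1 - delta, 1 + delta]
    delta_est = max(delta_est, max(s[0]**2 - 1.0, 1.0 - s[-1]**2))

print(f"lower estimate of delta_{k}: {delta_est:.3f}")
```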
Compressed classification of observation sets with linear subspace embeddings by Dorina Thanou, Pascal Frossard. The abstract reads:
We consider the problem of classification of a pattern from multiple compressed observations that are collected in a sensor network. In particular, we exploit the properties of random projections in generic sensor devices and we take some first steps in introducing linear dimensionality reduction techniques in the compressed domain. We design a classification framework that consists of embedding the low dimensional classification space given by classical linear dimensionality reduction techniques in the compressed domain. The measurements of the multiple observations are then projected onto the new classification subspace and are finally aggregated in order to reach a classification decision. Simulation results verify the effectiveness of our scheme and illustrate that compressed measurements combined with information diversity lead to efficient dimensionality reduction in simple sensing architectures.
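The embedding of the classification space into the compressed domain invites a small sketch. This is my reading of the general flavor, not the authors' exact scheme: the LDA-style basis W, the class means mu, and the nearest-centroid vote are all illustrative placeholders.

```python
# A minimal sketch: each sensor compresses its observation with a random
# projection, the low-dimensional classification basis W is mapped into
# the compressed domain, and per-sensor decisions are aggregated by vote.
import numpy as np

rng = np.random.default_rng(1)
n, m, d, n_sensors = 200, 40, 2, 5

mu = rng.standard_normal((3, n))                   # hypothetical class means (3 classes)
W, _ = np.linalg.qr(rng.standard_normal((n, d)))   # stand-in for an LDA basis from training

x = mu[1] + 0.1 * rng.standard_normal(n)           # pattern drawn from class 1

votes = []
for _ in range(n_sensors):
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # sensor's random projection
    y = A @ x                                      # compressed observation
    W_c = A @ W                                    # classification basis embedded in compressed domain
    z = np.linalg.lstsq(W_c, y, rcond=None)[0]     # project measurement onto the subspace
    refs = np.linalg.lstsq(W_c, A @ mu.T, rcond=None)[0].T  # class centroids, same coordinates
    votes.append(int(np.argmin(np.linalg.norm(refs - z, axis=1))))

print("votes:", votes, "-> decision:", np.bincount(votes).argmax())
```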
The Restricted Isometry Property for Block Diagonal Matrices by Han Lun Yap, Armin Eftekhari, Michael B. Wakin, and Christopher J. Rozell. The abstract reads:
In compressive sensing (CS), the Restricted Isometry Property (RIP) is a powerful condition on measurement operators which ensures robust recovery of sparse vectors is possible from noisy, undersampled measurements via computationally tractable algorithms. Early papers in CS showed that Gaussian random matrices satisfy the RIP with high probability, but such matrices are usually undesirable in practical applications due to storage limitations, computational considerations, or the mismatch of such matrices with certain measurement architectures. To alleviate some or all of these difficulties, recent research efforts have focused on structured random matrices. In this paper, we study block diagonal measurement matrices where each block on the main diagonal is itself a Gaussian random matrix. The main result of this paper shows that such matrices can indeed satisfy the RIP but that the requisite number of measurements depends on the coherence of the basis in which the signals are sparse. In the best case—for signals that are sparse in the frequency domain—these matrices perform nearly as well as dense Gaussian random matrices despite having many fewer nonzero entries.
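A minimal sketch of the measurement structure under study, assuming J independent Gaussian blocks; the frequency-sparse test signal reflects the best case identified by the main result:

```python
# Build a block diagonal measurement operator with Gaussian blocks and
# apply it to a signal that is sparse in the frequency (DFT) domain.
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(2)
J, n_b, m_b = 8, 32, 8                 # J blocks, each m_b x n_b
n, m = J * n_b, J * m_b

# entry variance 1/m_b so that E||Phi x||^2 = ||x||^2 for block diagonal Phi
blocks = [rng.standard_normal((m_b, n_b)) / np.sqrt(m_b) for _ in range(J)]
Phi = block_diag(*blocks)              # m x n, mostly zeros

k = 5                                  # k-sparse in frequency: x = F^H alpha
alpha = np.zeros(n, dtype=complex)
alpha[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x = np.fft.ifft(alpha) * np.sqrt(n)    # unitary inverse DFT

y = Phi @ x
print("measurements:", y.shape, "nonzeros in Phi:", np.count_nonzero(Phi))
```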
Sparsity Penalties in Dynamical System Estimation by Adam Charles, M. Salman Asif, Justin Romberg, Christopher Rozell. The abstract reads:
In this work we address the problem of state estimation in dynamical systems using recent developments in compressive sensing and sparse approximation. We formulate the traditional Kalman filter as a one-step update optimization procedure which leads us to a more unified framework, useful for incorporating sparsity constraints. We introduce three combinations of two sparsity conditions (sparsity in the state and sparsity in the innovations) and write recursive optimization programs to estimate the state for each model. This paper is meant as an overview of different methods for incorporating sparsity into the dynamic model, a presentation of algorithms that unify the support and coefficient estimation, and a demonstration that these suboptimal schemes can actually show some performance improvements (either in estimation error or convergence time) over standard optimal methods that use an impoverished model.
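The reformulation at the heart of the framework, the Kalman measurement update as a one-step quadratic optimization, is easy to verify numerically; this is the hook for swapping quadratic penalties for sparsity penalties. A minimal sketch with placeholder system matrices:

```python
# Verify that the Kalman measurement update equals the minimizer of
#   min_x (y - Cx)' R^{-1} (y - Cx) + (x - x_pred)' P_pred^{-1} (x - x_pred)
import numpy as np

rng = np.random.default_rng(3)
n, p = 6, 4
F = 0.9 * np.eye(n)                        # state transition
C = rng.standard_normal((p, n))            # measurement matrix
Q, R = 0.1 * np.eye(n), 0.05 * np.eye(p)   # process / measurement noise covariances

x_prev, P_prev = rng.standard_normal(n), np.eye(n)
y = rng.standard_normal(p)

# prediction step
x_pred = F @ x_prev
P_pred = F @ P_prev @ F.T + Q

# classical Kalman gain update
K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
x_kalman = x_pred + K @ (y - C @ x_pred)

# the same update as a one-step optimization (information form)
Ri, Pi = np.linalg.inv(R), np.linalg.inv(P_pred)
x_opt = np.linalg.solve(C.T @ Ri @ C + Pi, C.T @ Ri @ y + Pi @ x_pred)

print("max difference:", np.abs(x_kalman - x_opt).max())  # ~1e-15
```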
Stable Manifold Embeddings with Operators Satisfying the Restricted Isometry Property by Han Lun Yap, Michael B. Wakin, and Christopher J. Rozell. The abstract reads:
Signals of interest can often be thought to come from a low dimensional signal model. The exploitation of this fact has led to many recent interesting advances in signal processing, one notable example being in the field of compressive sensing (CS). The literature on CS has established that many matrices satisfy the Restricted Isometry Property (RIP), which guarantees a stable (i.e., distance-preserving) embedding of a sparse signal model from an undersampled linear measurement system. In this work, we study the stable embedding of manifold signal models using matrices that satisfy the RIP. We show that by paying reasonable additional factors in the number of measurements, all matrices that satisfy the RIP can also be used (in conjunction with a random sign sequence) to obtain a stable embedding of a manifold.
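A minimal sketch of the construction in the result: compose a stand-in RIP matrix with a random sign sequence and check how pairwise distances on a toy manifold (a circle in R^n) survive the embedding.

```python
# Apply Phi @ diag(eps) (RIP matrix times random sign sequence) to samples
# from a simple 1-D manifold and measure the distortion of pairwise distances.
import numpy as np

rng = np.random.default_rng(4)
n, m = 256, 40
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in RIP matrix
eps = rng.choice([-1.0, 1.0], size=n)            # random sign sequence

t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
X = np.zeros((100, n))
X[:, 0], X[:, 1] = np.cos(t), np.sin(t)          # circle embedded in R^n

Y = (Phi * eps) @ X.T                            # Phi @ diag(eps) @ x, batched

D_x = np.linalg.norm(X[:, None] - X[None, :], axis=2)
D_y = np.linalg.norm(Y.T[:, None] - Y.T[None, :], axis=2)
mask = D_x > 0
ratios = D_y[mask] / D_x[mask]
print("distance ratio range:", ratios.min(), ratios.max())
```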
Estimation and Dynamic Updating of Time-Varying Signals with Sparse Variations by M. Salman Asif, Adam Charles, Justin Romberg, and Christopher J. Rozell. The abstract reads:
This paper presents an algorithm for an \ell_1-regularized Kalman filter. Given observations of a discrete-time linear dynamical system with sparse errors in the state evolution, we estimate the state sequence by solving an optimization program that balances fidelity to the measurements (measured by the standard \ell_2 norm) against the sparsity of the innovations (measured using the \ell_1 norm). We also derive an efficient algorithm for updating the estimate as the system evolves. This dynamic updating algorithm uses a homotopy scheme that tracks the solution as new measurements are slowly worked into the system and old measurements are slowly removed. The effective cost of adding new measurements is a number of low-rank updates to the solution of a linear system of equations that is roughly proportional to the joint sparsity of all the innovations in the time interval of interest.
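A minimal sketch of the underlying optimization, with a plain ISTA loop standing in for the paper's homotopy solver; the system matrices are illustrative placeholders:

```python
# Estimate the state by penalizing the l1 norm of the innovation
# e = x - F x_prev while fitting the measurement y = C x in l2:
#   min_e ||r - C e||^2 + lam * ||e||_1,  with r = y - C F x_prev.
import numpy as np

rng = np.random.default_rng(5)
n, p, lam = 50, 30, 0.1
F = np.eye(n)
C = rng.standard_normal((p, n)) / np.sqrt(p)

x_prev = rng.standard_normal(n)
e_true = np.zeros(n)
e_true[rng.choice(n, 3, replace=False)] = 1.0     # sparse innovation
y = C @ (F @ x_prev + e_true) + 0.01 * rng.standard_normal(p)

r = y - C @ (F @ x_prev)
L = np.linalg.norm(C, 2) ** 2                     # spectral norm squared
e = np.zeros(n)
for _ in range(500):                              # ISTA iterations
    z = e - C.T @ (C @ e - r) / L
    e = np.sign(z) * np.maximum(np.abs(z) - lam / (2 * L), 0.0)  # soft threshold

x_hat = F @ x_prev + e
print("recovered support:", np.nonzero(np.abs(e) > 0.1)[0], "true:", np.nonzero(e_true)[0])
```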
This paper introduces a novel approach to learning a dictionary in a sparsity-promoting analysis-type prior. The dictionary is optimized in order to optimally restore a set of exemplars from their degraded noisy versions. Towards this goal, we cast our problem as a bilevel programming problem for which we propose a gradient descent algorithm to reach a stationary point that might be a local minimizer. When the dictionary analysis operator specializes to a convolution, our method turns out to be a way of learning a generalized total variation-type prior. Applications to 1-D signal denoising are reported and potential applicability and extensions are discussed.
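To make the convolutional specialization concrete, here is a minimal sketch of denoising with such an analysis prior, where the operator is a fixed finite-difference convolution and the prior reduces to a smoothed total-variation penalty (this illustrates the prior being learned, not the bilevel learning itself):

```python
# Analysis-prior denoising of a 1-D signal:
#   min_x ||x - y||^2 + lam * sum_i phi((Omega x)_i),
# with Omega a finite-difference operator and phi(t) = sqrt(t^2 + eps)
# a smooth surrogate for |t|, solved by plain gradient descent.
import numpy as np

rng = np.random.default_rng(6)
n, lam, eps = 200, 2.0, 1e-3
clean = np.repeat([0.0, 1.0, -0.5, 0.8], n // 4)  # piecewise-constant signal
y = clean + 0.1 * rng.standard_normal(n)

def Omega(x):             # forward differences (the analysis operator)
    return np.diff(x)

def Omega_T(u):           # its adjoint
    out = np.zeros(len(u) + 1)
    out[:-1] -= u
    out[1:] += u
    return out

x = y.copy()
step = 1.0 / (2.0 + 4.0 * lam / np.sqrt(eps))     # conservative 1/Lipschitz step
for _ in range(2000):
    u = Omega(x)
    grad = 2.0 * (x - y) + lam * Omega_T(u / np.sqrt(u**2 + eps))
    x -= step * grad

print("noisy MSE:", np.mean((y - clean)**2), "denoised MSE:", np.mean((x - clean)**2))
```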
Learning Sparse Codes for Hyperspectral Imagery by Adam S. Charles, Bruno A. Olshausen, and Christopher J. Rozell. The abstract reads:
The spectral features in hyperspectral imagery (HSI) contain significant structure that, if properly characterized, could enable more efficient data acquisition and improved data analysis. Because most pixels contain reflectances of just a few materials, we propose that a sparse coding model is well-matched to HSI data. Sparsity models consider each pixel as a combination of just a few elements from a larger dictionary, and this approach has proven effective in a wide range of applications. Furthermore, previous work has shown that optimal sparse coding dictionaries can be learned from a dataset with no other a priori information (in contrast to many HSI “endmember” discovery algorithms that assume the presence of pure spectra or side information). We modified an existing unsupervised learning approach and applied it to HSI data (with significant ground truth labeling) to learn an optimal sparse coding dictionary. Using this learned dictionary, we demonstrate three main findings: i) the sparse coding model learns spectral signatures of materials in the scene and locally approximates nonlinear manifolds for individual materials, ii) this learned dictionary can be used to infer HSI-resolution data with very high accuracy from simulated imagery collected at multispectral-level resolution, and iii) this learned dictionary improves the performance of a supervised classification algorithm, both in terms of the classifier complexity and generalization from very small training sets.
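A minimal sketch of the sparse coding model applied to a single pixel's spectrum, with a random stand-in dictionary (the paper learns its dictionary from data) and plain orthogonal matching pursuit for inference:

```python
# Approximate one pixel's spectrum as a combination of a few dictionary
# atoms, inferred greedily by orthogonal matching pursuit (OMP).
import numpy as np

rng = np.random.default_rng(7)
n_bands, n_atoms, k = 100, 50, 3
D = rng.standard_normal((n_bands, n_atoms))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms

# synthetic "pixel": a mixture of k materials plus noise
idx = rng.choice(n_atoms, k, replace=False)
pixel = D[:, idx] @ rng.uniform(0.5, 1.5, k) + 0.01 * rng.standard_normal(n_bands)

support, residual = [], pixel.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(D.T @ residual))))  # most correlated atom
    coeffs, *_ = np.linalg.lstsq(D[:, support], pixel, rcond=None)
    residual = pixel - D[:, support] @ coeffs                # re-fit on the support

print("true atoms:", sorted(idx), "recovered:", sorted(support))
```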
Credit: NPO Lavochkin. In March 2011, Elektro captured this view of the Moon over the Red Sea region of the Earth.