I found the comments on the CSI entry, and specifically this comment on FriendFeed, which fully justified the time I spent on it after all.
Ming-Jun Lai and Simon Foucart made a major update to a paper I had already mentioned before, entitled Sparse recovery with pre-Gaussian random matrices. The new abstract reads:
For an $m \times N$ underdetermined system of linear equations with independent pre-Gaussian random coefficients satisfying simple moment conditions, it is proved that the $s$-sparse solutions of the system can be found by $\ell_1$-minimization under the optimal condition $m \ge c \, s \, \ln(e N /s)$. The main ingredient of the proof is a variation of a classical Restricted Isometry Property, where the inner norm becomes the $\ell_1$-norm and where the outer norm depends on the probability distributions.
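As a quick sanity check on the flavor of this statement, here is a minimal sketch, assuming Laplace-distributed entries as the pre-Gaussian example and an arbitrary constant $c = 4$ (both my choices, not the paper's); it solves the $\ell_1$-minimization as a linear program and checks recovery of an $s$-sparse vector:

```python
# A minimal sketch (not the authors' proof machinery): empirical check that
# l1-minimization recovers an s-sparse vector from m ~ c*s*ln(e*N/s)
# pre-Gaussian (here: Laplace-distributed) measurements.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, s = 200, 5
m = int(np.ceil(4 * s * np.log(np.e * N / s)))   # constant c = 4 is my assumption

A = rng.laplace(size=(m, N)) / np.sqrt(2 * m)    # scaled so columns have unit norm in expectation
x = np.zeros(N)
support = rng.choice(N, s, replace=False)
x[support] = rng.standard_normal(s)
b = A @ x

# min ||x||_1 s.t. Ax = b, via the standard LP split x = u - v with u, v >= 0
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]
print("recovery error:", np.linalg.norm(x_hat - x))
```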
Simon tells me that they "introduce a new RIP that provides an advantage over the traditional one for the pre-Gaussian measurement matrices (this is to be compared with the approach in Restricted isometry property of matrices with independent columns and neighborly polytopes by random sampling by Adamczak, Litvak, Pajor, and Tomczak-Jaegermann)."
Thanks Simon.
From one of the authors, we also have: The null space property for sparse recovery from multiple measurement vectors by Ming-Jun Lai and Louis Yang Liu. The abstract reads:
We prove a null space property for the uniqueness of the sparse solution vectors recovered from a minimization in $\ell_q$ quasi-norm subject to multiple systems of linear equations, where $q \in (0, 1]$. Furthermore, we show that the null space property for the setting of the sparse solution vectors for multiple linear systems is equivalent to the null space property for the standard minimization in $\ell_q$ quasi-norm subject to one linear system. This answers the questions raised in [Foucart and Gribonval '09, [15]].
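For the curious, here is a rough Monte Carlo probe of the $q = 1$ null space property, with the important caveat that sampling random null-space vectors can only refute the property, never certify it (the dimensions and the number of samples are arbitrary choices of mine):

```python
# A rough Monte Carlo probe (not a certificate) of the l_1 null space property:
# for every v in ker(A), the s largest entries must carry less than half of
# ||v||_1. We sample random null-space vectors and track the worst ratio.
import numpy as np

rng = np.random.default_rng(1)
m, N, s = 20, 40, 3
A = rng.standard_normal((m, N))

# Orthonormal basis of the null space via the full SVD
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[m:]                      # (N - m) x N, spans ker(A) when rank(A) = m

worst = 0.0
for _ in range(10000):
    v = rng.standard_normal(N - m) @ null_basis
    mags = np.sort(np.abs(v))[::-1]      # worst support S = the s largest entries
    worst = max(worst, mags[:s].sum() / mags.sum())
print("worst sampled ||v_S||_1 / ||v||_1:", worst, "(NSP requires < 0.5)")
```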
I also went through the web and found the following paper to be presented at ICASSP: Sensitivity to Basis Mismatch in Compressed Sensing by Yuejie Chi, Ali Pezeshki, Louis Scharf and Robert Calderbank. The abstract reads:
Compressed sensing theory suggests that successful inversion of an image of the physical world from its modal parameters can be achieved at measurement dimensions far lower than the image dimension, provided that the image is sparse in an a priori known basis. The assumed basis for sparsity typically corresponds to a gridding of the parameter space, e.g., a DFT grid in spectrum analysis. However, in reality no physical field is sparse in the DFT basis or in an a priori known basis. No matter how finely we grid the parameter space, the sources may not lie in the center of the grid cells and there is always mismatch between the assumed and the actual bases for sparsity. In this paper, we study the sensitivity of compressed sensing (basis pursuit to be exact) to mismatch between the assumed and the actual sparsity bases. Our mathematical analysis and numerical examples show that the performance of basis pursuit degrades considerably in the presence of basis mismatch.
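The leakage phenomenon at the heart of the paper is easy to reproduce. The following toy script (my own illustration, not the authors' analysis) compares a tone exactly on the DFT grid with one half a bin away:

```python
# A small numerical illustration of basis mismatch: a tone on the DFT grid is
# 1-sparse, but shifting it half a bin off the grid smears its energy across
# all DFT coefficients, breaking the "sparse in the DFT basis" premise.
import numpy as np

n = 128
t = np.arange(n)
on_grid  = np.exp(2j * np.pi * 10.0 / n * t)   # frequency exactly on bin 10
off_grid = np.exp(2j * np.pi * 10.5 / n * t)   # half a bin away

for name, sig in [("on-grid", on_grid), ("off-grid", off_grid)]:
    c = np.abs(np.fft.fft(sig)) / n
    c_sorted = np.sort(c)[::-1]
    print(name, "fraction of energy in the largest coefficient:",
          (c_sorted[0] ** 2 / (c_sorted ** 2).sum()).round(4))
```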
Wideband Spectral Estimation from Compressed Measurements Exploiting Spectral a priori information in Cognitive Radio System by Gonzalo Vazquez-Vilar, Roberto Lopez-Valcarce, Carlos Mosquera and Nuria Gonzalez-Prelcic. The abstract reads:
In Cognitive Radio scenarios, channelization information from the primary network may be available to the spectral monitor. Under this assumption we propose a spectral estimation algorithm from compressed measurements of a multichannel wideband signal. The analysis of the Cramer-Rao Lower Bound (CRLB) for this estimation problem shows the importance of detecting the underlying sparsity pattern of the signal. To this end we describe a Bayesian iterative algorithm that discovers the set of active signals composing the band and simultaneously reconstructs the spectrum. This iterative spectral estimator is shown to perform close to a Genie-Aided CRLB that includes full knowledge of the sparsity pattern of the channels.
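As a toy stand-in for the idea, and emphatically not the authors' Bayesian algorithm, one can simulate a few active channels and flag them from compressed measurements with a simple correlation test (the channel sizes, measurement matrix Phi, and detection rule are all my own illustrative choices):

```python
# Toy sketch: a multichannel spectrum with few active channels, random
# compressed measurements, and a greedy correlation score per channel.
import numpy as np

rng = np.random.default_rng(2)
n_channels, ch_width = 16, 8
N = n_channels * ch_width
m = 48                                           # number of compressed measurements

spectrum = np.zeros(N)
active = rng.choice(n_channels, 3, replace=False)
for ch in active:
    spectrum[ch * ch_width:(ch + 1) * ch_width] = rng.uniform(1, 2, ch_width)

Phi = rng.standard_normal((m, N)) / np.sqrt(m)   # compressive measurement matrix
y = Phi @ spectrum

# Score each channel by correlating y with the measurement of its indicator
scores = []
for ch in range(n_channels):
    mask = np.zeros(N)
    mask[ch * ch_width:(ch + 1) * ch_width] = 1.0
    g = Phi @ mask
    scores.append(abs(y @ g) / np.linalg.norm(g))
detected = np.argsort(scores)[-3:]
print("active:", sorted(active), "detected:", sorted(detected))
```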
Bayesian Compressed Sensing Using Generalized Cauchy Priors by Rafael E. Carrillo, Tuncer C. Aysal and Kenneth E. Barner. The abstract reads:
Compressed sensing shows that a sparse or compressible signal can be reconstructed from a few incoherent measurements. Noting that sparse signals can be well modeled by algebraic-tailed impulsive distributions, in this paper, we formulate the sparse recovery problem in a Bayesian framework using algebraic-tailed priors from the generalized Cauchy distribution (GCD) family for the signal coefficients. We develop an iterative reconstruction algorithm from this Bayesian formulation. Simulation results show that the proposed method requires fewer samples than most existing reconstruction methods to recover sparse signals, thereby validating the use of GCD priors for the sparse reconstruction problem.
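To see why heavy algebraic tails make a natural prior for sparse coefficients, the following snippet (illustrative only, not the authors' reconstruction algorithm) compares how compressible standard Cauchy draws, a GCD member, are relative to Gaussian draws:

```python
# Sorted magnitudes of Cauchy draws decay like a power law, i.e. the samples
# are compressible: a few entries dominate the energy, unlike Gaussian draws.
import numpy as np

rng = np.random.default_rng(3)
N = 1000
cauchy = np.sort(np.abs(rng.standard_cauchy(N)))[::-1]
gauss  = np.sort(np.abs(rng.standard_normal(N)))[::-1]

# Energy fraction captured by the 50 largest entries of each draw
for name, v in [("Cauchy", cauchy), ("Gaussian", gauss)]:
    print(name, ((v[:50] ** 2).sum() / (v ** 2).sum()).round(3))
```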
Robust Sampling and Reconstruction Methods for Sparse Signals in the Presence of Impulsive Noise by Rafael Carrillo, Kenneth Barner and Tuncer Can Aysal. The abstract reads:
Recent results in compressed sensing show that a sparse or compressible signal can be reconstructed from a few incoherent measurements. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are developed assuming a Gaussian (light-tailed) model for the corrupting noise. However, when the underlying signal and/or the measurements are corrupted by impulsive noise, commonly employed linear sampling operators, coupled with current reconstruction algorithms, fail to recover a close approximation of the signal. In this paper we propose robust methods for sampling and reconstructing sparse signals in the presence of impulsive noise. To solve the problem of impulsive noise embedded in the underlying signal prior to the measurement process, we propose a robust nonlinear measurement operator based on the weighted myriad estimator. In addition, we introduce a geometric optimization problem based on L1 minimization employing a Lorentzian norm constraint on the residual error to recover sparse signals from noisy measurements. Analysis of the proposed methods shows that in impulsive environments, where the noise possesses infinite variance, we have a finite reconstruction error, and furthermore these methods yield successful reconstruction of the desired signal. Simulations demonstrate that the proposed methods significantly outperform commonly employed compressed sensing sampling and reconstruction techniques in impulsive environments, while providing comparable performance in less demanding, light-tailed environments.
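A minimal sketch of the robustness argument follows, using the common form of the Lorentzian norm $\|u\|_{LL} = \sum_i \log(1 + u_i^2/\gamma^2)$ (the value of $\gamma$ here is an arbitrary choice of mine):

```python
# Unlike the squared l2 norm, the Lorentzian norm saturates on impulsive
# outliers, which is what makes a Lorentzian residual constraint robust.
import numpy as np

def lorentzian(u, gamma=1.0):
    return np.log1p((u / gamma) ** 2).sum()

rng = np.random.default_rng(4)
clean = 0.1 * rng.standard_normal(100)
dirty = clean.copy()
dirty[0] = 1e3                       # one impulsive noise sample

for name, r in [("clean residual", clean), ("with impulse", dirty)]:
    print(name, " l2^2:", round((r ** 2).sum(), 2),
          " Lorentzian:", round(lorentzian(r), 2))
```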
Unrelated to ICASSP, there is also: Coarse quantization for random interleaved sampling of bandlimited signals by Alexander M. Powell, Jared Tanner, Yang Wang, and Ozgur Yilmaz. The abstract reads:
The compatibility of unsynchronized interleaved uniform sampling with Sigma-Delta analog-to-digital conversion is investigated. Let $f$ be a bandlimited signal that is sampled on a collection of $N$ interleaved grids $\{kT + T_n\}_{k \in \mathbb{Z}}$ with offsets $\{T_n\}_{n=1}^{N} \subset [0, T]$. If the offsets $T_n$ are chosen independently and uniformly at random from $[0, T]$ and if the sample values of $f$ are quantized with a first order Sigma-Delta algorithm, then with high probability the quantization error $|f(t) - \widetilde{f}(t)|$ is at most of order $N^{-1} \log N$.
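Here is a quick sketch of a first order Sigma-Delta quantizer of the kind the abstract refers to, on a single grid and with a crude moving-average reconstruction; the random offsets and the paper's actual reconstruction scheme are omitted:

```python
# First-order Sigma-Delta: each sample is replaced by a +/-1 bit chosen by a
# running error accumulator, so the state u_k stays bounded and a low-pass
# filter of the bit stream approximates the signal.
import numpy as np

def sigma_delta(samples):
    """q_k = sign(u_{k-1} + x_k), u_k = u_{k-1} + x_k - q_k."""
    u, bits = 0.0, []
    for x in samples:
        q = 1.0 if u + x >= 0 else -1.0
        bits.append(q)
        u += x - q
    return np.array(bits)

t = np.linspace(0, 1, 2000)
x = 0.5 * np.sin(2 * np.pi * 5 * t)          # a smooth test signal with |x| < 1
bits = sigma_delta(x)

# Crude low-pass reconstruction: a moving average of the bit stream
w = 50
recon = np.convolve(bits, np.ones(w) / w, mode="same")
print("max reconstruction error:", np.abs(recon - x)[w:-w].max().round(3))
```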
Meanwhile, at the WAM seminar at Harvard, there is a talk on compressed sensing today (Tuesday, March 16, 3pm – 4pm): Fisher information, compressed sensing, and the origins of memory traces in dynamical systems. The abstract reads:

Critical cognitive phenomena, such as planning and decision making, rely on the ability of the brain to hold information in working memory. Many proposals exist for the maintenance of such memories in persistent activity that arises from stable fixed point attractors in the dynamics of recurrent neural networks. However such fixed points are incapable of storing temporal sequences of recent events. An alternate, and less explored, paradigm is the storage of arbitrary temporal input sequences in the transient responses of a neural network. This paradigm raises a host of important questions. Are there any fundamental limits on the duration of such transient memory traces? How do these limits depend on the size of the network? What patterns of neural connectivity yield good performance on generic working memory tasks? How do these traces degrade in the presence of noise? In a more general context, to what extent can any high dimensional, dissipative dynamical system store past input perturbations in its instantaneous dynamical state? We combine information theory with dynamical systems theory to give precise answers to these questions for the class of all linear, and some nonlinear, neuronal networks. We uncover an important role for a special class of networks, known as nonnormal networks, which possess a hidden feed-forward structure that is crucial for the maintenance of robust memory traces. Furthermore, we employ, and extend, recent results from the field of compressed sensing to show how recurrent neuronal processing can extend the duration of memory traces for sparse temporal sequences. Our results apply to general dynamical systems, and we discuss an application to memory in fluid mechanics.
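As a toy version of the nonnormal, feed-forward point (purely illustrative, not the speaker's model), a delay-line network stores the last $n$ inputs verbatim in its transient state:

```python
# A shift (delay-line) connectivity matrix is nonnormal, and the network's
# transient state literally holds the last n inputs, most recent first.
import numpy as np

rng = np.random.default_rng(5)
n, T = 20, 20
W = np.diag(np.ones(n - 1), k=-1)     # shift matrix: nonnormal, feed-forward
v = np.zeros(n); v[0] = 1.0           # input enters at the first neuron

x = np.zeros(n)
inputs = rng.standard_normal(T)
for s_t in inputs:
    x = W @ x + v * s_t               # linear network dynamics

# The state now equals the last n inputs in reverse order
print(np.allclose(x, inputs[::-1][:n]))
```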
Also of interest is an abstract by the same author:
Compressed sensing in the brain: the role of sparseness in short term and long term memory by Surya Ganguli and Haim Sompolinsky. The abstract reads:
One of the most exciting advances in signal processing is the field of compressed sensing (CS) [1]. In CS, sparse high-dimensional stimuli are represented by lower dimensional dense measurements, which are linear mixtures of the stimuli. CS shows that by using the computationally tractable L1 norm as a sparse prior, the high-dimensional stimuli (for example, human speech, natural movies, FMRI data) can be fully reconstructed from the compressed measurements. In this work, we have extended CS theory and applied it to reveal new fundamental properties of neural learning and memory in the biologically relevant scenario of sparse coding.

That's all for today, folks.
Credit: NASA, HiRISE. An avalanche on Mars, via the Bad Astronomy blog.