First, Michael Wakin just released a paper on the ever fascinating subject of manifold signal processing: Manifold-Based Signal Recovery and Parameter Estimation from Compressive Measurements. The abstract reads:
A field known as Compressive Sensing (CS) has recently emerged to help address the growing challenges of capturing and processing high-dimensional signals and data sets. CS exploits the surprising fact that the information contained in a sparse signal can be preserved in a small number of compressive (or random) linear measurements of that signal. Strong theoretical guarantees have been established on the accuracy to which sparse or near-sparse signals can be recovered from noisy compressive measurements. In this paper, we address similar questions in the context of a different modeling framework. Instead of sparse models, we focus on the broad class of manifold models, which can arise in both parametric and non-parametric signal families. Building upon recent results concerning the stable embeddings of manifolds within the measurement space, we establish both deterministic and probabilistic instance-optimal bounds in ℓ2 for manifold-based signal recovery and parameter estimation from noisy compressive measurements. In line with analogous results for sparsity-based CS, we conclude that much stronger bounds are possible in the probabilistic setting. Our work supports the growing empirical evidence that manifold-based models can be used with high accuracy in compressive signal processing.
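Since the abstract is terse on what recovery under a manifold model actually looks like, here is a minimal toy sketch (my own, not Wakin's estimator): a one-parameter manifold of shifted pulses is measured with a random Gaussian matrix, and the parameter is estimated by searching for the manifold point whose compressive image is closest to the measurements. The pulse model, dimensions, and grid search are all illustrative assumptions.

```python
# Minimal sketch (not the paper's estimator): parameter estimation on a
# one-parameter manifold of shifted pulses from random compressive measurements.
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 20                      # ambient dimension, number of measurements
t = np.linspace(0.0, 1.0, n)

def pulse(tau, width=0.02):
    """Point on the manifold: a Gaussian pulse centered at tau."""
    return np.exp(-(t - tau) ** 2 / (2.0 * width ** 2))

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix

tau_true = 0.37
y = Phi @ pulse(tau_true) + 0.01 * rng.standard_normal(m)  # noisy compressive measurements

# Grid search: pick the manifold point whose compressive image is closest to y.
grid = np.linspace(0.0, 1.0, 2000)
errs = [np.linalg.norm(y - Phi @ pulse(tau)) for tau in grid]
tau_hat = grid[int(np.argmin(errs))]
print(f"true tau = {tau_true:.3f}, estimated tau = {tau_hat:.3f}")
```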
Also found on the internets, a poster entitled Detection and Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques by G. Prandi, Giuseppe Valenzise, Marco Tagliasacchi, and Augusto Sarti. The abstract reads:
In most practical applications, for the sake of information integrity it is useful not only to detect whether a multimedia content has been modified, but also to identify which kind of attack has been carried out. In the case of audio streams, for example, it may be useful to localize the tamper in the time and/or frequency domain. In this paper we devise a hash-based tampering detection and localization system exploiting compressive sensing principles. The multimedia content provider produces a small hash signature using a limited number of random projections of a time-frequency representation of the original audio stream. At the content user side, the hash signature is used to estimate the distortion between the original and the received stream and, provided that the tamper is sufficiently sparse or sparsifiable in some orthonormal basis expansion or redundant dictionary (e.g. DCT or wavelet), to identify the time-frequency portion of the stream that has been manipulated. In order to keep the hash length small, the algorithm exploits distributed source coding techniques.
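To make the pipeline concrete, here is a hedged toy sketch of the general idea: random projections of a time-frequency representation serve as the hash, and the tamper is localized by ℓ1 minimization on the hash difference. It ignores the distributed source coding / hash compression step entirely, and the sizes and spectrogram stand-in are made up for illustration.

```python
# Minimal sketch (hypothetical parameters, not the authors' full system): a hash
# made of random projections of a spectrogram-like vector is used to localize a
# sparse tamper via l1 minimization.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m = 128, 40                       # flattened time-frequency bins, hash length

spectrogram = rng.random(n)          # stand-in for |STFT| of the original audio
P = rng.standard_normal((m, n)) / np.sqrt(m)
hash_original = P @ spectrogram      # small signature sent by the content provider

# The received stream is tampered in a few time-frequency bins.
tamper = np.zeros(n)
tamper[[10, 55, 90]] = [0.8, -0.5, 0.6]
hash_received = P @ (spectrogram + tamper)

# Recover the sparse tamper from the hash difference: min ||d||_1 s.t. P d = delta,
# written as a linear program with d = u - v and u, v >= 0.
delta = hash_received - hash_original
res = linprog(np.ones(2 * n), A_eq=np.hstack([P, -P]), b_eq=delta, bounds=(0, None))
d_hat = res.x[:n] - res.x[n:]
print("estimated tampered bins:", np.nonzero(np.abs(d_hat) > 0.1)[0])
```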
Shamgar Gurevich and Ronny Hadani introduce us to Incoherent dictionaries and the statistical restricted isometry property. The abstract reads:
In this paper we formulate and prove a statistical version of the Candes-Tao restricted isometry property (SRIP for short) which holds in general for any incoherent dictionary which is a disjoint union of orthonormal bases. In addition, we prove that, under appropriate normalization, the eigenvalues of the associated Gram matrix fluctuate around $\lambda = 1$ according to the Wigner semicircle distribution. The result is then applied to various dictionaries that arise naturally in the setting of finite harmonic analysis, giving, in particular, a better understanding of a remark of Applebaum-Howard-Searle-Calderbank concerning RIP for the Heisenberg dictionary of chirp-like functions.

Then Jessica Nelson and Vladimir Temlyakov released On the size of incoherent systems. The abstract reads:
This paper concerns special redundant systems, namely, incoherent systems or systems with small coherence parameter. Simple greedy-type algorithms perform well on these systems. For example, known results show that the smaller the coherence parameter, the better the performance of the Orthogonal Greedy Algorithm. These systems with small coherence parameter are also useful in the construction of compressed sensing matrices. Therefore, it is very desirable to build dictionaries with small coherence parameter.
We discuss the following problem for both $\mathbb{R}^n$ and $\mathbb{C}^n$: How large can a system with coherence parameter not exceeding a fixed number μ be? We obtain upper and lower bounds for the maximal cardinality of such systems. Although the results herein are either known or simple corollaries of known results, our objective is to demonstrate how fundamental results from different areas of mathematics (linear algebra, probability, and number theory) collaborate on this important approximation theoretic problem.
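Both abstracts revolve around the coherence parameter of a dictionary, so a small numerical illustration may help. The spikes-plus-DFT dictionary and the random sub-dictionary below are my own choices, not examples taken from either paper.

```python
# Small sketch in the spirit of the two abstracts above: the coherence of a
# dictionary built as a disjoint union of two orthonormal bases (spikes and the
# unitary DFT), and the Gram spectrum of a random sub-dictionary, whose
# eigenvalues stay close to 1 when the dictionary is incoherent.
import numpy as np

n = 64
spikes = np.eye(n)
dft = np.fft.fft(np.eye(n)) / np.sqrt(n)          # unitary DFT basis
D = np.hstack([spikes, dft])                      # n x 2n dictionary, unit-norm columns

# Coherence parameter mu: largest inner product between distinct columns.
G = D.conj().T @ D
mu = np.max(np.abs(G - np.eye(2 * n)))
print(f"coherence mu = {mu:.4f}, 1/sqrt(n) = {1 / np.sqrt(n):.4f}")

# Statistical RIP flavor: the Gram matrix of a random subset of columns has
# eigenvalues concentrated around 1.
rng = np.random.default_rng(2)
cols = rng.choice(2 * n, size=n // 4, replace=False)
sub_eigs = np.linalg.eigvalsh(D[:, cols].conj().T @ D[:, cols])
print(f"sub-dictionary Gram eigenvalues in [{sub_eigs.min():.3f}, {sub_eigs.max():.3f}]")
```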
Waheed Bajwa, Akbar M. Sayeed, and Robert Nowak also released Learning sparse doubly-selective channels. The abstract reads:
Coherent data communication over doubly-selective channels requires that the channel response be known at the receiver. Training-based schemes, which involve probing of the channel with known signaling waveforms and processing of the corresponding channel output to estimate the channel parameters, are commonly employed to learn the channel response in practice. Conventional training-based methods, often comprising linear least-squares channel estimators, are known to be optimal under the assumption of rich multipath channels. Numerous measurement campaigns have shown, however, that physical multipath channels tend to exhibit a sparse structure at high signal space dimension (time-bandwidth product), and can be characterized with significantly fewer parameters compared to the maximum number dictated by the delay-Doppler spread of the channel. In this paper, it is established that traditional training-based channel learning techniques are ill-suited to fully exploiting the inherent low-dimensionality of sparse channels. In contrast, key ideas from the emerging theory of compressed sensing are leveraged to propose sparse channel learning methods for both single-carrier and multicarrier probing waveforms that employ reconstruction algorithms based on convex/linear programming. In particular, it is shown that the performance of the proposed schemes comes within a logarithmic factor of that of an ideal channel estimator, leading to significant reductions in the training energy and the loss in spectral efficiency associated with conventional training-based methods.
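As a rough illustration of why sparsity helps here, the following toy sketch (a much-simplified single-carrier setting, not the authors' scheme) compares a conventional least-squares channel estimate with a greedy sparse estimate (orthogonal matching pursuit) on a channel with only a few nonzero taps; the training sequence, dimensions, and noise level are arbitrary assumptions.

```python
# Minimal sketch: sparse multipath channel estimation from a known training
# sequence, comparing ordinary least squares with a greedy sparse estimator (OMP).
import numpy as np

rng = np.random.default_rng(3)
N, L, k = 100, 60, 4                 # training length, channel delay spread, nonzero taps

h = np.zeros(L)
h[rng.choice(L, size=k, replace=False)] = rng.standard_normal(k)

s = rng.choice([-1.0, 1.0], size=N)  # known training (probing) sequence
A = np.zeros((N, L))                 # convolution matrix so that y = A @ h + noise
for j in range(L):
    A[j:, j] = s[:N - j]
y = A @ h + 0.05 * rng.standard_normal(N)

h_ls = np.linalg.lstsq(A, y, rcond=None)[0]   # conventional least-squares estimate

# Orthogonal Matching Pursuit: exploit the sparsity of the channel.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    r = y - A[:, support] @ coef
h_omp = np.zeros(L)
h_omp[support] = coef

print("LS  relative error:", np.linalg.norm(h_ls - h) / np.linalg.norm(h))
print("OMP relative error:", np.linalg.norm(h_omp - h) / np.linalg.norm(h))
```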
Matthias Seeger produced a talk on Large Scale Approximate Inference and Experimental Design for Sparse Linear Models. The video of the talk can be found here. The attendant technical report has been released by Matthias Seeger and Hannes Nickisch. It is entitled: Large Scale Variational Inference and Experimental Design for Sparse Generalized Linear Models. The abstract reads:
We show that a commonly used variational relaxation of Bayesian inference for continuous variable models, based on Gaussian-form lower bounds, is a convex optimization problem, whenever the search for the posterior mode is convex (sparse linear model, logistic regression, etc.). We provide novel scalable algorithms for this relaxation, which run orders of magnitude faster than previous methods, and can be used to estimate posterior covariance matrices over full images. Our difference of convex algorithms involve a crucial decoupling step, and require the estimation of Gaussian marginal variances, which is done by the Lanczos algorithm. We show how to implement Bayesian experimental design for large scale generalized linear models with sparsity potentials, which can be used to optimize measurement architectures for whole images. Ours is the first method to tractably learn compressed sensing on entire natural images.
Applied Math Seminar at Stanford
Fall Quarter 2008
Friday October 3, 3:15 p.m.
Sloan Mathematics Corner, Building 380, Room 380-C
Anna Gilbert
Department of Mathematics
University of Michigan
Combining geometry and combinatorics: a unified approach to sparse signal recovery
There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Phi and then uses linear programming to decode information about x from Phi x. The combinatorial approach constructs Phi and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms. The unifying elements are the adjacency matrices of high-quality unbalanced expanders. We generalize the notion of Restricted Isometry Property (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the l_p norm for p=1, and then show that unbalanced expanders are essentially equivalent to RIP-p matrices. From known deterministic constructions for such matrices, we obtain new deterministic measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance. Joint work with Radu Berinde, Piotr Indyk, Howard Karloff, and Martin Strauss.
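A quick way to see the two ingredients together is the toy sketch below: a sparse binary matrix whose columns each carry d ones at random rows (the adjacency matrix of a random bipartite graph, which is an unbalanced expander with high probability) is paired with plain ℓ1 decoding. It is only meant to convey the flavor of the combinatorial-matrix, geometric-decoder combination; all parameters are arbitrary.

```python
# Minimal sketch: a sparse binary "expander-style" measurement matrix combined
# with l1-minimization decoding.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, m, d, k = 200, 60, 8, 4           # signal length, measurements, ones per column, sparsity

Phi = np.zeros((m, n))
for j in range(n):
    Phi[rng.choice(m, size=d, replace=False), j] = 1.0   # d ones per column

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ x

# l1 decoding: min ||x||_1 s.t. Phi x = y, written as an LP in (u, v) with x = u - v.
res = linprog(np.ones(2 * n), A_eq=np.hstack([Phi, -Phi]), b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x))
```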
This one is past, but I like that the author makes a point of showing how engineers might profit from the study.
Department of Mathematics
University of California, Berkeley
939 Evans 11:10 AM - 12:10 PM
09/26/2008
Laurent Demanet, Stanford
Compressive wave computation
This talk presents a strategy for computational wave propagation that consists in decomposing the solution wavefield onto a largely incomplete set of eigenfunctions of the weighted Laplacian, with eigenvalues chosen randomly. The recovery method is the $\ell_1$ minimization of compressed sensing. For the mathematician, we establish three possibly new estimates for the wave equation that guarantee accuracy of the numerical method in one spatial dimension. For the engineer, the compressive strategy offers a unique combination of parallelism and memory savings that should be of particular relevance to applications in reflection seismology. Joint work with Gabriel Peyre.
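Here is a much-simplified illustration of the flavor of the idea, not Demanet and Peyre's actual solver: a spatially sparse 1-D field is recovered by ℓ1 minimization from its coefficients on a random, largely incomplete subset of sine eigenfunctions of the 1-D Laplacian. The sparsity model, grid, and number of retained modes are illustrative assumptions.

```python
# Much-simplified illustration: recover a spatially sparse 1-D field from its
# coefficients on a random subset of Laplacian eigenfunctions (sine modes),
# using l1 minimization.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n, m, k = 128, 50, 4                 # grid points, retained eigenfunctions, spikes

j = np.arange(1, n + 1)
Psi = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(j, j) / (n + 1))  # DST-I modes

u = np.zeros(n)                       # field sparse in space (a few spikes)
u[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

rows = rng.choice(n, size=m, replace=False)      # random, incomplete set of modes
c = Psi[rows] @ u                                # the only coefficients we compute

# Recover the field: min ||u||_1 subject to Psi_S u = c, as an LP with u = p - q.
res = linprog(np.ones(2 * n), A_eq=np.hstack([Psi[rows], -Psi[rows]]), b_eq=c,
              bounds=(0, None))
u_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(u_hat - u))
```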
Here is a talk that is bound to get the audience asking a lot of questions if they are into compressed sensing, polytopes, expander graphs and classification (we mentioned it earlier):
UCLA Math Department Seminar.
Image Processing Seminar
Thursday October 02, 2008
Organizers: Luminita Vese and Stefano Soatto
14:00-14:50 in MS6627
John Wright (University of Illinois at Urbana-Champaign)
Dense Error Correction via $L^1$-Minimization
Abstract. We consider the problem of recovering a non-negative sparse signal $x \in \mathbb{R}^n$ from highly corrupted linear measurements $y = Ax + e \in \mathbb{R}^m$, where $e$ is an unknown error vector whose nonzero entries may be unbounded. Motivated by the problem of face recognition in computer vision, we will prove that for highly correlated (and possibly overcomplete) dictionaries $A$, any non-negative, sufficiently sparse signal $x$ can be recovered by solving the $\ell^1$-minimization problem $$\min \|x\|_1 + \|e\|_1 \quad \text{subject to} \quad y = Ax + e.$$ More precisely, if the fraction of errors is bounded away from one and the support of $x$ grows sublinearly in the dimension $m$ of the observation, then as $m$ goes to infinity, the above $\ell^1$-minimization succeeds for all signals $x$ and almost all sign-and-support patterns of $e$. This result suggests that accurate recovery of sparse signals is possible and computationally feasible even with nearly 100% of the observations corrupted. The proof relies on a careful characterization of the faces of a convex polytope spanned together by the standard cross polytope and a set of iid Gaussian vectors with nonzero mean and small variance, which we call the "cross-and-bouquet" model. We discuss our result's implications for computer vision problems such as face recognition and motion segmentation, and more generally for robust source separation, and present experiments verifying its applicability to these domains. This is joint work with Yi Ma (UIUC).

Speaker Bio: John Wright is a PhD candidate in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. His research interests include sparse representation and compressed sensing, data segmentation, lossy minimum description length methods, and their applications in computer vision, including face recognition, image and motion segmentation, and image reconstruction. He has interned at Microsoft Research in Asia and Microsoft Live Labs in Redmond, and is currently a Microsoft Live Labs fellow.
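For readers who want to see the optimization problem in the abstract spelled out numerically, here is a minimal sketch of that formulation written as a linear program. The "cross-and-bouquet"-style dictionary, corruption level, and dimensions below are my own toy choices, and exact recovery is only promised by the theorem in the large-m limit.

```python
# Minimal numerical sketch of the formulation above: recover a non-negative
# sparse x and a gross error e from y = A x + e by solving
# min ||x||_1 + ||e||_1 subject to y = A x + e, posed as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
m, n, k = 300, 100, 5                 # observations, dictionary size, sparsity of x

# "Cross-and-bouquet" style dictionary: highly correlated columns
# (a common mean direction plus small iid Gaussian perturbations).
mean_dir = np.ones(m) / np.sqrt(m)
A = mean_dir[:, None] + 0.05 * rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.random(k) + 0.5   # non-negative, sparse
e = np.zeros(m)
corrupted = rng.choice(m, size=int(0.2 * m), replace=False)
e[corrupted] = 5.0 * rng.standard_normal(corrupted.size)        # unbounded gross errors
y = A @ x + e

# LP variables: x >= 0 and e = p - q with p, q >= 0.
c = np.ones(n + 2 * m)
A_eq = np.hstack([A, np.eye(m), -np.eye(m)])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat, e_hat = res.x[:n], res.x[n:n + m] - res.x[n + m:]
print("x error:", np.linalg.norm(x_hat - x), " e error:", np.linalg.norm(e_hat - e))
```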
German-Israel Workshop for Vision and Image Sciences 2008
Haifa, November 4-6, 2008
Learning Compressed Sensing
Yair Weiss
School of Computer Science and Engineering
The Hebrew University of Jerusalem
Abstract - Compressed sensing is a recent set of mathematical results showing that sparse signals can be exactly reconstructed from a small number of linear measurements. Interestingly, for ideal sparse signals with no measurement noise, random measurements allow perfect reconstruction while measurements based on principal component analysis (PCA) or independent component analysis (ICA) do not. At the same time, for other signal and noise distributions, PCA and ICA can significantly outperform random projections in terms of enabling reconstruction from a small number of measurements. In this paper we ask: given a training set typical of the signals we wish to measure, what is the optimal set of linear projections for compressed sensing? We show that the optimal projections are in general not the principal components nor the independent components of the data, but rather a seemingly novel set of projections that capture what is still uncertain about the signal, given the training set. We also show that the projections onto the learned uncertain components may far outperform random projections. This is particularly true in the case of natural images, where random projections have a vanishingly small signal-to-noise ratio as the number of pixels becomes large.
joint work with Hyun-Sung Chang and Bill Freeman
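The learned "uncertain components" are not spelled out in the abstract, so the sketch below only illustrates the simpler point made along the way: for signal distributions that are not ideally sparse, projections adapted to a training set (here plain PCA with a covariance-based linear reconstruction) can outperform random projections. The synthetic training distribution and all dimensions are assumptions of mine, not the paper's setup.

```python
# Baseline sketch (not the paper's learned projections): compare random and PCA
# projections when signals come from a known training distribution and
# reconstruction uses the training covariance.
import numpy as np

rng = np.random.default_rng(7)
n, m, n_train, n_test = 64, 10, 2000, 200

# Training set with a decaying spectrum (a crude stand-in for natural signals).
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
spectrum = 1.0 / np.arange(1, n + 1) ** 1.5
train = (U * np.sqrt(spectrum)) @ rng.standard_normal((n, n_train))
C = np.cov(train)                                   # empirical covariance

W_pca = np.linalg.eigh(C)[1][:, -m:].T              # top-m principal directions
W_rnd = rng.standard_normal((m, n)) / np.sqrt(n)    # random projections

def mse(W, X):
    """Linear MMSE-style reconstruction from y = W x using the training covariance."""
    R = C @ W.T @ np.linalg.inv(W @ C @ W.T)        # reconstruction operator
    return np.mean(np.sum((X - R @ (W @ X)) ** 2, axis=0))

test = (U * np.sqrt(spectrum)) @ rng.standard_normal((n, n_test))
print("PCA projections   :", mse(W_pca, test))
print("random projections:", mse(W_rnd, test))
```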
All these seminars are in the CS Calendar.
Credit: SpaceX Corporation, Stage 1 Separation of the Commercial Falcon 1 Vehicle (congratulations to SpaceX!), and CCTV footage of Zhai Zhigang, the first Chinese astronaut to perform an EVA.