Monday, February 09, 2009

CS: A review, Efficient Encoding of Audio Signals, Greedy Algorithms Comparison, Randomized algo, l1 solvers, Quality is Number 1 again!

Weren't Friday's papers interesting or what? Today we have a batch of six.

From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images by Alfred Bruckstein, David Donoho, Michael Elad. The abstract reads:
A full-rank matrix A ∈ R^(n×m) with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries. Can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena, in particular the existence of easily verifiable conditions under which optimally sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable, but there is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems have energized research on such signal and image processing problems—to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical results on sparse modeling of signals and images, and recent applications in inverse problems and compression in image processing. This work lies at the intersection of signal processing and applied mathematics, and arose initially from the wavelets and harmonic analysis research communities. The aim of this paper is to introduce a few key notions and applications connected to sparsity, targeting newcomers interested in either the mathematical aspects of this area or its applications.
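For readers who want to play with the idea, here is a small sketch (not the authors' code) of the basic recipe the review discusses: recover the sparsest solution of an underdetermined system by l1 minimization (basis pursuit), cast as a linear program. The problem sizes and the use of scipy's linprog are my own choices for illustration.

```python
# Minimal sketch: sparse recovery from Ax = b via basis pursuit,
# min ||x||_1 s.t. Ax = b, written as an LP over z = [x; u] with |x| <= u.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 20, 50, 3                          # n equations, m unknowns, k nonzeros
A = rng.standard_normal((n, m))
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

c = np.concatenate([np.zeros(m), np.ones(m)])          # minimize sum(u)
A_eq = np.hstack([A, np.zeros((n, m))])                 # Ax = b
A_ub = np.block([[np.eye(m), -np.eye(m)],               #  x - u <= 0
                 [-np.eye(m), -np.eye(m)]])             # -x - u <= 0
b_ub = np.zeros(2 * m)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * m + [(0, None)] * m)
x_hat = res.x[:m]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```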



Efficient Encoding of a Sinusoidally-Modelled Audio Signal Using Compressed Sensing by Anthony Griffin, Christos Tzagkarakis, Toni Hirvonen, Athanasios Mouchtaris and Panagiotis Tsakalides. The abstract reads:

In this paper, the compressed sensing (CS) methodology is applied to the harmonic part of sinusoidally-modelled audio signals. As this part of the model is sparse by definition in the frequency domain, we investigate whether CS can be used for encoding this signal at low bitrates, instead of encoding the sinusoidal parameters (amplitude, frequency, phase) as current state-of-the-art methods do. CS samples signals at a much lower rate than the Nyquist rate if they are sparse in some basis, thus it is a natural choice in this context. This also has the potential benefit of moving the computational burden from the encoder to the decoder. Previous work presented an initial investigation into the performance of this scheme, and this paper demonstrates how the performance can be further improved to rival that of a state-of-the-art encoder.
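To make the setting concrete, here is a hedged toy example (not the authors' encoder): a signal that is sparse in the DCT domain is measured with a random matrix and decoded with off-the-shelf OMP from scikit-learn. Signal length, number of measurements and sparsity level are made up.

```python
# Toy CS pipeline for a frequency-sparse signal: random measurements + OMP decoder.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
N, m, k = 512, 128, 5                          # signal length, measurements, sinusoids
Psi = idct(np.eye(N), norm="ortho", axis=0)    # x = Psi @ alpha, alpha sparse
alpha = np.zeros(N)
alpha[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
x = Psi @ alpha                                # "harmonic" signal

Phi = rng.standard_normal((m, N)) / np.sqrt(m) # random sensing matrix
y = Phi @ x                                    # compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi @ Psi, y)                          # decoder works in the sparse domain
x_rec = Psi @ omp.coef_
print("SNR (dB):", 10 * np.log10(np.sum(x**2) / np.sum((x - x_rec)**2)))
```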

A Comparative Study of Some Greedy Pursuit Algorithms for Sparse Approximation by Gagan Rath and Arabinda Sahoo. The abstract reads:
Solving an under-determined system of equations for the sparsest solution has attracted considerable attention in recent years. Among the two well known approaches, the greedy algorithms like matching pursuits (MP) are simpler to implement and can produce satisfactory results under certain conditions. In this paper, we compare several greedy algorithms in terms of the sparsity of the solution vector and the approximation accuracy. We present two new greedy algorithms based on the recently proposed complementary matching pursuit (CMP) and the sensing dictionary framework, and compare them with the classical MP, CMP, and the sensing dictionary approach. It is shown that in the noise-free case, the complementary matching pursuit algorithm performs the best among these algorithms.
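For reference, the plain matching pursuit loop that all these variants build on looks roughly like the sketch below (illustrative only; CMP and the sensing-dictionary variants change how the atom is selected and how coefficients are updated):

```python
# Generic matching pursuit: greedily pick the atom most correlated with the residual.
import numpy as np

def matching_pursuit(A, b, n_iter=10):
    """Greedy sparse approximation of b over the columns (atoms) of A."""
    A = A / np.linalg.norm(A, axis=0)      # unit-norm atoms
    x, r = np.zeros(A.shape[1]), b.copy()
    for _ in range(n_iter):
        c = A.T @ r                        # correlate residual with every atom
        j = np.argmax(np.abs(c))           # pick the most correlated atom
        x[j] += c[j]                       # update its coefficient
        r -= c[j] * A[:, j]                # remove its contribution from the residual
    return x, r

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 100))
x0 = np.zeros(100); x0[[3, 40, 77]] = [1.0, -2.0, 0.5]
b = (A / np.linalg.norm(A, axis=0)) @ x0
x_hat, r = matching_pursuit(A, b, n_iter=20)
print("residual norm:", np.linalg.norm(r))
```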
In a fascinating area I briefly mentioned a year ago, here is Randomized Kaczmarz Solver for Noisy Linear Systems by Deanna Needell. The abstract reads:
The Kaczmarz method is an iterative algorithm for solving systems of linear equations Ax = b. Theoretical convergence rates for this algorithm were largely unknown until recently when work was done on a randomized version of the algorithm. It was proved that for overdetermined systems, the randomized Kaczmarz method converges with expected exponential rate, independent of the number of equations in the system. Here we analyze the case where the system Ax = b is corrupted by noise, so we consider the system Ax ≈ b + r where r is an arbitrary error vector. We prove that in this noisy version, the randomized method reaches an error threshold dependent on the matrix A with the same rate as in the error-free case. We provide examples showing our results are sharp in the general context.
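The randomized iteration being analyzed is short enough to write down. The sketch below uses the standard row-selection rule (probability proportional to the squared row norm) on a made-up noisy overdetermined system; per the abstract, the error stalls at a noise-dependent threshold rather than going to zero.

```python
# Randomized Kaczmarz: repeatedly project onto the hyperplane of a randomly chosen row.
import numpy as np

def randomized_kaczmarz(A, b, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    row_norms2 = np.sum(A**2, axis=1)
    p = row_norms2 / row_norms2.sum()      # row-selection probabilities
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        i = rng.choice(A.shape[0], p=p)
        # project x onto the hyperplane <a_i, x> = b_i
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((200, 50))                   # overdetermined system
x_true = rng.standard_normal(50)
b = A @ x_true + 0.01 * rng.standard_normal(200)     # noisy right-hand side
x = randomized_kaczmarz(A, b)
print("error:", np.linalg.norm(x - x_true))          # stalls at a noise floor
```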

Olivier Grisel pointed out the following two papers while on Twitter. They are relevant to new schemes devised for solving l1 and related reconstructions. Now the question is whether they are any faster than the current ones we already have:

An interior-point stochastic approximation method and an L1-regularized delta rule by Peter Carbonetto, Mark Schmidt and Nando de Freitas. The abstract reads:

The stochastic approximation method is behind the solution to many important, actively-studied problems in machine learning. Despite its far-reaching application, there is almost no work on applying stochastic approximation to learning problems with general constraints. The reason for this, we hypothesize, is that no robust, widely-applicable stochastic approximation method exists for handling such problems. We propose that interior-point methods are a natural solution. We establish the stability of a stochastic interior-point approximation method both analytically and empirically, and demonstrate its utility by deriving an on-line learning algorithm that also performs feature selection via L1 regularization.
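As a point of comparison only (this is not the paper's interior-point method), a plain subgradient version of an L1-regularized delta rule looks like the following; step size, regularization weight and the synthetic stream are arbitrary choices for illustration.

```python
# Baseline online learner: least-squares delta rule with an L1 subgradient penalty.
import numpy as np

def l1_delta_rule(stream, dim, lam=0.01, eta=0.05):
    """Online update: w <- w - eta * ((w.x - y) * x + lam * sign(w))."""
    w = np.zeros(dim)
    for x, y in stream:
        err = w @ x - y                        # prediction error (the "delta")
        w -= eta * (err * x + lam * np.sign(w))
    return w

rng = np.random.default_rng(4)
w_true = np.zeros(50); w_true[:5] = rng.standard_normal(5)    # sparse target
stream = ((x, x @ w_true + 0.1 * rng.standard_normal())
          for x in rng.standard_normal((5000, 50)))
w = l1_delta_rule(stream, dim=50)
print("weights above 1e-2 in magnitude:", np.sum(np.abs(w) > 1e-2))
```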

and Online and Batch Learning using Forward Looking Subgradients by John Duchi, Yoram Singer. The abstract reads:

We describe, analyze, and experiment with a new framework for empirical loss minimization with regularization. Our algorithmic framework alternates between two phases. On each iteration we first perform an unconstrained gradient descent step. We then cast and solve an instantaneous optimization problem that trades off minimization of a regularization term while keeping close proximity to the result of the first phase. This view yields a simple yet effective algorithm that can be used for batch penalized risk minimization and online learning. Furthermore, the two phase approach enables sparse solutions when used in conjunction with regularization functions that promote sparsity, such as ℓ1. We derive concrete and very simple algorithms for minimization of loss functions with ℓ1, ℓ2, ℓ2^2, and ℓ∞ regularization. We also show how to construct efficient algorithms for mixed-norm ℓ1/ℓq regularization. We further extend the algorithms and efficient implementations for very high-dimensional data with sparsity. We demonstrate the potential of the proposed framework in a series of experiments with synthetic datasets.
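For the ℓ1 case, the two-phase idea reduces to a gradient step followed by soft-thresholding. The sketch below is a generic batch version of that recipe on synthetic least-squares data, not the authors' implementation; names, step size and regularization weight are made up.

```python
# Two-phase l1 recipe: gradient step on the loss, then a soft-threshold (prox) step.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def two_phase_l1(X, y, lam=0.1, eta=0.1, n_epochs=500):
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        grad = X.T @ (X @ w - y) / len(y)        # phase 1: gradient of the loss
        w_half = w - eta * grad                  # unconstrained descent step
        w = soft_threshold(w_half, eta * lam)    # phase 2: prox step for the l1 term
    return w

rng = np.random.default_rng(5)
X = rng.standard_normal((100, 30))
w_true = np.zeros(30); w_true[[1, 7, 19]] = [2.0, -1.5, 0.7]
y = X @ w_true + 0.05 * rng.standard_normal(100)
w = two_phase_l1(X, y)
print("support found:", np.nonzero(np.abs(w) > 1e-3)[0])
```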


Credit photo 1: NASAWatch. The satellite in this photo is the NOAA-N Prime satellite. I talked about the incident here. Five years later, the satellite was launched on Friday and is currently in orbit. Woohoo.
Credit photo 2: NASA/Carleton Bailie, ULA 
