Thursday, June 03, 2010

CS: LinkedIn Discussions, Nuclear Norm, Multiplicative Noise, Noiselets and more.

Yesterday, I mentioned the paper entitled L1 Minimization via Randomized First Order Algorithms by Anatoli Juditsky, Fatma Kilinc Karzan, Arkadi Nemirovski. Arkadi tells me that they are thinking about "what should be done to make the code publicly available". Great! We are all looking forward to solvers that can handle very large problems.

As the LinkedIn Compressive Sensing Group grows to 412 members, we are beginning to see some activity in the Discussion section, with four new discussions. I look forward to more exchanges there.

An anonymous commenter mentioned in the previous entry the following:

singular value thresholding: here is an even simpler algorithm that does not need any SVDs, in case you are interested:
Thanks, Anonymous. I look forward to a toy implementation of that algorithm from the authors! The paper is: A Simple Algorithm for Nuclear Norm Regularized Problems by Martin Jaggi and Marek Sulovský. The abstract reads:
Optimization problems with a nuclear norm regularization, such as low norm matrix factorizations, have seen many applications recently. We propose a new approximation algorithm building upon the recent sparse approximate SDP solver of (Hazan, 2008). The experimental efficiency of our method is demonstrated on large matrix completion problems such as the Netflix dataset. The algorithm comes with strong convergence guarantees, and can be interpreted as a first theoretically justified variant of Simon-Funk-type SVD heuristics. The method is free of tuning parameters, and very easy to parallelize.

The attendant slides are here.
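To see why no SVD is needed: Hazan-style solvers for nuclear-norm-constrained problems only ever require the top singular pair of the gradient at each step, which a few power iterations deliver. Here is a rough sketch in that spirit, a Frank-Wolfe iteration on a toy matrix completion problem; all names, sizes and step choices below are my own illustrations, not the paper's actual code.

```python
import numpy as np

def top_singular_pair(G, iters=50):
    """Approximate the top singular vectors of G by power iteration (no full SVD)."""
    v = np.random.default_rng(0).standard_normal(G.shape[1])
    for _ in range(iters):
        u = G @ v
        u /= np.linalg.norm(u)
        v = G.T @ u
        v /= np.linalg.norm(v)
    return u, v

def frank_wolfe_completion(Y, mask, tau, steps=200):
    """Minimize ||mask*(X - Y)||_F^2 over the nuclear-norm ball ||X||_* <= tau."""
    X = np.zeros_like(Y)
    for k in range(steps):
        G = mask * (X - Y)             # (scaled) gradient of the squared loss
        u, v = top_singular_pair(-G)   # best rank-1 atom in the ball
        S = tau * np.outer(u, v)
        X += (2.0 / (k + 2)) * (S - X)  # standard Frank-Wolfe step size
    return X

rng = np.random.default_rng(1)
L = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank-3 ground truth
mask = (rng.random((20, 20)) < 0.6).astype(float)                # observed entries
X = frank_wolfe_completion(mask * L, mask, tau=np.linalg.norm(L, 'nuc'))
```

Each iterate is a convex combination of rank-1 atoms, so the nuclear-norm constraint is maintained for free, which is the parameter-free, easily parallelized character the abstract advertises.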

Here are additional papers and preprints that got my attention since yesterday:

Compressive Sensing (CS) is a new paradigm in signal acquisition and compression that has been attracting the interest of the signal compression community. When it comes to image compression applications, it is relevant to estimate the number of bits required to reach a specific image quality. Although several theoretical results regarding the rate-distortion performance of CS have been published recently, there are not many practical image compression results available. The main goal of this paper is to carry out an empirical analysis of the rate-distortion performance of CS in image compression. We analyze issues such as the minimization algorithm used and the transform employed, as well as the trade-off between the number of measurements and quantization error. From the experimental results obtained we highlight the potential and limitations of CS when compared to traditional image compression.
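The measurements-versus-quantization trade-off mentioned in the abstract comes down to simple accounting: at a fixed bit budget, taking more measurements m forces fewer bits B per measurement, and a uniform quantizer with step Δ contributes roughly Δ²/12 of mean-squared error per measurement. A minimal sketch of that accounting (all sizes and parameter names here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.uniform(-1.0, 1.0, size=200_000)   # stand-in for CS measurements scaled to [-1, 1)

def quantize(y, bits):
    """Uniform mid-rise quantizer with 2**bits levels on [-1, 1)."""
    delta = 2.0 / (2 ** bits)
    return delta * (np.floor((y + 1.0) / delta) + 0.5) - 1.0

m, N = 2500, 10_000   # e.g. 2500 measurements for a 100x100-pixel image
for bits in (4, 6, 8):
    delta = 2.0 / (2 ** bits)
    rate_bpp = m * bits / N                          # bits per pixel spent on measurements
    mse = np.mean((y - quantize(y, bits)) ** 2)      # empirical MSE tracks Delta^2/12
    print(f"B={bits}: rate={rate_bpp:.2f} bpp, quantization MSE={mse:.2e}")
```

Doubling B halves Δ and cuts the quantization MSE by a factor of four, while the rate only grows linearly; the interesting empirical question the paper studies is how this interacts with the reconstruction error from using fewer measurements.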
Echoing yesterday's paper on multiplicative noise in the measurement matrix, here is Anti-Measurement Matrix Uncertainty for Robust Sparse Signal Recovery with The Mixed l2 and l1 Norm Constraint by Yipeng Liu and Qun Wan. The abstract reads:
Compressive sensing (CS) is a technique for estimating a sparse signal from random measurements and the measurement matrix. Traditional sparse signal recovery methods degrade severely under measurement matrix uncertainty (MMU). Here the MMU is modeled as a bounded additive error. An anti-uncertainty constraint in the form of a mixed l2 and l1 norm is deduced from the sparse signal model with MMU. We then combine the sparsity constraint with the anti-uncertainty constraint to obtain an anti-uncertainty sparse signal recovery operator. Numerical simulations demonstrate that the proposed operator reconstructs better under MMU than traditional methods.
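The paper's exact recovery operator is not reproduced here, but the general idea, that a bounded error in the measurement matrix motivates an extra l2 penalty on the signal alongside the usual l1 sparsity term, can be sketched with a plain proximal-gradient (ISTA) solver for min ½||y − Ax||² + λ||x||₁ + (ε/2)||x||². Everything below (problem sizes, λ, ε, the perturbation model) is my own illustrative assumption:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_l1_l2(A, y, lam=0.02, eps=0.01, iters=1000):
    """ISTA for 0.5*||y - Ax||^2 + lam*||x||_1 + 0.5*eps*||x||^2."""
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + eps)   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + eps * x
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
m, n, k = 64, 128, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)     # nominal sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
E = 0.01 * rng.standard_normal((m, n))           # bounded perturbation of A (the MMU)
y = (A + E) @ x_true                             # measurements taken with the uncertain matrix
x_hat = ista_l1_l2(A, y)                         # recovery uses only the nominal A
```

The ε-term keeps the estimate from over-fitting the perturbed measurements; the paper derives a principled mixed-norm constraint from the bound on E rather than this generic elastic-net-style penalty.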
The feasibility of sparse signal reconstruction depends heavily on the inter-atom interference of the redundant dictionary. In this paper, a semi-blindly weighted minimum variance distortionless response (SBWMVDR) method is proposed to mitigate inter-atom interference. Examples of direction-of-arrival estimation are presented to show that orthogonal matching pursuit (OMP) based on SBWMVDR performs better than the ordinary OMP algorithm.

On the incoherence of noiselet and Haar bases by Tomas Tuma, Paul Hurley. The abstract reads:
Noiselets are a family of functions that are completely uncompressible by Haar wavelet analysis. This perfect incoherence with the Haar transform, coupled with the existence of a fast transform, has made them attractive as a sampling basis in compressive sampling. We derive a recursive construction of noiselet matrices and give a short matrix-based proof of the incoherence.
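The recursive construction is easy to state in matrix form: each row r of a size-n noiselet matrix spawns the two rows [(1−i)r, (1+i)r] and [(1+i)r, (1−i)r] of the size-2n matrix. A hedged sketch (the normalization and row ordering below are my conventions, not necessarily the paper's), together with a numerical check of the flat incoherence against the Haar basis:

```python
import numpy as np

def noiselet(n):
    """Unitary complex noiselet matrix of size n (n a power of two), built recursively."""
    N = np.array([[1.0 + 0j]])
    while N.shape[0] < n:
        a, b = (1 - 1j) / 2, (1 + 1j) / 2   # the 1/2 keeps each doubling unitary
        N = np.vstack([np.hstack([a * N, b * N]),
                       np.hstack([b * N, a * N])])
    return N

def haar(n):
    """Orthonormal Haar matrix of size n (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        m = H.shape[0]
        H = np.vstack([np.kron(H, [1.0, 1.0]),          # coarse scales, upsampled
                       np.kron(np.eye(m), [1.0, -1.0])  # finest-scale details
                       ]) / np.sqrt(2)
    return H

n = 16
N, H = noiselet(n), haar(n)
G = np.abs(N @ H.T)   # |<noiselet_i, haar_j>| for all pairs
```

Every entry of G comes out with modulus exactly 1/sqrt(n): no Haar atom captures any noiselet better than any other, which is the "perfect incoherence" the abstract refers to.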

High-rate data communication over a multipath wireless channel often requires that the channel response be known at the receiver. Training-based methods, which probe the channel in time, frequency, and space with known signals and reconstruct the channel response from the output signals, are most commonly used to accomplish this task. Traditional training-based channel estimation methods, typically comprising linear reconstruction techniques, are known to be optimal for rich multipath channels. However, physical arguments and growing experimental evidence suggest that many wireless channels encountered in practice tend to exhibit a sparse multipath structure that gets pronounced as the signal space dimension gets large (e.g., due to large bandwidth or a large number of antennas). In this paper, we formalize the notion of multipath sparsity and present a new approach to estimating sparse (or effectively sparse) multipath channels that is based on some of the recent advances in the theory of compressed sensing. In particular, it is shown in the paper that the proposed approach, termed compressed channel sensing, can potentially achieve a target reconstruction error using far less energy and, in many instances, latency and bandwidth than that dictated by the traditional least-squares-based training methods.

Model selection: Two fundamental measures of coherence and their algorithmic significance by Waheed U. Bajwa, Robert Calderbank, and Sina Jafarpour. The abstract reads:
The problem of model selection arises in a number of contexts, such as compressed sensing, subset selection in linear regression, estimation of structures in graphical models, and signal denoising. This paper generalizes the notion of incoherence in the existing literature on model selection and introduces two fundamental measures of coherence among the columns of a design matrix: the worst-case coherence and the average coherence. In particular, it utilizes these two measures of coherence to provide an in-depth analysis of a simple one-step thresholding (OST) algorithm for model selection. One of the key insights offered by the ensuing analysis is that OST is feasible for model selection as long as the design matrix obeys an easily verifiable property. In addition, the paper also characterizes the model-selection performance of OST in terms of the worst-case coherence, μ, and establishes that OST performs near-optimally in the low signal-to-noise ratio regime for N x C design matrices with μ = O(N^{-1/2}). Finally, in contrast to some of the existing literature on model selection, the analysis in the paper is nonasymptotic in nature, it does not require knowledge of the true model order, it is applicable to generic (random or deterministic) design matrices, and it neither requires submatrices of the design matrix to have full rank, nor does it assume a statistical prior on the values of the nonzero entries of the data vector.
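One-step thresholding really is one step: correlate the observations against the columns of the design matrix and keep the indices whose correlation magnitude clears a threshold. A minimal sketch on a toy noiseless model-selection problem (the threshold value and problem sizes are my illustrative choices):

```python
import numpy as np

def ost(A, y, threshold):
    """One-step thresholding: estimate the support from column correlations."""
    return set(np.flatnonzero(np.abs(A.T @ y) > threshold))

rng = np.random.default_rng(0)
m, n, k = 512, 1024, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)   # columns have roughly unit norm
support = set(rng.choice(n, k, replace=False).tolist())
x = np.zeros(n)
x[list(support)] = 1.0
y = A @ x
estimated = ost(A, y, threshold=0.5)
```

For this well-conditioned random design, on-support correlations concentrate near 1 while off-support correlations stay near 0, so a single threshold separates them; the paper's contribution is characterizing exactly when this works in terms of the worst-case and average coherence of A.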

Credit video: RSA drawing of Dan Pink talk at TED.
