Monday, April 12, 2010

CS: Around the blogs, Comparing simultaneous sparse approximation algos, Noise-Sensitivity Phase Transition in CS, Message-Passing for CF

To start the week nicely, I decided to go around the blogs and feature the ones that deal with subjects somehow related to Compressive Sensing. Here is what I found this past week:

Sofia Dahl and Bob Sturm in
Andrew Gelman in
Alex Gittens in
Gonzalo Vazquez Vilar in
Frank Nielsen in
Andrew McGregor in

I also found the following noteworthy papers, enjoy!

Surveying and comparing simultaneous sparse approximation (or group-lasso) algorithms by Alain Rakotomamonjy. The abstract reads:
In this paper, we survey and compare different algorithms that, given an overcomplete dictionary of elementary functions, solve the problem of simultaneous sparse signal approximation, with a common sparsity profile induced by an $\ell_p-\ell_q$ mixed-norm. Such a problem is also known in the statistical learning community as the group lasso problem. We have gathered and detailed different algorithmic results concerning these two equivalent approximation problems. We have also enriched the discussion by providing relations between several algorithms. Experimental comparisons of several of the detailed algorithms have also been carried out. The main lesson learned from these experiments is that, depending on the performance measure, greedy approaches and iterative reweighted algorithms are the most efficient algorithms, either in terms of computational complexity or in terms of sparsity recovery.
The attendant Multiple-Spars Toolbox featuring the examples of this paper can be found here.
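To make the $\ell_1-\ell_2$ setting concrete, here is a minimal proximal-gradient (ISTA-style) sketch of the simultaneous sparse approximation problem. Everything in it (dictionary size, number of signals, the value of lam) is an illustrative toy setup of my own, not taken from the paper or the toolbox, which implement far more refined solvers:

```python
import numpy as np

def block_soft_threshold(X, tau):
    """Row-wise shrinkage: the proximal operator of the l1-l2 mixed norm."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * X

def simultaneous_sparse_approx(D, Y, lam, n_iter=500):
    """Proximal-gradient (ISTA-style) solver for
        min_X 0.5 * ||Y - D X||_F^2 + lam * sum_i ||X[i, :]||_2,
    i.e. group lasso with one group per dictionary atom, so all the
    signals in Y share a common sparsity profile."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1/L, L = Lipschitz const of the gradient
    X = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        X = block_soft_threshold(X - step * (D.T @ (D @ X - Y)), step * lam)
    return X

# toy problem: 3 active atoms shared across 5 signals (all sizes illustrative)
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
X_true = np.zeros((128, 5))
X_true[rng.choice(128, 3, replace=False)] = rng.standard_normal((3, 5))
Y = D @ X_true + 0.01 * rng.standard_normal((64, 5))
X_hat = simultaneous_sparse_approx(D, Y, lam=0.5)
print("recovered rows:", np.flatnonzero(np.linalg.norm(X_hat, axis=1) > 1e-3))
```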


The Noise-Sensitivity Phase Transition in Compressed Sensing by David L. Donoho, Arian Maleki, Andrea Montanari. The abstract reads:
Consider the noisy underdetermined system of linear equations $y = Ax_0 + z_0$, with an $n \times N$ measurement matrix $A$, $n < N$, and Gaussian white noise $z_0 \sim N(0, \sigma^2 I)$. Both $y$ and $A$ are known, both $x_0$ and $z_0$ are unknown, and we seek an approximation to $x_0$. When $x_0$ has few nonzeros, useful approximations are obtained by $\ell_1$-penalized $\ell_2$ minimization, in which the reconstruction $\hat{x}_\lambda$ solves $\min_x \|y - Ax\|_2^2/2 + \lambda\|x\|_1$. Evaluate performance by the mean-squared error ($\mathrm{MSE} = E\|\hat{x}_\lambda - x_0\|_2^2/N$). Consider matrices $A$ with iid Gaussian entries and a large-system limit in which $n, N \to \infty$ with $n/N \to \delta$ and $k/n \to \rho$. Call the ratio $\mathrm{MSE}/\sigma^2$ the noise sensitivity. We develop formal expressions for the MSE of $\hat{x}_\lambda$, and evaluate its worst-case formal noise sensitivity over all types of $k$-sparse signals. The phase space $0 < \delta, \rho < 1$ is partitioned by the curve $\rho = \rho_{\mathrm{MSE}}(\delta)$ into two regions. Formal noise sensitivity is bounded throughout the region $\rho < \rho_{\mathrm{MSE}}(\delta)$ and is unbounded throughout the region $\rho > \rho_{\mathrm{MSE}}(\delta)$. The phase boundary $\rho = \rho_{\mathrm{MSE}}(\delta)$ is identical to the previously known phase transition curve for equivalence of $\ell_1$-$\ell_0$ minimization in the $k$-sparse noiseless case. Hence a single phase boundary describes the fundamental phase transitions both for the noiseless and noisy cases. Extensive computational experiments validate the predictions of this formalism, including the existence of game-theoretical structures underlying it (saddlepoints in the payoff, least-favorable signals, and maximin penalization). Underlying our formalism is an approximate message passing soft thresholding algorithm (AMP) introduced earlier by the authors. Other papers by the authors detail expressions for the formal MSE of AMP and its close connection to $\ell_1$-penalized reconstruction. Here we derive the minimax formal MSE of AMP and then read out results for $\ell_1$-penalized reconstruction.
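For readers curious about the AMP iteration the abstract builds on, here is a minimal numpy sketch of AMP soft thresholding for $\ell_1$-penalized reconstruction. The threshold rule (alpha times an empirical noise estimate) and the toy problem sizes are my own simplifications; the paper derives the optimal tuning:

```python
import numpy as np

def soft(r, theta):
    """Scalar soft-thresholding eta(r; theta)."""
    return np.sign(r) * np.maximum(np.abs(r) - theta, 0.0)

def amp_l1(A, y, alpha=1.5, n_iter=30):
    """Approximate message passing for l1-penalized reconstruction.
    Iteration:  x^{t+1} = eta(x^t + A' z^t; theta_t)
                z^{t+1} = y - A x^{t+1} + (1/delta) z^t <eta'>
    where <eta'> is the empirical mean of the threshold derivative; the
    Onsager term (1/delta) z^t <eta'> is what distinguishes AMP from
    plain iterative soft thresholding."""
    n, N = A.shape
    delta = n / N
    x, z = np.zeros(N), y.copy()
    for _ in range(n_iter):
        theta = alpha * np.linalg.norm(z) / np.sqrt(n)  # threshold ~ alpha * sigma_t
        r = x + A.T @ z
        x_new = soft(r, theta)
        z = y - A @ x_new + (z / delta) * np.mean(np.abs(r) > theta)
        x = x_new
    return x

# toy instance in the regime of the paper: iid Gaussian A, k-sparse x0
rng = np.random.default_rng(1)
n, N, k = 250, 500, 25
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x0 + 0.05 * rng.standard_normal(n)
x_hat = amp_l1(A, y)
print("MSE per coordinate:", np.mean((x_hat - x0) ** 2))
```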
Message-Passing Inference on a Factor Graph for Collaborative Filtering by Byung-Hak Kim, Arvind Yedla, Henry D. Pfister. The abstract reads:
This paper introduces a novel message-passing (MP) framework for the collaborative filtering (CF) problem associated with recommender systems. We model the movie-rating prediction problem popularized by the Netflix Prize using a probabilistic factor graph model, and study the model by deriving generalization error bounds in terms of the training error. Based on the model, we develop a new MP algorithm, termed IMP, for learning the model. To show the superiority of the IMP algorithm, we compare it with the closely related expectation-maximization (EM) based algorithm and a number of other matrix completion algorithms. Our simulation results on Netflix data show that, while the methods perform similarly with large amounts of data, the IMP algorithm is superior for small amounts of data. This alleviates the cold-start problem of CF systems in practice. Another advantage of the IMP algorithm is that it can be analyzed using the technique of density evolution (DE) that was originally developed for MP decoding of error-correcting codes.
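The IMP algorithm itself requires the factor-graph machinery of the paper, but a minimal alternating-least-squares matrix-completion baseline, of the kind IMP is compared against, can be sketched in a few lines. The dimensions, rank, and regularization below are illustrative toy choices, not values from the paper:

```python
import numpy as np

def als_complete(R, mask, rank=5, lam=0.1, n_iter=20):
    """Alternating least squares on the observed entries of a rating
    matrix R; mask[i, j] is True where a rating is observed.
    Returns the completed matrix U @ V.T."""
    m, n = R.shape
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    I = lam * np.eye(rank)
    for _ in range(n_iter):
        for i in range(m):               # update each user factor
            obs = mask[i]
            if obs.any():
                Vo = V[obs]
                U[i] = np.linalg.solve(Vo.T @ Vo + I, Vo.T @ R[i, obs])
        for j in range(n):               # update each item factor
            obs = mask[:, j]
            if obs.any():
                Uo = U[obs]
                V[j] = np.linalg.solve(Uo.T @ Uo + I, Uo.T @ R[obs, j])
    return U @ V.T

# toy example: rank-5 ratings with 30% of entries observed
rng = np.random.default_rng(2)
R_true = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 30))
mask = rng.random((40, 30)) < 0.3
R_hat = als_complete(np.where(mask, R_true, 0.0), mask)
print("RMSE on held-out entries:",
      np.sqrt(np.mean((R_hat[~mask] - R_true[~mask]) ** 2)))
```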
Finally, a paper behind a paywall:

Subcellular Localization of Gram-Negative Bacterial Proteins Using Sparse Learning by Zhonglong Zheng and Jie Yang. The abstract reads:
One of the main challenges faced by biological applications is to predict protein subcellular localization accurately and automatically. To achieve this, a wide variety of machine learning methods have been proposed in recent years. Most of them focus on finding the optimal classification scheme, and fewer take simplifying the complexity of the biological system into account. Traditionally, such bio-data are analyzed by first performing feature selection before classification. Motivated by CS (Compressive Sensing), we propose a method which performs locality preserving projection with a sparseness criterion such that feature selection and dimension reduction are merged into one analysis. The proposed sparse method decreases the complexity of the biological system while increasing protein subcellular localization accuracy. Experimental results are quite encouraging, indicating that the aforementioned sparse method is quite promising in dealing with complicated biological problems, such as predicting the subcellular localization of Gram-negative bacterial proteins.
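The idea of merging feature selection and dimension reduction into one analysis can be illustrated with a generic sparse-projection sketch: $\ell_1$-penalized regression onto class indicators zeroes out uninformative features, while the surviving weights define the reduced representation. This is a stand-in under my own assumptions, not the authors' locality-preserving formulation:

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=300):
    """ISTA for min_w 0.5*||X w - y||^2 + lam*||w||_1."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        r = w - step * (X.T @ (X @ w - y))                        # gradient step
        w = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # shrinkage
    return w

def sparse_projection(X, labels, lam):
    """One sparse direction per class, obtained by regressing class
    indicators on the features. Features whose row of W is all zero
    are dropped, so selection and reduction happen in a single fit."""
    classes = np.unique(labels)
    return np.column_stack([ista_lasso(X, (labels == c).astype(float), lam)
                            for c in classes])

# toy data: 100 proteins x 50 features, only the first 5 informative
# (lam is an illustrative choice for this scale of problem)
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 50))
labels = (X[:, :5].sum(axis=1) > 0).astype(int)
W = sparse_projection(X, labels, lam=10.0)
print("selected features:", np.flatnonzero(np.abs(W).sum(axis=1) > 1e-6))
```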





Credit Photo: Spy Films, Nuit Blanche. The making-of can be found here.
