Friday, December 28, 2012

Multiple regularizers, Robust NMF and a simple question

About a year ago, we had a few discussions on multiple regularizers (Multiple Regularizers For the Reconstruction of Natural Objects? and Optimization with multiple non-standard regularizers) and it looks like the subject is taking off; see the following papers:

Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow us to naturally extend parsimonious models to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speedup compared to the exact optimization algorithms.
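The "learned pursuit" idea above unrolls an iterative solver into a fixed-depth, feed-forward network. Here is a minimal numpy sketch (an illustration, not the authors' code) of the underlying construction: fixed-depth ISTA for sparse coding, where the matrices `W`, `S` and the thresholds are set to their classical ISTA values but would be learned in the paper's framework.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of the l1 norm
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(y, D, lam=0.1, n_layers=10):
    """Fixed-depth ISTA: each 'layer' is one proximal-gradient step.
    In a learned pursuit network, W, S and the thresholds are trained;
    here they are fixed to their classical ISTA values."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the data-fit gradient
    W = D.T / L                             # input weight
    S = np.eye(D.shape[1]) - D.T @ D / L    # recurrent weight
    x = soft_threshold(W @ y, lam / L)
    for _ in range(n_layers - 1):
        x = soft_threshold(S @ x + W @ y, lam / L)
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))           # overcomplete dictionary
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]      # sparse ground truth
y = D @ x_true
x_hat = unrolled_ista(y, D, lam=0.05, n_layers=100)
print(np.linalg.norm(D @ x_hat - y))        # residual shrinks with depth
```

The point of the paper is that, once the depth is fixed, the per-layer weights can be trained end-to-end so that a few layers approximate what exact ISTA needs hundreds of iterations to compute.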

Simultaneously Structured Models with Application to Sparse and Low-rank Matrices by Samet Oymak, Amin Jalali, Maryam Fazel, Yonina C. Eldar, Babak Hassibi. The abstract reads:
The topic of recovery of a structured model given a small number of linear observations has been well-studied in recent years. Examples include recovering sparse or group-sparse vectors, low-rank matrices, and the sum of sparse and low-rank matrices, among others. In various applications in signal processing and machine learning, the model of interest is known to be structured in \emph{several} ways at the same time, for example, a matrix that is simultaneously sparse and low-rank. An important application is the sparse phase retrieval problem, where the goal is to recover a sparse signal from phaseless measurements. In machine learning, the problem comes up when combining several regularizers that each promote a certain desired structure. Often penalties (norms) that promote each individual structure are known and yield an order-wise optimal number of measurements (e.g., $\ell_1$ norm for sparsity, nuclear norm for matrix rank), so it is reasonable to minimize a combination of such norms. We show that, surprisingly, if we use multi-objective optimization with the individual norms, then we can do no better, order-wise, than an algorithm that exploits only one of the several structures. This result suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation, i.e., not one that is a function of the convex relaxations used for each structure. We then specialize our results to the case of sparse and low-rank matrices. We show that a nonconvex formulation of the problem can recover the model from very few measurements, on the order of the degrees of freedom of the matrix, whereas the convex problem obtained from a combination of the $\ell_1$ and nuclear norms requires many more measurements. This proves an order-wise gap between the performance of the convex and nonconvex recovery problems in this case.
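To make the combined-norm relaxation discussed above concrete, here is a small numpy sketch (an illustration, not from the paper) that evaluates the penalty lam_l1 * ||X||_1 + lam_nuc * ||X||_* on a matrix that is simultaneously sparse and low-rank:

```python
import numpy as np

def combined_penalty(X, lam_l1=1.0, lam_nuc=1.0):
    """Combined convex surrogate: the l1 norm promotes sparsity,
    the nuclear norm (sum of singular values) promotes low rank."""
    l1 = np.abs(X).sum()
    nuclear = np.linalg.svd(X, compute_uv=False).sum()
    return lam_l1 * l1 + lam_nuc * nuclear

# A simultaneously sparse and low-rank matrix:
# the rank-1 outer product of two sparse vectors.
u = np.zeros(10); u[[1, 4]] = [1.0, -1.0]
v = np.zeros(10); v[[2, 7]] = [2.0, 1.0]
X = np.outer(u, v)          # rank 1, only 4 nonzero entries
print(combined_penalty(X))  # = ||u||_1 * ||v||_1 + ||u||_2 * ||v||_2
```

For such a rank-1 outer product, ||X||_1 = ||u||_1 ||v||_1 and ||X||_* = ||u||_2 ||v||_2. The paper's negative result is that minimizing any weighted combination of these two norms cannot, order-wise, beat using just one of them; exploiting both structures requires a genuinely different relaxation.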

After the use of an iPhone for signal reconstruction (not just display!), here is an iPad involved in Robust PCA decomposition. One item of further interest is the robust-NMF approach. Alas, like the other preprints, no code is available.

Learning Robust Low-Rank Representations by Pablo Sprechmann, Alex M. Bronstein, Guillermo Sapiro. The abstract reads:
In this paper we present a comprehensive framework for learning robust low-rank representations by combining and extending recent ideas for learning fast sparse coding regressors with structured non-convex optimization techniques. This approach connects robust principal component analysis (RPCA) with dictionary learning techniques and allows its approximation via trainable encoders. We propose an efficient feed-forward architecture derived from an optimization algorithm designed to exactly solve robust low dimensional projections. This architecture, in combination with different training objective functions, allows the regressors to be used as online approximants of the exact offline RPCA problem or as RPCA-based neural networks. Simple modifications of these encoders can handle challenging extensions, such as the inclusion of geometric data transformations. We present several examples with real data from image, audio, and video processing. When used to approximate RPCA, our basic implementation shows several orders of magnitude speedup compared to the exact solvers with almost no performance degradation. We show the strength of the inclusion of learning to the RPCA approach on a music source separation application, where the encoders outperform the exact RPCA algorithms, which are already reported to produce state-of-the-art results on a benchmark database. Our preliminary implementation on an iPad shows faster-than-real-time performance with minimal latency.
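For readers who want a baseline to compare such trainable encoders against, here is a bare-bones sketch of the exact RPCA problem the encoders approximate: inexact augmented-Lagrangian iterations for principal component pursuit (a standard scheme, not the paper's code; the parameter choices below are common defaults):

```python
import numpy as np

def soft(x, t):
    # entrywise soft thresholding (proximal operator of the l1 norm)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def rpca_pcp(M, n_iter=200):
    """Inexact augmented-Lagrangian iterations for principal component
    pursuit: decompose M = L + S with L low-rank and S sparse. The
    trainable encoders described above replace a loop like this one
    with a fixed-depth feed-forward network."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * soft(sig, 1.0 / mu)) @ Vt
        # sparse update: entrywise soft thresholding
        S = soft(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)
        mu *= 1.05  # gradually tighten the constraint M = L + S
    return L, S

rng = np.random.default_rng(1)
L0 = np.outer(rng.standard_normal(30), rng.standard_normal(30))  # rank-1 "background"
S0 = np.zeros((30, 30))
S0.flat[rng.choice(900, size=40, replace=False)] = 10.0          # sparse corruptions
M = L0 + S0
L_hat, S_hat = rpca_pcp(M)
```

The SVD inside the loop is what makes the exact solver expensive; the speedups reported in the abstract come from learning a fixed-complexity network that sidesteps it at inference time.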

Robust NMF showed up earlier here.

When one simply consults PubMed with Matrix Factorization as a keyword, one realizes that NMF is being equated with matrix factorization (as if there were no other matrix factorizations!) and that different fields are using it extensively. Why there is no code release for Robust NMF is beyond me.
If there is no code release, the algorithm cannot be listed on the Advanced Matrix Factorization Jungle page. If you implement any of these algorithms, please do let me know and you will be featured on the blog and on the Jungle page.

Image Credit: NASA/JPL-Caltech/Space Science Institute, PIA14934: A Splendor Seldom Seen

Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
