
Friday, May 02, 2014

Multiple Regularizers: Multi-View Learning and Hyperspectral Imagery



I came across the following two papers, which have in common the use of several regularizers in learning and inverse problems, an issue of ongoing interest in semi-supervised learning (see also an earlier entry on the subject).
I wonder how these approaches will eventually be compared. The only way we have been able to do this in compressive sensing is through their performance on phase-transition-type problems. We all know that using prior information helps polynomial-time algorithms go further (for instance, block-sparse signals require less sampling than merely sparse signals), but the question for this multi-regularizer issue is: should we go for simple metrics one at a time (and hence with multiple regularizers), or should we go for specifically designed metrics (see Francis Bach's course, Structured Sparsity Through Convex Optimization, from slide 99 onward, where he mentions the use of submodularity to find new regularizers)? To avoid the fragmentation between fields that would otherwise be unavoidable, we need to make sure that multiple regularizers, or single more effective regularizers, are put to the test through the simple phase-transition acid test.
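
To make that acid test concrete, here is a minimal sketch, assuming numpy and cvxpy are installed, of the kind of comparison I have in mind: the same compressive measurements are decoded once with a plain l1 regularizer and once with a block-aware mixed l2/l1 regularizer, and one would sweep the number of measurements to trace the two phase transitions. Everything in it (sizes, weights, solver) is an illustrative assumption, not code from either paper.

# A minimal sketch (not from either paper): recover a synthetic block-sparse
# signal from the same compressive measurements with a plain l1 regularizer
# and with a block-aware mixed l2/l1 regularizer, then compare recovery errors.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, block, k_blocks, m = 120, 10, 2, 45   # signal length, block size, active blocks, measurements

# Block-sparse ground truth: two active blocks of 10 coefficients each.
x_true = np.zeros(n)
for b in rng.choice(n // block, size=k_blocks, replace=False):
    x_true[b * block:(b + 1) * block] = rng.standard_normal(block)

A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

def recover(regularizer):
    """Minimize the given regularizer subject to exact data fidelity."""
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(regularizer(x)), [A @ x == y]).solve()
    return x.value

x_l1 = recover(lambda x: cp.norm(x, 1))                         # sparsity prior only
x_grp = recover(lambda x: sum(cp.norm(x[b * block:(b + 1) * block], 2)
                              for b in range(n // block)))      # block-sparsity prior

for name, xh in [("l1", x_l1), ("group l2/l1", x_grp)]:
    print(f"{name:12s} relative error: "
          f"{np.linalg.norm(xh - x_true) / np.linalg.norm(x_true):.3e}")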

Without further ado, here are today's papers:

Overlapping Trace Norms in Multi-View Learning by Behrouz Behmardi, Cedric Archambeau, Guillaume Bouchard
Multi-view learning leverages correlations between different sources of data to make predictions in one view based on observations in another view. A popular approach is to assume that both the correlations between the views and the view-specific covariances have a low-rank structure, leading to inter-battery factor analysis, a model closely related to canonical correlation analysis. We propose a convex relaxation of this model using structured norm regularization. Further, we extend the convex formulation to a robust version by adding an l1-penalized matrix to our estimator, similarly to convex robust PCA. We develop and compare scalable algorithms for several convex multi-view models. We show experimentally that the view-specific correlations are improving data imputation performances, as well as labeling accuracy in real-world multi-label prediction tasks.
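
For readers who want to see what a convex multi-view estimator of this flavor can look like, here is a toy sketch, again assuming numpy and cvxpy: it combines a nuclear (trace) norm on the concatenated views, a trace norm per view, and an l1-penalized matrix for robustness. This is only a sketch of the general idea, not the authors' implementation; the regularization weights and data shapes are made up.

# Toy convex multi-view estimator in the spirit of the abstract (a sketch,
# not the authors' code): overlapping trace norms on the concatenated views
# and on each view, plus an l1-penalized matrix for robustness to outliers.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, d1, d2 = 50, 8, 6                         # samples, view-1 and view-2 dimensions

# Synthetic data: rank-2 shared structure across both views plus small noise.
Y = rng.standard_normal((n, 2)) @ rng.standard_normal((2, d1 + d2)) \
    + 0.01 * rng.standard_normal((n, d1 + d2))

X = cp.Variable((n, d1 + d2))                  # low-rank fit (shared + view-specific)
S = cp.Variable((n, d1 + d2))                  # sparse outlier component
lam_all, lam_view, lam_sparse = 1.0, 0.5, 0.1  # illustrative weights

objective = (0.5 * cp.sum_squares(Y - X - S)
             + lam_all * cp.normNuc(X)               # trace norm on both views jointly
             + lam_view * cp.normNuc(X[:, :d1])      # view-1 trace norm
             + lam_view * cp.normNuc(X[:, d1:])      # view-2 trace norm
             + lam_sparse * cp.sum(cp.abs(S)))       # robust l1 term
cp.Problem(cp.Minimize(objective)).solve()

print("leading singular values of the joint fit:",
      np.round(np.linalg.svd(X.value, compute_uv=False)[:4], 3))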


Compressive Sensing (CS) suggests a new mechanism for hyperspectral imaging, and practical hyperspectral compressive sensors have been designed to acquire fewer compressive measurements. However, the numerical reconstruction of the hyperspectral data from the compressive measurements requires solving an ill-posed inverse problem, and additional constraints are needed to obtain a better solution. Based on the observation that reconstruction quality in CS can be improved by the intelligent use of prior knowledge about the original data, we propose an efficient new method to reconstruct Hyperspectral Images (HSI) in this paper. Our method exploits the structural characteristics of HSI data, namely spatial 2D piecewise smoothness, the low-rank property and adjacent-spectrum correlation, allowing HSI to be reconstructed with compound regularizers. Moreover, an efficient numerical algorithm is developed for our method. The experimental results show that our method is superior to other known state-of-the-art methods, achieving higher reconstruction quality at the same measurement rates.
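
Purely as an illustration of what a compound regularizer looks like in practice (this is not the paper's algorithm), here is a small sketch, assuming numpy and cvxpy, that reconstructs a tiny synthetic hyperspectral cube from random measurements with per-band total variation for spatial piecewise smoothness plus a nuclear norm on the band-unfolded matrix for the low-rank / spectral-correlation prior. All sizes and weights are illustrative.

# Sketch of compound-regularizer hyperspectral reconstruction (illustrative,
# not the paper's method): data fidelity + per-band total variation + nuclear
# norm on the pixels-by-bands unfolding.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
h, w, bands, m = 8, 8, 4, 40                 # tiny cube, m measurements per band
npix = h * w

# Ground truth: a piecewise-constant spatial pattern scaled differently per band,
# so the pixels-by-bands unfolding is rank one and spatially smooth.
pattern = np.zeros((h, w)); pattern[2:6, 2:6] = 1.0
cube = np.stack([s * pattern for s in np.linspace(1.0, 2.0, bands)], axis=-1)
X_true = cube.reshape(npix, bands)           # pixels x bands

A = rng.standard_normal((m, npix)) / np.sqrt(m)
Y = A @ X_true                               # compressive measurements, one column per band

X = cp.Variable((npix, bands))
lam_tv, lam_nuc = 0.05, 0.1                  # illustrative weights
tv_term = sum(cp.tv(cp.reshape(X[:, b], (h, w), order='C')) for b in range(bands))
objective = (0.5 * cp.sum_squares(A @ X - Y)
             + lam_tv * tv_term               # spatial piecewise smoothness
             + lam_nuc * cp.normNuc(X))       # low rank across bands
cp.Problem(cp.Minimize(objective)).solve()

print(f"relative reconstruction error: "
      f"{np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true):.3e}")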

Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
