Tuesday, August 08, 2017

When is Network Lasso Accurate?, The Network Nullspace Property for Compressed Sensing of Big Data over Networks, Semi-Supervised Learning via Sparse Label Propagation

Alex just mentioned this to me:

I started to translate recovery conditions from compressed sensing into the graph signal setting. So far, I managed to relate the null space property and variants of the restricted isometry properties to the connectivity properties (topology) of networks. In particular, the conditions amount to the existence of certain network flows. I would be happy if you have a look and share your opinion with me:
Greetings from Finland, Alex

Thanks Alex! Here are the preprints:

When is Network Lasso Accurate?

A main workhorse for statistical learning and signal processing using sparse models is the least absolute shrinkage and selection operator (Lasso). The Lasso has recently been adapted to massive network-structured datasets, i.e., big data over networks. In particular, the network Lasso makes it possible to recover (or learn) graph signals from a small number of noisy signal samples by using the total variation semi-norm as a regularizer. Some work has been devoted to efficient and scalable implementations of the network Lasso, but little is known about the conditions on the underlying network structure that ensure a high accuracy of the network Lasso. By leveraging concepts from compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee that the network Lasso delivers an accurate estimate of the entire underlying graph signal.
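
For readers who want to see the estimator concretely, here is a minimal sketch (not the authors' code) of the network Lasso on a toy chain graph, solved with the generic convex solver cvxpy; the graph, sampling set, samples and regularization parameter below are made up for illustration:

import numpy as np
import cvxpy as cp

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]   # toy chain graph on 5 nodes
n = 5
M = [0, 4]                                 # sampling set: nodes with observed samples
y = np.array([1.0, -1.0])                  # noisy signal samples at the nodes in M
lam = 0.5                                  # regularization parameter (assumption)

x = cp.Variable(n)                         # graph signal to be recovered
fit = cp.sum_squares(cp.hstack([x[i] for i in M]) - y)   # empirical error on M
tv = sum(cp.abs(x[i] - x[j]) for (i, j) in edges)        # total variation semi-norm
cp.Problem(cp.Minimize(fit + lam * tv)).solve()
print(np.round(x.value, 3))                # estimate of the entire graph signal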

The Network Nullspace Property for Compressed Sensing of Big Data over Networks

We adapt the nullspace property of compressed sensing for sparse vectors to semi-supervised learning of labels for network-structured datasets. In particular, we derive a sufficient condition, which we term the network nullspace property, for convex optimization methods to accurately learn labels which form smooth graph signals. The network nullspace property involves both the network topology and the sampling strategy and can be used to guide the design of efficient sampling strategies, i.e., the selection of those data points whose labels provide the most information for the learning task.
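
For readers less familiar with the compressed sensing side, here is the classical nullspace property that the paper adapts (this is the standard textbook statement, not the network version derived in the preprint): a measurement matrix A allows exact recovery of every s-sparse vector by ℓ1 minimization if and only if

\[
\|v_S\|_1 < \|v_{S^c}\|_1
\quad \text{for all } v \in \ker(A) \setminus \{0\}
\text{ and all index sets } S \text{ with } |S| \le s.
\]

Roughly speaking, the network nullspace property plays the analogous role for graph signal recovery, with the ℓ1 norm replaced by the total variation semi-norm and the relevant nullspace determined by the network topology and the sampling set.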

Semi-Supervised Learning via Sparse Label Propagation

This work proposes a novel method for semi-supervised learning from partially labeled massive network-structured datasets, i.e., big data over networks. We model the underlying hypothesis, which relates data points to labels, as a graph signal, defined over some graph (network) structure intrinsic to the dataset. Following the key principle of supervised learning, i.e., similar inputs yield similar outputs, we require the graph signals induced by labels to have small total variation. Accordingly, we formulate the problem of learning the labels of data points as a non-smooth convex optimization problem which amounts to balancing the empirical loss, i.e., the discrepancy with the partially available label information, against the smoothness quantified by the total variation of the learned graph signal. We solve this optimization problem by appealing to a recently proposed preconditioned variant of the popular primal-dual method by Pock and Chambolle, which results in a sparse label propagation algorithm. This learning algorithm allows for a highly scalable implementation as message passing over the underlying data graph. By applying concepts of compressed sensing to the learning problem, we are also able to provide a transparent sufficient condition on the underlying network structure such that accurate learning of the labels is possible. We also present an implementation of the message passing formulation in big data frameworks.
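
As a concrete illustration of the primal-dual idea, here is a minimal sketch under simplifying assumptions (a toy chain graph, unit edge weights and hard constraints on the observed labels); this is the plain Chambolle-Pock iteration for TV minimization, not the authors' preconditioned variant:

import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]   # toy chain graph
n = 5
M, y = [0, 4], {0: 1.0, 4: -1.0}           # labeled nodes and their labels

# graph incidence matrix D: (Dx)_e is the signal difference along edge e,
# so ||Dx||_1 is the total variation of the graph signal x
D = np.zeros((len(edges), n))
for r, (i, j) in enumerate(edges):
    D[r, i], D[r, j] = 1.0, -1.0

L = np.linalg.norm(D, 2)                   # spectral norm of D
tau = sigma = 0.5 / L                      # step sizes with tau*sigma*L**2 < 1
x, x_bar, u = np.zeros(n), np.zeros(n), np.zeros(len(edges))
for _ in range(500):
    # dual step: ascent, then projection onto the unit ball
    # (the proximal map of the conjugate of the l1 norm)
    u = np.clip(u + sigma * D @ x_bar, -1.0, 1.0)
    x_old = x.copy()
    # primal step: descent, then re-impose the observed labels
    x = x - tau * (D.T @ u)
    for i in M:
        x[i] = y[i]
    x_bar = 2 * x - x_old                  # over-relaxation
print(np.round(x, 3))                      # learned labels for all nodes

Each iteration touches only quantities living on single edges and nodes, which is the message passing structure that makes the scheme scalable to big data over networks.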

Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
