In a gut-feeling review of 2012, I pointed to an acceleration of work around AMP solvers, but I missed other items of similar importance. ADMM has also seen a rise in the past year and a half (especially if you consider Split Bregman to be an ADMM technique). The most insightful development this past year, however, was the ascent of the Random Forest algorithm, as witnessed by the numerous top places it took in different Kaggle contests (The Two Most Important Algorithms in Predictive Modeling Today). Since those contests are run on very different datasets, this really points to a kind of universality I would like to hear about more often. Another item that, I think, ought to be watched is the rise of GraphLab and, most importantly, GraphChi. Let us see how these items will have evolved in a year. In the meantime, here are two papers showing ways to further reduce the burden of using AMP solvers:
Iterative Decoding Beyond Belief Propagation by Shiva Kumar Planjery, Shashi Kiran Chilappagari, Bane Vasic, David Declercq, Ludovic Danjean. The abstract reads:
At the heart of modern coding theory lies the fact that low-density parity-check (LDPC) codes can be efficiently decoded by belief propagation (BP). The BP is an inference algorithm which operates on a graphical model of a code, and lends itself to low-complexity and high-speed implementations, making it the algorithm of choice in many applications. It has unprecedentedly good error rate performance, so good that when decoded by the BP, LDPC codes approach theoretical limits of channel capacity. However, this capacity approaching property holds only in the asymptotic limit of code length, while codes of practical lengths suffer abrupt performance degradation in the low noise regime known as the error floor phenomenon. Our study of error floor has led to an interesting and surprising finding that it is possible to design iterative decoders which are much simpler yet better than belief propagation! These decoders do not propagate beliefs but a rather different kind of messages that reflect the local structure of the code graph. This has opened a plethora of exciting theoretical problems and applications. This paper introduces this new paradigm.
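To make the idea of iterative decoding without beliefs concrete, here is a toy sketch of the classic hard-decision bit-flipping decoder. This is a much older and simpler scheme than the decoders the paper introduces, shown only to illustrate the flavor of passing non-probabilistic messages on a code graph; the (7,4) Hamming parity-check matrix below is a hypothetical stand-in for a real LDPC code.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code -- a hypothetical
# stand-in for an LDPC code, small enough to read at a glance.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(y, H, max_iters=20):
    """Hard-decision bit-flipping: repeatedly flip the bit that
    participates in the largest number of unsatisfied parity checks."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            break                      # every parity check is satisfied
        counts = H.T.dot(syndrome)     # failing checks touching each bit
        x[np.argmax(counts)] ^= 1      # flip the most-suspect bit
    return x

codeword = np.zeros(7, dtype=int)      # the all-zero codeword
received = codeword.copy()
received[2] ^= 1                       # inject a single bit error
decoded = bit_flip_decode(received, H)
print(decoded)                         # the single error is corrected
```

The messages here are single bits (syndrome values and flip counts) rather than beliefs, which is exactly the kind of trade-off, cruder messages for much cheaper updates, that the paper pushes much further.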
Stochastic Belief Propagation: A Low-Complexity Alternative to the Sum-Product Algorithm by Nima Noorshams, Martin J. Wainwright. The abstract reads:
The belief propagation (BP) or sum-product algorithm is a widely-used message-passing method for computing marginal distributions in graphical models. At the core of the BP message updates, when applied to a graphical model involving discrete variables with pairwise interactions, lies a matrix-vector product with complexity that is quadratic in the state dimension d, and requires transmission of a (d − 1)-dimensional vector of real numbers (messages) to its neighbors. Since various applications involve very large state dimensions, such computation and communication complexities can be prohibitively complex. In this paper, we propose a low-complexity variant of BP, referred to as stochastic belief propagation (SBP). As suggested by the name, it is an adaptively randomized version of the BP message updates in which each node passes randomly chosen information to each of its neighbors. The SBP message updates reduce the computational complexity (per iteration) from quadratic to linear in d, without assuming any particular structure of the potentials, and also reduce the communication complexity significantly, requiring only log₂ d bits transmission per edge. Moreover, we establish a number of theoretical guarantees for the performance of SBP, showing that it converges almost surely to the BP fixed point for any tree structured graph, and for any graph with cycles satisfying a contractivity condition. In addition, for these graphical models, we provide non-asymptotic upper bounds on the convergence rate, showing that the ℓ∞ norm of the error vector decays no slower than O(1/√t) with the number of iterations t on trees and the normalized mean-squared error decays as O(1/t) for general graphs. This analysis, also supported by experimental results, shows that SBP can provably yield reductions in computational and communication complexities for various classes of graphical models.
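To see where the quadratic-to-linear reduction comes from, here is a minimal sketch of the randomized column-sampling idea on a single directed edge: instead of computing the full matrix-vector product Γ·m, sample an index from the incoming message and send the corresponding column of Γ, then average the samples. The potential Γ, the uniform incoming message, and the step sizes are all illustrative assumptions of mine, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50  # state dimension (illustrative)

# Hypothetical edge potential, normalized to be column-stochastic so
# that every column is itself a probability vector.
Gamma = rng.random((d, d))
Gamma /= Gamma.sum(axis=0)
m = np.full(d, 1.0 / d)               # incoming message (uniform)

# Exact BP update on this edge: a full matrix-vector product, O(d^2).
m_exact = Gamma.dot(m)

# SBP-style update: sample one index from the incoming message and use
# the corresponding column of Gamma -- O(d) work per iteration -- then
# average the samples with a decaying 1/t step size (Robbins-Monro).
m_sbp = m.copy()
for t in range(1, 20001):
    j = rng.choice(d, p=m)            # index drawn from the message
    m_sbp += (Gamma[:, j] - m_sbp) / t  # running average of columns
print(np.abs(m_sbp - m_exact).max())  # small: the average tracks Gamma.m
```

Since the expectation of a sampled column is exactly Γ·m, the running average converges to the true update at the O(1/√t) rate the abstract mentions; the paper interleaves such updates across all edges with suitable damping.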
h/t Danny Bickson and Sergey Ten
Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.