Wednesday, April 23, 2014

A Convergence of Interests: IEEE SP Mag, COLT2014 and more...

I don't know if this is a sign of the times, but this Sunday Morning Insight entry, entitled Why You Should Care About Phase Transitions in Clustering, got 35+ Google recommendations. That's a record as far as this blog is concerned. Why a sign of the times? Because, in the same way we are seeing a convergence between sensing and machine learning, there seems to be a similar convergence of themes related to advanced matrix factorizations across theoretical computer science, mathematics, statistics and signal processing. Here are four examples that just showed up on my radar screen, all related in one way or another to advanced matrix factorization.

Nicolas Gillis just mentioned to me that the IEEE Signal Processing Magazine is freely available online at http://online.qmags.com/SIPR0514. Here are the articles that he mentioned might be of interest to readers of Nuit Blanche:

Dustin Mixon just wrote a blog entry on Compressive classification and the rare eclipse problem, which we mentioned a while back.

Afonso Bandeira discusses the rank recovery problem, specifically a problem he and his co-authors mention in [1]. I very much like the clear introduction, which goes like this:
Recovery problems in many fields are commonly solved under the paradigm of maximum likelihood estimation. Despite the rich theory it enjoys, in many instances the parameter space is exponentially large and non-convex, often rendering the computation of the maximum likelihood estimator (MLE) intractable. It is then common to settle for heuristics, such as expectation-maximization. Unfortunately, popular iterative heuristics often get trapped in local minima and one usually does not know if the global optimum was achieved.
A common alternative to these heuristics is the use of convex relaxations - to attempt optimizing (usually minimizing the minus log-likelihood) in a larger convex set that contains the parameter space of interest, as that allows one to leverage the power of convex optimization (e.g. Candes and Tao (2010)). The downside is that the solution obtained might not be in the original feasible set, forcing one to take an extra, potentially suboptimal, rounding step. The upside is that if it does lie in the original set then no rounding step is needed and one is guaranteed to have found the optimal solution to the original problem. Fortunately, this seems to often be the case in several problems.
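To make that relax-and-check paradigm concrete, here is a small hypothetical sketch (numpy/scipy, a toy box relaxation rather than the SDP relaxations discussed in [1]): a least-squares problem with ±1-valued unknowns is relaxed to the box [-1, 1]; if the relaxed minimizer already sits at a vertex of the cube it is guaranteed optimal for the original combinatorial problem, otherwise one falls back to a rounding step.

```python
# Toy illustration of the relax-and-check paradigm (not the SDP from [1]):
# the combinatorial constraint x in {-1, +1}^n is relaxed to the box [-1, 1]^n.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = rng.choice([-1.0, 1.0], size=10)          # ground truth in the combinatorial set
b = A @ x_true + 0.01 * rng.standard_normal(30)    # noisy observations

res = lsq_linear(A, b, bounds=(-1.0, 1.0))         # convex relaxation over the box
x_relaxed = res.x

if np.allclose(np.abs(x_relaxed), 1.0, atol=1e-3):
    print("relaxation is tight: the relaxed solution is feasible, hence optimal")
else:
    print("relaxation not tight: fall back to a (possibly suboptimal) rounding step")
x_hat = np.sign(x_relaxed)                          # rounding step
print("recovered signs match the ground truth:", np.array_equal(x_hat, x_true))
```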
I had this back-and-forth discussion at a meeting focused on sensor design recently. It was about the fact that Maximum Likelihood solvers were potentially solving something different from what was intended. It triggered this entry: You are not paying attention if...you believe an algorithm implementation is the same as the algorithm itself.

And finally, Sebastien Bubeck listed the COLT2014 accepted papers. Here are a few [2-13] that are connected to the Nuit Blanche theme of Matrix Factorization (see the references and abstracts below).
Let me note that [8] potentially avoids the traditional iterative computation of A, then X, then A again, etc., used in dictionary learning (and blind deconvolution/calibration). I, for one, much liked the "previous work" section of versions 1 through 3 of the manuscript (now at version 4). But hey, I am sure peer review has enhanced the clarity of the paper... I also liked the possibilities encapsulated at the end of the abstract of [2]:

On the technical side, we contribute several new ideas on how to encode hard combinatorial problems in low-rank optimization problems. We hope that these techniques will be helpful in further understanding the computational limits of Matrix Completion and related problems.
And here is a connection to Machine Learning with Matrix Completion (from [2]):
We also consider a natural variant of Matrix Completion where the unknown matrix is positive semidefinite and so must be the output matrix. The positive semidefinite completion problem arises naturally in the context of Support Vector Machines (SVM). The kernel matrix used in SVM learning must be positive semidefinite as it is the Gram matrix of feature vectors. But oftentimes the kernel matrix is derived from partial similarity information resulting in incomplete kernel matrices. In fact, this is a typical situation in medical and biological application domains. In such cases the data analyst would like to complete the partial kernel matrix to a full kernel matrix while ensuring positive semidefiniteness. Moreover, since it is often infeasible to store a dense n × n matrix, it is desirable to also have a low-rank representation of the kernel matrix. This is precisely the low-rank positive semidefinite completion problem. Our results establish strong hardness results for this problem under natural relaxations. In this case we show that for any k ≥ 2 it is NP-hard to complete a partially given rank k matrix by a rank (2k − 1) matrix.
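As a hands-on illustration of that last problem, here is a minimal sketch of positive semidefinite kernel completion, assuming cvxpy is available: the trace serves as the usual convex surrogate for rank over the PSD cone, and the observed entries are imposed as equality constraints. This is only a toy convex-relaxation baseline, not the hardness construction of [2].

```python
# Minimal PSD kernel completion sketch (assumes cvxpy is installed):
# minimize trace(X) over PSD matrices X that agree with the observed entries.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, r = 20, 2
F = rng.standard_normal((n, r))
K_true = F @ F.T                                   # a rank-r PSD "kernel" matrix
mask = np.triu(rng.random((n, n)) < 0.5, 1)
mask = mask | mask.T
np.fill_diagonal(mask, True)                       # observe the diagonal and about half the pairs
M = mask.astype(float)

X = cp.Variable((n, n), PSD=True)
constraints = [cp.multiply(M, X) == M * K_true]    # match the observed entries
prob = cp.Problem(cp.Minimize(cp.trace(X)), constraints)
prob.solve()

err = np.linalg.norm(X.value - K_true) / np.linalg.norm(K_true)
print("relative completion error:", err)
```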

References and abstracts:

We have observed an interesting, yet unexplained, phenomenon: Semidefinite programming (SDP) based relaxations of maximum likelihood estimators (MLE) tend to be tight in recovery problems with noisy data, even when MLE cannot exactly recover the ground truth. Several results establish tightness of SDP based relaxations in the regime where exact recovery from MLE is possible. However, to the best of our knowledge, their tightness is not understood beyond this regime. As an illustrative example, we focus on the generalized Procrustes problem.
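As background for readers less familiar with the Procrustes setting, here is a short numpy sketch of the classical two-matrix orthogonal Procrustes problem, whose minimizer has a closed form via an SVD; the generalized (multi-point-cloud) version and its SDP relaxation studied in [1] are not reproduced here.

```python
# Classical orthogonal Procrustes: min over orthogonal Omega of ||A @ Omega - B||_F.
# The closed-form solution is Omega = U @ Vt, where U, S, Vt = svd(A.T @ B).
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 3))
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # an unknown orthogonal transform
B = A @ Q + 0.01 * rng.standard_normal((50, 3))    # noisy transformed copy of A

U, _, Vt = np.linalg.svd(A.T @ B)
Omega = U @ Vt                                     # best orthogonal alignment of A onto B
print("estimation error:", np.linalg.norm(Omega - Q))
```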

Matrix Completion is the problem of recovering an unknown real-valued low-rank matrix from a subsample of its entries. Important recent results show that the problem can be solved efficiently under the assumption that the unknown matrix is incoherent and the subsample is drawn uniformly at random. Are these assumptions necessary?
It is well known that Matrix Completion in its full generality is NP-hard. However, little is known if we make additional assumptions such as incoherence and permit the algorithm to output a matrix of slightly higher rank. In this paper we prove that Matrix Completion remains computationally intractable even if the unknown matrix has rank 4 but we are allowed to output any constant rank matrix, and even if additionally we assume that the unknown matrix is incoherent and are shown 90% of the entries. This result relies on the conjectured hardness of the 4-Coloring problem. We also consider the positive semidefinite Matrix Completion problem. Here we show a similar hardness result under the standard assumption that P≠NP. Our results greatly narrow the gap between existing feasibility results and computational lower bounds. In particular, we believe that our results give the first complexity-theoretic justification for why distributional assumptions are needed beyond the incoherence assumption in order to obtain positive results. On the technical side, we contribute several new ideas on how to encode hard combinatorial problems in low-rank optimization problems. We hope that these techniques will be helpful in further understanding the computational limits of Matrix Completion and related problems.

Under a standard assumption in complexity theory (NP not in P/poly), we demonstrate a gap between the minimax prediction risk for sparse linear regression that can be achieved by polynomial-time algorithms, and that achieved by optimal algorithms. In particular, when the design matrix is ill-conditioned, the minimax prediction loss achievable by polynomial-time algorithms can be substantially greater than that of an optimal algorithm. This result is the first known gap between polynomial and optimal algorithms for sparse linear regression, and does not depend on conjectures in average-case complexity.


We obtain sharp bounds on the performance of Empirical Risk Minimization performed in a convex class and with respect to the squared loss, without any boundedness assumptions on class members or on the target. Rather than resorting to a concentration-based argument, the method relies on a `small-ball' assumption and thus holds for heavy-tailed sampling and heavy-tailed targets. Moreover, the resulting estimates scale correctly with the `noise'. When applied to the classical, bounded scenario, the method always improves the known estimates.

From concentration inequalities for the suprema of Gaussian or Rademacher processes an inequality is derived. It is applied to sharpen existing and to derive novel bounds on the empirical Rademacher complexities of unit balls in various norms appearing in the context of structured sparsity and multitask dictionary learning or matrix factorization. A key role is played by the largest eigenvalue of the data covariance matrix.

We solve the COLT 2013 open problem of [SCB] on minimizing regret in the setting of advice-efficient multiarmed bandits with expert advice. We give an algorithm for the setting of K arms and N experts, out of which we are allowed to query and use only M experts' advice in each round, which has a regret bound of $\tilde{O}\big(\sqrt{\tfrac{\min\{K, M\}\, N}{M}\, T}\big)$ after T rounds. We also prove that any algorithm for this problem must have expected regret at least $\tilde{\Omega}\big(\sqrt{\tfrac{\min\{K, M\}\, N}{M}\, T}\big)$, thus showing that our upper bound is nearly tight.

We give a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions: we prove that given a tensor whose decomposition satisfies a robust form of Kruskal's rank condition, it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error.
Kruskal's theorem has found many applications in proving the identifiability of parameters for various latent variable models and mixture models such as Hidden Markov models, topic models etc. Our robust version immediately implies identifiability using only polynomially many samples in many of these settings. This polynomial identifiability is an essential first step towards efficient learning algorithms for these models.
Recently, algorithms based on tensor decompositions have been used to estimate the parameters of various hidden variable models efficiently in special cases as long as they satisfy certain "non-degeneracy" properties. Our methods give a way to go beyond this non-degeneracy barrier, and establish polynomial identifiability of the parameters under much milder conditions. Given the importance of Kruskal's theorem in the tensor literature, we expect that this robust version will have several applications beyond the settings we explore in this work.

In sparse recovery we are given a matrix A (the dictionary) and a vector of the form AX where X is sparse, and the goal is to recover X. This is a central notion in signal processing, statistics and machine learning. But in applications such as sparse coding, edge detection, compression and super resolution, the dictionary A is unknown and has to be learned from random examples of the form Y=AX where X is drawn from an appropriate distribution --- this is the dictionary learning problem. In most settings, A is overcomplete: it has more columns than rows. This paper presents a polynomial-time algorithm for learning overcomplete dictionaries; the only previously known algorithm with provable guarantees is the recent work of Spielman, Wang and Wright who gave an algorithm for the full-rank case, which is rarely the case in applications. Our algorithm applies to incoherent dictionaries which have been a central object of study since they were introduced in seminal work of Donoho and Huo. In particular, a dictionary is μ-incoherent if each pair of columns has inner product at most μ/√n.
The algorithm makes natural stochastic assumptions about the unknown sparse vector X, which can contain $k \le c\,\min(\sqrt{n}/(\mu \log n),\, m^{1/2-\eta})$ non-zero entries (for any η > 0). This is close to the best k allowable by the best sparse recovery algorithms even if one knows the dictionary A exactly. Moreover, both the running time and sample complexity depend on $\log 1/\epsilon$, where ϵ is the target accuracy, and so our algorithms converge very quickly to the true dictionary. Our algorithm can also tolerate substantial amounts of noise provided it is incoherent with respect to the dictionary (e.g., Gaussian). In the noisy setting, our running time and sample complexity depend polynomially on 1/ϵ, and this is necessary.
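To fix ideas on the incoherence condition quoted above, here is a small numpy sketch that computes μ for a random Gaussian dictionary, using the μ/√n convention of the abstract (the variable names are mine, not the paper's).

```python
# Mutual coherence of a dictionary with unit-norm columns, in the mu/sqrt(n) convention:
# mu = sqrt(n) * max_{i != j} |<a_i, a_j>|.
import numpy as np

rng = np.random.default_rng(3)
n, m = 64, 256                                     # overcomplete: more columns than rows
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)                     # unit-norm columns

G = np.abs(A.T @ A)
np.fill_diagonal(G, 0.0)                           # ignore the trivial diagonal entries
mu = np.sqrt(n) * G.max()
print(f"mu = {mu:.2f}, i.e. pairwise inner products are at most mu/sqrt(n) = {G.max():.3f}")
```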

We consider the problem of sparse coding, where each sample consists of a sparse combination of a set of dictionary atoms, and the task is to learn both the dictionary elements and the mixing coefficients. Alternating minimization is a popular heuristic for sparse coding, where the dictionary and the coefficients are estimated in alternate steps, keeping the other fixed. Typically, the coefficients are estimated via ℓ1 minimization, keeping the dictionary fixed, and the dictionary is estimated through least squares, keeping the coefficients fixed. In this paper, we establish local linear convergence for this variant of alternating minimization and establish that the basin of attraction for the global optimum (corresponding to the true dictionary and the coefficients) is O(1/s^2), where s is the sparsity level in each sample and the dictionary elements are incoherent. Combined with recent results on approximate dictionary estimation, these yield provable guarantees for exact recovery of both the dictionary elements and the coefficients.
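Here is a minimal, self-contained numpy sketch of the alternating scheme this abstract describes. It is illustrative only: the ℓ1/coefficient step is approximated by a few ISTA (soft-thresholding) iterations rather than an exact Lasso solve, the function names are made up, and no claim is made that this matches the variant analyzed in the paper.

```python
# Alternating minimization for sparse coding (illustrative sketch):
# coefficient step = approximate l1 minimization (ISTA) with the dictionary fixed,
# dictionary step = least squares with the coefficients fixed, then renormalize atoms.
import numpy as np

def ista(A, Y, lam=0.1, iters=100):
    """Approximately solve min_X 0.5*||A X - Y||_F^2 + lam*||X||_1 by ISTA."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(iters):
        G = X - A.T @ (A @ X - Y) / L              # gradient step
        X = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)  # soft thresholding
    return X

def alternating_sparse_coding(Y, m, n_iters=20, lam=0.1, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((Y.shape[0], m))
    A /= np.linalg.norm(A, axis=0)                 # random unit-norm initial dictionary
    for _ in range(n_iters):
        X = ista(A, Y, lam)                        # coefficients, dictionary fixed
        A = np.linalg.lstsq(X.T, Y.T, rcond=None)[0].T      # dictionary, coefficients fixed
        A /= np.maximum(np.linalg.norm(A, axis=0), 1e-12)   # renormalize the atoms
    return A, X
```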


In this paper we show that very large mixtures of Gaussians are efficiently learnable in high dimension. More precisely, we prove that a mixture with known identical covariance matrices whose number of components is a polynomial of any fixed degree in the dimension n is polynomially learnable as long as a certain non-degeneracy condition on the means is satisfied. It turns out that this condition is generic in the sense of smoothed complexity, as soon as the dimensionality of the space is high enough. Moreover, we prove that no such condition can possibly exist in low dimension and the problem of learning the parameters is generically hard. In contrast, much of the existing work on Gaussian Mixtures relies on low-dimensional projections and thus hits an artificial barrier. Our main result on mixture recovery relies on a new "Poissonization"-based technique, which transforms a mixture of Gaussians to a linear map of a product distribution. The problem of learning this map can be efficiently solved using some recent results on tensor decompositions and Independent Component Analysis (ICA), thus giving an algorithm for recovering the mixture. In addition, we combine our low-dimensional hardness results for Gaussian mixtures with Poissonization to show how to embed difficult instances of low-dimensional Gaussian mixtures into the ICA setting, thus establishing exponential information-theoretic lower bounds for underdetermined ICA in low dimension. To the best of our knowledge, this is the first such result in the literature. In addition to contributing to the problem of Gaussian mixture learning, we believe that this work is among the first steps toward better understanding the rare phenomenon of the "blessing of dimensionality" in the computational aspects of statistical inference.


Shie Mannor (EE-Technion), Vianney Perchet (LPMA), Gilles Stoltz (GREGH)
In the standard setting of approachability there are two players and a target set. The players play a repeated vector-valued game where one of them wants to have the average vector-valued payoff converge to the target set which the other player tries to exclude. We revisit the classical setting and consider the setting where the player has a preference relation between target sets: she wishes to approach the smallest ("best") set possible given the observed average payoffs in hindsight. Moreover, as opposed to previous works on approachability, and in the spirit of online learning, we do not assume that there is a known game structure with actions for two players. Rather, the player receives an arbitrary vector-valued reward vector at every round. We show that it is impossible, in general, to approach the best target set in hindsight. We further propose a concrete strategy that approaches a non-trivial relaxation of the best-in-hindsight given the actual rewards. Our approach does not require projection onto a target set and amounts to switching between scalar regret minimization algorithms that are performed in episodes.

In this paper, we consider networks consisting of a finite number of non-overlapping communities. To extract these communities, the interaction between pairs of nodes may be sampled from a large available data set, which allows a given node pair to be sampled several times. When a node pair is sampled, the observed outcome is a binary random variable, equal to 1 if nodes interact and to 0 otherwise. The outcome is more likely to be positive if nodes belong to the same communities. For a given budget of node pair samples or observations, we wish to jointly design a sampling strategy (the sequence of sampled node pairs) and a clustering algorithm that recover the hidden communities with the highest possible accuracy. We consider both non-adaptive and adaptive sampling strategies, and for both classes of strategies, we derive fundamental performance limits satisfied by any sampling and clustering algorithm. In particular, we provide necessary conditions for the existence of algorithms recovering the communities accurately as the network size grows large. We also devise simple algorithms that accurately reconstruct the communities when this is at all possible, hence proving that the proposed necessary conditions for accurate community detection are also sufficient. The classical problem of community detection in the stochastic block model can be seen as a particular instance of the problems considered here. But our framework covers more general scenarios where the sequence of sampled node pairs can be designed in an adaptive manner. The paper provides new results for the stochastic block model, and extends the analysis to the case of adaptive sampling.
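For readers who want to experiment with the setting, here is a toy numpy/scikit-learn sketch of non-adaptive community recovery on a two-community stochastic block model via spectral clustering of the adjacency matrix; it only illustrates the model, not the adaptive sampling strategies analyzed in the paper.

```python
# Toy stochastic block model + spectral clustering (non-adaptive baseline only).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
n, p_in, p_out = 200, 0.30, 0.05
labels = np.repeat([0, 1], n // 2)                 # two balanced communities
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                        # symmetric adjacency, no self-loops

vals, vecs = np.linalg.eigh(A)
embedding = vecs[:, -2:]                           # two leading eigenvectors
pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
acc = max(np.mean(pred == labels), np.mean(pred == 1 - labels))   # accuracy up to label swap
print("recovery accuracy:", acc)
```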
The paper proposes a novel upper confidence bound (UCB) procedure for identifying the arm with the largest mean in a multi-armed bandit game in the fixed confidence setting using a small number of total samples. The procedure cannot be improved in the sense that the number of samples required to identify the best arm is within a constant factor of a lower bound based on the law of the iterated logarithm (LIL). Inspired by the LIL, we construct our confidence bounds to explicitly account for the infinite time horizon of the algorithm. In addition, by using a novel stopping time for the algorithm we avoid a union bound over the arms that has been observed in other UCB-type algorithms. We prove that the algorithm is optimal up to constants and also show through simulations that it provides superior performance with respect to the state-of-the-art.
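Below is a heavily simplified sketch, in the spirit of this abstract, of a UCB-style best-arm identification loop whose confidence widths shrink like sqrt(log log t / t); the constants and the stopping rule are illustrative placeholders, not the paper's lil'UCB procedure.

```python
# Toy best-arm identification with iterated-logarithm-flavored confidence bounds.
# This is an illustrative placeholder, not the lil'UCB algorithm or its constants.
import numpy as np

def best_arm_ucb(means, delta=0.05, max_pulls=50_000, seed=0):
    rng = np.random.default_rng(seed)
    K = len(means)
    counts = np.ones(K)
    sums = np.array([rng.normal(m) for m in means])              # one initial pull per arm
    for _ in range(max_pulls):
        t = np.maximum(counts, 3.0)
        width = np.sqrt(2.0 * np.log(np.log(t) / delta) / counts)   # LIL-flavored width
        ucb = sums / counts + width
        lcb = sums / counts - width
        best = int(np.argmax(sums / counts))
        if lcb[best] >= np.max(np.delete(ucb, best)):            # best arm clearly separated: stop
            return best, counts
        i = int(np.argmax(ucb))                                  # otherwise pull the most optimistic arm
        sums[i] += rng.normal(means[i])
        counts[i] += 1
    return int(np.argmax(sums / counts)), counts

arm, pulls = best_arm_ucb([0.2, 0.5, 0.55, 0.8])
print("identified arm:", arm, "pulls per arm:", pulls.astype(int))
```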
credit photo: NASA/ESA


Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
