David Woodruff, IBM Almaden
In the turnstile model of data streams, an underlying vector x in {-m, -m+1, ..., m-1, m}^n is presented as a long sequence of arbitrary positive and negative integer updates to its coordinates. A randomized algorithm seeks to approximate a function f(x) with constant probability while only making a single pass over this sequence of updates and using a small amount of space. All known algorithms in this model are linear sketches: they sample a matrix A from a distribution on integer matrices in the preprocessing phase, and maintain the linear sketch Ax while processing the stream. At the end of the stream, they output an arbitrary function of Ax. One cannot help but ask: are linear sketches universal?
In this work we answer this question by showing that any 1-pass constant probability streaming algorithm for approximating an arbitrary function f of x in the turnstile model can also be implemented by sampling a matrix A from the uniform distribution on O(n log m) integer matrices, with entries of magnitude poly(n), and maintaining the linear sketch Ax. Furthermore, the logarithm of the number of possible states of Ax, as x ranges over {-m, -m+1, ..., m}^n, plus the amount of randomness needed to store A, is at most a logarithmic factor larger than the space required of the space-optimal algorithm. Our result shows that to prove space lower bounds for 1-pass streaming algorithms, it suffices to prove lower bounds in the simultaneous model of communication complexity, rather than the stronger 1-way model. Moreover, the fact that we can assume we have a linear sketch with polynomially-bounded entries further simplifies existing lower bounds, e.g., for frequency moments we present a simpler proof of the tilde{Omega}(n^{1-2/k}) bit complexity lower bound without using communication complexity.
Joint work with Yi Li and Huy L. Nguyen
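To make the model concrete, here is a minimal sketch of maintaining a linear sketch Ax under turnstile updates. The random ±1 matrix and the AMS-style estimate at the end are illustrative choices only, not the arbitrary integer matrices with poly(n) entries that the result above is about.

```python
# Minimal sketch: maintain Ax in the turnstile model. The +/-1 matrix A and the
# AMS-style second-moment estimate are illustrative assumptions, not the
# construction from the paper.
import numpy as np

n, k = 1000, 50                      # dimension of x, number of sketch rows
rng = np.random.default_rng(0)
A = rng.choice([-1, 1], size=(k, n)) # sample A once, before the stream arrives

sketch = np.zeros(k, dtype=np.int64) # maintains Ax

def process_update(i, delta):
    """Turnstile update x_i <- x_i + delta; only column i of A is touched."""
    global sketch
    sketch += delta * A[:, i]

# Example stream of (coordinate, signed increment) updates.
for i, delta in [(3, 5), (7, -2), (3, -5), (42, 9)]:
    process_update(i, delta)

# At the end of the stream the algorithm outputs some function of `sketch`;
# e.g., ||sketch||_2^2 / k is the standard AMS estimator of ||x||_2^2.
print(sketch @ sketch / k)
```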
Sofya Raskhodnikova, Penn State, Boston University, and Harvard University
We initiate a systematic study of sublinear algorithms for approximately testing properties of real-valued data with respect to L_p distances. Such algorithms distinguish datasets which have (or are close to having) a certain property from datasets which are far from having it with respect to L_p distance. For applications involving noisy real-valued data, using L_p distances allows algorithms to withstand noise of bounded L_p norm. While the classical property testing framework developed with respect to Hamming distance has been studied extensively, testing with respect to L_p distances has received little attention.
We use our framework to design simple and fast algorithms for classic problems, such as testing monotonicity, convexity and the Lipschitz property, and also distance approximation to monotonicity. In particular, for functions over the hypergrid domains [n]^d, the complexity of our algorithms for all these properties does not depend on the linear dimension n. This is impossible in the standard model.
Most of our algorithms require minimal assumptions on the choice of sampled data: either uniform or easily samplable random queries suffice. We also show connections between the L_p-testing model and the standard framework of property testing with respect to Hamming distance. Some of our results improve existing bounds for Hamming distance.
Joint work with Piotr Berman and Grigory Yaroslavtsev.
Robert Krauthgamer, Weizmann Institute of Science
We study the problem of sketching an input graph, so that, given the sketch, one can estimate the value (capacity) of any cut in the graph up to a $1+\epsilon$ approximation. Our results include both upper and lower bounds on the sketch size, expressed in terms of the vertex-set size $n$ and the accuracy $\epsilon$.
We design a randomized scheme which, given $\epsilon\in(0,1)$ and an $n$-vertex graph $G=(V,E)$ with edge capacities, produces a sketch of size $\tilde O(n/\epsilon)$ bits, from which the capacity of any cut $(S,V\setminus S)$ can be reported, with high probability, within approximation factor $(1+\epsilon)$. The previous upper bound is $\tilde O(n/\epsilon^2)$ bits, which follows by storing a cut sparsifier graph as constructed by Benczur and Karger (1996) and follow-up work.
In contrast, we show that if a sketch succeeds in estimating the capacity of all cuts $(S,\bar S)$ in the graph (simultaneously), it must be of size $\Omega(n/\epsilon^2)$ bits.
[Joint work with Alexandr Andoni and David Woodruff.]
Michael Mahoney, UC Berkeley
Implicit regularization in sublinear approximation algorithms [slides (pdf)]
Most work in sublinear algorithms has adopted the following perspective: the data are too large to compute the quantity of interest, and so we develop an algorithm that is faster (in that it touches only a small amount of the data) than an exact computation of the quantity of interest; and then we "settle" for a solution that is only a little worse, for worst-case input, than the solution we would have found, had memory and/or running time not been an issue. Here, we will describe one such algorithm for finding locally-biased clusters in a large graph that goes beyond this traditional worst-case analysis in the sense that, in addition to coming with worst-case approximation guarantees of this form, it also has the interesting property that its output is "better" than the output of the more expensive exact algorithm---better by empirical metrics of interest to downstream users of this algorithm (who don't know or care about the theoretical guarantees). This tradeoff between solution quality (as measured by the objective function of purported interest) and solution niceness (as measured in some other way) is known as regularization; it is ubiquitous in machine learning, statistics, scientific computing, etc.; and it is usually implemented by adding some sort of norm constraint to the original objective. We will describe how approximate computation, in and of itself, can implicitly implement regularization; and we will describe how the sublinear push procedure for approximating a personalized PageRank vector implicitly optimizes an objective with a sparsity-inducing L1 regularization term added.
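As an illustration of the routine mentioned at the end, here is a minimal sketch of one common variant of the push procedure for approximate personalized PageRank (in the style of Andersen-Chung-Lang). The graph, teleportation constant alpha, and threshold eps below are illustrative assumptions, not parameters from the talk.

```python
# A minimal sketch of a lazy-walk push procedure for approximate personalized
# PageRank; this is the routine the talk argues implicitly solves an
# L1-regularized objective. All concrete values here are illustrative.
from collections import defaultdict

def approximate_ppr(adj, seed, alpha=0.15, eps=1e-4):
    """adj: dict mapping node -> list of neighbors (undirected, unweighted)."""
    p = defaultdict(float)            # approximate PageRank vector (sparse)
    r = defaultdict(float)            # residual mass not yet pushed
    r[seed] = 1.0
    queue = [seed]
    while queue:
        u = queue.pop()
        deg = len(adj[u])
        if r[u] < eps * deg:          # push only where the residual is large
            continue
        p[u] += alpha * r[u]
        share = (1 - alpha) * r[u] / (2 * deg)
        r[u] = (1 - alpha) * r[u] / 2 # lazy-walk variant keeps half at u
        for v in adj[u]:
            r[v] += share
            if r[v] >= eps * len(adj[v]):
                queue.append(v)
        if r[u] >= eps * deg:
            queue.append(u)
    return p                          # sparse: only few nodes touched, hence sublinear

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(dict(approximate_ppr(adj, seed=0)))
```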
Ryan O’Donnell, Carnegie Mellon University
Given ε, S, and black-box access to a completely arbitrary set F in R^n, we give an algorithm that tests whether F has surface area at most S or is ε-far from all sets of surface area at most (4/pi) S. Our test uses O(1/ε) queries, independent of the dimension n; previous work only treated the case of n = 1. We also mention a recent improvement due to Neeman which brings the 4/pi down to any number exceeding 1.
Joint work with Pravesh Kothari, Amir Nayyeri, and Chenggang Wu
Krzysztof Onak, IBM Research
A Near-Optimal Algorithm for Testing Isomorphism of Two Unknown Graphs [slides (pdf)]
Fischer and Matsliah (SICOMP 2008) showed that the number of queries necessary to test isomorphism of two unknown graphs in the dense graph model is at least Omega(n) and at most O(n^(5/4)). We essentially close this gap by giving an algorithm that makes n * 2^O(sqrt(log n)) queries.
One of the main tools in the previous work on the subject is the algorithm for testing if two unknown distributions are identical (Batu, Fortnow, Rubinfeld, Smith, White [JACM 2013]). A major obstacle in the quest for an efficient graph isomorphism tester turns out to be the Omega(n^(2/3)) distribution testing lower bound (Valiant [SICOMP 2011]). We bypass this lower bound by designing a more efficient algorithm that, for two distributions on near-isomorphic sets of points, is required to reject only if the Earth-Mover Distance between them is large.
Joint work with Xiaorui Sun.
Ely Porat, Bar Ilan University
Group Testing, Compressed Sensing and Algorithmic Applications [slides (pptx)]
Group testing is a long studied problem in combinatorics: A small set of r ill people must be identified out of the whole population of n people, by using only queries (tests) of the form “Does the set X contain an ill member?”. I will discuss the current state of the art, and show several surprising applications for group testing techniques (including pattern matching, crypto, and similarity sketching).
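A minimal simulation of the basic setup, assuming nonadaptive random pools and the simple "eliminate anyone who appears in a negative pool" (COMP) decoder; the pool density and test count are illustrative assumptions, not the constructions from the talk.

```python
# Minimal group-testing sketch: random pools plus the COMP decoder. Everyone
# appearing only in negative pools is declared healthy; the survivors form a
# superset of the ill set. Parameters are illustrative.
import random

def run_group_testing(n=200, r=3, num_tests=60, seed=1):
    random.seed(seed)
    ill = set(random.sample(range(n), r))             # the r unknown ill people
    pools = [{i for i in range(n) if random.random() < 1.0 / r}
             for _ in range(num_tests)]               # random pools, rate ~1/r
    outcomes = [bool(pool & ill) for pool in pools]   # "does pool X contain an ill member?"
    candidates = set(range(n))
    for pool, positive in zip(pools, outcomes):
        if not positive:
            candidates -= pool                        # members of negative pools are healthy
    return ill, candidates                            # candidates is a superset of ill

ill, decoded = run_group_testing()
print(sorted(ill), sorted(decoded))
```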
Alexandr Andoni, Microsoft Research
Motivated by modern parallel computing models (think MapReduce), we give a new algorithmic framework for geometric graph problems. Our framework applies to problems such as the Minimum Spanning Tree (MST) problem over a set of points in a low-dimensional space, which is useful for hierarchical agglomerative clustering. Our algorithm computes a $(1+\epsilon)$-approximate MST in a *constant* number of rounds of communication, while using total space and communication proportional to the size of the data only.
Our framework proves to have implications beyond the parallel models as well. For example, we consider the Earth-Mover Distance (EMD) problem, for which we obtain a new near-linear time algorithm as well as the first streaming algorithm (assuming we can pre-sort the points). Technically, our framework for EMD shows how to effectively break up a "big Linear Programming problem" into a number of "small Linear Programming problems", which can be later recombined using a dynamic programming approach. Our results also address open questions from [Sharathkumar-Agarwal, STOC'12], as well as a well-known open question on streaming EMD (http://sublinear.info/index.php?title=Open_Problems:7 ).
Joint work with Aleksandar Nikolov, Krzysztof Onak, Grigory Yaroslavtsev.
Vladimir Braverman
Approximating Large Frequency Moments with $O(n^{1-2/k})$ Bits
Preprint
We consider the problem of approximating frequency moments in the streaming model. Given a stream D = (p_1, p_2, ..., p_m) of numbers from {1, ..., n}, the frequency of i is defined as f_i = |{j : p_j = i}|. The k-th frequency moment of D is defined as F_k = \sum_{i=1}^n f_i^k.
In their celebrated paper, Alon, Matias, and Szegedy (STOC 1996) introduced the problem of computing a (1 +/- epsilon)-approximation of F_k with sublinear memory. We give an upper bound of O(n^{1-2/k}) bits that matches, up to a constant factor, the lower bound of Woodruff and Zhang (STOC 2012) for constant epsilon and k > 3.
Joint work with Jonathan Katzman, Charles Seidell and Gregory Vorsanger.
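A small worked example of the definitions above (exact, not streaming): the frequencies f_i and the moment F_k of a toy stream.

```python
# Exact frequencies f_i and k-th frequency moment F_k of a toy stream. The
# talk's algorithm approximates F_k in o(n) space; this exact computation is
# only meant to make the notation concrete.
from collections import Counter

D = [3, 1, 4, 1, 5, 3, 3, 2]          # a toy stream of numbers from {1, ..., n}
f = Counter(D)                        # f[i] = |{j : p_j = i}|
k = 3
F_k = sum(fi ** k for fi in f.values())
print(dict(f), F_k)                   # {3: 3, 1: 2, 4: 1, 5: 1, 2: 1} and F_3 = 38
```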
Amit Chakrabarti, Dartmouth College
We study the problem of finding a maximum matching in a graph given by an input stream listing its edges in some arbitrary order, where the quantity to be maximized is given by a monotone submodular function on subsets of edges. This problem, which we call maximum submodular-function matching (MSM), is a natural generalization of maximum weight matching (MWM), which is in turn a generalization of maximum cardinality matching (MCM). We give two incomparable algorithms for this problem with space usage falling in the semi-streaming range (they store only $O(n)$ edges, using $O(n\log n)$ working memory) that achieve approximation ratios of $7.75$ in a single pass and $(3+\epsilon)$ in $O(\epsilon^{-3})$ passes respectively. The operations of these algorithms mimic those of Zelke's and McGregor's respective algorithms for MWM; the novelty lies in the analysis for the MSM setting. In fact we identify a general framework for MWM algorithms that allows this kind of adaptation to the broader setting of MSM. We also give generalizations of these results where the maximization is over "independent sets" in a very general sense. This generalization captures hypermatchings in hypergraphs as well as independence in the intersection of multiple matroids. Joint work with Sagar Kale.
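For intuition, here is a minimal sketch of the charging-based one-pass rule used by semi-streaming MWM algorithms of the flavor the abstract adapts: an arriving edge may evict its conflicting matched edges only if it beats their total weight by a (1 + gamma) factor. The threshold gamma and the bookkeeping are illustrative simplifications, not the MSM algorithms themselves.

```python
# One-pass, charging-based maximum weight matching heuristic (in the spirit of
# the MWM algorithms referenced above); a sketch only, not the paper's MSM method.
def one_pass_mwm(edge_stream, gamma=1.0):
    matched = {}                                   # vertex -> (edge, weight)
    for (u, v, w) in edge_stream:
        conflicts = {matched[x] for x in (u, v) if x in matched}
        if w > (1 + gamma) * sum(cw for _, cw in conflicts):
            for (a, b), cw in conflicts:           # evict conflicting matched edges
                matched.pop(a, None)
                matched.pop(b, None)
            matched[u] = matched[v] = ((u, v), w)
    return list(dict.fromkeys(matched.values()))   # deduplicate the kept edges

stream = [(1, 2, 5.0), (2, 3, 4.0), (3, 4, 12.0), (1, 4, 3.0)]
print(one_pass_mwm(stream))                        # [((1, 2), 5.0), ((3, 4), 12.0)]
```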
Graham Cormode, University of Warwick
As graphs continue to grow in size, we seek ways to effectively process such data at scale. The model of streaming graph processing, in which a compact summary is maintained as each edge insertion/deletion is observed, is an attractive one. However, few results are known for optimization problems over such dynamic graph streams. In this work, we introduce a new approach to handling graph streams, by instead seeking solutions for the parameterized versions of problems where we are given a parameter k and the objective is to decide whether there is a solution bounded by k. By combining kernelization techniques with randomized sketch structures, we obtain the first streaming algorithms for parameterized versions of Vertex Cover and Maximal Matching.
Joint work with Morteza Monemizadeh, Rajesh Chitnis, MohammadTaghi Hajiaghayi
Deeparnab Chakrabarty, Microsoft Research
The primary problem in property testing is to decide whether a given function satisfies a certain property, or is far from any function satisfying it. This crucially requires a notion of distance between functions. The most prevalent notion is the Hamming distance over the *uniform* distribution on the domain. This restriction to uniformity is more a matter of convenience than of necessity, and it is important to investigate distances induced by more general distributions.
In this talk, I'll discuss property testing of a rather general class of bounded-derivative properties over arbitrary product distributions. Bounded-derivative properties include fundamental properties such as monotonicity and Lipschitz continuity. Our results subsume almost all known results (upper and lower bounds) on monotonicity and Lipschitz testing.
We prove an intimate connection between bounded-derivative property testing and binary search trees (BSTs). We exhibit a tester whose query complexity is the sum of expected depths of optimal BSTs for each marginal. Furthermore, we show this sum-of-depths is also a lower bound. A fundamental technical contribution of this work is an *optimal dimension reduction theorem* for all bounded-derivative properties, which relates the distance of a function from the property to the distance of restrictions of the function to random lines. Such a theorem has been elusive even for monotonicity for the past 15 years, and our theorem is an exponential improvement over the previous best known result.
Joint work with Kashyap Dixit, Madhav Jha, and C. Seshadhri
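As a toy illustration (not the paper's tester and not its query complexity), the following sampler checks monotonicity over the hypergrid [n]^d by restricting to random axis-parallel lines, which is exactly the kind of restriction the dimension reduction theorem above analyzes.

```python
# Illustrative pair sampler for monotonicity over [n]^d: pick a random
# axis-parallel line, then a random ordered pair on it, and reject on a
# violation. The sample count is an arbitrary illustrative choice.
import random

def monotonicity_pair_test(f, n, d, num_samples=200, seed=0):
    rng = random.Random(seed)
    for _ in range(num_samples):
        axis = rng.randrange(d)
        base = [rng.randrange(n) for _ in range(d)]
        a, b = sorted(rng.sample(range(n), 2))     # two points on the chosen line
        lo, hi = list(base), list(base)
        lo[axis], hi[axis] = a, b
        if f(tuple(lo)) > f(tuple(hi)):            # violation of monotonicity
            return "reject"
    return "accept"

# Toy function on [10]^3: sum of coordinates (monotone), so the test accepts.
print(monotonicity_pair_test(lambda x: sum(x), n=10, d=3))
```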
Venkat Chandrasekaran
Computational and Statistical Tradeoffs via Convex Relaxation
arXiv preprint
TBA
Shiri Chechik, Microsoft Research Silicon Valley
In recent years, the practical need for fast retrieval of shortest path queries has significantly increased, in part due to the development of GPS navigation systems and other route-planning software.
Classical shortest-path algorithms such as Dijkstra's algorithm return the shortest-path distance in time that is near-linear in the size of the graph. However, continent-sized road networks require something much faster. It is possible to achieve sublinear-time queries if preprocessing is allowed.
A distance oracle is a data structure that allows fast retrieval of a distance estimate for any pair of nodes. The focus in designing distance oracles is often on the tradeoff between several parameters: the construction time (the time it takes to construct the distance oracle), the size of the data structure, the query time and the quality of the answer (how close is the estimated distance to the real distance).
I will discuss a recent work on distance oracles with constant query time, which essentially obtains optimal bounds for general graphs and improves over the Thorup-Zwick distance oracle [STOC ’01, J.ACM ‘05].
Artur Czumaj, University of Warwick
Testing Cluster Structure of Graphs
Cluster analysis is a fundamental task in data analysis that aims at partitioning a set of objects into disjoint groups of objects with similar characteristics. In this talk, we will use the concept of conductance to measure the quality of cluster structure and will focus on the question of approximately recognizing the cluster structure of a graph in sublinear time, in the framework of property testing in the bounded-degree model. We show how to test in O*(√n) time whether a graph with n nodes can be partitioned into no more than k parts (clusters) such that the outer-conductance of each cluster is at most \phi_o and the inner-conductance of the induced subgraph on each cluster is at least \phi_i, for a large spectrum of parameters k, \phi_i, and \phi_o. By the lower bound of \Omega(√n) for testing graph expansion, which corresponds to the case k=1 of our problem, our algorithm is asymptotically optimal up to polylogarithmic factors.
This is joint work with Pan Peng and Christian Sohler.
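For concreteness, here is a small helper that computes the conductance of a candidate cluster in a graph given by adjacency lists. The tester itself only probes the graph and never computes this quantity exactly; this is just the quality measure being tested.

```python
# Conductance phi(S) = e(S, V \ S) / min(vol(S), vol(V \ S)) of a vertex subset
# S in an undirected graph given by adjacency lists.
def conductance(adj, S):
    S = set(S)
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    vol_S = sum(len(adj[u]) for u in S)
    vol_rest = sum(len(adj[u]) for u in adj if u not in S)
    return cut / min(vol_S, vol_rest)

# Two triangles joined by a single edge: the natural 2-clustering has
# outer-conductance 1/7 for each side.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(conductance(adj, {0, 1, 2}))
```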
Mark Davenport, Georgia Institute of Technology
Adaptive sensing of sparse signals in noise
In this talk I will focus on the problem of recovering a sparse image from a small number of noisy measurements. To begin, I will consider the case where the measurements are acquired in a nonadaptive fashion as in the standard compressive sensing framework. I will describe lower bounds on the minimax mean-squared error of the recovered vector that very nearly match the performance of L1-minimization techniques, and hence show that in certain regimes these techniques are essentially optimal. I will then consider the case where the measurements are acquired sequentially in an adaptive manner. I will describe a lower bound that shows that, surprisingly, in certain worst-case regimes, adaptivity does not allow for substantial improvement over standard nonadaptive techniques in terms of the minimax MSE. Nonetheless, I will also show that there are important regimes where the benefits of adaptive sensing are clear and overwhelming, and can be achieved via relatively simple algorithms that also have the benefit of a drastic reduction in the computational complexity of the recovery process.
Dan Feldman, MIT
Sublinear Systems using Core-sets
A core-set for a given problem is a semantic compression of its input, in the sense that a solution for the problem with the (small) core-set as input yields a provably approximate solution to the problem on the original (Big) Data. Core-sets can usually be computed in one pass over a streaming input, with a manageable amount of memory, and in parallel. For real-time performance we use Hadoop, clouds, and GPUs.
In this talk I will give a summary of some core-set techniques and their implementations in real-time systems. I will also share some of the social and technical challenges that I had to deal with in industry and in the robotics lab at MIT while implementing such core-sets.
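A toy illustration of the "semantic compression" idea, under the simplest possible objective: for the 1-mean cost, a three-number summary is an exact, mergeable stand-in for the data. Real core-sets for harder objectives (k-means, projective clustering, etc.) are small weighted point sets with (1+eps) guarantees; this exact special case is only meant to convey the flavor.

```python
# For the 1-mean cost sum_x ||x - c||^2, the triple (count, sum of points,
# sum of squared norms) lets us evaluate the cost of ANY candidate center c.
# It is exact, and it merges across stream chunks or machines.
import numpy as np

def summarize(points):
    P = np.asarray(points, dtype=float)
    return len(P), P.sum(axis=0), (P ** 2).sum()

def merge(s1, s2):                      # stream / parallel friendly
    return s1[0] + s2[0], s1[1] + s2[1], s1[2] + s2[2]

def cost_from_summary(summary, c):
    n, s, sq = summary
    c = np.asarray(c, dtype=float)
    # sum ||x - c||^2 = sum ||x||^2 - 2 c.s + n ||c||^2
    return sq - 2 * c @ s + n * (c @ c)

pts = np.random.default_rng(0).normal(size=(1000, 3))
summary = merge(summarize(pts[:500]), summarize(pts[500:]))   # merge two chunks
c = np.array([0.1, -0.2, 0.3])
print(cost_from_summary(summary, c), ((pts - c) ** 2).sum())  # equal
```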
Eldar Fischer, Technion - Israel Institute of Technology
Partial testing, universal testing and decomposability
Preprint
Some properties do not admit tests with a small number of queries, but can they be partitioned into a small number of properties that do admit such tests? For some properties this is indeed the case. For example, being the concatenation of two palindromes has no test with fewer than the square root of n queries, but it can be partitioned into n properties with constant query complexity tests.
Here we show a property that cannot be thus partitioned. Even more so, our property does not have any large sub-property that admits a small query complexity test. In fact, we show that all codes of large dual distance are such properties. The proof uses methods not used before for defeating the most general tests; the entropy method is used to deal with possibly 2-sided tests, and the concept of a reader (basically a decision tree where we are interested in the "stream of bits read" rather than its final output) is used to analyze and defeat adaptive tests.
We also provide a general result (though with weak parameters) that gives an intuition for why the partitionable property above still has a sublinear query complexity test: any property partitionable into not too many sub-properties admitting proximity-oblivious tests would admit a sublinear complexity test. The proof shows that, at a cost, a "one size fits all" test can be used for all properties that have a proximity-oblivious test.
Joint work with Yonatan Goldhirsh and Oded Lachish.
Elena Grigorescu, Purdue University
Tight lower bounds for testing linear isomorphism
Preprint
We study lower bounds for testing membership in families of linear/affine-invariant Boolean functions over the hypercube. Motivated by the recent resurgence of attention to the permutation isomorphism problem, we first focus on families that are linearly/affinely isomorphic to some fixed function. Our main result is a tight adaptive, two-sided Ω(n^2) lower bound for testing linear isomorphism to the inner-product function. This is the first lower bound for testing linear isomorphism to a specific function that matches the trivial upper bound. Our proof exploits the elegant connection between testing and communication complexity discovered by Blais et al. (Computational Complexity, 2012). Our second result shows an Ω(2^{n/4}) query lower bound for any adaptive, two-sided tester for membership in the Maiorana-McFarland class of bent functions. This class of Boolean functions is also affine-invariant and its rich structure and pseudorandom properties have been well-studied in mathematics, coding theory and cryptography. Joint work with Karl Wimmer and Ning Xie.
Venkatesan Guruswami, Carnegie Mellon University
Reed-Muller testing: implications for small set expansion & hypergraph coloring
The problem of testing proximity to low-degree multivariate polynomials was one of the original problems considered in (and arguably motivated) the property testing model. Recently, an algorithm for testing binary Reed-Muller codes with an optimal trade-off between soundness and distance to the closest low-degree multilinear polynomial was given (Bhattacharyya, Kopparty, Schoenebeck, Sudan, Zuckerman, FOCS’10). Via derandomization of the hypercube by restricting to Reed-Muller codewords, this result has found some unexpected applications in approximability. For example, the subgraph of the noisy hypercube induced by Reed-Muller codewords is a small set expander with the currently largest known count of eigenvalues (of the associated random walk matrix) close to 1 (Barak, Gopalan, Håstad, Raghavendra, Meka, Steurer, FOCS’12). Replacing the long code (hypercube) gadget in PCP reductions in the classical "Label Cover + Fourier Analysis" framework by the low-degree version, together with strong soundness guarantees on structured tests for Reed-Muller codes, has led to more size-efficient PCPs and improved hardness results for hypergraph coloring (Dinur, G., FOCS’13; G., Harsha, Håstad, Srinivasan, Varma, STOC’14).
In this talk, I’ll attempt to provide a glimpse of the relevant Reed-Muller testing results and some of these connections.
Piotr Indyk, MIT
Approximation-Tolerant Model-Based Compressive Sensing
Preprint
TBA
Valerie King
We consider the CONGEST model of communication networks, where each node knows the names of its neighbors. We give some new simple techniques to reduce the number of bits communicated between nodes for repairing and constructing a minimum spanning tree or unweighted spanning forest which are subgraphs of the communication network. In particular we show that o(m) bits of communication suffice to construct a minimum spanning tree.
Edo Liberty, Yahoo Labs
Simple and Deterministic Matrix Sketching
Preprint
TBA
Andrew McGregor, University of Massachusetts, Amherst
We present the first algorithms for processing graphs in the sliding-window model. The sliding window model, introduced by Datar et al. (SICOMP 2002), has become a popular model for processing infinite data streams in small space when older data items (i.e., those that predate a sliding window containing the most recent data items) are considered “stale” and should implicitly be ignored. While processing massive graph streams is an active area of research, it was hitherto unknown whether it was possible to analyze graphs in the sliding-window model. In this talk we focus on algorithms for connectivity and finding large matchings.
Joint work with Michael Crouch and Dan Stubbs.
Andrea Montanari
Principal Component Analysis with Structured Factors
Many modern data sets are naturally presented as data matrices. Examples include recommendation systems (with rows corresponding to products, and columns to customers), hyper-spectral imaging (with rows corresponding to pixels, and columns to frequencies), and gene expression data (with rows corresponding to patients, and columns to genes). Principal component analysis aims at reducing the dimensionality of such datasets by projecting samples onto a few directions of maximum variability.
The principal vectors are often interpretable and convey important information about the data. This in turn imposes constraints on these vectors: for instance, in some cases the principal directions are known to be non-negative or sparse. Substantial work has been devoted to exploiting these constraints to improve statistical accuracy. Despite this, it is still poorly understood whether and when this is a good idea. I will discuss three examples that illustrate the mathematical richness of this problem, and its far reaching practical implications. [Based on joint work with Yash Deshpande and Emile Richard.]
Jelani Nelson, Harvard University
Dimensionality reduction techniques are used to obtain algorithmic speedups and storage savings in high-dimensional computational geometry, numerical linear algebra, compressed sensing, manifold learning, clustering, and several other machine learning problems. A common method for doing dimensionality reduction is to apply a random linear map to the input data (so-called "Johnson-Lindenstrauss transforms"). In this talk we discuss ways of designing such linear maps which are extremely sparse but still provide provable guarantees, thus speeding up the time to do the dimensionality reduction.
Based on joint works with Jean Bourgain, Daniel Kane, and Huy Lê Nguyễn.
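A minimal sketch of a sparse random projection of the kind discussed above (count-sketch-style, with s nonzeros per column); the parameters below are illustrative, not the theoretically tuned ones from the talk.

```python
# Sparse Johnson-Lindenstrauss-style map: each input coordinate is hashed to
# s rows with random signs scaled by 1/sqrt(s). Applying S to a vector costs
# roughly s nonzero operations per input coordinate.
import numpy as np

def sparse_jl_matrix(m, n, s=4, seed=0):
    rng = np.random.default_rng(seed)
    S = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)   # s nonzeros per column
        signs = rng.choice([-1.0, 1.0], size=s)
        S[rows, j] = signs / np.sqrt(s)
    return S

n, m = 10000, 400
S = sparse_jl_matrix(m, n)
x = np.random.default_rng(1).normal(size=n)
y = np.random.default_rng(2).normal(size=n)
# Norms and pairwise distances are approximately preserved (ratios close to 1):
print(np.linalg.norm(S @ x) / np.linalg.norm(x),
      np.linalg.norm(S @ (x - y)) / np.linalg.norm(x - y))
```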
Eric Price, IBM Almaden
Sharp bounds for learning a mixture of two Gaussians
Preprint
We consider the problem of identifying the parameters of an unknown mixture of two arbitrary $d$-dimensional Gaussians from a sequence of random samples. Our main result is a computationally efficient moment-based estimator with an optimal convergence rate, thus resolving a problem introduced by Pearson (1894). Denoting by $\sigma^2$ the variance of the unknown mixture, we prove that $\Theta(\sigma^{12})$ samples are necessary and sufficient to estimate each parameter up to constant additive error when $d=1$. Our upper bound extends to arbitrary dimension $d > 1$ up to a (necessary) logarithmic loss in $d$, using a novel (yet simple) dimensionality reduction technique. Strikingly, our estimator turns out to be very similar to the one Pearson proposed in 1894, which reduces the one-dimensional problem to solving and analyzing a tractable system of polynomial equations. Our result greatly improves on the exponent in the sample size of the best previous estimator due to Kalai, Moitra and Valiant (2010).
Ronitt Rubinfeld, MIT and Tel Aviv University
Testing distributions in a kinder, gentler world
Many problems in testing distributions over large domains require a number of samples that is daunting. We survey recent and not-so-recent results on testing distributions given access to more powerful oracle-queries about the distributions in question.
Joint work with Clement Canonne.
Shubhangi Saraf
Breaking the quadratic barrier for 3-LCCs over the Reals
Preprint
The classical Sylvester-Gallai Theorem states the following: given a finite set of points in the Euclidean plane, if the line through every pair of points passes through a third point, then all the points must be collinear. At a high level, the result shows that many "local" linear dependencies imply a "global" bound on the dimension of the entire set of points. In recent years, variants of the theorem have also found applications to several structural results in theoretical computer science.
In this talk I will describe some high-dimensional and quantitative extensions of the Sylvester-Gallai theorem and show strong connections to the problem of locally correctable codes. In particular, I will describe a recent result showing that 3-query linear locally correctable codes over the Reals of dimension d require block length n > d^{2+ε} for some fixed ε > 0. This improves on the known quadratic lower bound, which holds for any linear code. Our proof uses a powerful result by Barthe from convex geometry to derive a structure theorem on the decoding query triples. This structure supports stronger random restriction arguments, and hence stronger lower bounds than previously available.
Based on joint work with Zeev Dvir and Avi Wigderson
Gregory Valiant
Finding correlations in subquadratic time: new thoughts and applications
Preprint
TBA
Paul Valiant
We consider the problem of verifying the identity of a distribution: Given the description of a distribution p over discrete support, how many samples (independent draws) must one obtain from an unknown distribution q, to distinguish, with high probability, the case that p=q from the case that the total variation distance (L1 distance) between p and q is at least epsilon? We resolve this question, up to constant factors, on an instance-by-instance basis: there exist universal constants c, c', and a function f(p,epsilon) on distributions and error parameters, such that our tester distinguishes p=q from |p-q|>epsilon using f(p,epsilon) samples with success probability >2/3, but no tester can distinguish p=q from |p-q|> c*epsilon when given c'*f(p,epsilon) samples. The function f has an unusual form, depending on the 2/3 norm of the distribution p, after removing the largest single domain element, and the smallest domain elements cumulatively comprising epsilon mass; because this result is tight to constant factors, the unusual nature of f is in some sense a law of nature, and not a side-effect of our analysis. This result significantly generalizes and tightens previous results, and also introduces a new analysis technique that may be of independent interest.
The analysis of our estimator, by comparing its variance to the square of its mean, produces (as such analyses usually do!) several hairy inequalities. To facilitate this analysis, we give a complete characterization of a general class of inequalities that generalizes Cauchy-Schwarz, Hölder's inequality, and the monotonicity of L_p norms. Essentially, any inequality between several sequences of arbitrary positive numbers that has the form of something which "might" be provable as a product of Hölder and L_p-monotonicity inequalities is true if and only if it is in fact such a product. We further give a polynomial time algorithm for deriving or refuting such inequalities, which in practice may often be carried out by hand, despite the fact that the shortest refutation of such inequalities may be doubly exponential in length when written out explicitly. We hope that this tool will remove a hurdle from future results. [Joint work with Gregory Valiant]
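As a rough illustration of the quantity behind f(p, epsilon), here is a sketch that computes the 2/3-(quasi)norm of p after removing the largest single element and the smallest elements comprising at most epsilon mass. The exact truncation convention and the constant-factor scaling are assumptions made for illustration, not the paper's precise definition.

```python
# Truncated 2/3-(quasi)norm of p: drop the largest element and the smallest
# elements totalling at most epsilon mass, then compute (sum p_i^(2/3))^(3/2).
# The truncation convention here is an assumed simplification.
def truncated_two_thirds_norm(p, epsilon):
    q = sorted(p)                 # probabilities, ascending
    q.pop()                       # drop the largest single domain element
    mass = 0.0
    while q and mass + q[0] <= epsilon:
        mass += q.pop(0)          # drop smallest elements totalling <= epsilon
    return sum(x ** (2.0 / 3.0) for x in q) ** 1.5

p = [0.4, 0.3, 0.1, 0.1, 0.05, 0.05]
eps = 0.1
# The instance-optimal sample complexity described above is, up to constant
# factors, governed by a quantity of this flavor (assumed scaling).
print(truncated_two_thirds_norm(p, eps))
```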
Suresh Venkatasubramanian, University of Utah
A directed isoperimetric inequality with application to Bregman near neighbor lower bounds
Preprint
We present lower bounds for approximate near neighbor search for Bregman divergences. We show that under the cell probe model, any non-adaptive data structure (like locality-sensitive hashing) for c-approximate near-neighbor search that admits r probes must use space Ω(n^{1 + u/cr}) (where u is a structure constant). In contrast, for LSH under ℓ1 the best bound is Ω(n^{1 + 1/r}).
Our new tool is a directed variant of the standard boolean noise operator. We show that a generalization of the Bonami-Beckner hypercontractivity inequality exists "in expectation" or upon restriction to certain subsets of the Hamming cube, and that this is sufficient to prove the desired isoperimetric inequality that we use in our data structure lower bound.
We also present a structural result reducing the Hamming cube to a Bregman cube. This structure allows us to obtain lower bounds for problems under Bregman divergences from their ℓ1 analog. In particular, we get a (weaker) lower bound for approximate near neighbor search of the form Ω(n^{1 + 1/cr}) for an r-query non-adaptive data structure, and new cell probe lower bounds for a number of other near neighbor questions in Bregman space.
Rachel Ward
Linear dimension reduction in the l1 norm: When is it possible? How is it possible?
Relevant preprint
TBA
Yuichi Yoshida
A Characterization of Locally Testable Affine-Invariant Properties via Decomposition Theorems
Preprint
Let P be a property of functions f : F_p^n → {0, 1} for a fixed prime p. An algorithm is called a tester for P if, given query access to the input function f, with high probability, it accepts when f satisfies P and rejects when f is “far” from satisfying P. In this paper, we give a characterization of affine-invariant properties that are (two-sided error) testable with a constant number of queries. The characterization is stated in terms of decomposition theorems, which roughly claim that any function can be decomposed into a structured part that is a function of a constant number of polynomials, and a pseudo-random part whose Gowers norm is small. We first give an algorithm that tests whether the structured part of the input function has a specific form. Then we show that an affine-invariant property is testable with a constant number of queries if and only if it can be reduced to the problem of testing whether the structured part of the input function is close to one of a constant number of candidates.
Qin Zhang, Indiana University Bloomington
New Directions in Distributed Monitoring
In this talk we will discuss the distributed monitoring model, where we have k sites, each receiving a stream of elements over time. There is a designated coordinator who would like to track, that is, maintain continuously at all times, some function f of all the elements received from the k sites. There is a two-way communication channel between each site and the coordinator. The primary goal is to track f with minimum communication. We also want to minimize the space and processing time per item used at all sites. This model is motivated by applications in network monitoring, content delivery services, sensor networks, distributed databases, etc.
In this talk we will first briefly survey the existing results in this area, and then illustrate a few simple upper and lower bound examples. Next, we discuss several new research directions for distributed monitoring, including the search for generic monitoring algorithms, extending the study to problems with richer structure, and exploring/comparing variants of this model.
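As an example of the kind of simple upper bound alluded to above, here is a sketch (with assumed details) of count tracking: each site notifies the coordinator only when its local count roughly doubles, so the coordinator always knows the total within a factor of 2 while exchanging only O(k log N) messages.

```python
# Distributed count tracking sketch: k sites, one coordinator, reports only on
# (rough) doublings of the local count. A toy illustration, not an algorithm
# from the talk.
import random

class Coordinator:
    def __init__(self, k):
        self.reported = [0] * k
        self.messages = 0

    def report(self, site_id, count):
        self.reported[site_id] = count
        self.messages += 1

    def estimate_total(self):
        return sum(self.reported)          # within a factor of 2 of the truth

class Site:
    def __init__(self, coordinator, site_id):
        self.coordinator, self.site_id = coordinator, site_id
        self.count, self.last_reported = 0, 0

    def receive_item(self):
        self.count += 1
        if self.count >= max(2 * self.last_reported, 1):   # first item, or count doubled
            self.coordinator.report(self.site_id, self.count)
            self.last_reported = self.count

coord = Coordinator(k=3)
sites = [Site(coord, i) for i in range(3)]
random.seed(0)
for _ in range(10000):
    random.choice(sites).receive_item()    # items arrive at arbitrary sites
print(coord.estimate_total(), coord.messages)
```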