Anna Gilbert let me know that the slides of the SPARC 2013 meeting are up. (Thank you to Anna Gilbert, S. Muthukrishnan, Hung Q. Ngo, Ely Porat, Atri Rudra, and Martin Strauss for making the slides available.)

Mary Wootters | High dimensional probability in theoretical computer science |

Abstract Recently, techniques from probability in Banach spaces and geometric functional analysis have found applications in compressed sensing, coding theory, and other areas of theoretical computer science. In this tutorial, I will go over some useful tricks and techniques, with examples.

Slides are available here.

Swastik Kopparty | Polynomials and Complexity Theory |

Abstract This tutorial will discuss some of the amazing and unexpected applications of polynomials in complexity theory.

Slides are available here.

Jin-Yi Cai | Dichotomy for Counting Problems |

Abstract Emil Post initiated the study of whether there are problems whose Turing degrees of computability are strictly between Decidable and the Halting Problem. The famous solution came independently from Friedberg and Muchnik, who invented the finite injury priority argument. I would like to revisit Post's Problem, in the context of the complexity of counting problems. I will survey a number of recent dichotomy theorems which have the general flavor that, in a broad class of counting problems within #P, there are really only two levels of complexity: one is tractable, the other is complete. This is philosophically an anti-Friedberg-Muchnik theory, and the frameworks considered include (1) Graph Homomorphisms, (2) Counting, and (3) Holant Problems. http://arxiv.org/abs/0903.4728 http://arxiv.org/abs/1008.0683 http://arxiv.org/abs/1204.6445

Slides are available here.

Jelani Nelson | OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings |

Abstract Let E be a d-dimensional linear subspace of R^n. A subspace embedding for E is a linear map that preserves the l_2 norms of all vectors in E up to 1+eps. We improve a recent result of [Clarkson-Woodruff, STOC'13], with a much simpler proof: we show that the Thorup-Zhang sketch (a random sign matrix with one non-zero per column) is a subspace embedding with good probability when the number of rows is O(d^2/eps^2). Once the theorem is formulated the right way, the proof is a simple second moment computation. We then show our main result: that when one has m rows and s non-zeroes per column (the sparse Johnson-Lindenstrauss matrices of [Kane-Nelson, SODA'12]), one can take m = O(d^{1.01}/eps^2), s = O(1/eps). These are the first subspace embeddings to have m = o(d^2) with s = o(d). Our bounds imply improved running times for several numerical linear algebra problems, including approximating leverage scores, least squares regression, l_p regression for p in [1, infty), and low-rank approximation.
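To make the Thorup-Zhang construction concrete, here is a minimal sketch in Python (NumPy): a matrix with a single random ±1 entry per column, applied to a vector drawn from a random d-dimensional subspace. The constants and the dimensions are illustrative choices, not the paper's; the point is only that ||Sx|| concentrates near ||x|| once m is on the order of d^2/eps^2.

```python
import numpy as np

rng = np.random.default_rng(0)

def tz_sketch(m, n, rng):
    """Thorup-Zhang / CountSketch form: one random +/-1 entry per column."""
    S = np.zeros((m, n))
    rows = rng.integers(0, m, size=n)       # random target row per column
    signs = rng.choice([-1.0, 1.0], size=n) # random sign per column
    S[rows, np.arange(n)] = signs
    return S

n, d, eps = 2000, 5, 0.5
m = 4 * int(d**2 / eps**2)                  # m = O(d^2 / eps^2) rows

# Orthonormal basis for a random d-dimensional subspace E of R^n.
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
x = U @ rng.standard_normal(d)              # a vector in E
S = tz_sketch(m, n, rng)

ratio = np.linalg.norm(S @ x) / np.linalg.norm(x)
print(ratio)                                # typically within 1 +/- eps
```

A second-moment calculation shows E||Sx||^2 = ||x||^2 with variance O(||x||^4/m), which is where the d^2/eps^2 row count comes from.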

Slides are available here.

Andrew McGregor | Homomorphic Sketches: Shrinking Big Data without Sacrificing Structure |

Abstract Fingerprinting is a widely-used technique for efficiently verifying that two large files are identical. More generally, sketching is a form of lossy compression based on random linear projections that, for example, could be used to estimate the Hamming distance between the files. Both techniques are particularly useful for monitoring data streams or minimizing communication in distributed and parallel computing. Many applications have been studied extensively including sparse signal recovery, locality sensitive hashing, and linear regression. Our goal is to extend these popular techniques and design random projections that preserve combinatorial and group-theoretic structure in large data sets. In this talk, we present recent results on analyzing spectral graph properties and fingerprinting misaligned text. Both results rely on developing homomorphic sketches that support operations on the compressed data that correspond to operations on the uncompressed data. Applications include the first sublinear space algorithms for processing dynamic graph streams. Includes joint work with Kook Jin Ahn, Alexandr Andoni, Assaf Goldberger, Sudipto Guha, and Ely Porat.
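The "homomorphic" property the abstract refers to is easy to see for the basic linear case: because a sketch is a linear map, combining two sketches is the same as sketching the combination, so updates and comparisons can be done entirely on the compressed data. A minimal illustration (a dense Gaussian projection here, purely for exposition; the talk's sketches are structured):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 1000, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random linear projection

def sketch(v):
    return A @ v

x = rng.standard_normal(n)
y = rng.standard_normal(n)

# Homomorphic property: sketch(x) + sketch(y) == sketch(x + y).
homomorphic = np.allclose(sketch(x) + sketch(y), sketch(x + y))
print(homomorphic)                            # True, by linearity

# Consequence: distances can be estimated from sketches alone.
ratio = np.linalg.norm(sketch(x) - sketch(y)) / np.linalg.norm(x - y)
print(ratio)                                  # close to 1
```

The dynamic graph stream results push this further: the operations preserved are not just addition, but operations corresponding to graph updates.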

Slides are available here.

Shubhangi Saraf | Incidence geometry and applications to theoretical computer science |

Abstract The classical theorem of Sylvester-Gallai states the following: given a finite set of points in the Euclidean plane, not all on the same line, there exists a line passing through exactly two of the points. This result has a rich and interesting history in mathematics. Surprisingly in recent years variants of the theorem have been influential in various diverse areas of theoretical computer science. In this talk I will describe several extensions to the Sylvester-Gallai theorem - quantitative versions, high dimensional versions, colorful versions and average case versions. I will also describe some of the applications and connections in theoretical computer science to areas such as polynomial identity testing, and local algorithms for error correcting codes. Based on joint works with Albert Ai, Zeev Dvir, Neeraj Kayal and Avi Wigderson.

Slides are available here.

Ronitt Rubinfeld | Local Decompression |

Abstract We present a simple adaptation of the Lempel-Ziv 1978 (LZ78) compression scheme ({\em IEEE Transactions on Information Theory, 1978}) that supports efficient random access to the input string. Namely, given query access to the compressed string, it is possible to efficiently recover any symbol of the input string. The compression algorithm is given as input a parameter $\eps >0$, and with very high probability increases the length of the compressed string by at most a factor of $(1+\eps)$. The access time is $O(\log n + 1/\eps^2)$ in expectation, and $O(\log n/\eps^2)$ with high probability. The scheme relies on sparse transitive-closure spanners. Any (consecutive) substring of the input string can be retrieved at an additional additive cost in the running time of the length of the substring. We also formally establish the necessity of modifying LZ78 so as to allow efficient random access. Specifically, we construct a family of strings for which $\Omega(n/\log n)$ queries to the LZ78-compressed string are required in order to recover a single symbol in the input string. The main benefit of the proposed scheme is that it preserves the online nature and simplicity of LZ78, and that for {\em every} input string, the length of the compressed string is only a small factor larger than that obtained by running LZ78. Joint work with Akashnil Dutta, Reut Levi and Dana Ron.
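For readers unfamiliar with the base scheme being adapted, here is a minimal sketch of plain LZ78 in Python: the input is parsed into phrases, each encoded as (index of longest previously-seen phrase, next character). Decoding a single symbol requires following a chain of phrase references, which is exactly why unmodified LZ78 resists efficient random access.

```python
def lz78_compress(s):
    """Minimal LZ78: emit (phrase_index, next_char) pairs; index 0 = empty phrase."""
    dictionary, out, phrase = {"": 0}, [], ""
    for ch in s:
        if phrase + ch in dictionary:
            phrase += ch                       # extend the current phrase
        else:
            out.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                                 # flush a trailing partial phrase
        out.append((dictionary[phrase[:-1]], phrase[-1]))
    return out

def lz78_decompress(pairs):
    phrases = [""]
    for idx, ch in pairs:
        phrases.append(phrases[idx] + ch)      # each phrase references an earlier one
    return "".join(phrases[1:])

s = "abababababa"
print(lz78_compress(s))
print(lz78_decompress(lz78_compress(s)) == s)  # True
```

The chain of back-references (each phrase built from an earlier one) is the structure the paper's transitive-closure spanners shortcut.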

Slides are available here.

Piotr Indyk | Sample-Optimal Average-Case Sparse Fourier Transform |

Abstract We present the first sample-optimal sub-linear time algorithms for the sparse Discrete Fourier Transform over a two-dimensional sqrt{n} x sqrt{n} grid. Our algorithms are analyzed for average-case signals. For signals whose spectrum is exactly sparse, we present algorithms that use O(k) samples and run in O(k log k) time, where k is the expected sparsity of the signal. For signals whose spectrum is approximately sparse, we have an algorithm that uses O(k log n) samples and runs in O(k log^2 n) time, for a wide range of k. All presented algorithms match the lower bounds on sample complexity for their respective signal models. The talk will cover the material from the joint papers with Badih Ghazi, Haitham Hassanieh, Dina Katabi, Eric Price and Lixin Shi. The papers are available at http://groups.csail.mit.edu/netmit/sFFT/
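To see what "spectrum exactly sparse" means, here is a small NumPy illustration (1-D for simplicity; the talk's setting is a 2-D grid): a signal built from k random frequencies has exactly k nonzero Fourier coefficients. The dense FFT below reads all n samples and takes O(n log n) time; the point of the sparse Fourier algorithms is to recover the same k-sparse spectrum from only O(k) samples in O(k log k) time.

```python
import numpy as np

n, k = 256, 3
rng = np.random.default_rng(3)
freqs = rng.choice(n, size=k, replace=False)                  # k random frequencies
coeffs = rng.standard_normal(k) + 1j * rng.standard_normal(k)

# A signal whose spectrum is exactly k-sparse.
t = np.arange(n)
x = sum(c * np.exp(2j * np.pi * f * t / n) for c, f in zip(coeffs, freqs)) / n

X = np.fft.fft(x)                       # dense FFT: reads all n samples
support = np.flatnonzero(np.abs(X) > 1e-8)
print(sorted(support) == sorted(freqs)) # True: exactly the k planted frequencies
```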

Slides are available here.

David Woodruff | How Robust are Linear Sketches to Adaptive Inputs? |

Abstract Linear sketches turn an n-dimensional input into a concise lower-dimensional representation via a linear transformation. In almost any realistic setting a linear sketch faces the possibility that its inputs are correlated with previous evaluations of the sketch. Known techniques no longer guarantee the correctness of the output in the presence of such correlations. We therefore ask: are linear sketches inherently non-robust to adaptively chosen inputs? We give a strong affirmative answer to this question. Specifically, we show that no linear sketch approximates the Euclidean norm of its input to within an arbitrary multiplicative approximation factor on a polynomial number of adaptively chosen inputs. The result remains true even if the dimension of the sketch is d = n - o(n) and the sketch is given unbounded computation time. Our result is based on an algorithm with running time polynomial in d that adaptively finds a distribution over inputs on which the sketch is incorrect with constant probability. Our result implies several corollaries for related problems including lp-norm estimation and l2/l2 compressed sensing in the presence of adaptively chosen inputs. Joint work with Moritz Hardt.
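The root of the non-robustness is elementary linear algebra: any linear sketch mapping R^n to R^d with d < n has a nontrivial kernel, and a kernel vector is indistinguishable from the zero vector to the sketch, so no norm estimate can be correct on it. The sketch below makes this concrete (with the caveat that here the kernel is computed from the known matrix A, whereas the paper's attack finds such inputs adaptively through queries, without seeing A):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 20, 15                        # sketch dimension d < n
A = rng.standard_normal((d, n))      # the linear sketch (known here, for illustration)

# Rows d..n-1 of V^T span the kernel of A.
_, _, Vt = np.linalg.svd(A)
v = Vt[-1]                           # a unit vector in the kernel

sketch_norm = np.linalg.norm(A @ v)  # ~ 0: the sketch sees v as zero
true_norm = np.linalg.norm(v)        # 1: but v has unit Euclidean norm
print(sketch_norm, true_norm)
```

The paper's contribution is showing this failure mode is reachable by an adversary who only observes the sketch's outputs, even when d = n - o(n).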

Slides are available here.

Eric Price | Sparse Recovery and Fourier Sampling |

Abstract The goal of "stable sparse recovery" or "compressive sensing" is to recover a K-sparse approximation x' of a vector x in R^N from M linear measurements of x. This problem has a wide variety of applications, including streaming algorithms, image acquisition, and genetic testing. A related problem is that of computing a "sparse" Fourier transform, which approximates the Fast Fourier Transform in less than N log N time. In this talk I will survey the results in the area and describe general techniques used in solving them.
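As a point of reference for the recovery guarantee, the benchmark x' is compared against the best K-sparse approximation of x, which simply keeps the K largest-magnitude entries. A minimal sketch:

```python
import numpy as np

def best_k_sparse(x, k):
    """Best K-sparse approximation: keep the K largest-magnitude entries."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest |x_i|
    out[idx] = x[idx]
    return out

x = np.array([5.0, 0.1, -3.0, 0.05, 2.0, -0.2])
print(best_k_sparse(x, 3))             # keeps 5.0, -3.0, 2.0
```

Stable sparse recovery asks that x', computed from M << N linear measurements, have error comparable to this benchmark's.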

Slides are available here.

Adam Smith | Privacy and the Analysis of High-Dimensional Data |

Abstract I'll explain why the curse of dimensionality takes on new dimensions in the context of private data analysis. After a general introduction, I will discuss recent results on differentially private sparse regression. Our results shed new light on the robustness of common techniques for solving sparse regression problems, such as the L1 regularization. Based on joint work with Dan Kifer (Penn State) and Abhradeep Guha Thakurta (now at MSR and Stanford).

Slides are available here.

Richard Lipton | New and Old Graph Products |

Abstract Graph products take two or more graphs and create a new graph. They have many applications in combinatorics and in theory. For example, the strong graph product plays a central role in the definition of the Shannon Capacity of a graph. In this talk I will discuss some of the classic graph products and then introduce several new ones. These have some interesting properties, have some interesting applications, and remain difficult to understand. I will present what I know about these new products and connect them to some important open problems from complexity theory.
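As background for the strong product mentioned above: two vertices (u, v) and (u', v') are adjacent in G ⊠ H when each coordinate is equal or adjacent (and the pairs differ). A compact NumPy sketch, using the identity that the adjacency matrix of G ⊠ H is (A_G + I) ⊗ (A_H + I) − I, illustrated on the 5-cycle C5, whose Shannon capacity is sqrt(5) (Lovász 1979):

```python
import numpy as np

def strong_product(A, B):
    """Adjacency matrix of the strong product: each coordinate equal or
    adjacent, and the vertex pairs distinct."""
    n = A.shape[0] * B.shape[0]
    return np.kron(A + np.eye(A.shape[0]), B + np.eye(B.shape[0])) - np.eye(n)

# C5, the 5-cycle.
C5 = np.zeros((5, 5))
for i in range(5):
    C5[i, (i + 1) % 5] = C5[(i + 1) % 5, i] = 1

P = strong_product(C5, C5)

# The classic independent set of size 5 in C5 x C5: {(i, 2i mod 5)},
# which witnesses Shannon capacity >= sqrt(5). Vertex (a, b) has index 5a + b.
S = [5 * i + (2 * i) % 5 for i in range(5)]
independent = all(P[u, v] == 0 for u in S for v in S if u != v)
print(independent)  # True
```

The capacity is the limit of alpha(G^k)^{1/k} over strong-product powers, which is why this product sits at the heart of the definition.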

Slides are available here.

