Thursday, April 26, 2018

Mini-workshop: The Future of Random Projections II, 1pm-4pm, May 2nd, 2018, Paris, France

Florent Krzakala and I are organizing a second mini-workshop on The Future of Random Projections, un-originally titled "The Future of Random Projections II".

As data gets richer, the need to make sense of it is becoming paramount in many different areas. In this context of large-scale learning, random projections prove useful in a variety of unsupervised and supervised learning techniques. In this workshop, we will explore the different uses of this transform from the point of view of the several research areas featured in the talks.

We will be streaming it during the event and the video will then be on YouTube. For those of you in Paris, it is going to be on May 2nd, 2018 at IPGG. You can register here, whether you are in Paris or not, so as to receive information on the link for the streaming. The workshop is hosted by LightOn.


Here are the four main speakers. The event will start at 1:00pm Paris time and should end on or before 4:00pm.

1:00pm - 1:30pm: Laurent Jacques, UCLouvain
Title: "Time for dithering! Quantized random embeddings with RIP random matrices."

Abstract: Quantized compressive sensing (QCS) deals with the problem of coding compressive measurements of low-complexity signals (e.g., sparse vectors in a given basis, low-rank matrices) with quantized, finite-precision representations, i.e., a mandatory process involved in any practical sensing model. While the resolution of this quantization clearly impacts the quality of signal reconstruction, there even exist incompatible combinations of quantization functions and sensing matrices that preclude arbitrarily low reconstruction error as the number of measurements increases.

In this introductory talk, we will see that a large class of random matrix constructions, i.e., known to respect the restricted isometry property (RIP) in the compressive sensing literature, can be made "compatible" with a simple scalar and uniform quantizer (e.g., a rescaled rounding operation). This compatibility is simply ensured by the addition of a uniform random vector, or random "dithering", to the compressive signal measurements before quantization.

In this context, we will first study how quantized, dithered random projections of "low-complexity" signals are actually an efficient dimensionality reduction technique that preserves the distances between low-complexity signals up to some controllable additive and multiplicative distortions. Second, the compatibility of RIP sensing matrices with the dithered quantization process will be demonstrated by the existence of (at least) one signal reconstruction method, the projected back projection (PBP), which achieves low reconstruction error, decaying when the number of measurements increases. Finally, by leveraging the quasi-isometry property reached by quantized, dithered random embeddings, we will show how basic signal classification (or clustering) can be realized from their QCS observations, i.e., without a reconstruction step. Here also the complexity, or intrinsic dimension, of the observed signals drives the final classification accuracy.
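The distance-preservation property described in this abstract is easy to check numerically. The sketch below (a toy illustration, not the speaker's code; all parameter choices are mine) projects two vectors with a Gaussian matrix, adds a uniform random dither before scalar uniform quantization, and verifies that the distance between the quantized embeddings stays close to the original distance.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, delta = 50, 2000, 0.1      # ambient dim, number of measurements, quantizer bin width

# Two signals of roughly unit norm to compare
x = rng.standard_normal(n) / np.sqrt(n)
y = rng.standard_normal(n) / np.sqrt(n)

# Gaussian sensing matrix (satisfies the RIP with high probability)
A = rng.standard_normal((m, n))

# Uniform dither in [0, delta), shared across signals, then scalar uniform quantization
u = rng.uniform(0.0, delta, size=m)
q = lambda v: delta * np.floor((v + u) / delta)

d_true = np.linalg.norm(x - y)                                 # original distance
d_proj = np.linalg.norm(A @ x - A @ y) / np.sqrt(m)            # unquantized projection
d_quant = np.linalg.norm(q(A @ x) - q(A @ y)) / np.sqrt(m)     # quantized, dithered projection

print(d_true, d_proj, d_quant)   # the three values should be close
```

The normalized quantized distance matches the true distance up to a small additive distortion controlled by the bin width `delta`, consistent with the quasi-isometry property mentioned above.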

1:30pm - 2:00pm  Julien Mairal, Inria Grenoble
Title: Foundations of Deep Learning from a Kernel Point of View.

Abstract: In the past years, deep neural networks such as convolutional or recurrent ones have become highly popular for solving various prediction problems, notably in computer vision and natural language processing. Conceptually close to approaches that were developed several decades ago, they greatly benefit from the large amounts of labeled data that have recently become available, allowing one to learn huge numbers of model parameters without worrying too much about overfitting. Before the resurgence of neural networks, non-parametric models based on positive definite kernels were one of the most dominant topics in machine learning. These approaches are still widely used today because of several attractive features. Kernel methods are indeed versatile; as long as a positive definite kernel is specified for the type of data considered—e.g., vectors, sequences, graphs, or sets—a large class of machine learning algorithms originally defined for linear models may be used. Kernel methods also admit natural mechanisms to control the learning capacity and reduce overfitting. In this talk, we will consider both paradigms and show how they are related. We will notably show that the reproducing kernel point of view allows one to derive theoretical results for classical convolutional neural networks.
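The versatility the abstract alludes to—reusing a linear algorithm on nonlinear problems by swapping in a kernel—can be shown in a few lines. Here is a minimal numpy sketch (not from the talk; data and parameters are illustrative) of kernel ridge regression: plain ridge regression written only in terms of inner products, with those inner products replaced by an RBF kernel.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data: a nonlinear 1-D target
X = np.linspace(0.0, 2.0 * np.pi, 40)[:, None]
y = np.sin(X).ravel()

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian kernel matrix k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Kernelized ridge regression: solve (K + alpha I) c = y,
# then predict at x* with k(x*, X) @ c
alpha = 1e-3
K = rbf_kernel(X, X)
c = np.linalg.solve(K + alpha * np.eye(len(X)), y)

y_fit = rbf_kernel(X, X) @ c
print(np.max(np.abs(y_fit - y)))   # small: the linear algorithm now fits sin(x)
```

The learning algorithm itself never changes; only the kernel does, which is exactly why the same machinery applies to sequences, graphs, or sets once a suitable kernel is defined.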

2:00pm - 2:10 pm small break

2:10pm - 2:40pm: Dmitry Ulyanov, Skoltech Institute
Title: Deep Image Prior

Abstract: Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, superresolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them and to restore images based on flash-no flash input pairs. Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity.
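The core recipe of the paper—fit a randomly-structured generator to the corrupted image and stop early, so the architecture itself acts as the prior—can be caricatured in one dimension. The sketch below is a deliberately simplified, linear stand-in for a convolutional generator (my construction, not the authors'): the "network" is a fixed Gaussian smoothing operator applied to learnable coefficients, so anything it outputs is smooth, and early-stopped gradient descent on the noisy signal yields a denoised estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 128

# Ground truth and its noisy observation
t = np.linspace(0.0, 2.0 * np.pi, n)
clean = np.sin(2.0 * t)
noisy = clean + 0.3 * rng.standard_normal(n)

# "Generator": fixed Gaussian smoothing of learnable coefficients theta,
# a linear caricature of a conv net's structural bias toward smooth outputs
idx = np.arange(n)
K = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 3.0) ** 2)
K /= K.sum(axis=1, keepdims=True)          # rows sum to 1, so the map is non-expansive

# Early-stopped gradient descent on || K @ theta - noisy ||^2:
# low frequencies are fitted first, the noise is fitted last
theta = np.zeros(n)
lr, steps = 0.5, 300
losses = []
for _ in range(steps):
    resid = K @ theta - noisy
    losses.append(float(resid @ resid))
    theta -= lr * (K.T @ resid)

denoised = K @ theta
mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(mse_noisy, mse_denoised)   # the early-stopped fit is closer to the clean signal
```

The actual paper replaces this linear operator with a deep convolutional generator and a fixed random input, but the mechanism is the same: the structure of the generator plus early stopping does the regularizing, with no training data at all.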

2:40pm - 3:10pm: Kurt Cutajar, EURECOM
Title: "Random Feature Expansions for Deep Gaussian Processes"

Abstract: The widespread application of machine learning in safety-critical domains such as medical diagnosis and autonomous driving has sparked a renewed interest in probabilistic models which produce principled uncertainty estimates alongside predictions. The composition of multiple Gaussian processes as a deep Gaussian process (DGP) enables a deep probabilistic nonparametric approach to flexibly tackle complex machine learning problems with sound quantification of uncertainty. However, traditional inference approaches for DGP models have limited scalability and are notoriously cumbersome to construct. Inspired by recent advances in the field of Bayesian deep learning, in this talk I shall present an alternative formulation of DGPs based on random feature expansions. This yields a practical learning framework which significantly advances the state-of-the-art in inference for DGPs, and enables accurate quantification of uncertainty. The scalability and performance of our proposal is showcased on several datasets with up to 8 million observations, and various DGP architectures with up to 30 hidden layers.

3:10pm - 4:00pm Coffee break.

Credit image: Rich Baraniuk

Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
