Pages

Monday, April 30, 2018

Video, Preprint and Implementation: Measuring the Intrinsic Dimension of Objective Landscapes

While waiting for the Workshop on the Future of Random Projections this coming Wednesday (you can register here), here is a video presenting a paper featured at ICLR this week that is all about random projections!





In this video from Uber AI Labs, researchers Chunyuan Li and Jason Yosinski describe their ICLR 2018 paper "Measuring the Intrinsic Dimension of Objective Landscapes". The research, performed with co-authors Heerad Farkhoor and Rosanne Liu, develops intrinsic dimension as a fundamental property of neural networks. Intrinsic dimension quantifies the complexity of a model in a manner decoupled from its raw parameter count, and the paper provides a simple way of measuring this dimension using random projections. Many problems have smaller intrinsic dimension than one might suspect. By using intrinsic dimension to compare across problem domains, one may measure, for example, that solving the inverted pendulum problem is about 100 times easier than classifying digits from MNIST, and playing Atari Pong from pixels is about as hard as classifying CIFAR-10.





Many recently trained neural networks employ large numbers of parameters to achieve good performance. One may intuitively use the number of parameters required as a rough gauge of the difficulty of a problem. But how accurate are such notions? How many parameters are really needed? In this paper we attempt to answer this question by training networks not in their native parameter space, but instead in a smaller, randomly oriented subspace. We slowly increase the dimension of this subspace, note at which dimension solutions first appear, and define this to be the intrinsic dimension of the objective landscape. The approach is simple to implement, computationally tractable, and produces several suggestive conclusions. Many problems have smaller intrinsic dimensions than one might suspect, and the intrinsic dimension for a given dataset varies little across a family of models with vastly different sizes. This latter result has the profound implication that once a parameter space is large enough to solve a problem, extra parameters serve directly to increase the dimensionality of the solution manifold. Intrinsic dimension allows some quantitative comparison of problem difficulty across supervised, reinforcement, and other types of learning where we conclude, for example, that solving the inverted pendulum problem is 100 times easier than classifying digits from MNIST, and playing Atari Pong from pixels is about as hard as classifying CIFAR-10. In addition to providing new cartography of the objective landscapes wandered by parameterized models, the method is a simple technique for constructively obtaining an upper bound on the minimum description length of a solution. A byproduct of this construction is a simple approach for compressing networks, in some cases by more than 100 times.

The implementation is here: https://github.com/uber-research/intrinsic-dimension
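
To make the subspace-training idea concrete, here is a minimal sketch (not the authors' code; a toy logistic-regression "network" on synthetic data stands in for a real model): the native parameters are frozen at a starting point and only a small vector, mapped through a fixed random projection, is optimized. Sweeping the subspace dimension and noting where good solutions first appear is, roughly, the paper's measurement of intrinsic dimension.

```python
# Minimal sketch of training in a random subspace: the native parameters
# theta live in R^D, but we only optimize a small vector v in R^d and map
# it through a fixed random projection P.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification problem
n, D = 500, 200                      # n samples, D native parameters
X = rng.normal(size=(n, D))
w_true = rng.normal(size=D)
y = (X @ w_true > 0).astype(float)

def loss_and_grad(theta):
    """Logistic loss and gradient with respect to the native parameters."""
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    grad = X.T @ (p - y) / n
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return loss, grad

def train_in_subspace(d, steps=2000, lr=0.5):
    """Optimize only a d-dimensional vector v; theta = theta0 + P v."""
    theta0 = np.zeros(D)
    P = rng.normal(size=(D, d)) / np.sqrt(d)   # fixed random projection
    v = np.zeros(d)
    for _ in range(steps):
        loss, g_theta = loss_and_grad(theta0 + P @ v)
        v -= lr * (P.T @ g_theta)              # chain rule: dL/dv = P^T dL/dtheta
    return loss

# Sweep the subspace dimension d; the intrinsic dimension is roughly the
# smallest d at which performance reaches (say) 90% of the full-D baseline.
for d in (1, 5, 10, 20, 50, 100):
    print(d, train_in_subspace(d))
```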



Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, April 27, 2018

Quantized Compressive K-Means

Laurent, a long-time reader of Nuit Blanche and one of the speakers at the workshop on the Future of Random Projections II this coming Wednesday (you can register here whether you are in Paris or not, so as to receive the link for the streaming), has just released a preprint on arXiv in this area:



The recent framework of compressive statistical learning aims at designing tractable learning algorithms that use only a heavily compressed representation-or sketch-of massive datasets. Compressive K-Means (CKM) is such a method: it estimates the centroids of data clusters from pooled, non-linear, random signatures of the learning examples. While this approach significantly reduces computational time on very large datasets, its digital implementation wastes acquisition resources because the learning examples are compressed only after the sensing stage. The present work generalizes the sketching procedure initially defined in Compressive K-Means to a large class of periodic nonlinearities including hardware-friendly implementations that compressively acquire entire datasets. This idea is exemplified in a Quantized Compressive K-Means procedure, a variant of CKM that leverages 1-bit universal quantization (i.e. retaining the least significant bit of a standard uniform quantizer) as the periodic sketch nonlinearity. Trading for this resource-efficient signature (standard in most acquisition schemes) has almost no impact on the clustering performances, as illustrated by numerical experiments.
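
For intuition, here is a hedged toy sketch of the sketching step only (the centroid-recovery part of CKM/QCKM is omitted, and the sizes and scales below are purely illustrative): each example goes through random projections followed by a periodic 1-bit nonlinearity, the least significant bit of a uniform quantizer with random dithering, and the results are pooled over the dataset.

```python
# Sketching a whole dataset with 1-bit universally quantized random projections.
import numpy as np

rng = np.random.default_rng(1)

def universal_1bit(t, delta=1.0):
    """Retain the least significant bit of a uniform quantizer, mapped to +/-1."""
    return 2.0 * (np.floor(t / delta) % 2) - 1.0

def quantized_sketch(X, m, delta=1.0):
    """Pool 1-bit universally quantized random projections of a dataset.

    X: (n, d) data; m: sketch size. The random frequencies and dithers are
    fixed and would be shared with whatever decoder estimates the centroids.
    """
    n, d = X.shape
    W = rng.normal(scale=2.0, size=(m, d))       # random frequencies
    xi = rng.uniform(0.0, delta, size=m)         # random dithering
    return universal_1bit(X @ W.T + xi, delta).mean(axis=0), (W, xi)

# Toy usage: two Gaussian clusters compressed into a 200-dimensional sketch
X = np.vstack([rng.normal(-2, 0.3, size=(500, 2)),
               rng.normal(+2, 0.3, size=(500, 2))])
z, params = quantized_sketch(X, m=200)
print(z.shape)   # (200,) -- the entire dataset summarized in one vector
```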





Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !

Thursday, April 26, 2018

Mini-workshop: The Future of Random Projections II, 1pm-4pm, May 2nd, 2018, Paris, France




Florent Krzakala and I are organizing a second mini-workshop on The Future of Random Projections, unoriginally titled "The Future of Random Projections II".

As data gets richer, the need to make sense of it is becoming paramount in many different areas. In this context of large-scale learning, Random Projections prove useful in a variety of unsupervised and supervised learning techniques. In this workshop, we will explore the different uses of this transform from the point of view of the several research areas featured in the talks.

We will be streaming the event live and the video will then be on YouTube. For those of you in Paris, it will take place on May 2nd, 2018 at IPGG. You can register here whether you are in Paris or not, so as to receive the link for the streaming. The workshop is hosted by LightOn.


Streaming video:




Here are the four main speakers. The event will start at 1:00 PM Paris time and should end on or before 4:00 PM.

1:00 pm - 1:30 pm
Title: "Time for dithering! Quantized random embeddings with RIP random matrices."

Abstract: Quantized compressive sensing (QCS) deals with the problem of coding compressive measurements of low-complexity signals (e.g., sparse vectors in a given basis, low-rank matrices) with quantized, finite precision representations, i.e., a mandatory process involved in any practical sensing model. While the resolution of this quantization clearly impacts the quality of signal reconstruction, there even exist incompatible combinations of quantization functions and sensing matrices that proscribe arbitrarily low reconstruction error when the number of measurements increases.

In this introductory talk, we will see that a large class of random matrix constructions, i.e., known to respect the restricted isometry property (RIP) in the compressive sensing literature, can be made "compatible" with a simple scalar and uniform quantizer (e.g., a rescaled rounding operation). This compatibility is simply ensured by the addition of a uniform random vector, or random "dithering", to the compressive signal measurements before quantization.

In this context, we will first study how quantized, dithered random projections of "low-complexity" signals is actually an efficient dimensionality reduction technique that preserves the distances of low-complexity signals up to some controllable additive and multiplicative distortions. Second, the compatibility of RIP sensing matrices with the dithered quantization process will be demonstrated by the existence of (at least) one signal reconstruction method, the projected back projection (PBP), which achieves low reconstruction error, decaying when the number of measurements increases. Finally, by leveraging the quasi-isometry property reached by quantized, dithered random embeddings, we will show how basic signal classification (or clustering) can be realized from their QCS observations, i.e., without a reconstruction step. Here also the complexity, or intrinsic dimension, of the observed signals drives the final classification accuracy.
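
As a rough illustration of the object discussed in this talk, here is a small sketch, under simplified assumptions, of a dithered, uniformly quantized random embedding y = Q_delta(Phi x + xi); it is only meant to show the mechanics, not to reproduce the talk's guarantees or its choice of matrices.

```python
# Dithered quantized random embeddings: a random matrix, a uniform dither,
# and a plain rescaled rounding operation.
import numpy as np

rng = np.random.default_rng(2)

def dithered_quantized_embedding(X, m, delta=0.5):
    d = X.shape[1]
    Phi = rng.normal(size=(m, d)) / np.sqrt(m)     # random sensing matrix
    xi = rng.uniform(0.0, delta, size=m)           # uniform dithering
    return delta * np.floor((X @ Phi.T + xi) / delta), Phi

# Eyeball that quantized embeddings roughly preserve pairwise distances,
# up to additive and multiplicative distortions controlled by delta and m.
X = rng.normal(size=(20, 128))
Y, Phi = dithered_quantized_embedding(X, m=2000, delta=0.5)
i, j = 0, 1
print(np.linalg.norm(X[i] - X[j]), np.linalg.norm(Y[i] - Y[j]))
```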

1:30pm - 2:00pm  Julien Mairal, Inria Grenoble
Title: Foundations of Deep Learning from a Kernel Point of View.

Abstract: In the past years, deep neural networks such as convolutional or recurrent ones have become highly popular for solving various prediction problems, notably in computer vision and natural language processing. Conceptually close to approaches that were developed several decades ago, they greatly benefit from the large amounts of labeled data that have been available recently, allowing to learn huge numbers of model parameters without worrying too much about overfitting. Before the resurgence of neural networks, non-parametric models based on positive definite kernels were one of the most dominant topics in machine learning. These approaches are still widely used today because of several attractive features. Kernel methods are indeed versatile; as long as a positive definite kernel is specified for the type of data considered—e.g., vectors, sequences, graphs, or sets—a large class of machine learning algorithms originally defined for linear models may be used. Kernel methods also admit natural mechanisms to control the learning capacity and reduce overfitting. In this talk, we will consider both paradigms and show how they are related. We will notably show that the reproducing kernel point of view allows to derive theoretical results for classical convolutional neural networks.

2:00pm - 2:10 pm small break

2:10pm - 2:40pm: Dmitry Ulyanov, Skoltech Institute
Title: Deep Image Prior

Abstract: Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, superresolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them and to restore images based on flash-no flash input pairs. Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity.
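
As a quick illustration of the idea, here is a minimal sketch, assuming a much smaller fully convolutional network than the paper's encoder-decoder and a synthetic "image": a randomly initialized network is fit to a single noisy image and stopped early, and its output serves as the denoised estimate.

```python
# Deep-image-prior-style denoising with a tiny random conv net (toy setting).
import torch
import torch.nn as nn

torch.manual_seed(0)

H = W = 64
clean = torch.zeros(1, 1, H, W)
clean[:, :, 16:48, 16:48] = 1.0                  # toy "image": a white square
noisy = clean + 0.3 * torch.randn_like(clean)

net = nn.Sequential(                             # small conv net, random init
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
z = torch.randn(1, 8, H, W)                      # fixed random input code

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(500):                          # early stopping matters:
    opt.zero_grad()                              # too many steps refit the noise
    loss = ((net(z) - noisy) ** 2).mean()
    loss.backward()
    opt.step()

print("MSE to the clean image:", ((net(z) - clean) ** 2).mean().item())
```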

2:40pm - 3:10pm: Kurt Cutajar, EURECOM
Title: “Random Feature Expansions for Deep Gaussian Processes”

Abstract: The widespread application of machine learning in safety-critical domains such as medical diagnosis and autonomous driving has sparked a renewed interest in probabilistic models which produce principled uncertainty estimates alongside predictions. The composition of multiple Gaussian processes as a deep Gaussian process (DGP) enables a deep probabilistic nonparametric approach to flexibly tackle complex machine learning problems with sound quantification of uncertainty. However, traditional inference approaches for DGP models have limited scalability and are notoriously cumbersome to construct. Inspired by recent advances in the field of Bayesian deep learning, in this talk I shall present an alternative formulation of DGPs based on random feature expansions. This yields a practical learning framework which significantly advances the state-of-the-art in inference for DGPs, and enables accurate quantification of uncertainty. The scalability and performance of our proposal is showcased on several datasets with up to 8 million observations, and various DGP architectures with up to 30 hidden layers.
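
The basic building block behind this line of work is the random Fourier feature expansion that approximates an RBF kernel; the hedged sketch below only checks that approximation numerically, it does not implement the DGP inference itself.

```python
# Random Fourier features approximating an RBF (Gaussian) kernel.
import numpy as np

rng = np.random.default_rng(3)

def rbf_kernel(X, Y, lengthscale=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def random_fourier_features(X, n_features=2000, lengthscale=1.0):
    d = X.shape[1]
    Omega = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ Omega + b)

X = rng.normal(size=(5, 3))
K_exact = rbf_kernel(X, X)
Phi = random_fourier_features(X)
K_approx = Phi @ Phi.T
print(np.abs(K_exact - K_approx).max())   # small for large n_features
```

Each GP layer of the DGP can then be replaced by a finite (Bayesian) linear model on such features, which is what makes the inference scheme in the talk tractable.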

3:10pm - 4:00pm Coffee break.


Credit image: Rich Baraniuk

Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, April 20, 2018

Videos: Computational Theories of the Brain, Simons Institute for the Theory of Computing



Monday, April 16th, 2018

8:30 am – 8:50 am
Coffee and Check-In
8:50 am – 9:00 am
Opening Remarks
9:00 am – 9:45 am
The Prefrontal Cortex as a Meta-Reinforcement Learning System (Matthew Botvinick, DeepMind Technologies Limited, London and University College London)
9:45 am – 10:30 am
Working Memory Influences Reinforcement Learning Computations in Brain and Behavior (Anne Collins, UC Berkeley)
10:30 am – 11:00 am
Break
11:00 am – 11:45 am
Predictive Coding Models of Perception (David Cox, Harvard University)
11:45 am – 12:30 pm
TBA (Sophie Denève, Ecole Normale Supérieure)
12:30 pm – 2:30 pm
Lunch
2:30 pm – 3:15 pm
Towards Biologically Plausible Deep Learning: Early Inference in Energy-Based Models Approximates Back-Propagation (Asja Fischer, University of Bonn)
3:15 pm – 4:00 pm
Neural Circuitry Underlying Working Memory in the Dorsolateral Prefrontal Cortex (Veronica Galvin, Yale University)
4:00 pm – 5:00 pm
Reception

Tuesday, April 17th, 2018

8:30 am – 9:00 am
Coffee and Check-In
9:00 am – 9:45 am
TBA (Surya Ganguli, Stanford University)
9:45 am – 10:30 am
Does the Neocortex Use Grid Cell-Like Mechanisms to Learn the Structure of Objects? (Jeff Hawkins, Numenta)
10:30 am – 11:00 am
Break
11:00 am – 11:45 am
Dynamic Neural Network Structures Through Stochastic Rewiring (Robert Legenstein, Graz University of Technology)
11:45 am – 12:30 pm
Backpropagation and Deep Learning in the Brain (Timothy Lillicrap, DeepMind Technologies Limited, London)
12:30 pm – 2:30 pm
Lunch
2:30 pm – 3:15 pm
An Algorithmic Theory of Brain Networks (Nancy Lynch, Massachusetts Institute of Technology)
3:15 pm – 4:00 pm
Networks of Spiking Neurons Learn to Learn and Remember (Wolfgang Maass, Graz University of Technology)
4:00 pm – 4:30 pm
Break
4:30 pm – 5:30 pm
Plenary Discussion: What Is Missing in Current Theories of Brain Computation?

Wednesday, April 18th, 2018

8:30 am – 9:00 am
Coffee and Check-In
9:00 am – 9:45 am
Functional Triplet Motifs Underlie Accurate Predictions of Single-Trial Responses in Populations of Tuned and Untuned V1 Neurons (Jason MacLean, University of Chicago)
9:45 am – 10:30 am
The Sparse Manifold Transform (Bruno Olshausen, UC Berkeley)
10:30 am – 11:00 am
Break
11:00 am – 11:45 am
Playing Newton: Automatic Construction of Phenomenological, Data-Driven Theories and Models (Ilya Nemenman, Emory University)
11:45 am – 12:30 pm
A Functional Classification of Glutamatergic Circuits in Cortex and Thalamus (S. Murray Sherman, University of Chicago)
12:30 pm – 2:30 pm
Lunch
2:30 pm – 3:15 pm
On the Link Between Energy & Information for the Design of Neuromorphic Systems (Narayan Srinivasa, Eta Compute)
3:15 pm – 4:00 pm
Neural Circuit Representation of Multiple Cognitive Tasks: Clustering and Compositionality (XJ Wang, New York University)
4:00 pm – 4:30 pm
Break
4:30 pm – 5:30 pm
Plenary Discussion: How Can One Test/Falsify Current Theories of Brain Computation?

Thursday, April 19th, 2018

8:30 am – 9:00 am
Coffee and Check-In
9:00 am – 9:45 am
Control of Synaptic Plasticity in Deep Cortical Networks (Pieter Roelfsema, University of Amsterdam)
9:45 am – 10:30 am
Computation with Assemblies (Christos Papadimitriou, Columbia University)
10:30 am – 11:00 am
Break
11:00 am – 11:45 am
Capacity of Neural Networks for Lifelong Learning of Composable Tasks (Les Valiant, Harvard University)
11:45 am – 12:30 pm
An Integrated Cognitive Architecture (Greg Wayne, Columbia University)

Tuesday, April 17, 2018

Revisiting Skip-Gram Negative Sampling Model With Regularization



Matt just sent me the following

Hi Igor  
I would like to point you to our recent paper on the arXiv: Revisiting Skip-Gram Negative Sampling Model With Regularization (https://arxiv.org/pdf/1804.00306.pdf), which essentially deals with one specific low-rank matrix factorization model.  
The abstract is as follows:
We revisit skip-gram negative sampling (SGNS), a popular neural-network based approach to learning distributed word representation. We first point out the ambiguity issue undermining the SGNS model, in the sense that the word vectors can be entirely distorted without changing the objective value. To resolve this issue, we rectify the SGNS model with quadratic regularization. A theoretical justification, which provides a novel insight into quadratic regularization, is presented. Preliminary experiments are also conducted on Google’s analytical reasoning task to support the modified SGNS model.  
Your opinion will be much appreciated! 
Thanks, Matt Mu
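
For readers unfamiliar with SGNS, here is a hedged toy sketch (not the authors' code) of a skip-gram negative sampling update with an added quadratic (L2) penalty on the word and context vectors, which is the kind of regularization the abstract refers to; the corpus, vocabulary size and hyperparameters below are made up for illustration.

```python
# Toy SGNS updates with quadratic regularization on word/context vectors.
import numpy as np

rng = np.random.default_rng(4)

V, dim, lam, lr = 50, 16, 1e-3, 0.05          # vocab size, embedding dim, penalty, step
W = 0.1 * rng.normal(size=(V, dim))           # word vectors
C = 0.1 * rng.normal(size=(V, dim))           # context vectors

def sgd_step(w_idx, c_idx, neg_idx):
    """One SGNS update with negative samples and an L2 penalty."""
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    # positive pair: gradient of -log sigmoid(w . c)
    g = sigmoid(W[w_idx] @ C[c_idx]) - 1.0
    gW = g * C[c_idx] + lam * W[w_idx]
    gC = g * W[w_idx] + lam * C[c_idx]
    # negative samples: gradient of -log sigmoid(-w . n)
    for n in neg_idx:
        gn = sigmoid(W[w_idx] @ C[n])
        gW += gn * C[n]
        C[n] -= lr * (gn * W[w_idx] + lam * C[n])
    W[w_idx] -= lr * gW
    C[c_idx] -= lr * gC

# Toy "corpus": random (word, context) pairs with 5 negatives each
for _ in range(10000):
    w, c = rng.integers(V, size=2)
    sgd_step(w, c, rng.integers(V, size=5))
print(np.linalg.norm(W, axis=1).mean())       # norms stay bounded thanks to lam
```

Without the lam terms, the objective is invariant to rescaling the word vectors up and the context vectors down, which is the ambiguity the paper sets out to remove.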

Monday, April 16, 2018

Tonight/Today: Paris Machine Learning Meetup Hors Série #4 Season 5: Canada and AI



This is an exceptional Hors Série meetup organized jointly with the Embassy of Canada in France. We will be hosted in the offices of Xebia (thanks to them and to their dataXday event!). The meetup will start at 7:00 PM and doors will open around 6:30 PM. The streaming video is here (the presentations will be on this page before the meetup).





Here is the technical program:


Details of the four technical presentations:

Title: “How we solve Poker”
SPEAKER: Prof. Mike Bowling

Cepheus is our new poker-playing program capable of playing a nearly perfect game of heads-up limit Texas hold'em. It is so close to perfect that even after an entire human lifetime of playing against it, you couldn't be statistically certain it wasn't perfect. We call such a game essentially solved. This work just appeared in Science. You can read the paper. You can query Cepheus about how it plays and play against it. Or you can read the many news articles on the result. Site: http://poker.srv.ualberta.ca/

SPEAKER : Vadim Bulitko, Associate Professor at the University of Alberta, Department of Computing Science

ABSTRACT: Artificial Intelligence is rapidly entering our daily life in the form of smartphone assistants, self-driving cars, etc. While such AI assistants can make our lives easier and safer, there is a growing interest in understanding how long they will remain our intellectual servants. With the powerful applications of self-training and self-learning (e.g., the recent work by DeepMind on self-learning to play several board games at a championship level), what behaviors will such self-learning AI agents learn? Will there be genuine knowledge discoveries made by them? How much understanding of their novel behavior will we, as humans, be able to gather?
This project builds on our group's 12 years of expertise in developing AI agents that learn in a real-time setting and takes a step towards investigating the grand yet pressing questions listed above. We are developing a video-game-like testbed in which we allow our AI agents to evolve over time and learn from their life experience. The agents use genetically encoded deep neural networks to represent behaviors and pass them on to their offspring in the simulated evolution. A separate deep neural network is then trained to watch the simulation and flag the emergence of any unusual behaviours. We expect to study the emergence of novel behaviors such as the development of friend-foe identification techniques, simple forms of communication, apprenticeship learning and others.
site: http://agi-lab.net

SPEAKER: Martin Müller, Computing Science, University of Alberta

ABSTRACT: I will give a brief overview of recent work in my research group. While the applications are diverse and range from games and Monte Carlo Tree Search to SAT solving, a common goal drives much of the work: to better understand the use of exploration in very large search spaces.
Site: https://webdocs.cs.ualberta.ca/~mmueller/




Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Sunday, April 15, 2018

CSjob: Research Fellow in Machine Listening, University of Surrey, UK

Mark sent me the following a few days ago:

Dear Igor, 
I thought that Nuit Blanche readers may be interested in this job opportunity. I will be at ICASSP in Calgary next week, in case anyone would like more details. Our "Making Sense of Sounds" project will have presentations related to this post, including in poster session AASP-P3 ("DCASE I", Wednesday, April 18, 08:30 - 10:30) and lecture session AASP-L6 ("DCASE II", Thursday, April 19, 13:30-15:30, where I co-chair). 
Best wishes, Mark
Sure Mark !

============================================================
Research Fellow in Machine Listening
University of Surrey, UK
Salary: GBP 30,688 to GBP 38,833 per annum
Closing Date: 1 May 2018 (23:00 BST)
https://jobs.surrey.ac.uk/021518 

Applications are invited for a Research Fellow in Machine Listening to work full-time on an EPSRC-funded project "Making Sense of Sounds", to start as soon as possible, for 9.75 months until 13 March 2019. This project is investigating how to make sense from sound data, focussing on how to allow people to search, browse and interact with sounds. The candidate will be responsible for investigating and developing machine learning methods for analysis of everyday sounds, leading to new representations to support search, retrieval and interaction with sound. 
The successful applicant is expected to have a PhD or equivalent in electronic engineering, computer science or a related subject, and is expected to have significant research experience in audio signal processing and machine learning. Research experience in one or more of the following is desirable: deep learning; blind source separation, blind de-reverberation, sparse and/or non-negative representations, audio feature extraction. 
The project is being led by Prof Mark Plumbley in the Centre for Vision Speech and Signal Processing (CVSSP) at the University of Surrey, in collaboration with the Digital World Research Centre (DWRC) at Surrey, and the University of Salford. The postholder will be based in CVSSP and work under the direction of Prof Plumbley and Co-Investigators Dr Wenwu Wang and Dr Philip Jackson. For more about the project see:
http://cvssp.org/projects/making_sense_of_sounds/ 
CVSSP is an International Centre of Excellence for research in Audio-Visual Machine Perception, with 125 researchers and a grant portfolio of £20M. The Centre has state-of-the-art acoustic capture and analysis facilities enabling research into audio source separation, music transcription and spatial audio. Audio-visual compute resources include 700 cores and a 50-GPU machine learning cluster with 500TB of online storage. Informal enquiries are welcome, to: Prof Mark Plumbley (m.plumbley@surrey.ac.uk), Dr Wenwu Wang (w.wang@surrey.ac.uk), or Dr Philip Jackson (p.jackson@surrey.ac.uk).
For more information and to apply online, please visit:
https://jobs.surrey.ac.uk/021518
We acknowledge, understand and embrace diversity.
============================================================
--
Prof Mark D Plumbley
Professor of Signal Processing
Centre for Vision, Speech and Signal Processing (CVSSP)
University of Surrey, Guildford, Surrey, GU2 7XH, UK
Email: m.plumbley@surrey.ac.uk
===========================================================
LVA/ICA 2018
14th International Conference on Latent Variable Analysis and Signal Separation
July 2-6, 2018, University of Surrey, Guildford, UK
http://cvssp.org/events/lva-ica-2018
===========================================================




Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Wednesday, April 11, 2018

Paris Machine Learning #8 Season 5: Chat with Self-Driving Car Engineers at Voyage, Funding AI at Zeroth, NVIDIA GTC, Quality Inspection at Scortex, and Low-Rank Matrices


This meetup is going to be pretty exciting !




We will have a chat with four self-driving car engineers from Voyage, then we will learn what happened at NVIDIA's GTC event, talk about quality inspection with Scortex, hear about funding with Zeroth.AI, and finally hear about finding the low rank of matrices without factorization!

Here is the schedule:
6:30 PM: doors open; 6:45 PM: talks begin; 9:00 PM: talks end; 10:00 PM: end

Presentation of the meetup: Franck Bardol, Jacqueline Forien, Igor Carron

Short announcement: Sami Moustachir, Data for Good - announcing a project for a Hippocratic oath for data workers

As part of a project of the Data For Good association, we are putting together a code of conduct, or "checklist", for data scientists and anyone working with data. To that end, we have created a first questionnaire for data scientists and anyone else working with data, to help us build a first proposal.

Streaming is here:

The roundtable starts at 7:10 PM Paris time.

--- Chat with self-driving car engineers Tarin Ziyaee, Emrah Adamey, Nishanth Alapati and Tarek El-Gaaly of Voyage (https://voyage.auto/). If you want to ask questions and you are not on site, send your question with the hashtag #MLParis on Twitter.

We're bringing self-driving cars to a retirement community (and city) like no other: The Villages, Florida. With 125,000 residents, 750 miles of road and 3 distinct downtowns, The Villages is a truly special place to live.
Talks :

--- Guillaume Barat, NVIDIA, NVIDIA updates (https://www.nvidia.com/en-us/gtc/topics/deep-learning-and-ai/) - How to accelerate AI ?

NVIDIA will recap the announcements made at GTC (the GPU Technology Conference) and explain how to accelerate AI workloads.

--- Pierre Gutierrez, Scortex.io (http://scortex.io), Automating quality visual inspection using deep learning

Driven by Industry 4.0, Scortex deploys artificial intelligence at the heart of factories.
We offer a smart visual inspection solution for quality control. Scortex's turnkey platform enables manufacturing companies to automate their most complex inspection tasks.
In this talk, we'll share Scortex's experience with computer vision for visual inspection in factory environments. We will explain what our current challenges are and how we plan to solve them.
Then, on a real use case, we will discuss how we generate data through our own acquisition system and the advantages and drawbacks of this approach from a machine learning point of view. We will also discuss our labelling process as well as the approaches we are considering to reduce the labelling effort on our side.


--- A talk about the AI investments made at Zeroth

-- Wenjie Zheng, Learning Low-rank Matrices Distributedly without Factorization

Learning low-rank matrices is a problem of great importance in statistics, machine learning, computer vision and recommender systems.
Because of its NP-hard nature, a principled approach is to solve its tightest convex relaxation: trace norm minimization.
Among various algorithms capable of solving this optimization is the Frank-Wolfe method, which is particularly suitable for high-dimensional matrices.
In preparation for the use of distributed infrastructures to further accelerate the computation, this study aims at exploring the possibility of executing the Frank-Wolfe algorithm in a star network with the Bulk Synchronous Parallel (BSP) model and investigating its efficiency both theoretically and empirically.
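
To make the trace-norm/Frank-Wolfe connection concrete, here is a hedged single-machine sketch on a toy matrix completion problem (the distributed/BSP aspects studied in the talk are omitted): each Frank-Wolfe iteration only needs the top singular vector pair of the gradient, which is why the method scales to large matrices.

```python
# Frank-Wolfe for trace-norm-constrained matrix completion (toy example).
import numpy as np

rng = np.random.default_rng(5)

# Observe a random 30% of a rank-3 matrix
n, m, r = 60, 40, 3
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, m))
mask = rng.random((n, m)) < 0.3
tau = np.linalg.norm(M, ord='nuc')              # toy choice of radius; tuned in practice

X = np.zeros((n, m))                            # feasible starting point
for t in range(200):
    G = mask * (X - M)                          # gradient of 0.5 * ||mask*(X - M)||^2
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    S = -tau * np.outer(U[:, 0], Vt[0])         # linear minimizer over {||S||_* <= tau}
    gamma = 2.0 / (t + 2.0)                     # standard Frank-Wolfe step size
    X = (1 - gamma) * X + gamma * S

print("relative error on unobserved entries:",
      np.linalg.norm((~mask) * (X - M)) / np.linalg.norm((~mask) * M))
```

In practice the full SVD above would be replaced by a power iteration for the leading singular vectors, which is the cheap step the BSP/star-network analysis in the talk revolves around.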







Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, April 10, 2018

Videos: The "Institute for Advanced Study - Princeton University Joint Symposium on 'The Mathematical Theory of Deep Neural Networks'"

Adam followed through with videos of the awesome workshop he co-organized last month:
Hi Igor,

Thanks for posting about our recent workshop --- The "Institute for Advanced Study - Princeton University Joint Symposium on 'The Mathematical Theory of Deep Neural Networks'" --- last month. I just wanted to follow up and let you know that for those that missed the live-stream, we have put videos of all the talks up online:

https://www.youtube.com/playlist?list=PLWQvhvMdDChyI5BdVbrthz5sIRTtqV6Jw

I hope you and your readers enjoy!

Cheers,

-Adam
----------------------------
Adam Charles
Post-doctoral associate
Princeton Neuroscience Institute
Princeton, NJ, 08550

Thanks Adam ! Here are the videos:

1. Adam Charles: Introductory remarks (9:10)
2. Sanjeev Arora: Why do deep nets generalize, that is, predict well on unseen data (56:17)
3. Sebastian Musslick: Multitasking Capability vs Learning Efficiency in Neural Network Architectures (59:34)
4. Joan Bruna: On the Optimization Landscape of Neural Networks (48:01)
5. Andrew Saxe: A theory of deep learning dynamics: Insights from the linear case (59:44)
6. Anna Gilbert: Toward Understanding the Invertibility of Convolutional Neural Networks (51:13)
7. Nadav Cohen: On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization (1:03:57)


Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Monday, April 09, 2018

Efficient Neural Architecture Search via Parameter Sharing - implementation -

Melody mentions on her Twitter feed that an implementation of her work is now available.




We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected subgraph is trained to minimize a canonical cross entropy loss. Thanks to parameter sharing between child models, ENAS is fast: it delivers strong empirical performances using much fewer GPU-hours than all existing automatic model design approaches, and notably, 1000x less expensive than standard Neural Architecture Search. On the Penn Treebank dataset, ENAS discovers a novel architecture that achieves a test perplexity of 55.8, establishing a new state-of-the-art among all methods without post-training processing. On the CIFAR-10 dataset, ENAS designs novel architectures that achieve a test error of 2.89%, which is on par with NASNet (Zoph et al., 2018), whose test error is 2.65%.

The implementation in TensorFlow is here: https://github.com/melodyguan/enas
and in PyTorch: https://github.com/carpedm20/ENAS-pytorch
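
As a rough, heavily simplified sketch of the parameter-sharing idea (not the authors' code, and far from the full ENAS search space): the shared model owns one weight matrix per layer, the "architecture" is just a choice of activation per layer, sampled children reuse the shared weights, and the controller logits are updated with REINFORCE on the child's validation accuracy (no baseline, for brevity).

```python
# Toy parameter-sharing architecture search with a REINFORCE-trained controller.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data: the label depends on the sign of the first feature
X = torch.randn(400, 8)
y = (X[:, 0] > 0).long()
Xtr, ytr, Xval, yval = X[:300], y[:300], X[300:], y[300:]

ops = [torch.relu, torch.tanh, torch.sigmoid]            # candidate operations
lin1, lin2, lin3 = (torch.nn.Linear(8, 16),
                    torch.nn.Linear(16, 16),
                    torch.nn.Linear(16, 2))              # shared weights
logits = torch.zeros(2, len(ops), requires_grad=True)    # controller: one choice per layer

shared = list(lin1.parameters()) + list(lin2.parameters()) + list(lin3.parameters())
w_opt = torch.optim.Adam(shared, lr=1e-2)
c_opt = torch.optim.Adam([logits], lr=5e-2)

def child_forward(x, arch):
    """Run the child defined by arch = (op index for layer 1, op index for layer 2)."""
    h = ops[arch[0]](lin1(x))
    h = ops[arch[1]](lin2(h))
    return lin3(h)

for step in range(300):
    dist = torch.distributions.Categorical(logits=logits)
    arch = [int(a) for a in dist.sample()]               # sample a child architecture

    # (1) train the shared weights on the training data with this child
    w_opt.zero_grad()
    F.cross_entropy(child_forward(Xtr, arch), ytr).backward()
    w_opt.step()

    # (2) REINFORCE update of the controller on the child's validation accuracy
    with torch.no_grad():
        reward = (child_forward(Xval, arch).argmax(1) == yval).float().mean()
    c_opt.zero_grad()
    (-dist.log_prob(torch.tensor(arch)).sum() * reward).backward()
    c_opt.step()

print("ops preferred by the controller:", [int(i) for i in logits.argmax(dim=1)])
```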







Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !

Saturday, April 07, 2018

Saturday Morning Videos: Bandit Convex Optimization, PGMO Lecture 1 and 2


Sebastien gave four lectures on Bandit Convex Optimization for the Gaspard Monge Program in Optimization (PGMO). Two of them are on Sebastien's YouTube channel. Here is the abstract:

The multi-armed bandit and its variants have been around for more than 80 years, with applications ranging from medical trials in the 1930s to ad placement in the 2010s. In this mini-course I will focus on a groundbreaking model introduced in the 1990s which gets rid of the unrealistic i.i.d. assumption that is standard in statistics and learning theory. This paradigm shift leads to exciting new mathematical and algorithmic challenges. I will focus the lectures on the foundational results of this burgeoning field, as well as their connections with classical problems in mathematics such as the geometry of martingales and high dimensional phenomena.
  • Lecture 1: Introduction to regret. Game theoretic viewpoint (duality, Bayesian version of the game) and derivation of the minimax regret via geometry of martingales (brief recall of type/cotype and entropic proof for ell_1). 
  • Lecture 2: Introduction to the mirror descent algorithm. Connections with competitive analysis in online computations will also be discussed. 
  • Lecture 3: Bandit Linear Optimization. Two proofs of optimal regret: one via low-rank decomposition in the information theoretic argument, and the other via mirror descent with self-concordant barriers. 
  • Lecture 4 : Bandit Convex optimization 1. Kernel methods for online learning, Bernoulli convolution based kernel. 2. Gaussian approximation of Bernoulli convolutions, and restart type strategies.


Bandit Convex Optimization, PGMO Lecture 1 (slides)




Bandit Convex Optimization, PGMO Lecture 2 (slides)



Bandit Convex Optimization, PGMO Lecture 3 slides and Lecture 4 slides.



Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, April 06, 2018

Blocked Direct Feedback Alignment: Exploring the Benefits of Direct Feedback Alignment

Interesting exploration of DFA concepts !


Blocked Direct Feedback Alignment: Exploring the Benefits of Direct Feedback Alignment by Mateo Espinosa Zarlenga, Eyvind Niklasson

Backpropagation is undoubtedly the preferred method for training deep feedforward neural networks. While this method has proven its effectiveness on applications ranging over a myriad of different fields, it has some well-known drawbacks. Moreover, this algorithm is arguably far from being biologically plausible, which makes it very unattractive as a crucial step of any attempt at an accurate model of our brain. Alternatives like feedback alignment and direct feedback alignment have recently been proposed as possibly more biologically plausible methods than backpropagation that also correct some of the known drawbacks of this algorithm. For this project, we explore the uses of this last method, direct feedback alignment (DFA), by looking at variants that could lead to improvements in both training convergence times and testing-time accuracies. We present two main variants: Feedback Propagation (FP) and Blocked Direct Feedback Alignment (BDFA). These variants of DFA attempt to find an equilibrium between DFA and backpropagation that takes advantage of the benefits of both methods. In our experiments we empirically show that BDFA outperforms both DFA and backpropagation in terms of convergence time and testing performance when used to train very deep neural networks with fully connected layers on MNIST and notMNIST.
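
For context, here is a minimal numpy sketch of plain direct feedback alignment (DFA) on a toy task; it does not implement the paper's FP or BDFA variants, only the baseline they build on: the output error is sent straight to every hidden layer through fixed random matrices instead of being backpropagated through transposed weights.

```python
# Direct feedback alignment for a small two-hidden-layer MLP (toy data).
import numpy as np

rng = np.random.default_rng(6)

# Toy binary classification data
X = rng.normal(size=(512, 20))
y = (np.sin(X[:, 0]) + X[:, 1] > 0).astype(float).reshape(-1, 1)

def sigmoid(t): return 1.0 / (1.0 + np.exp(-t))
def dtanh(a): return 1.0 - np.tanh(a) ** 2

n_in, n_h, n_out, lr = 20, 64, 1, 0.05
W1 = rng.normal(scale=0.1, size=(n_in, n_h))
W2 = rng.normal(scale=0.1, size=(n_h, n_h))
W3 = rng.normal(scale=0.1, size=(n_h, n_out))
B1 = rng.normal(scale=0.1, size=(n_out, n_h))   # fixed random feedback matrices
B2 = rng.normal(scale=0.1, size=(n_out, n_h))   # (never trained)

for epoch in range(200):
    a1 = X @ W1; h1 = np.tanh(a1)
    a2 = h1 @ W2; h2 = np.tanh(a2)
    yhat = sigmoid(h2 @ W3)
    e = yhat - y                                 # output error (cross-entropy w.r.t. pre-sigmoid)
    # DFA: project the output error directly onto each hidden layer
    d2 = (e @ B2) * dtanh(a2)
    d1 = (e @ B1) * dtanh(a1)
    W3 -= lr * h2.T @ e / len(X)
    W2 -= lr * h1.T @ d2 / len(X)
    W1 -= lr * X.T @ d1 / len(X)

print("train accuracy:", ((yhat > 0.5) == (y > 0.5)).mean())
```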




Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

SUNLayer: Stable denoising with generative networks

Dustin just let me know of the following item to be added to The Great Convergence:

Hi Igor, 
I wanted to point you to a recent paper on the arXiv: 
I think you'll like Figure 1 in particular.
Apparently, GANs provide signal models that allow for extremely good denoising in a high-noise regime. To denoise, we hunt for the point in the GAN model that's closest to the noisy image. Surprisingly, local minimization works well in practice. To help explain this, we provide theory for a certain model of neural networks using techniques from spherical harmonics. This is joint work with Soledad Villar (NYU).
Cheers,
Dustin

Yes, you're right, I do like Figure 1 ! 



It has been experimentally established that deep neural networks can be used to produce good generative models for real world data. It has also been established that such generative models can be exploited to solve classical inverse problems like compressed sensing and super resolution. In this work we focus on the classical signal processing problem of image denoising. We propose a theoretical setting that uses spherical harmonics to identify what mathematical properties of the activation functions will allow signal denoising with local methods.
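
A hedged sketch of the denoising scheme described above: given a fixed generator G (here an untrained placeholder MLP standing in for a trained GAN generator), the noisy signal is denoised by locally minimizing ||G(z) - y||^2 over the latent code z, which is the local minimization the letter says works surprisingly well.

```python
# Denoising by searching the range of a fixed generator.
import torch

torch.manual_seed(0)

G = torch.nn.Sequential(                 # placeholder generator R^8 -> R^100
    torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 100)
)
for p in G.parameters():
    p.requires_grad_(False)              # the generator is fixed

z_true = torch.randn(8)
clean = G(z_true)
noisy = clean + 1.0 * torch.randn(100)   # heavy noise

z = torch.zeros(8, requires_grad=True)   # local minimization from a fixed start
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(1000):
    opt.zero_grad()
    loss = ((G(z) - noisy) ** 2).sum()
    loss.backward()
    opt.step()

with torch.no_grad():
    print("noisy error:", (noisy - clean).norm().item(),
          "denoised error:", (G(z) - clean).norm().item())
```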



Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, April 05, 2018

Processeurs optiques et traitement de données de grande dimension/ Optical Co-Processors and High Dimensional Data Processing, Paris, April 5, 2018

So today, we'll give a presentation of where we are at LightOn. Both Laurent and I will be speaking at the Paris Science and Data event. Here is the announcement on Inria's website. Nicolas Keriven is one of our first alpha users of LightOn Cloud.




Paris Science & Data is a series of events organized jointly by the Cap Digital cluster, Inria and PSL, aimed at presenting research on data science and its applications in academia and in industry.
On the program of this 8th conference, speakers will cover the following topics:
  • From computational imaging to optical computing (Laurent Daudet - Professeur Paris Diderot/Institut Langevin & CTO LightOn)
  • Online sketches with random features (Nicolas Keriven - Chercheur ENS, CFM-ENS ''Laplace'' chair in Data Science)
  • LightOn: a new generation of optical co-processors (Igor Carron - CEO LightOn)




Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.