Tuesday, June 25, 2019

CfP: The 2019 Conference on Mathematical Theory of Deep Neural Networks (DeepMath 2019) Princeton Club, New York City, Oct 31-Nov 1 2019.

** Nuit Blanche is now on Twitter: @NuitBlog **



Adam let me know of the following:

Dear Igor
I hope this email finds you well. Thank you for posting our conference on your blog earlier this year. As the deadline for submission is fast approaching (June 28th), I was hoping you would post once again. We have an amazing lineup of speakers (below), and I expect the subject matter is very much in line with your readership!
Cheers!

-Adam


Here is the announcement:

ANNOUNCEMENT
The 2019 Conference on Mathematical Theory of Deep Neural Networks (DeepMath 2019)
Princeton Club, New York City, Oct 31-Nov 1 2019.
Web: https://www.deepmath-conference.com/
======= Important Dates =======
Submission deadline for 1-page abstracts: June 28, 2019
Notification: TBA.
Conference: Oct 31-Nov 1 2019.
======= Speakers =======
Sanjeev Arora (Princeton University, Keynote Speaker), Anima Anandkumar (CalTech), Yasaman Bahri (Google), Minmin Chen (Google), Michael Elad (Technion), Surya Ganguli (Stanford), Tomaso Poggio (MIT), David Schwab (CUNY), Shai Shalev-Shwartz (Hebrew University), Haim Sompolinsky (Hebrew University and Harvard), and Naftali Tishby (Hebrew University).
======= Call for Abstracts =======
In addition to these high-profile invited speakers, we invite 1-page non-archival abstract submissions. Abstracts will be reviewed double-blind and presented as posters.
To complement the wealth of conferences focused on applications, all submissions for DeepMath 2019 must target theoretical and mechanistic understanding of the underlying properties of neural networks.
Insights may come from any discipline and we encourage submissions from researchers working in computer science, engineering, mathematics, neuroscience, physics, psychology, statistics, or related fields.
Topics may address any area of deep learning theory, including architectures, computation, expressivity, generalization, optimization, representations, and may apply to any or all network types including fully connected, recurrent, convolutional, randomly connected, or other network topologies.
======= Conference Topic =======
Recent advances in deep neural networks (DNNs), combined with open, easily-accessible implementations, have made DNNs a powerful, versatile method used widely in both machine learning and neuroscience. These advances in practical results, however, have far outpaced a formal understanding of these networks and their training. Recently, long-past-due theoretical results have begun to emerge, shedding light on the properties of large, adaptive, distributed learning architectures.
Following the success of the 2018 IAS-Princeton joint symposium on the same topic (https://sites.google.com/site/princetondeepmath/home), the 2019 meeting is more centrally located and broader in scope, but remains focused on rigorous theoretical understanding of deep neural networks.

Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn, or the Advanced Matrix Factorization group on LinkedIn.


Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog

Monday, June 24, 2019

Meta-learning of textual representations - implementation -




Recent progress in AutoML has led to state-of-the-art methods (e.g., AutoSKLearn) that can be readily used by non-experts to approach any supervised learning problem. While these methods are quite effective, they are still limited in the sense that they work only for tabular (matrix-formatted) data. This paper describes one step forward in trying to automate the design of supervised learning methods in the context of text mining. We introduce a meta-learning methodology for automatically obtaining a representation for text mining tasks starting from raw text. We report experiments considering 60 different textual representations and more than 80 text mining datasets associated with a wide variety of tasks. Experimental results show the proposed methodology is a promising solution for obtaining highly effective off-the-shelf text classification pipelines.
 An implementation is here: https://github.com/jorgegus/autotext
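The paper's meta-learner searches over 60 representations; as a toy illustration of the underlying idea (not the authors' pipeline, and with made-up helper names), here is a sketch that scores a few candidate text representations on held-out data and keeps the winner:

```python
from collections import Counter

def bag_of_words(text):
    return Counter(text.lower().split())

def char_ngrams(text, n=3):
    t = text.lower()
    return Counter(t[i:i + n] for i in range(len(t) - n + 1))

def centroid(vectors):
    total = Counter()
    for v in vectors:
        total.update(v)
    k = len(vectors)
    return {f: c / k for f, c in total.items()}

def cosine(a, b):
    dot = sum(a[f] * b.get(f, 0.0) for f in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def accuracy(repr_fn, train, test):
    # Fit one centroid per class in the chosen representation ...
    by_label = {}
    for text, label in train:
        by_label.setdefault(label, []).append(repr_fn(text))
    cents = {lab: centroid(vs) for lab, vs in by_label.items()}
    # ... then classify held-out texts by nearest centroid.
    hits = sum(
        max(cents, key=lambda lab: cosine(repr_fn(text), cents[lab])) == label
        for text, label in test
    )
    return hits / len(test)

def select_representation(train, test, candidates):
    # "Meta-learning" in miniature: score every candidate representation
    # on held-out data and keep the winner.
    scores = {name: accuracy(fn, train, test) for name, fn in candidates.items()}
    return max(scores, key=scores.get), scores
```

The real system replaces the held-out scoring with a meta-model trained across many datasets, so it can pick a representation without exhaustively evaluating all of them.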


Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email.

About myself: LightOn || Google Scholar || LinkedIn || @IgorCarron || Homepage || ArXiv

One-Shot Neural Architecture Search via Compressive Sensing


Using sparse recovery in neural architecture search: there has to be a meta-thread somewhere here!


Neural architecture search (NAS), or automated design of neural network models, remains a very challenging meta-learning problem. Several recent works (called "one-shot" approaches) have focused on dramatically reducing NAS running time by leveraging proxy models that still provide architectures with competitive performance. In our work, we propose a new meta-learning algorithm that we call CoNAS, or Compressive sensing-based Neural Architecture Search. Our approach merges ideas from one-shot approaches with iterative techniques for learning low-degree sparse Boolean polynomial functions. We validate our approach on several standard test datasets, discover novel architectures hitherto unreported, and achieve competitive (or better) results in both performance and search time compared to existing NAS approaches. Further, we support our algorithm with a theoretical analysis, providing upper bounds on the number of measurements needed to perform reliable meta-learning; to our knowledge, these analysis tools are novel to the NAS literature and may be of independent interest.
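For readers curious about the compressive sensing angle: here is a minimal sketch (my own toy illustration, not the CoNAS code) of recovering a sparse, low-degree polynomial over architecture bit-strings from a few random evaluations, using orthogonal matching pursuit as the sparse recovery routine:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def monomial_features(X, degree=2):
    # Map {-1,+1}^n architecture encodings to low-degree parity monomials
    # (the basis in which the objective is assumed sparse).
    n = X.shape[1]
    feats = [np.ones(len(X))]
    for d in range(1, degree + 1):
        for idx in combinations(range(n), d):
            feats.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(feats)

def omp(A, y, sparsity):
    # Orthogonal matching pursuit: greedily pick the monomial most
    # correlated with the residual, then refit by least squares.
    residual, support = y.copy(), []
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy stand-in for the one-shot objective: a 2-sparse, degree-2 polynomial.
n_bits, m = 8, 40
X = rng.choice([-1.0, 1.0], size=(m, n_bits))   # random sub-architectures
y = 3.0 * X[:, 0] * X[:, 1] - 2.0 * X[:, 4]     # "measured" performance
A = monomial_features(X, degree=2)
coef = omp(A, y, sparsity=2)
```

The compressive sensing promise is exactly this: m = 40 noiseless measurements suffice to identify 2 active monomials out of 37, far fewer than exhaustive evaluation would need.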



Friday, June 21, 2019

An Open Source AutoML Benchmark - implementation -





In recent years, an active field of research has developed around automated machine learning (AutoML). Unfortunately, comparing different AutoML systems is hard and often done incorrectly. We introduce an open, ongoing, and extensible benchmark framework which follows best practices and avoids common mistakes. The framework is open-source, uses public datasets and has a website with up-to-date results. We use the framework to conduct a thorough comparison of 4 AutoML systems across 39 datasets and analyze the results. 
An implementation of the benchmark is here: https://github.com/openml/automlbenchmark/
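As a rough sketch of what such a benchmark loop looks like (a hypothetical interface of my own, not the framework's actual API; the real framework runs on fixed OpenML task splits with time budgets):

```python
import statistics

def run_benchmark(systems, datasets, folds=3):
    # Hypothetical runner: every system sees every dataset under the same
    # splits, mirroring the benchmark's fixed evaluation protocol.
    results = {}
    for sys_name, fit_predict in systems.items():
        for ds_name, (X, y) in datasets.items():
            scores = []
            for fold in range(folds):
                # Simple modular split stands in for the benchmark's
                # fixed cross-validation folds.
                test_idx = [i for i in range(len(X)) if i % folds == fold]
                train_idx = [i for i in range(len(X)) if i % folds != fold]
                preds = fit_predict(
                    [X[i] for i in train_idx], [y[i] for i in train_idx],
                    [X[i] for i in test_idx],
                )
                truth = [y[i] for i in test_idx]
                scores.append(sum(p == t for p, t in zip(preds, truth)) / len(truth))
            results[(sys_name, ds_name)] = statistics.mean(scores)
    return results
```

The key point the paper makes is that this outer loop, splits, budgets, and metrics included, must be identical for every system; otherwise the comparison is apples to oranges.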



Probabilistic Rollouts for Learning Curve Extrapolation Across Hyperparameter Settings - implementation -




We propose probabilistic models that can extrapolate learning curves of iterative machine learning algorithms, such as stochastic gradient descent for training deep networks, based on training data with variable-length learning curves. We study instantiations of this framework based on random forests and Bayesian recurrent neural networks. Our experiments show that these models yield better predictions than state-of-the-art models from the hyperparameter optimization literature when extrapolating the performance of neural networks trained with different hyperparameter settings.
An implementation is here: https://github.com/gmatilde/vdrnn
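A baby version of learning curve extrapolation fits a power law to the observed prefix of the curve; here is a numpy sketch (my illustration of the general idea, not the paper's probabilistic models):

```python
import numpy as np

def fit_power_law(t, y, c_grid=np.arange(0.1, 3.01, 0.05)):
    # Fit acc(t) ~ a - b * t**(-c). For a fixed exponent c the remaining
    # parameters (a, b) are linear, so solve them in closed form and keep
    # the exponent with the smallest squared error.
    best = None
    for c in c_grid:
        A = np.column_stack([np.ones_like(t), -(t ** (-c))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = float(((A @ coef - y) ** 2).sum())
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], c)
    return best[1:]

def extrapolate(t_future, params):
    a, b, c = params
    return a - b * t_future ** (-c)

# Toy curve: validation accuracy observed for the first five epochs only.
t_seen = np.arange(1.0, 6.0)
acc_seen = 0.9 - 0.5 / t_seen
params = fit_power_law(t_seen, acc_seen)
pred_50 = extrapolate(50.0, params)
```

The paper's models go well beyond this point estimate: they are probabilistic, handle variable-length curves, and condition on the hyperparameter setting, which is what makes cross-configuration extrapolation possible.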


Thursday, June 20, 2019

Improving Automated Variational Inference with Normalizing Flows - implementation -




We describe a framework for performing automatic Bayesian inference in probabilistic programs with fixed structure. Our framework takes a probabilistic program with fixed structure as input and outputs a learnt variational distribution approximating the posterior. For this purpose, we exploit recent advances in representing distributions with neural networks. We implement our approach in the Pyro probabilistic programming language, and validate it on a diverse collection of Bayesian regression models translated from Stan, showing improved inference and predictive performance relative to the existing state-of-the-art in automated inference for this class of models.

 An implementation is here: https://github.com/stefanwebb/autoguides
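The Pyro implementation is linked above; as a minimal numpy illustration of the key ingredient, here is a planar normalizing flow with the log-det-Jacobian term that the change-of-variables formula requires (a sketch of the idea, not the paper's code):

```python
import numpy as np

def planar_flow(z, u, w, b):
    # One planar flow step f(z) = z + u * tanh(w.z + b) and the
    # log|det Jacobian| needed for the change-of-variables formula.
    a = np.tanh(z @ w + b)                  # scalar per sample
    f = z + np.outer(a, u)
    psi = np.outer(1.0 - a ** 2, w)         # derivative of tanh term
    log_det = np.log(np.abs(1.0 + psi @ u))
    return f, log_det

def flow_log_prob(z0, flows):
    # Start from a standard-normal base density, then subtract each
    # step's log-det to get the density of the transformed sample.
    d = z0.shape[1]
    log_q = -0.5 * (z0 ** 2).sum(axis=1) - 0.5 * d * np.log(2 * np.pi)
    z = z0
    for (u, w, b) in flows:
        z, log_det = planar_flow(z, u, w, b)
        log_q = log_q - log_det
    return z, log_q
```

Stacking such invertible steps on a simple base distribution, with parameters fit by maximizing the ELBO, is what lets an automated guide approximate posteriors that a plain mean-field Gaussian cannot.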


Efficient Forward Architecture Search - implementation -




In this work, we propose a neural architecture search (NAS) algorithm that iteratively augments existing networks by adding shortcut connections and layers. At each iteration, we greedily select among the most cost-efficient models a parent model, and insert into it a number of candidate layers. To learn which combination of additional layers to keep, we simultaneously train their parameters and use feature selection techniques to extract the most promising candidates which are then jointly trained with the parent model. The result of this process is excellent statistical performance with relatively low computational cost. Furthermore, unlike recent studies of NAS that almost exclusively focus on the small search space of repeatable network modules (cells), this approach also allows direct search among the more general (macro) network structures to find cost-effective models when macro search starts with the same initial models as cell search does. Source code is available at https://github.com/microsoft/petridishnn
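The candidate-layer step can be caricatured as greedy forward selection against the parent model's residual; here is a numpy sketch (a toy analogue of my own, not the Petridish code):

```python
import numpy as np

def forward_select(candidates, y, keep=2):
    # Toy analogue of the candidate-layer step: greedily keep the
    # candidate outputs that most reduce the parent model's residual.
    residual, chosen = y.copy(), []
    for _ in range(keep):
        # Score each unused candidate by correlation with the residual.
        scores = {
            name: abs(float(out @ residual)) / (np.linalg.norm(out) + 1e-12)
            for name, out in candidates.items() if name not in chosen
        }
        best = max(scores, key=scores.get)
        chosen.append(best)
        # Refit the kept candidates jointly (the "joint training" step).
        A = np.column_stack([candidates[n] for n in chosen])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coef
    return chosen, residual
```

In the real algorithm the "candidates" are shortcut connections and layers trained inside the network, and feature selection plays the role of the correlation score here, but the grow-then-prune loop has the same shape.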


Wednesday, June 19, 2019

Bayesian Optimization over Sets - implementation -



We propose a Bayesian optimization method over sets, to minimize a black-box function that can take a set as single input. Because set inputs are permutation-invariant and variable-length, traditional Gaussian process-based Bayesian optimization strategies which assume vector inputs can fall short. To address this, we develop a Bayesian optimization method with set kernel that is used to build surrogate functions. This kernel accumulates similarity over set elements to enforce permutation-invariance and permit sets of variable size, but this comes at a greater computational cost. To reduce this burden, we propose a more efficient probabilistic approximation which we prove is still positive definite and is an unbiased estimator of the true set kernel. Finally, we present several numerical experiments which demonstrate that our method outperforms other methods in various applications. 
The attendant implementation is here: https://github.com/jungtaekkim/bayeso
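The set kernel at the heart of the method is easy to sketch: average a base kernel over all cross pairs of elements, which makes the result permutation-invariant and defined for sets of different sizes (my toy version; the paper's contribution is a cheaper unbiased approximation of this quantity):

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    # Base kernel between individual set elements.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def set_kernel(S, T, gamma=1.0):
    # Average the element-wise kernel over all cross pairs: the sum is
    # order-independent, so the kernel is permutation-invariant, and it
    # is defined for sets of different sizes. Cost is O(|S| * |T|),
    # which is the burden the paper's subsampled estimator reduces.
    return sum(rbf(s, t, gamma) for s in S for t in T) / (len(S) * len(T))
```

A Gaussian process surrogate built from this kernel can then score candidate sets directly, which is exactly what a vector-input kernel cannot do.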




Graduated Optimisation of Black-Box Functions - implementation -




Motivated by the problem of tuning hyperparameters in machine learning, we present a new approach for gradually and adaptively optimizing an unknown function using estimated gradients. We validate the empirical performance of the proposed idea on both low and high dimensional problems. The experimental results demonstrate the advantages of our approach for tuning high dimensional hyperparameters in machine learning. 
 The attendant implementation is here: https://github.com/christiangeissler/gradoptbenchmark
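Graduated optimisation with estimated gradients can be sketched in a few lines: estimate the gradient of a Gaussian-smoothed surrogate from function evaluations only, then shrink the smoothing radius (my illustration of the general recipe, not the authors' algorithm):

```python
import numpy as np

def estimated_grad(f, x, sigma, n_samples=32, rng=None):
    # Two-point Gaussian-smoothing estimator of the gradient of the
    # smoothed objective f_sigma(x) = E[f(x + sigma * u)].
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.normal(size=x.shape)
        g += (f(x + sigma * u) - f(x - sigma * u)) / (2 * sigma) * u
    return g / n_samples

def graduated_descent(f, x0, sigmas=(1.0, 0.3, 0.1), steps=60, lr=0.1):
    # Graduated optimisation: descend on a heavily smoothed surrogate
    # first, then shrink sigma to home in on the original objective.
    x = np.asarray(x0, dtype=float)
    rng = np.random.default_rng(0)
    for sigma in sigmas:
        for _ in range(steps):
            x = x - lr * estimated_grad(f, x, sigma, rng=rng)
    return x
```

The large early sigma washes out narrow spurious minima, which is why this style of method is attractive for noisy, non-convex hyperparameter landscapes.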




Accelerating the Nelder-Mead Method with Predictive Parallel Evaluation - implementation -




The Nelder-Mead (NM) method has recently been proposed for hyperparameter optimization (HPO) of deep neural networks. However, the NM method is not well suited to parallelization, which is a serious drawback for its practical application in HPO. In this study, we propose a novel approach to accelerating the NM method with parallel computing resources. The numerical results indicate that the proposed method is significantly faster and more efficient than previous naive approaches on tabular HPO benchmarks.
The attendant implementation is here.
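The gist of parallelizing Nelder-Mead is to evaluate an iteration's candidate points speculatively in parallel rather than one after another; here is a toy sketch of that idea (not the paper's predictive scheme) using a thread pool:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def nm_step_parallel(f, simplex, values):
    # One Nelder-Mead iteration that speculatively evaluates the
    # reflection, expansion, and both contraction points in parallel,
    # instead of one at a time as the sequential method does.
    order = np.argsort(values)
    simplex, values = simplex[order], values[order]
    centroid = simplex[:-1].mean(axis=0)
    worst = simplex[-1]
    candidates = {
        "reflect": centroid + 1.0 * (centroid - worst),
        "expand": centroid + 2.0 * (centroid - worst),
        "outside": centroid + 0.5 * (centroid - worst),
        "inside": centroid - 0.5 * (centroid - worst),
    }
    with ThreadPoolExecutor() as pool:
        futures = {k: pool.submit(f, x) for k, x in candidates.items()}
        scores = {k: fut.result() for k, fut in futures.items()}
    # Simplified NM acceptance logic (no shrink step), with every
    # candidate score already in hand.
    if scores["reflect"] < values[0]:
        pick = "expand" if scores["expand"] < scores["reflect"] else "reflect"
    elif scores["reflect"] < values[-2]:
        pick = "reflect"
    else:
        pick = "outside" if scores["outside"] < scores["inside"] else "inside"
    simplex[-1], values[-1] = candidates[pick], scores[pick]
    return simplex, values
```

The sequential method only needs one of these evaluations per step, so speculation wastes some compute; the paper's contribution is predicting which evaluations are worth launching, which is where the efficiency gain comes from.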



