The organizers (David Blei, Ian Goodfellow, Balaji Lakshminarayanan, Shakir Mohamed, Rajesh Ranganath, Dustin Tran) made the papers of the ICML 2017 workshop on Implicit Models available here:
- A-NICE-MC: Adversarial Training for MCMC Jiaming Song, Shengjia Zhao, Stefano Ermon
- ABC-GAN: Adaptive Blur and Control for improved training stability of Generative Adversarial Networks Igor Susmelj, Eirikur Agustsson, Radu Timofte
- Adversarial Inversion for Amortized Inference Zenna Tavares, Armando Solar-Lezama
- Adversarial Variational Inference for Tweedie Compound Poisson Models Yaodong Yang, Sergey Demyanov, Yuanyuan Liu, Jun Wang
- Adversarially Learned Boundaries in Instance Segmentation Amy Zhang
- Approximate Inference with Amortised MCMC Yingzhen Li, Richard E. Turner, Qiang Liu
- Can GAN Learn Topological Features of a Graph? Weiyi Liu, Pin-Yu Chen, Hal Cooper, Min Hwan Oh, Sailung Yeung, Toyotaro Suzumura
- Conditional generation of multi-modal data using constrained embedding space mapping Subhajit Chaudhury, Sakyasingha Dasgupta, Asim Munawar, Md. A. Salam Khan, Ryuki Tachibana
- Deep Hybrid Discriminative-Generative Models for Semi-Supervised Learning Volodymyr Kuleshov, Stefano Ermon
- ELFI, a software package for likelihood-free inference Jarno Lintusaari, Henri Vuollekoski, Antti Kangasrääsiö, Kusti Skyten, Marko Järvenpää, Michael Gutmann, Aki Vehtari, Jukka Corander, Samuel Kaski
- Flow-GAN: Bridging implicit and prescribed learning in generative models Aditya Grover, Manik Dhar, Stefano Ermon
- GANs Powered by Autoencoding — A Theoretic Reasoning Zhifei Zhang, Yang Song, Hairong Qi
- Geometric GAN Jae Hyun Lim, Jong Chul Ye
- Gradient Estimators for Implicit Models Yingzhen Li, Richard E. Turner
- Implicit Manifold Learning on Generative Adversarial Networks Kry Yik Chau Lui, Yanshuai Cao, Maxime Gazeau, Kelvin Shuangjian Zhang
- Implicit Variational Inference with Kernel Density Ratio Fitting Jiaxin Shi, Shengyang Sun, Jun Zhu
- Improved Network Robustness with Adversarial Critic Alexander Matyasko, Lap-Pui Chau
- Improved Training of Wasserstein GANs Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, Aaron Courville
- Inference in differentiable generative models Matthew M. Graham, Amos J. Storkey
- Joint Training in Generative Adversarial Networks R Devon Hjelm, Athul Paul Jacob, Yoshua Bengio
- Latent Space GANs for 3D Point Clouds Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, Leonidas Guibas
- Likelihood Estimation for Generative Adversarial Networks Hamid Eghbal-zadeh, Gerhard Widmer
- Maximizing Independence with GANs for Non-linear ICA Philemon Brakel, Yoshua Bengio
- Non-linear Mixed Effects Models: Bridging the gap between Independent Metropolis Hastings and Variational Inference Belhal Karimi
- Practical Adversarial Training with Empirical Distribution Ambrish Rawat, Mathieu Sinn, Maria-Irina Nicolae
- Recursive Cross-Domain Facial Composite and Generation from Limited Facial Parts Yang Song, Zhifei Zhang, Hairong Qi
- Resampled Proposal Distributions for Variational Inference and Learning Aditya Grover, Ramki Gummadi, Miguel Lázaro-Gredilla, Dale Schuurmans, Stefano Ermon
- Rigorous Analysis of Adversarial Training with Empirical Distributions Mathieu Sinn, Ambrish Rawat, Maria-Irina Nicolae
- Robust Controllable Embedding of High-Dimensional Observations of Markov Decision Processes Ershad Banijamali, Rui Shu, Mohammad Ghavamzadeh, Hung Bui
- Spectral Normalization for Generative Adversarial Networks Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida
- Stabilizing the Conditional Adversarial Network by Decoupled Learning Zhifei Zhang, Yang Song, Hairong Qi
- Stabilizing Training of Generative Adversarial Networks through Regularization Kevin Roth, Aurelien Lucchi, Sebastian Nowozin, Thomas Hofmann
- Stochastic Reconstruction of Three-Dimensional Porous Media using Generative Adversarial Networks Lukas Mosser, Olivier Dubrule, Martin J. Blunt
- The Amortized Bootstrap Eric Nalisnick, Padhraic Smyth
- The Numerics of GANs Lars Mescheder, Sebastian Nowozin, Andreas Geiger
- Towards the Use of Gaussian Graphical Models in Variational Autoencoders Alexandra Pește, Luigi Malagò
- Training GANs with Variational Statistical Information Minimization Michael Ghaben
- Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks Jun-Yan Zhu*, Taesung Park*, Phillip Isola, Alexei A. Efros
- Unsupervised Domain Adaptation Using Approximate Label Matching Jordan T. Ash, Robert E. Schapire, Barbara E. Engelhardt
- Variance Regularizing Adversarial Learning Karan Grewal, R Devon Hjelm, Yoshua Bengio
- Variational Representation Autoencoders to Reduce Mode Collapse in GANs Akash Srivastava, Lazar Valkov, Chris Russell, Michael U. Gutmann, Charles Sutton
Dougal Sutherland, Evaluating and Training Implicit Generative Models with Two-Sample Tests
Samples from implicit generative models are difficult to judge quantitatively: particularly for images, it is typically easy for humans to identify certain kinds of samples that are very unlikely under the reference distribution, but very difficult to tell when modes are missing, or when types are merely under- or over-represented. This talk will give an overview of different approaches to evaluating the output of an implicit generative model, with a focus on identifying ways in which the model has failed. Some of these approaches also form the basis for the objective functions of GAN variants that can help avoid some of the stability and mode-dropping issues of the original GAN.
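As a minimal sketch of one such evaluation, the kernel two-sample test below computes an unbiased estimate of the squared maximum mean discrepancy (MMD) between real and generated samples; the `real`/`fake` arrays and the Gaussian kernel bandwidth are placeholder assumptions, not anything prescribed by the talk.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Y."""
    sq_dists = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd2_unbiased(real, fake, bandwidth=1.0):
    """Unbiased estimate of the squared MMD between the two sample sets."""
    Kxx = gaussian_kernel(real, real, bandwidth)
    Kyy = gaussian_kernel(fake, fake, bandwidth)
    Kxy = gaussian_kernel(real, fake, bandwidth)
    m, n = len(real), len(fake)
    # Drop diagonal terms for the unbiased within-sample averages.
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_xx + term_yy - 2 * Kxy.mean()

# Toy stand-ins for flattened real and generated samples: a value near
# zero suggests the statistic cannot tell the two sets apart, while a
# large value flags a mismatch between model and reference distribution.
rng = np.random.default_rng(0)
real = rng.normal(size=(200, 32))
fake = rng.normal(loc=0.5, size=(200, 32))
print(mmd2_unbiased(real, fake))
```

Kerrie Mengersen, Probabilistic Modelling in the Real World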
Interest is intensifying in the development and application of Bayesian approaches to estimation of real-world processes using probabilistic models. This presentation will focus on three substantive case studies in which we have been involved: protecting the Great Barrier Reef in Australia from impacts such as crown of thorns starfish and industrial dredging, reducing congestion at international airports, and predicting survival of jaguars in the Peruvian Amazon. Through these examples, we will explore current ideas about Approximate Bayesian Computation, Populations of Models, Bayesian priors and p-values, and Bayesian dynamic networks.
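To make the Approximate Bayesian Computation idea mentioned above concrete, here is a toy rejection-sampling sketch under invented numbers (not taken from any of the case studies): draw parameters from the prior, simulate data, and keep only the draws whose simulated summary statistic lands within a tolerance of the observed one.

```python
import numpy as np

rng = np.random.default_rng(1)
observed_mean = 3.2   # hypothetical observed summary statistic
epsilon = 0.1         # acceptance tolerance

accepted = []
for _ in range(100_000):
    theta = rng.uniform(0, 10)                   # draw from the prior
    simulated = rng.normal(theta, 1.0, size=50)  # simulate data from the model
    if abs(simulated.mean() - observed_mean) < epsilon:
        accepted.append(theta)                   # keep draws close to the data

# accepted now approximates samples from the posterior p(theta | data),
# without ever evaluating a likelihood function.
print(len(accepted), np.mean(accepted))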
Sanjeev Arora, Do GANs actually learn the distribution? Some theory and empirics
The Generative Adversarial Nets or GANs framework (Goodfellow et al. 2014) for learning distributions differs from older ideas such as autoencoders and deep Boltzmann machines in that it scores the generated distribution using a discriminator net, instead of a perplexity-like calculation. It appears to work well in practice; e.g., the generated images look better than those from older techniques. But how well do these nets learn the target distribution?
Paper 1 (ICML '17) shows that GAN training may not have good generalization properties; e.g., training may appear successful, yet the trained distribution may be far from the target distribution in standard metrics. We show theoretically that this can happen even though the two-person game between discriminator and generator is in near-equilibrium, where the generator appears to have "won" (with respect to natural training objectives).
Paper 2 (arXiv, June 26) empirically tests whether this lack of generalization occurs in real-life training. The paper introduces a new quantitative test for the diversity of a distribution based on the famous birthday paradox, sketched below after the references. This test reveals that distributions learned by some leading GAN techniques have fairly small support (i.e., suffer from mode collapse), which implies that they are far from the target distribution.
Paper 1: "Equilibrium and Generalization in GANs" by Arora, Ge, Liang, Ma, Zhang. (ICML 2017)
Paper 2: "Do GANs actually learn the distribution? An empirical study." by Arora and Zhang (https://arxiv.org/abs/1706.08224)
Stefano Ermon, Generative Adversarial Imitation Learning
Consider learning a policy from example expert behavior, without interaction with the expert or access to a reward or cost signal. One approach is to recover the expert’s cost function with inverse reinforcement learning, then compute an optimal policy for that cost function. This approach is indirect and can be slow. In this talk, I will discuss a new generative modeling framework for directly extracting a policy from data, drawing an analogy between imitation learning and generative adversarial networks. I will derive a model-free imitation learning algorithm that obtains significant performance gains over existing methods in imitating complex behaviors in large, high-dimensional environments. Our approach can also be used to infer the latent structure of human demonstrations in an unsupervised way. As an example, I will show a driving application where a model learned from demonstrations is able to both produce different driving styles and accurately anticipate human actions using raw visual inputs.
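A schematic PyTorch sketch of the adversarial imitation loop described above, not the paper's actual implementation: a discriminator learns to separate expert from policy state-action pairs, and its output then serves as a surrogate reward for an ordinary policy-gradient update. The dimensions and rollout data below are stand-ins.

```python
import torch
import torch.nn as nn

# Discriminator over (state, action) pairs; hypothetical dimensions.
state_dim, action_dim = 8, 2
disc = nn.Sequential(
    nn.Linear(state_dim + action_dim, 64), nn.Tanh(), nn.Linear(64, 1)
)
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Stand-ins for real rollouts: expert pairs vs. current-policy pairs.
expert = torch.randn(128, state_dim + action_dim)
policy = torch.randn(128, state_dim + action_dim)

# Discriminator step: label expert pairs 1 and policy pairs 0.
logits_e, logits_p = disc(expert), disc(policy)
loss = bce(logits_e, torch.ones_like(logits_e)) + \
       bce(logits_p, torch.zeros_like(logits_p))
opt.zero_grad()
loss.backward()
opt.step()

# The policy would then be updated (e.g., with TRPO/PPO) to maximize the
# surrogate reward below, pushing its occupancy measure toward the expert's.
with torch.no_grad():
    surrogate_reward = -torch.log(1 - torch.sigmoid(disc(policy)) + 1e-8)
```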
Qiang Liu, Wild Variational Inference with Expressive Variational Families
Variational inference (VI) provides a powerful tool for reasoning with highly complex probabilistic models in machine learning. The basic idea of VI is to approximate complex target distributions with simpler distributions found by minimizing the KL divergence within some predefined parametric family. A key limitation of typical VI techniques, however, is that they require the variational family to be simple enough to have tractable likelihood functions, which excludes a broad range of flexible, expressive families such as those defined via implicit models. In this talk, we will discuss a general framework for (wild) variational inference that works for much more expressive, implicitly defined variational families with intractable likelihood functions. Our key idea is to first lift the optimization problem into an infinite-dimensional space, solve it using nonparametric particle methods, and then project the update back to the finite-dimensional parameter space we actually want to optimize over. Our framework is highly general and allows us to leverage any existing particle method as the inference engine for wild variational inference, including MCMC and Stein variational gradient methods.
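As a minimal illustration of the particle update mentioned above, here is a toy NumPy sketch of one Stein variational gradient descent step for a 1-D standard-normal target; the RBF bandwidth and step size are arbitrary assumptions.

```python
import numpy as np

def svgd_step(particles, grad_log_p, bandwidth=0.5, step=0.1):
    """One Stein variational gradient descent update on 1-D particles."""
    x = particles[:, None]
    diff = x - x.T                              # diff[j, i] = x_j - x_i
    K = np.exp(-diff**2 / (2 * bandwidth**2))   # RBF kernel k(x_j, x_i)
    grad_K = -diff / bandwidth**2 * K           # d k(x_j, x_i) / d x_j
    # phi(x_i) = mean_j [ k(x_j, x_i) * score(x_j) + d/dx_j k(x_j, x_i) ]
    phi = (K * grad_log_p(particles)[:, None] + grad_K).mean(axis=0)
    return particles + step * phi

# Toy target: standard normal, whose score function is simply -x.
particles = np.random.default_rng(0).uniform(-5, 5, size=50)
for _ in range(500):
    particles = svgd_step(particles, lambda x: -x)
print(particles.mean(), particles.std())  # should approach 0 and 1
```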