Friday, August 04, 2017

Papers: ICML2017 workshop on Implicit Models

The organizers (David Blei, Ian Goodfellow, Balaji Lakshminarayanan, Shakir Mohamed, Rajesh Ranganath, Dustin Tran) made the papers of the ICML 2017 workshop on Implicit Models available here:

  1. A-NICE-MC: Adversarial Training for MCMC Jiaming Song, Shengjia Zhao, Stefano Ermon
  2. ABC-GAN: Adaptive Blur and Control for improved training stability of Generative Adversarial Networks Igor Susmelj, Eirikur Agustsson, Radu Timofte
  3. Adversarial Inversion for Amortized Inference Zenna Tavares, Armando Solar Lezama
  4. Adversarial Variational Inference for Tweedie Compound Poisson Models Yaodong Yang, Sergey Demyanov, Yuanyuan Liu, Jun Wang
  5. Adversarially Learned Boundaries in Instance Segmentation Amy Zhang
  6. Approximate Inference with Amortised MCMC Yingzhen Li, Richard E. Turner, Qiang Liu
  7. Can GAN Learn Topological Features of a Graph? Weiyi Liu, Pin-Yu Chen, Hal Cooper, Min Hwan Oh, Sailung Yeung, Toyotaro Suzumura
  8. Conditional generation of multi-modal data using constrained embedding space mapping Subhajit Chaudhury, Sakyasingha Dasgupta, Asim Munawar, Md. A. Salam Khan and Ryuki Tachibana
  9. Deep Hybrid Discriminative-Generative Models for Semi-Supervised Learning Volodymyr Kuleshov, Stefano Ermon
  10. ELFI, a software package for likelihood-free inference Jarno Lintusaari, Henri Vuollekoski, Antti Kangasrääsiö, Kusti Skyten, Marko Järvenpää, Michael Gutmann, Aki Vehtari, Jukka Corander, Samuel Kaski
  11. Flow-GAN: Bridging implicit and prescribed learning in generative models Aditya Grover, Manik Dhar, Stefano Ermon
  12. GANs Powered by Autoencoding — A Theoretic Reasoning Zhifei Zhang, Yang Song, and Hairong Qi
  13. Geometric GAN Jae Hyun Lim and Jong Chul Ye
  14. Gradient Estimators for Implicit Models Yingzhen Li, Richard E. Turner
  15. Implicit Manifold Learning on Generative Adversarial Networks Kry Yik Chau Lui, Yanshuai Cao, Maxime Gazeau, Kelvin Shuangjian Zhang
  16. Implicit Variational Inference with Kernel Density Ratio Fitting Jiaxin Shi, Shengyang Sun, Jun Zhu
  17. Improved Network Robustness with Adversarial Critic Alexander Matyasko, Lap-Pui Chau
  18. Improved Training of Wasserstein GANs Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, Aaron Courville
  19. Inference in differentiable generative models Matthew M. Graham and Amos J. Storkey
  20. Joint Training in Generative Adversarial Networks R Devon Hjelm, Athul Paul Jacob, Yoshua Bengio
  21. Latent Space GANs for 3D Point Clouds Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, Leonidas Guibas
  22. Likelihood Estimation for Generative Adversarial Networks Hamid Eghbal-zadeh, Gerhard Widmer
  23. Maximizing Independence with GANs for Non-linear ICA Philemon Brakel, Yoshua Bengio 
  24. Non linear Mixed Effects Models: Bridging the gap between Independent Metropolis Hastings and Variational Inference Belhal Karimi
  25. Practical Adversarial Training with Empirical Distribution Ambrish Rawat, Mathieu Sinn, Maria-Irina Nicolae
  26. Recursive Cross-Domain Facial Composite and Generation from Limited Facial Parts Yang Song, Zhifei Zhang, Hairong Qi
  27. Resampled Proposal Distributions for Variational Inference and Learning Aditya Grover, Ramki Gummadi, Miguel Lazaro-Gredilla, Dale Schuurmans, Stefano Ermon
  28. Rigorous Analysis of Adversarial Training with Empirical Distributions Mathieu Sinn, Ambrish Rawat, Maria-Irina Nicolae
  29. Robust Controllable Embedding of High-Dimensional Observations of Markov Decision Processes Ershad Banijamali, Rui Shu, Mohammad Ghavamzadeh, Hung Bui
  30. Spectral Normalization for Generative Adversarial Network Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida
  31. Stabilizing the Conditional Adversarial Network by Decoupled Learning Zhifei Zhang, Yang Song, and Hairong Qi
  32. Stabilizing Training of Generative Adversarial Networks through Regularization Kevin Roth, Aurelien Lucchi, Sebastian Nowozin & Thomas Hofmann
  33. Stochastic Reconstruction of Three-Dimensional Porous Media using Generative Adversarial Networks Lukas Mosser, Olivier Dubrule, Martin J. Blunt
  34. The Amortized Bootstrap Eric Nalisnick, Padhraic Smyth 
  35. The Numerics of GANs Lars Mescheder, Sebastian Nowozin, Andreas Geiger
  36. Towards the Use of Gaussian Graphical Models in Variational Autoencoders Alexandra Pește, Luigi Malagò
  37. Training GANs with Variational Statistical Information Minimization Michael Ghaben
  38. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks Jun-Yan Zhu*, Taesung Park*, Phillip Isola, Alexei A. Efros
  39. Unsupervised Domain Adaptation Using Approximate Label Matching Jordan T. Ash, Robert E. Schapire, Barbara E. Engelhardt
  40. Variance Regularizing Adversarial Learning Karan Grewal, R Devon Hjelm, Yoshua Bengio
  41. Variational Representation Autoencoders to Reduce Mode Collapse in GANs Akash Srivastava, Lazar Valkov, Chris Russell, Michael U. Gutmann, Charles Sutton


Dougal Sutherland, Evaluating and Training Implicit Generative Models with Two-Sample Tests
Samples from implicit generative models are difficult to judge quantitatively: particularly for images, it is typically easy for humans to spot certain kinds of samples that are very unlikely under the reference distribution, but much harder to tell when modes are missing, or when types are merely under- or over-represented. This talk will survey different approaches to evaluating the output of an implicit generative model, with a focus on identifying ways in which the model has failed. Some of these approaches also form the basis for the objective functions of GAN variants that can help avoid some of the stability and mode-dropping issues of the original GAN.
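
As a concrete example of a two-sample statistic used for this purpose, here is a minimal NumPy sketch of the unbiased squared-MMD estimator with an RBF kernel; the toy data and median-heuristic bandwidth are illustrative choices, not taken from the talk:

```python
import numpy as np

def rbf_kernel(X, Y, bandwidth):
    """Gaussian RBF kernel matrix between the rows of X and the rows of Y."""
    sq_dists = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd2_unbiased(X, Y, bandwidth):
    """Unbiased estimate of squared MMD between model samples X and data samples Y."""
    m, n = len(X), len(Y)
    Kxx = rbf_kernel(X, X, bandwidth)
    Kyy = rbf_kernel(Y, Y, bandwidth)
    Kxy = rbf_kernel(X, Y, bandwidth)
    # Drop the diagonal terms so the within-sample averages are unbiased.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()

# Toy usage: "model" samples vs. "reference" samples with slightly shifted means.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))
Y = rng.normal(0.5, 1.0, size=(500, 2))
bandwidth = np.median(np.sqrt(((X[:, None] - Y[None, :])**2).sum(-1)))  # median heuristic
print(mmd2_unbiased(X, Y, bandwidth))  # clearly above zero, flagging the mismatch
```
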
Kerrie Mengersen, Probabilistic Modelling in the Real World
Interest is intensifying in the development and application of Bayesian approaches to estimation of real-world processes using probabilistic models. This presentation will focus on three substantive case studies in which we have been involved: protecting the Great Barrier Reef in Australia from impacts such as crown-of-thorns starfish and industrial dredging, reducing congestion at international airports, and predicting survival of jaguars in the Peruvian Amazon. Through these examples, we will explore current ideas about Approximate Bayesian Computation, Populations of Models, Bayesian priors and p-values, and Bayesian dynamic networks.
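
As a reminder of what the simplest form of Approximate Bayesian Computation looks like (a generic rejection sampler, not the methodology of the case studies above), here is a minimal NumPy sketch:

```python
import numpy as np

def rejection_abc(observed, simulate, prior_sample, distance, eps, n_draws=100_000):
    """Basic rejection ABC: keep parameters whose simulated data land within eps of the observation."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer the mean of a Gaussian simulator from its sample mean.
rng = np.random.default_rng(0)
observed = 1.3                                              # observed summary statistic
simulate = lambda theta: rng.normal(theta, 1.0, 50).mean()  # implicit model: simulator only
prior_sample = lambda: rng.uniform(-5, 5)
distance = lambda a, b: abs(a - b)
posterior = rejection_abc(observed, simulate, prior_sample, distance, eps=0.05)
print(posterior.mean(), posterior.std())  # approximate posterior over the mean
```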

Sanjeev Arora, Do GANs actually learn the distribution? Some theory and empirics
The Generative Adversarial Nets or GANs framework (Goodfellow et al. 2014) for learning distributions differs from older ideas such as autoencoders and deep Boltzmann machines in that it scores the generated distribution using a discriminator net, instead of a perplexity-like calculation. It appears to work well in practice, e.g., the generated images look better than those produced by older techniques. But how well do these nets learn the target distribution?
Our paper 1 (ICML'17) shows that GAN training may not have good generalization properties; e.g., training may appear successful but the trained distribution may be far from the target distribution in standard metrics. We show theoretically that this can happen even though the 2-person game between discriminator and generator is in near-equilibrium, where the generator appears to have "won" (with respect to natural training objectives).
Paper 2 (arXiv, June 26) empirically tests whether this lack of generalization occurs in real-life training. The paper introduces a new quantitative test for the diversity of a distribution based upon the famous birthday paradox. This test reveals that distributions learnt by some leading GAN techniques have fairly small support (i.e., suffer from mode collapse), which implies that they are far from the target distribution.
Paper 1: "Generalization and Equilibrium in Generative Adversarial Nets (GANs)" by Arora, Ge, Liang, Ma, Zhang. (ICML 2017)
Paper 2: "Do GANs actually learn the distribution? An empirical study." by Arora and Zhang (https://arxiv.org/abs/1706.08224)
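
To make the birthday-paradox test concrete: if a generator produced (roughly) uniformly from a support of N distinct samples, a batch of s draws would contain a duplicate with probability about 1 - exp(-s(s-1)/(2N)), so frequent near-duplicates in batches of size s point to a support of order s^2. A minimal sketch of that arithmetic (illustrative only; the step of detecting near-duplicate images is elided):

```python
import math

def collision_probability(support_size, batch_size):
    """P(at least one duplicate) when drawing batch_size samples uniformly from support_size items."""
    return 1.0 - math.exp(-batch_size * (batch_size - 1) / (2.0 * support_size))

def support_size_estimate(batch_size, collision_prob=0.5):
    """Invert the approximation: support size at which a batch collides with the given probability."""
    return batch_size * (batch_size - 1) / (-2.0 * math.log(1.0 - collision_prob))

# If batches of 20 samples contain near-duplicates about half the time,
# the effective support is on the order of a few hundred distinct samples.
print(collision_probability(support_size=400, batch_size=20))    # ~0.38
print(support_size_estimate(batch_size=20, collision_prob=0.5))  # ~274
```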

Stefano Ermon, Generative Adversarial Imitation Learning
Consider learning a policy from example expert behavior, without interaction with the expert or access to a reward or cost signal. One approach is to recover the expert’s cost function with inverse reinforcement learning, then compute an optimal policy for that cost function. This approach is indirect and can be slow. In this talk, I will discuss a new generative modeling framework for directly extracting a policy from data, drawing an analogy between imitation learning and generative adversarial networks. I will derive a model-free imitation learning algorithm that obtains significant performance gains over existing methods in imitating complex behaviors in large, high-dimensional environments. Our approach can also be used to infer the latent structure of human demonstrations in an unsupervised way. As an example, I will show a driving application where a model learned from demonstrations is able to both produce different driving styles and accurately anticipate human actions using raw visual inputs.
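
As a brief reference point (this is the objective from Ho and Ermon's generative adversarial imitation learning paper, not a detail stated in the abstract above), the approach trains a policy pi against a discriminator D that tries to separate policy state-action pairs from expert pairs, with an entropy regularizer:

```latex
\min_{\pi} \max_{D} \;
\mathbb{E}_{\pi}\!\left[\log D(s,a)\right]
+ \mathbb{E}_{\pi_E}\!\left[\log\bigl(1 - D(s,a)\bigr)\right]
- \lambda H(\pi)
```

Here \pi_E denotes the expert policy and H(\pi) the policy's entropy; at the saddle point the policy's state-action distribution is driven toward the expert's, without ever recovering an explicit cost function.
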
Qiang Liu, Wild Variational Inference with Expressive Variational Families
Variational inference (VI) provides a powerful tool for reasoning with highly complex probabilistic models in machine learning. The basic idea of VI is to approximate complex target distributions with simpler distributions found by minimizing the KL divergence within some predefined parametric families. A key limitation of typical VI techniques, however, is that they require the variational family to be simple enough to have tractable likelihood functions, which excludes a broad range of flexible, expressive families such as those defined via implicit models. In this talk, we will discuss a general framework for (wild) variational inference that works for much more expressive, implicitly defined variational families with intractable likelihood functions. Our key idea is to first lift the optimization problem into an infinite-dimensional space, solve it there using nonparametric particle methods, and then project the update back to the finite-dimensional parameter space that we want to optimize over. Our framework is highly general and allows us to leverage any existing particle method as the inference engine for wild variational inference, including MCMC and Stein variational gradient methods.
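
To make the particle-based inner step concrete, here is a minimal NumPy sketch of a Stein variational gradient descent update (one of the particle methods mentioned above) on a toy one-dimensional Gaussian target; the target, kernel bandwidth, and step size are illustrative choices, not the talk's setup:

```python
import numpy as np

def svgd_step(particles, grad_log_p, bandwidth, step_size):
    """One Stein variational gradient descent update on a set of 1-D particles."""
    x = particles[:, None]                       # shape (n, 1)
    diffs = x - x.T                              # diffs[j, i] = x_j - x_i
    K = np.exp(-diffs**2 / (2 * bandwidth**2))   # RBF kernel k(x_j, x_i)
    grad_K = -diffs / bandwidth**2 * K           # d/dx_j of k(x_j, x_i)
    # phi(x_i) = mean_j [ k(x_j, x_i) * grad log p(x_j) + d/dx_j k(x_j, x_i) ]
    phi = (K * grad_log_p(particles)[:, None] + grad_K).mean(axis=0)
    return particles + step_size * phi

# Toy target: N(2, 1), so grad log p(x) = -(x - 2).
rng = np.random.default_rng(0)
particles = rng.normal(-3.0, 0.5, size=200)
for _ in range(500):
    particles = svgd_step(particles, lambda x: -(x - 2.0), bandwidth=0.5, step_size=0.1)
print(particles.mean(), particles.std())         # should approach roughly (2, 1)
```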



Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!
