
Saturday, December 13, 2014

NIPS2014 Poster papers


While I recently listed the NIPS proceedings, they did not seem to include the posters from the workshops taking place today (and yesterday). Here are the ones I could find on the workshop pages, enjoy!

Deep Learning and Representation Learning Workshop: NIPS 2014

OPT2014 Optimization for Machine Learning

Modern Nonparametrics 3: Automating the Learning Pipeline

  1. The Randomized Causation Coefficient. (talk)
    David Lopez-Paz, Krikamol Muandet, Benjamin Recht.
    [paper]
  2. Influence Functions for Nonparametric Estimation. (spotlight, poster)
    Kirthevasan Kandasamy, Akshay Krishnamurthy, Barnabás Póczos, Larry Wasserman, James M. Robins.
    [paper]
  3. Are You Still Tuning Hyperparameters? Parameter-free Model Selection and Learning. (talk)
    Francesco Orabona.
    [paper]
  4. Uncertainty Propagation in Gaussian Process Pipelines. (spotlight, poster)
    Andreas C. Damianou, Neil D. Lawrence.
    [paper, poster]
  5. Kernel non-parametric tests of relative dependency. (spotlight, poster)
    Wacha Bounliphone, Arthur Gretton, Matthew Blaschko.
    [paper, spotlight, poster]
  6. Generalized Product of Experts for Automatic and Principled Fusion of Gaussian Process Predictions. (spotlight, poster)
    Yanshuai Cao, David J. Fleet.
    [paper]
  7. Computable Learning via Metric Measure Theory. (spotlight, poster)
    Arijit Das, Achim Tresch.
    [paper]
  8. Nonparametric Maximum Margin Similarity for Semi-Supervised Learning. (spotlight, poster)
    Yingzhen Yang, Xinqi Chu, Zhangyang Wang, Thomas S. Huang.
    [paper]
  9. Learning with Deep Trees.  (spotlight, poster)
    Giulia DeSalvo, Mehryar Mohri, Umar Syed.
    [paper, spotlight, poster]
  10. Theoretical Foundations for Learning Kernels in Supervised Kernel PCA. (spotlight, poster)
    Mehryar Mohri, Afshin Rostamizadeh, Dmitry Storcheus.
    [paper]
  11. Kernel Selection in Support Vector Machines Using Gram-Matrix Properties. (spotlight, poster)
    Roberto Valerio, Ricardo Vilalta.
    [paper, spotlight (pdf, pptx), poster (pdf, pptx)]

 

Optimal transport and machine learning


8:30 - 9:10 Crash-Course in Optimal Transport (Organizers)
9:10 - 10:00 Piotr Indyk
Geometric representations of the Earth-Mover Distance

Coffee Break

10:30 - 11:20 Alexander Barvinok
Non-negative matrices with prescribed row and column sums (slides)
11:20 - 11:40 Zheng, Pestilli, Rokem
Quantifying error in estimates of human brain fiber directions using the EMD (paper)
11:40 - 12:00 Courty, Flamary, Rakotomamonjy, Tuia
Optimal transport for domain adaptation

Lunch Break

14:40 - 15:00 Flamary, Courty, Rakotomamonjy, Tuia
Optimal transport with Laplacian regularization
15:00 - 15:20 Rabin, Papadakis
Non-convex relaxation of optimal transport for color transfer (paper)
15:20 - 15:40 Lescornel, Loubès
Estimation of deformations between distributions with the Wasserstein distance (paper)
15:40 - 16:30 Adam Oberman
Numerical methods for the optimal transportation problem

Coffee Break

17:00 - 17:50 Robert McCann
Optimal transport: old and new
17:50 - 18:30 Panel Discussion


Invited Talks

Volkan Cevher: A totally unimodular view of structured sparsity

We describe a simple framework for structured sparse recovery based on convex optimization. We show that many interesting structured sparsity models can be naturally represented by linear matrix inequalities on the support of the unknown parameters, where the constraint matrix has a totally unimodular (TU) structure. For such structured models, tight convex relaxations can be obtained in polynomial time via linear programming. Our modeling framework unifies the prevalent structured sparsity norms in the literature, introduces new interesting ones, and renders their tightness and tractability arguments transparent.
Based on
http://infoscience.epfl.ch/record/202767?ln=en
http://infoscience.epfl.ch/record/184981?ln=en
Web:
http://lions.epfl.ch/publications
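
To make the TU point above concrete, here is a small numerical sketch (my own toy example under simple assumptions, not the authors' framework or code): consecutive-ones "interval" matrices are a classic totally unimodular family, so the LP relaxation of a budgeted support-selection problem over them has integral optimal vertices, and a plain LP solver recovers a 0/1 support with no explicit integrality constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Toy illustration of the TU phenomenon (not the authors' code):
# an interval (consecutive-ones) constraint matrix is totally unimodular,
# so the LP relaxation of a budgeted support-selection problem has
# integral optimal vertices.
rng = np.random.default_rng(0)
n = 8
gains = rng.uniform(0.1, 1.0, size=n)  # utility of including each coefficient

# Each row budgets a contiguous interval of the support.
A = np.array([
    [1, 1, 1, 0, 0, 0, 0, 0],  # at most 1 nonzero among positions 0..2
    [0, 0, 1, 1, 1, 0, 0, 0],  # at most 2 among positions 2..4
    [0, 0, 0, 0, 1, 1, 1, 1],  # at most 2 among positions 4..7
], dtype=float)
b = np.array([1.0, 2.0, 2.0])

# Maximize total gain == minimize its negation over the relaxed cube [0,1]^n.
res = linprog(c=-gains, A_ub=A, b_ub=b, bounds=[(0.0, 1.0)] * n, method="highs")
print("relaxed LP solution:", np.round(res.x, 3))  # comes out 0/1-valued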

Tom McCormick: A survey of minimizing submodular functions on lattices

We usually talk about submodular functions on the subsets of a finite set, the so-called Boolean lattice, and there are many applications of them. There has been a lot of progress made in the last thirty years in understanding the complexity of, and algorithms for, submodular function minimization (SFM) on the Boolean lattice.
But there are other applications where more general notions of submodularity are important. Suppose that we take the usual definition of submodularity and replace the notion of intersection and union of subsets by the "meet" and "join" in a finite lattice. For example, we could take a rectangular "box" of integer points in n-space, with meet being component-wise min, and join being component-wise max. Then we would like to generalize the results about SFM from the Boolean lattice to more general lattices.
There has been a lot of work on such questions in the last ten years, and this talk will survey some of this work. We start with the case where the lattice is distributive. Here Birkhoff's Theorem allows a clever reduction from SFM on a distributive lattice to ordinary SFM on the Boolean lattice. We then consider lattices that are "oriented" or "signed" generalizations of the Boolean lattice, leading to "bisubmodularity", where we do have good characterizations and algorithms. We also briefly cover recent efforts to further generalize bisubmodularity to "k-submodularity", and to product lattices of "diamonds".
Finally we come back to lattices of integer points in n-space. We demonstrate that submodularity by itself is not enough to allow for efficient minimization, even if we also assume coordinate-wise convexity. This leads to considering concepts from "discrete convexity" such as "L-natural convexity", where we do gain enough structure to allow for efficient minimization.
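
As a concrete rendering of the meet/join definition above, here is a brute-force sanity check (my own sketch, not from the talk; actual SFM algorithms are far more sophisticated) of lattice submodularity on a small integer box, with meet as componentwise min and join as componentwise max:

```python
import itertools

# Brute-force test of f(x ∧ y) + f(x ∨ y) <= f(x) + f(y) on an integer box,
# where ∧ is componentwise min and ∨ is componentwise max (toy sketch).
def is_lattice_submodular(f, box):
    points = list(itertools.product(*[range(k + 1) for k in box]))
    for x in points:
        for y in points:
            meet = tuple(min(a, b) for a, b in zip(x, y))
            join = tuple(max(a, b) for a, b in zip(x, y))
            if f(meet) + f(join) > f(x) + f(y) + 1e-9:
                return False
    return True

print(is_lattice_submodular(max, (3, 3)))                    # True: max is submodular
print(is_lattice_submodular(lambda x: x[0] * x[1], (3, 3)))  # False: a product is supermodular
```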

Yaron Singer: Adaptive seeding: a new framework for stochastic submodular optimization

In this talk we will introduce a new framework for stochastic optimization called adaptive seeding. The framework was originally designed to enable substantial improvements to influence maximization by leveraging a remarkable structural phenomenon in social networks known as the "friendship paradox" (or "your friends have more friends than you"). At a high level, adaptive seeding is the task of making choices in the present that lead to favorable outcomes in the future, and it may be of independent interest to those curious about stochastic optimization, submodular maximization, and machine learning. In the talk we will give a taste of some of the problems that arise in this rich domain, discuss key algorithmic ideas, and present empirical results.
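
Since the abstract leans on the friendship paradox, a quick simulation may help (my own sketch, unrelated to the speaker's work): on a hub-heavy graph, a uniformly random neighbor of a random node has a noticeably higher expected degree than a uniformly random node, which is exactly the leverage adaptive seeding exploits.

```python
import random
from collections import defaultdict

# Toy demonstration of the friendship paradox (my sketch, not the talk's code).
random.seed(0)
n = 2000
adj = defaultdict(set)
targets = [0]  # preferential-attachment-style growth creates a few hubs
for v in range(1, n):
    u = random.choice(targets)
    adj[u].add(v)
    adj[v].add(u)
    targets += [u, v]

deg = {v: len(adj[v]) for v in adj}
avg_node = sum(deg.values()) / len(deg)
# Degree of a random *friend*: pick a node, then one of its neighbors.
avg_friend = sum(deg[random.choice(sorted(adj[v]))] for v in adj) / len(adj)
print(f"average degree of a random node:   {avg_node:.2f}")
print(f"average degree of a random friend: {avg_friend:.2f}")  # larger
```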

Sebastian Nowozin: Trade-offs in Structured Prediction

Structured learning and prediction problems often turn out to be computationally challenging. Yet there are different sources of structure, and it pays to examine them more closely. One source is a fixed, given, or physical constraint on feasible decisions, for example when an actual physical system is being controlled. Another source is the assumption that modelling the structured domain through latent variables and constraints provides statistical advantages: a small number of parameters can then describe the target domain more accurately, and by exposing this structure we can learn more effectively from fewer samples. A third source of combinatorial structure is often the loss function that is used; a structured loss invariably leads to a structured decision problem. When designing our models we are often free to select trade-offs in terms of model structure, capacity, number of parameters, and difficulty of inference. In this talk I will argue that by thinking about the above sources of combinatorial structure we can make more sensible trade-offs, and I will illustrate these using example applications from computer vision.

Dhruv Batra: Submodular meets Structured: Finding Diverse Subsets in Exponentially-Large Structured Item Sets

Perception problems are hard and notoriously ambiguous. Robust prediction methods in Computer Vision and Natural Language Processing often search for a diverse set of high-quality candidate solutions or proposals. In structured prediction problems, this becomes a daunting task, as the solution space (image labelings, sentence parses, etc.) is exponentially large.
We study greedy algorithms for finding a diverse subset of solutions in structured-output spaces by drawing new connections between submodular functions over combinatorial item sets and High-Order Potentials (HOPs) studied for graphical models. Specifically, we show that when marginal gains of submodular diversity functions allow structured representations, this enables efficient (sub-linear time) approximate maximization by reducing the greedy augmentation step to inference in a factor graph with appropriately constructed HOPs.
Joint work with Adarsh Prasad (UT-Austin) and Stefanie Jegelka (UC Berkeley).
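
To ground the greedy step being discussed, here is a minimal sketch (my own toy version, assuming a small flat candidate list rather than the exponentially-large structured space the paper actually targets): greedily grow a subset by brute-force marginal gains of a submodular quality-plus-diversity objective. The paper's contribution is doing exactly this augmentation step efficiently via HOP inference in a factor graph.

```python
import numpy as np

# Toy greedy diverse-subset selection (my sketch, not the paper's code).
rng = np.random.default_rng(1)
m, k = 50, 5
quality = rng.uniform(size=m)              # per-candidate quality scores
feats = rng.normal(size=(m, 10))           # candidate feature vectors
d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
sim = np.exp(-d2 / 10.0)                   # nonnegative RBF similarity kernel

def objective(S):
    if not S:
        return 0.0
    # Modular quality term plus a facility-location coverage term;
    # the sum is monotone submodular, so greedy has a (1 - 1/e) guarantee.
    return quality[S].sum() + 0.1 * sim[:, S].max(axis=1).sum()

selected = []
for _ in range(k):
    gain, best = max((objective(selected + [i]) - objective(selected), i)
                     for i in range(m) if i not in selected)
    selected.append(best)
print("greedy diverse subset:", selected)
```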
  
08:30--09:15 Invited Talk: Honglak Lee.
Multimodal Deep Learning with Implicit Output Representations
09:15--10:00 Invited Talk: Francesco Dinuzzo.
Output Kernel Learning with Structural Constraints
10:00--10:30 Coffee Break

Session 2
10:30--11:15 Invited Talk: Hal Daumé III.
Structured Latent Representations in NLP
11:15--11:30 Contributed Talk: Corinna Cortes, Vitaly Kuznetsov and Mehryar Mohri.
On-line Learning Approach to Ensemble Methods for Structured Prediction
11:30--12:00 Spotlights
Hanchen Xiong, Sandor Szedmak and Justus Piater.
Implicit Learning of Simpler Output Kernels for Multi-Label Prediction
Fabio Massimo Zanzotto and Lorenzo Ferrone.
Output Distributed Convolution Structure Learning for Syntactic Analysis of Natural Language
François Laviolette, Emilie Morvant, Liva Ralaivola and Jean-Francis Roy.
On Generalizing the C-Bound to the Multiclass and Multilabel Settings
Hongyu Guo, Xiaodan Zhu and Martin Renqiang Min.
A Deep Learning Model for Structured Outputs with High-order Interaction
Thomas Unterthiner, Andreas Mayr, Günter Klambauer, Marvin Steijaert, Hugo Ceulemans, Jörg Wenger and Sepp Hochreiter.
Deep Learning for Drug Target Prediction
12:00--14:00 Lunch Break
14:00--15:00 Poster Session

Session 3
15:00--15:45 Invited Talk: Jia Deng.
Learning Visual Models with a Knowledge Graph
15:45--16:00 Contributed Talk: Calvin Murdock and Fernando De La Torre.
Semantic Component Analysis
16:00--16:15 Contributed Talk: Jiaqian Yu and Matthew Blaschko.
Lovász Hinge for Learning Submodular Losses
16:15--16:30 Invited Talk: Rich Sutton.
Representation Learning in Reinforcement Learning
16:30--17:00 Coffee Break

Session 4
17:00--17:45 Invited Talk: Noah Smith.
Loss Functions and Regularization for Less-than-Supervised NLP
17:45--18:00 Contributed Talk: Hichem Sahbi.
Finite State Machines for Structured Scene Decoding
18:00--18:15 Contributed Talk: Luke Vilnis, Nikos Karampatziakis and Paul Mineiro.
Generalized Eigenvectors for Large Multiclass Problems


Distributed Machine Learning and Matrix Computations

Session 1
========
08:15-08:30 Introduction, Reza Zadeh
08:30-09:00 Ameet Talwalkar, MLbase: Simplified Distributed Machine Learning
09:00-09:30 David Woodruff, Principal Component Analysis and Higher Correlations for Distributed Data
09:30-10:00 Virginia Smith, Communication-Efficient Distributed Dual Coordinate Ascent

10:00-10:30 Coffee Break

Session 2
========
10:30-11:30 Jeff Dean (Keynote), Techniques for Training Neural Networks Quickly
11:30-12:00 Reza Zadeh, Distributing the Singular Value Decomposition with Spark
12:00-12:30 Jure Leskovec, In-memory graph analytics

12:30-14:30 Lunch Break

Session 3
========
14:30-15:00 Carlos Guestrin, SFrame and SGraph: Scalable, Out-of-Core, Unified Tabular and Graph Processing
15:00-15:30 Inderjit Dhillon, NOMAD: A Distributed Framework for Latent Variable Models
15:30-16:00 Ankur Dave, GraphX: Graph Processing in a Distributed Dataflow Framework
16:00-16:30 Jeremy Freeman, Large-scale decompositions of brain activity

Minerva: A Scalable and Highly Efficient Training Platform for Deep Learning
Minjie Wang, Tianjun Xiao, Jianpeng Li, Jiaxing Zhang, Chuntao Hong, Zheng Zhang
Maxios: Large Scale Nonnegative Matrix Factorization for Collaborative Filtering
Simon Shaolei Du, Boyi Chen, Yilin Liu, Lei Li
Factorbird - a Parameter Server Approach to Distributed Matrix Factorization
Sebastian Schelter, Venu Satuluri, Reza Zadeh
Improved Algorithms for Distributed Boosting
Jeff Cooper, Lev Reyzin
Parallel and Distributed Inference in Coupled Tensor Factorization Models (supplementary)
Umut Simsekli, Beyza Ermis, Ali Taylan Cemgil, Figen Oztoprak, S. Ilker Birbil
Dogwild! — Distributed Hogwild for CPU and GPU
Cyprien Noel, Simon Osindero
Generalized Low Rank Models
Madeleine Udell, Corinne Horn, Reza Zadeh, Stephen Boyd
Elastic Distributed Bayesian Collaborative Filtering
Alex Beutel, Markus Weimer, Tom Minka, Yordan Zaykov, Vijay Narayanan
LOCO: Distributing Ridge Regression with Random Projections
Brian McWilliams, Christina Heinze, Nicolai Meinshausen, Gabriel Krummenacher, Hastagiri P. Vanchinathan
Logistic Matrix Factorization for Implicit Feedback Data
Christopher C. Johnson
Tighter Low-rank Approximation via Sampling the Leveraged Element
Srinadh Bhojanapalli, Prateek Jain, Sujay Sanghavi

Networks: From Graphs to Rich Data

NIPS 2014 Workshop


Accepted Papers

 Representation and Learning Methods for Complex Outputs


Flexible Transfer Learning under Support and Model Shift. 
Xuezhi Wang and Jeff Schneider

Daniel Hernández-Lobato, José Miguel Hernández-Lobato and Zoubin Ghahramani.

Song Liu, Taiji Suzuki and Masashi Sugiyama

Thomas Unterthiner, Andreas Mayr, Günter Klambauer, Marvin Steijaert, Jörg Wenger, Hugo Ceulemans and Sepp Hochreiter

Pascal Germain, Amaury Habrard, François Laviolette and Emilie Morvant

Anant Raj, Vinay Namboodiri and Tinne Tuytelaars

Michael Goetz, Christian Weber, Bram Stieltjes and Klaus Maier-Hein

Active Nearest Neighbors in Changing Environments
Christopher Berlind, Ruth Urner

Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette and Mario Marchand

Sigurd Spieckermann, Steffen Udluft and Thomas Runkler

Daniele Calandriello, Alessandro Lazaric and Marcello Restelli

Marta Soare, Ouais Alsharif, Alessandro Lazaric and Joelle Pineau

Piyush Rai, Wenzhao Lian and Lawrence Carin

Yujia Li, Kevin Swersky and Richard Zemel
Vitaly Kuznetsov and Mehryar Mohri


AKBC 2014

4th Workshop on Automated Knowledge Base Construction (AKBC) 2014


  • Contributed Talks
  • Arvind Neelakantan, Benjamin Roth and Andrew McCallum. Knowledge Base Completion using Compositional Vector Space Models PDF
  • Peter Clark, Niranjan Balasubramanian, Sumithra Bhakthavatsalam, Kevin Humphreys, Jesse Kinkead, Ashish Sabharwal, and Oyvind Tafjord. Automatic Construction of Inference-Supporting Knowledge Bases PDF
  • Ari Kobren, Thomas Logan, Siddarth Sampangi and Andrew McCallum. Domain Specific Knowledge Base Construction via Crowdsourcing PDF
  • Posters
  • Adrian Benton, Jay Deyoung, Adam Teichert, Mark Dredze, Benjamin Van Durme, Stephen Mayhew and Max Thomas. Faster (and Better) Entity Linking with Cascades PDF
  • Ning Gao, Douglas Oard and Mark Dredze. A Test Collection for Email Entity Linking PDF
  • Derry Tanti Wijaya, Ndapandula Nakashole and Tom Mitchell. Mining and Organizing a Resource of State-changing Verbs PDF
  • Jakob Huber, Christian Meilicke and Heiner Stuckenschmidt. Applying Markov Logic for Debugging Probabilistic Temporal Knowledge Bases PDF
  • Ramanathan Guha. Correlating Entities PDF
  • Tomasz Tylenda, Sarath Kumar Kondreddi and Gerhard Weikum. Spotting Knowledge Base Facts in Web Texts PDF
  • Bhavana Dalvi and William Cohen. Multi-view Exploratory Learning for AKBC Problems PDF
  • Alexander G. Ororbia II, Jian Wu and C. Lee Giles. CiteSeerX: Intelligent Information Extraction and Knowledge Creation from Web-Based Data PDF
  • Adam Grycner, Gerhard Weikum, Jay Pujara, James Foulds and Lise Getoor. A Unified Probabilistic Approach for Semantic Clustering of Relational Phrases PDF
  • Ndapandula Nakashole and Tom M. Mitchell. Micro Reading with Priors: Towards Second Generation Machine Readers PDF
  • Edouard Grave. Weakly supervised named entity classification PDF
  • Lucas Sterckx, Thomas Demeester, Johannes Deleu and Chris Develder. Using Semantic Clustering and Active Learning for Noise Reduction in Distant Supervision PDF
  • Francis Ferraro, Max Thomas, Matthew Gormley, Travis Wolfe, Craig Harman and Benjamin Van Durme. Concretely Annotated Corpora PDF
  • Luis Galárraga and Fabian M. Suchanek. Towards a Numerical Rule Mining Language PDF
  • Bishan Yang and Claire Cardie. Improving on Recall Errors for Coreference Resolution PDF
  • Chandra Sekhar Bhagavatula, Thanapon Noraset and Doug Downey. TextJoiner: On-demand Information Extraction with Multi-Pattern Queries PDF
  • Benjamin Roth, Emma Strubell, Katherine Silverstein and Andrew McCallum. Minimally Supervised Event Argument Extraction using Universal Schema PDF
  • Mathias Niepert and Sameer Singh. Out of Many, One: Unifying Web-Extracted Knowledge Bases PDF
  • Laura Dietz, Michael Schuhmacher and Simone Paolo Ponzetto. Queripidia: Query-specific Wikipedia Construction PDF
  • Jay Pujara and Lise Getoor. Building Dynamic Knowledge Graphs PDF 

NIPS2014 - Out of the Box: Robustness in High Dimension

 
Session 1
========
8:30-9:15 Pradeep Ravikumar, "Dirty Statistical Models"
9:15-10:00 Donald Goldfarb, 
"Low-rank Matrix and Tensor Recovery: Theory and Algorithms"

Session 2
========
10:30-11:15 Alessandro Chiuso, "Robustness issues in Kernel Tuning: SURE vs. Marginal Likelihood"
11:15-12:00 Po-Ling Loh, "Local optima of nonconvex M-estimators"

Session 3 (contributed talks)
========
15:00-15:20 Vassilis Kalofolias, "Matrix Completion on Graphs"
15:20-15:40 Aleksandr Aravkin, "Learning sparse models using general robust losses"
15:40-16:00 Stephen Becker, "Robust Compressed Least Squares Regression"
16:00-16:20 Ahmet Iscen, "Exploiting high dimensionality for similarity search"

Session 4
========
17:00-17:45 Rina Foygel Barber, "Controlling the False Discovery Rate via Knockoffs" (joint with Emmanuel Candes)
17:45-18:30 Noureddine El Karoui, "On high-dimensional robust regression and inefficiency of maximum likelihood methods"
 Modern ML + NLP 2014

Learning Semantics 2014
 
Machine Reasoning & Artificial Intelligence
  • 08:30a Pedro Domingos, University of Washington, Symmetry-Based Semantic Parsing
  • 08:50a Tomas Mikolov, Facebook, Challenges in Development of Machine Intelligence
  • 09:10a Luke Zettlemoyer, University of Washington, Semantic Parsing for Knowledge Extraction
  • 09:30a Panel Discussion
Contributed Posters
  • 10:00a Contributed Posters, Coffee Break
Natural Language Processing & Semantics from Text Corpora 
Afternoon

Personal Assistants, Dialog Systems, and Question Answering
  • 03:00p Susan Hendrich, Microsoft Cortana
  • 03:20p Ashutosh Saxena, Cornell, Tell Me Dave: Context-Sensitive Grounding of Natural Language into Robotic Tasks
  • 03:40p Jason Weston, Facebook, Memory Networks
  • 04:00p Panel Discussion
Contributed Posters
  • 04:30p Contributed Posters, Coffee Break
Reasoning from Visual Scenes
  • 05:00p Alyosha Efros, UC Berkeley, Towards The Visual Memex
  • 05:20p Jeffrey Siskind, Purdue University, Learning to Ground Sentences in Video
  • 05:40p Larry Zitnick, Microsoft Research, Forget Reality: Learning from Visual Abstraction
  • 06:00p Panel Discussion

Contributed Posters


    Morning Session (10:00a-10:30a)


    Afternoon Session (4:10p-5:00p)

 
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
