So #NIPS2013 starts today with a set of tutorials and the workshops listed below. A few notes first. First, if you are prudish or at work, don't go watch the photos on Twitter (on your desktop) for the #NIPS2013 hashtag just yet! Second, for those of you in Paris next week, we'll have our 6th ML meetup. Third, Andrej Karpathy has a nicer way of viewing the NIPS proceedings. It is here.
Without further ado:
 eXtreme Classification 2013
 Advances in Machine Learning for Sensorimotor Control
 Big Learning: Advances in Algorithms and Data Management
 Crowdsourcing: Theory, Algorithms and Applications
 Deep Learning
 Discrete Optimization in Machine Learning: Connecting Theory and Practice
 Frontiers of Network Analysis: Methods, Models, and Applications
 High-dimensional Statistical Inference in the Brain
 Large Scale Matrix Analysis and Inference
 MLINI-13: Machine Learning and Interpretation in Neuroimaging
 Modern Nonparametric Methods in Machine Learning
 NIPS 2013 Workshop on Causality: Large-scale Experiment Design and Inference of Causal Mechanisms
 OPT2013: Optimization for Machine Learning
 Output Representation Learning
 Perturbations, Optimization, and Statistics
 Planning with Information Constraints for Control, Reinforcement Learning, Computational Neuroscience, Robotics and Games.
 Randomized Methods for Machine Learning
 What Difference Does Personalization Make?
 Acquiring and Analyzing the Activity of Large Neural Ensembles
 Probabilistic Models for Big Data
 Bayesian Optimization in Theory and Practice
 Constructive Machine Learning
 Data Driven Education
 Greedy Algorithms, Frank-Wolfe and Friends - A modern perspective
 Learning Faster From Easy Data
 Machine Learning for Clinical Data Analysis and Healthcare
 Machine Learning for Sustainability
 Machine Learning in Computational Biology
 Machine Learning Open Source Software: Towards Open Workflows
 Neural Information Processing Scaled for Bioacoustics: NIPS4B
 New Directions in Transfer and Multi-Task: Learning Across Domains and Tasks
 Resource-Efficient Machine Learning
 Topic Models: Computation, Application, and Evaluation
 Workshop on Spectral Learning
Here are a few papers I found interesting, but the whole electronic proceedings are here (the whole pdf is here):
 Faster Ridge Regression via the Subsampled Randomized Hadamard Transform Paramveer Dhillon, Dean P. Foster, Yichao Lu, Lyle Ungar
 Dropout Training as Adaptive Regularization Percy Liang, Stefan Wager, Sida Wang
 Generalized Denoising Auto-Encoders as Generative Models Pascal Vincent, Yoshua Bengio, Guillaume Alain, Li Yao
 Simultaneous Rectification and Alignment via Robust Recovery of Low-rank Tensors Yi Ma, Di Wang, Xiaoqin Zhang, Zhengyuan Zhou
 Bayesian optimization explains human active search Laurent Itti, Ali Borji
 When in Doubt, SWAP: High-Dimensional Sparse Recovery from Correlated Measurements Richard Baraniuk, Divyanshu Vats
 Robust Multimodal Graph Matching: Sparse Coding Meets Graph Matching Guillermo Sapiro, Marcelo Fiori, Pablo Muse, Pablo Sprechmann, Joshua T. Vogelstein
 New Subsampling Algorithms for Fast Least Squares Regression Paramveer Dhillon, Dean P. Foster, Yichao Lu, Lyle Ungar
 Understanding variable importances in forests of randomized trees Pierre Geurts, Gilles Louppe, Antonio Sutera, Louis Wehenkel
 Blind Calibration in Compressed Sensing using Message Passing Algorithms Francesco Caltagirone, Florent Krzakala, Christophe Schulke, Lenka Zdeborova
 Neural representation of action sequences: how far can a simple snippet-matching model take us? Thomas Serre, Tomaso Poggio, David Sheinberg, Jedediah M. Singer, Cheston Tan
 Beyond Pairwise: Provably Fast Algorithms for Approximate k-Way Similarity Search Anshumali Shrivastava, Ping Li
 Learning Multilevel Sparse Representations Fred A. Hamprecht, Ferran Diego Andilla
 Wavelets on Graphs via Deep Learning Leonidas Guibas, Raif Rustamov
 Designed Measurements for Vector Count Data Lawrence Carin, Liming Wang, Robert Calderbank, David Carlson, Miguel Rodrigues, David Wilcox
 Near-Optimal Entrywise Sampling for Data Matrices Dimitris Achlioptas, Zohar S. Karnin, Edo Liberty
 Approximate Dynamic Programming Finally Performs Well in the Game of Tetris Bruno Scherrer, Mohammad Ghavamzadeh, Victor Gabillon
 The Randomized Dependence Coefficient Philipp Hennig, David Lopez-Paz, Bernhard Schölkopf
 Provable Subspace Clustering: When LRR meets SSC Chenlei Leng, Huan Xu, Yu-Xiang Wang
 Generalized Random Utility Models with Multiple Types David C. Parkes, Hossein Azari Soufiani, Hansheng Diao, Zhenyu Lai
 Polar Operators for Structured Sparse Estimation Dale Schuurmans, Xinhua Zhang, Yao-Liang Yu
 On Decomposing the Proximal Map Yao-Liang Yu
 More data speeds up training time in learning halfspaces over sparse vectors Amit Daniely, Nati Linial, Shai Shalev-Shwartz
 Causal Inference on Time Series using Restricted Structural Equation Models Dominik Janzing, Jonas Peters, Bernhard Schölkopf
 Deep Fisher Networks for Large-Scale Image Classification Andrea Vedaldi, Andrew Zisserman, Karen Simonyan
 Sparse Additive Text Models with Low Rank Background Lei Shi
 Variance Reduction for Stochastic Gradient Optimization Chong Wang, Xi Chen, Alex Smola, Eric Xing
 Training and Analysing Deep Recurrent Neural Networks Benjamin Schrauwen, Michiel Hermans
 Decision Jungles: Compact and Rich Models for Classification Antonio Criminisi, John Winn, Pushmeet Kohli, Sebastian Nowozin, Toby Sharp, Jamie Shotton
 Actor-Critic Algorithms for Risk-Sensitive MDPs Mohammad Ghavamzadeh, Prashanth L.A.
 One-shot learning and big data with n=2 Dean P. Foster, Lee H. Dicker
 Variational Inference for Mahalanobis Distance Metrics in Gaussian Process Regression Miguel Lazaro-Gredilla, Michalis Titsias
 Optimal Neural Population Codes for High-dimensional Stimulus Variables Alan Stocker, Daniel Lee, Zhuo Wang
 Accelerating Stochastic Gradient Descent using Predictive Variance Reduction Rie Johnson, Tong Zhang
 Using multiple samples to learn mixture models Jason Lee, Rich Caruana, Ran Gilad-Bachrach
 Learning Hidden Markov Models from Non-sequence Data via Tensor Decomposition Tzu-Kuo Huang, Jeff Schneider
 Accelerated Mini-Batch Stochastic Dual Coordinate Ascent Shai Shalev-Shwartz, Tong Zhang
 Online Robust PCA via Stochastic Optimization Huan Xu, Shuicheng Yan, Jiashi Feng
 A Scalable Approach to Probabilistic Latent Space Inference of Large-Scale Networks Junming Yin, Qirong Ho, Eric Xing
 Correlated random features for fast semi-supervised learning David Balduzzi, Joachim Buhmann, Brian McWilliams
 Better Approximation and Faster Algorithm Using the Proximal Average Yao-Liang Yu
 Rapid Distance-Based Outlier Detection via Sampling Karsten Borgwardt, Mahito Sugiyama
 Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima Martin J. Wainwright, Po-Ling Loh
 Auditing: Active Learning with Outcome-Dependent Query Costs Nati Srebro, Sivan Sabato, Anand D. Sarwate
 A message-passing algorithm for multi-agent trajectory planning Jonathan S. Yedidia, Javier Alonso-Mora, Jose Bento, Nate Derbinsky
 Learning Stochastic Feedforward Neural Networks Ruslan Salakhutdinov, Yichuan Tang
 Inferring neural population dynamics from multiple partial recordings of the same neural circuit Lars Buesing, Henry Dalgleish, Michael Hausser, Jakob Macke, Adam M. Packer, Noah Pettit, Srini Turaga
 Multi-Prediction Deep Boltzmann Machines Ian Goodfellow, Yoshua Bengio, Aaron Courville, Mehdi Mirza
 Higher Order Priors for Joint Intrinsic Image, Objects, and Attributes Estimation Carsten Rother, Philip Torr, Vibhav Vineet
 Learning Trajectory Preferences for Manipulators via Iterative Improvement Ashutosh Saxena, Thorsten Joachims, Ashesh Jain, Brian Wojcik
 Large Scale Distributed Sparse Precision Estimation Arindam Banerjee, Huahua Wang, Inderjit Dhillon, Cho-Jui Hsieh, Pradeep Ravikumar
 On Algorithms for Sparse Multi-factor NMF Siwei Lyu, Xin Wang
 Dirty Statistical Models Eunho Yang, Pradeep Ravikumar
 Structured Learning via Logistic Regression Justin Domke
 Reinforcement Learning in Robust Markov Decision Processes Huan Xu, Shie Mannor, Shiau Hong Lim
 On the Linear Convergence of the Proximal Gradient Method for Trace Norm Regularization Ke Hou, Zhi-Quan Luo, Anthony Man-Cho So, Zirui Zhou
 Recurrent networks of coupled Winner-Take-All oscillators for solving constraint satisfaction problems Giacomo Indiveri, Hesham Mostafa, Lorenz K. Mueller
 Latent Structured Active Learning Alex Schwing, Raquel Urtasun, Wenjie Luo
 A Gang of Bandits Claudio Gentile, Nicolò Cesa-Bianchi, Giovanni Zappella
 Learning Feature Selection Dependencies in Multi-task Learning Daniel Hernández-Lobato, José Miguel Hernández-Lobato
 Online PCA for Contaminated Data Huan Xu, Shie Mannor, Shuicheng Yan, Jiashi Feng
 Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n) Eric Moulines, Francis Bach
 Efficient Algorithm for Privately Releasing Smooth Queries Liwei Wang, Kai Fan, Ziteng Wang, Jiaqi Zhang
 Unsupervised Spectral Learning of Finite State Transducers Ariadna Quattoni, Xavier Carreras, Raphael Bailly
 Learning a Deep Compact Image Representation for Visual Tracking Dit-Yan Yeung, Naiyan Wang
 Robust Data-Driven Dynamic Programming Grani Adiwena Hanasusanto, Daniel Kuhn
 Low-Rank Matrix and Tensor Completion via Adaptive Sampling Aarti Singh, Akshay Krishnamurthy
 Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms François Caron, Marie Chavent, Adrien Todeschini
 Distributed Exploration in Multi-Armed Bandits Tomer Koren, Eshcar Hillel, Zohar S. Karnin, Ronny Lempel, Oren Somekh
 The Pareto Regret Frontier Wouter M. Koolen
 Direct 0-1 Loss Minimization and Margin Maximization with Boosting Ming Tan, Shaojun Wang, Tian Xia, Shaodan Zhai
 Regret based Robust Solutions for Uncertain Markov Decision Processes Yossiri Adulyasak, Asrar Ahmed, Patrick Jaillet, Pradeep Varakantham
 Supervised Sparse Analysis and Synthesis Operators Guillermo Sapiro, Tal Ben Yakar, Alexander M. Bronstein, Roee Litman, Pablo Sprechmann
 Low-rank matrix reconstruction and clustering via approximate message passing Toshiyuki Tanaka, Ryosuke Matsushita
 Reasoning With Neural Tensor Networks for Knowledge Base Completion Christopher D. Manning, Richard Socher, Danqi Chen, Andrew Ng
 Zero-Shot Learning Through Cross-Modal Transfer Christopher D. Manning, Richard Socher, Milind Ganjoo, Andrew Ng
 Estimating LASSO Risk and Noise Level Andrea Montanari, Mohsen Bayati, Murat A. Erdogdu
 Learning Adaptive Value of Information for Structured Prediction Ben Taskar, David J. Weiss
 Efficient Online Inference for Bayesian Nonparametric Relational Models Prem Gopalan, David Blei, Dae Il Kim, Erik Sudderth
 Approximate inference in latent Gaussian-Markov models from continuous time observations Botond Cseke, Guido Sanguinetti, Manfred Opper
 Linear Convergence with Condition Number Independent Access of Full Gradients Mehrdad Mahdavi, Rong Jin, Lijun Zhang
 Robust Spatial Filtering with Beta Divergence Klaus-Robert Müller, Motoaki Kawanabe, Duncan Blythe, Wojciech Samek
 Convex Relaxations for Permutation Problems Rodolphe Jenatton, Francis Bach, Alexandre D'Aspremont, Fajwel Fogel
 High-Dimensional Gaussian Process Bandits Andreas Krause, Volkan Cevher, Josip Djolonga
 A memory frontier for complex synapses Surya Ganguli, Subhaneil Lahiri
 A Comparative Framework for Preconditioned Lasso Algorithms Fabian L. Wauthier, Nebojsa Jojic, Michael Jordan
 Lasso Screening Rules via Dual Polytope Projection Jiayu Zhou, Jieping Ye, Peter Wonka, Jie Wang
 Efficient Optimization for Sparse Gaussian Process Regression Aaron Hertzmann, Marcus A. Brubaker, Yanshuai Cao, David Fleet
 Lexical and Hierarchical Topic Regression Jordan Boyd-Graber, Viet-An Nguyen, Philip Resnik
 Stochastic Convex Optimization with Multiple Objectives Mehrdad Mahdavi, Rong Jin, Tianbao Yang
 A Kernel Test for Three-Variable Interactions Arthur Gretton, Dino Sejdinovic, Wicher Bergsma
 Robust Transfer Principal Component Analysis with Rank Constraints Yuhong Guo
 Online Learning with Switching Costs and Other Adaptive Adversaries Ofer Dekel, Ohad Shamir, Nicolò Cesa-Bianchi
 Learning Prices for Repeated Auctions with Strategic Buyers Afshin Rostamizadeh, Umar Syed, Kareem Amin
 Probabilistic Principal Geodesic Analysis P.T. Fletcher, Miaomiao Zhang
 Confidence Intervals and Hypothesis Testing for High-Dimensional Statistical Models Adel Javanmard, Andrea Montanari
 Learning with Noisy Labels Ambuj Tewari, Inderjit Dhillon, Nagarajan Natarajan, Pradeep Ravikumar
 Tracking Time-varying Graphical Structure David Danks, Erich Kummerfeld
 Online Learning with Costly Features and Labels András György, Russell Greiner, Gabor Bartok, Csaba Szepesvari, Navid Zolghadr
 Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions Liam Paninski, Eftychios A. Pnevmatikakis
 A Novel Two-Step Method for Cross Language Representation Learning Yuhong Guo, Min Xiao
 Statistical Active Learning Algorithms Maria-Florina Balcan, Vitaly Feldman
 Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits Liam Paninski, Brooks Paige, Ari Pakman, Ben Shababo
 Reflection methods for user-friendly submodular optimization Stefanie Jegelka, Suvrit Sra, Francis Bach
 Unsupervised Structure Learning of Stochastic And-Or Grammars Maria Pavlovskaia, Kewei Tu, Song-Chun Zhu
 Convex Tensor Decomposition via Structured Schatten Norm Regularization Ryota Tomioka, Taiji Suzuki
 Stochastic Ratio Matching of RBMs for Sparse High-Dimensional Inputs Yoshua Bengio, Yann Dauphin
 A Deep Architecture for Matching Short Texts Hang Li, Zhengdong Lu
 Reservoir Boosting: Between Online and Offline Ensemble Learning Leonidas Lefakis, François Fleuret
 Multiclass Total Variation Clustering David Uminsky, Thomas Laurent, Xavier Bresson, James von Brecht
 Approximate Inference in Continuous Determinantal Processes Ben Taskar, Raja Hafiz Affandi, Emily Fox
 Global Solver and Its Efficient Approximation for Variational Bayesian Low-rank Subspace Clustering Ichiro Takeuchi, Masashi Sugiyama, Shinichi Nakajima, S. Derin Babacan, Akiko Takeda
 Thompson Sampling for 1-Dimensional Exponential Family Bandits Emilie Kaufmann, Nathaniel Korda, Remi Munos
 It is all in the noise: Efficient multi-task Gaussian process inference with structured residuals Christoph Lippert, Oliver Stegle, Karsten Borgwardt, Barbara Rakitsch
 Convex Calibrated Surrogates for Low-Rank Loss Matrices with Applications to Subset Ranking Losses Ambuj Tewari, Harish G. Ramaswamy, Shivani Agarwal
 Inverse Density as an Inverse Problem: the Fredholm Equation Approach Mikhail Belkin, Qichao Que
 Robust Image Denoising with Multi-Column Deep Neural Networks Honglak Lee, Forest Agostinelli, Michael R. Anderson
 EDML for Learning Parameters in Directed and Undirected Graphical Models Adnan Darwiche, Arthur Choi, Khaled Refaat
 Similarity Component Analysis Fei Sha, Soravit Changpinyo, Kuan Liu
 Approximate Bayesian Image Interpretation using Generative Probabilistic Graphics Programs Tejas D. Kulkarni, Vikash Mansinghka, Yura N. Perov, Josh Tenenbaum
 Local Privacy and Minimax Bounds: Sharp Rates for Probability Estimation Martin J. Wainwright, John Duchi, Michael Jordan
 Firing rate predictions in optimal balanced networks Christian K. Machens, Sophie Denève, David G. Barrett
 Manifoldbased Similarity Adaptation for Label Propagation Masayuki Karasuyama, Hiroshi Mamitsuka
 Non-Uniform Camera Shake Removal Using a Spatially-Adaptive Sparse Penalty David Wipf, Haichao Zhang
 Learning to Prune in Metric and Non-Metric Spaces Leonid Boytsov, Bilegsaikhan Naidan
 Online learning in episodic Markovian decision processes by relative entropy policy search Gergely Neu, Alexander Zimin
 Optimistic policy iteration and natural actor-critic: A unifying view and a non-optimality result Paul Wagner
 Bayesian Hierarchical Community Discovery Charles Blundell, Yee Whye Teh
 From Bandits to Experts: A Tale of Domination and Independence Claudio Gentile, Noga Alon, Yishay Mansour, Nicolò Cesa-Bianchi
 Predictive PAC Learning and Process Decompositions Cosma Shalizi, Aryeh Kontorovitch
 Pass-efficient unsupervised feature selection Crystal Maung, Haim Schweitzer
 Solving inverse problem of Markov chain with partial observations Takayuki Osogami, Tetsuro Morimura, Tsuyoshi Ide
 Mapping paradigm ontologies to and from the brain Gael Varoquaux, Yannick Schwartz, Bertrand Thirion
 Noise-Enhanced Associative Memories Lav R. Varshney, Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi
 Exact and Stable Recovery of Pairwise Interaction Tensors Shouyuan Chen, Irwin King, Michael R. Lyu, Zenglin Xu
 Perfect Associative Learning with Spike-Timing-Dependent Plasticity Christian Albers, Klaus Pawelzik, Maren Westkott
 On Poisson Graphical Models Eunho Yang, Zhandong Liu, Genevera I. Allen, Pradeep Ravikumar
 Streaming Variational Bayes Andre Wibisono, Nicholas Boyd, Tamara Broderick, Michael Jordan, Ashia C. Wilson
 Gaussian Process Conditional Copulas with Applications to Financial Time Series Daniel HernándezLobato, José Miguel HernándezLobato, James R. Lloyd
 Extracting regions of interest from biological images with convolutional sparse block coding Maneesh Sahani, Marius Pachitariu, Henry Dalgleish, Michael Hausser, Adam M. Packer, Noah Pettit
 DESPOT: Online POMDP Planning with Regularization Nan Ye, Wee Sun Lee, David Hsu, Adhiraj Somani
 Matrix Completion From any Given Set of Observations Troy Lee, Adi Shraibman
 Regression-tree Tuning in a Streaming Setting Francesco Orabona, Samory Kpotufe
 Multiscale Dictionary Learning for Estimating Conditional Distributions Francesca Petralia, David Dunson, Joshua T. Vogelstein
 Stochastic Optimization of PCA with Capped MSG Nati Srebro, Raman Arora, Andy Cotter
 Visual Concept Learning: Combining Machine Vision and Bayesian Generalization on Concept Hierarchies Joshua T. Abbott, Joseph Austerweil, Trevor Darrell, Thomas Griffiths, Yangqing Jia
 Robust Bloom Filters for Large Multi-Label Classification Tasks Nicolas Usunier, Patrick Gallinari, Thierry Artières, Moustapha M. Cisse
 Top-Down Regularization of Deep Belief Networks Matthieu Cord, Hanlin Goh, Joo-Hwee Lim, Nicolas Thome
 Learning Efficient Random Maximum A-Posteriori Predictors with Non-Decomposable Loss Functions Joseph Keshet, Tamir Hazan, Tommi Jaakkola, Subhransu Maji
 Machine Teaching for Bayesian Learners in the Exponential Family Xiaojin Zhu
 Scoring Workers in Crowdsourcing: How Many Control Questions are Enough? Alex Ihler, Mark Steyvers, Qiang Liu
 Action from Still Image Dataset and Inverse Optimal Control to Learn Task Specific Visual Scanpaths Cristian Sminchisescu, Stefan Mathe
 Robust Sparse Principal Component Regression under the High Dimensional Elliptical Model Fang Han, Han Liu
 Global MAP-Optimality by Shrinking the Combinatorial Search Area with Convex Relaxation Jörg Hendrik Kappes, Bogdan Savchynskyy, Christoph Schnörr, Paul Swoboda
 Near-optimal Anomaly Detection in Graphs using Lovasz Extended Scan Statistic Aarti Singh, Akshay Krishnamurthy, James L. Sharpnack
 Demixing odors - fast inference in olfaction Alexandre Pouget, Jeff Beck, Agnieszka Grabska-Barwinska, Peter Latham
 Learning Multiple Models via Regularized Weighting Huan Xu, Shie Mannor, Daniel Vainsencher
 When are Overcomplete Topic Models Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity Anima Anandkumar, Sham M. Kakade, Daniel Hsu, Majid Janzamin
 Learning with Invariance via Linear Functionals on Reproducing Kernel Hilbert Space Wee Sun Lee, Xinhua Zhang, Yee Whye Teh
 Distributed Submodular Maximization: Identifying Representative Elements in Massive Data Andreas Krause, Amin Karbasi, Baharan Mirzasoleiman, Rik Sarkar
 Adaptive Market Making via Online Learning Satyen Kale, Jacob Abernethy
 On the Sample Complexity of Subspace Learning Lorenzo Rosasco, Guillermo D. Canas, Alessandro Rudi
 Embed and Project: Discrete Sampling with Universal Hashing Ashish Sabharwal, Bart Selman, Carla P. Gomes, Stefano Ermon
 Discriminative Transfer Learning with Tree-based Priors Nitish Srivastava, Ruslan Salakhutdinov
 DeViSE: A Deep Visual-Semantic Embedding Model Andrea Frome, Samy Bengio, Greg S. Corrado, Jeff Dean, Tomas Mikolov, Marc'Aurelio Ranzato, Jon Shlens
 Minimax Theory for High-dimensional Gaussian Mixtures with Sparse Mean Separation Aarti Singh, Larry Wasserman, Martin Azizyan
 Predicting Parameters in Deep Learning Nando de Freitas, Misha Denil, Laurent Dinh, Marc'Aurelio Ranzato, Babak Shakibi
 Estimating the Unseen: Improved Estimators for Entropy and other Properties Gregory Valiant, Paul Valiant
 What do row and column marginals reveal about your dataset? John Byers, Behzad Golshan, Evimaria Terzi
 RNADE: The real-valued neural autoregressive density-estimator Hugo Larochelle, Iain Murray, Benigno Uria
 Two-Target Algorithms for Infinite-Armed Bandits with Bernoulli Rewards Thomas Bonald, Alexandre Proutiere
 Reconciling "priors" & "priors" without prejudice? Remi Gribonval, Pierre Machart
 Sparse Overlapping Sets Lasso for Multitask Learning and its Application to fMRI Analysis Timothy T. Rogers, Christopher Cox, Rob Nowak, Nikhil Rao
 Sensor Selection in High-Dimensional Gaussian Trees with Nuisances Jonathan P. How, Daniel S. Levine
 Sequential Transfer in Multi-armed Bandit with Finite Set of Models Alessandro Lazaric, Emma Brunskill, Mohammad Gheshlaghi Azar
 Buy-in-Bulk Active Learning Liu Yang, Jaime Carbonell
 Contrastive Learning Using Spectral Methods David C. Parkes, Ryan P. Adams, Daniel Hsu, James Y. Zou
 Sparse Inverse Covariance Estimation with Calibration Han Liu, Tuo Zhao
 Stochastic Majorization-Minimization Algorithms for Large-Scale Optimization Julien Mairal
 Sinkhorn Distances: Lightspeed Computation of Optimal Transportation Marco Cuturi
 Speedup Matrix Completion with Side Information: Application to Multi-Label Learning Rong Jin, Miao Xu, Zhi-Hua Zhou
 Compete to Compute Jürgen Schmidhuber, Faustino Gomez, Sohrob Kazerounian, Jonathan Masci, Rupesh K. Srivastava
 Information-theoretic lower bounds for distributed statistical estimation with communication constraints Martin J. Wainwright, Yuchen Zhang, John Duchi, Michael Jordan
 Projected Natural Actor-Critic Philip S. Thomas, Sridhar Mahadevan, William C. Dabney, Stephen Giguere
 How to Hedge an Option Against an Adversary: Black-Scholes Pricing is Minimax Optimal Andre Wibisono, Jacob Abernethy, Peter Bartlett, Rafael Frongillo
 Discovering Hidden Variables in Noisy-Or Networks using Quartet Tests David Sontag, Yonatan Halpern, Yacine Jernite
 Error-Minimizing Estimates and Universal Entry-Wise Error Bounds for Low-Rank Matrix Completion Franz Kiraly, Louis Theran
 Learning the Local Statistics of Optical Flow Yair Weiss, Dan Rosenbaum, Daniel Zoran
 Aggregating Optimistic Planning Trees for Solving Markov Decision Processes Raphael Fonteneau, Gunnar Kedenburg, Remi Munos
 Robust learning of low-dimensional dynamics from large neural ensembles Liam Paninski, David Pfau, Eftychios A. Pnevmatikakis
 Estimation Bias in Multi-Armed Bandit Algorithms for Search Advertising Min Xu, Tie-Yan Liu, Tao Qin
 Action is in the Eye of the Beholder: Eye-gaze Driven Model for Spatio-Temporal Action Localization Greg Mori, Leonid Sigal, Michalis Raptis, Nataliya Shapovalova
 A* Lasso for Learning a Sparse Bayesian Network Structure for Continuous Variables Seyoung Kim, Jing Xiang
 The Total Variation on Hypergraphs - Learning on Hypergraphs Revisited Matthias Hein, Simon Setzer, Leonardo Jost, Syama Sundar Rangapuram
 Submodular Optimization with Submodular Cover and Submodular Knapsack Constraints Jeff A. Bilmes, Rishabh K. Iyer
 Synthesizing Robust Plans under Incomplete Domain Models Minh Do, Subbarao Kambhampati, Tuan A. Nguyen
 Symbolic Opportunistic Policy Iteration for Factored-Action MDPs Prasad Tadepalli, Roni Khardon, Alan Fern, Aswin Raghavan
 One-shot learning by inverting a compositional causal process Ruslan Salakhutdinov, Brenden M. Lake, Josh Tenenbaum
 Statistical analysis of coupled time series with Kernel Cross-Spectral Density operators Michel Besserve, Nikos K. Logothetis, Bernhard Schölkopf
 Fast Algorithms for Gaussian Noise Invariant Independent Component Analysis Mikhail Belkin, Luis Rademacher, James R. Voss
 Deep Neural Networks for Object Detection Dumitru Erhan, Christian Szegedy, Alexander Toshev
 Geometric optimisation on positive definite matrices for elliptically contoured distributions Suvrit Sra, Reshad Hosseini
 Sign Cauchy Projections and Chi-Square Kernel Ping Li, John Hopcroft, Gennady Samorodnitsky
 Relevance Topic Model for Unstructured Social Group Activity Recognition Yongzhen Huang, Tieniu Tan, Liang Wang, Fang Zhao
 k-Prototype Learning for 3D Rigid Structures Jinhui Xu, Ronald Berezney, Hu Ding
 Forgetful Bayes and myopic planning: Human learning and decision-making in a bandit setting Angela J. Yu, Shunan Zhang
 Probabilistic Movement Primitives Gerhard Neumann, Christian Daniel, Alexandros Paraschos, Jan Peters
 Policy Shaping: Integrating Human Feedback with Reinforcement Learning Shane Griffith, Charles Isbell, Jonathan Scholz, Kaushik Subramanian, Andrea L. Thomaz
 Multilinear Dynamical Systems for Tensor Time Series Lei Li, Mark Rogers, Stuart Russell
 Deep content-based music recommendation Benjamin Schrauwen, Sander Dieleman, Aaron van den Oord
 A Stability-based Validation Procedure for Differentially Private Machine Learning Kamalika Chaudhuri, Staal A. Vinterbo
 Capacity of strong attractor patterns to model behavioural and cognitive prototypes Abbas Edalat
 Fantope Projection and Selection: A near-optimal convex relaxation of sparse PCA Jing Lei, Vincent Q. Vu, Juhee Cho, Karl Rohe
 Cluster Trees on Manifolds Aarti Singh, Alessandro Rinaldo, Larry Wasserman, Sivaraman Balakrishnan, Srivatsan Narayanan
 Bayesian inference for low rank spatiotemporal neural receptive fields Mijung Park, Jonathan Pillow
 Adaptive Submodular Maximization in Bandit Setting Branislav Kveton, Brian Eriksson, Victor Gabillon, S. Muthukrishnan, Zheng Wen
 Analyzing Hogwild Parallel Gaussian Gibbs Sampling Matthew Johnson, James Saunderson, Alan Willsky
 Minimax Optimal Algorithms for Unconstrained Linear Optimization Jacob Abernethy, Brendan McMahan
 (Nearly) Optimal Algorithms for Private Online Learning in Full-information and Bandit Settings Abhradeep Guha Thakurta, Adam Smith
 Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions Jeff A. Bilmes, Stefanie Jegelka, Rishabh K. Iyer
 Learning Kernels Using Local Rademacher Complexity Corinna Cortes, Marius Kloft, Mehryar Mohri
 Annealing between distributions by averaging moments Ruslan Salakhutdinov, Roger B. Grosse, Chris J. Maddison
 Optimizing Instructional Policies Harold Pashler, Michael Mozer, William J. Huggins, Robert Lindsey
 Translating Embeddings for Modeling Multi-relational Data Antoine Bordes, Jason Weston, Nicolas Usunier, Alberto Garcia-Duran, Oksana Yakhnenko
 Phase Retrieval using Alternating Minimization Sujay Sanghavi, Prateek Jain, Praneeth Netrapalli
 Real-Time Inference for a Gamma Process Model of Neural Spiking Lawrence Carin, Vinayak Rao, David Carlson, Joshua T. Vogelstein
 Understanding Dropout Pierre Baldi, Peter J. Sadowski
 The Power of Asymmetry in Binary Hashing Nati Srebro, Ruslan Salakhutdinov, Yury Makarychev, Behnam Neyshabur, Payman Yadollahpour
 Estimation, Optimization, and Parallelism when Data is Sparse John Duchi, Michael Jordan, Brendan McMahan
 A multi-agent control framework for co-adaptation in brain-computer interfaces Liam Paninski, Tony Jebara, Roy Fox, Josh S. Merel
 Modeling Overlapping Communities with Node Popularities Chong Wang, Prem Gopalan, David Blei
 Learning from Limited Demonstrations Doina Precup, Joelle Pineau, Amir massoud Farahmand, Beomjoon Kim
 Memory Limited, Streaming PCA Constantine Caramanis, Prateek Jain, Ioannis Mitliagkas
 An Approximate, Efficient LP Solver for LP Rounding Christopher Re, Ji Liu, Stephen Wright, Victor Bittorf, Srikrishna Sridhar, Ce Zhang
 Bayesian inference as iterated random functions with applications to sequential inference in graphical models Xuanlong Nguyen, Arash Amini
 Compressive Feature Learning Trevor Hastie, John C. Mitchell, Hristo S. Paskov, Robert West
 Moment-based Uniform Deviation Bounds for k-means and Friends Sanjoy Dasgupta, Matus Telgarsky
 Fast Template Evaluation with Vector Quantization David Forsyth, Mohammad Amin Sadeghi
 Context-sensitive active sensing in humans Angela J. Yu, Sheeraz Ahmad, He Huang
 A New Convex Relaxation for Tensor Completion Massimiliano Pontil, Bernardino RomeraParedes
 Variational Planning for Graph-based MDPs Alex Ihler, Qiang Cheng, Qiang Liu, Feng Chen
 Convex Two-Layer Modeling Dale Schuurmans, Xinhua Zhang, Özlem Aslan, Hao Cheng
 Sketching Structured Matrices for Faster Nonlinear Regression Vikas Sindhwani, Haim Avron, David Woodruff
 (More) Efficient Reinforcement Learning via Posterior Sampling Ian Osband, Dan Russo, Benjamin Van Roy
 Model Selection for High-Dimensional Regression under the Generalized Irrepresentability Condition Adel Javanmard, Andrea Montanari
 Efficient Exploration and Value Function Generalization in Deterministic Systems Benjamin Van Roy, Zheng Wen
 Bellman Error Based Feature Generation using Random Projections on Sparse Spaces Doina Precup, Joelle Pineau, Amir massoud Farahmand, Yuri Grinberg, Mahdi Milani Fard
 Learning invariant representations and applications to face verification Joel Z. Leibo, Tomaso Poggio, Qianli Liao
 Optimization, Learning, and Games with Predictable Sequences Karthik Sridharan, Sasha Rakhlin
 Adaptivity to Local Smoothness and Dimension in Kernel Regression Samory Kpotufe, Vikas Garg
 Adaptive dropout for training deep neural networks Jimmy Ba, Brendan Frey
 Hierarchical Modular Optimization of Convolutional Networks Achieves Representations Similar to Macaque IT and Human Ventral Stream Charles Cadieu, James J. DiCarlo, Ha Hong, Daniel L. Yamins
 Distributed Representations of Words and Phrases and their Compositionality Ilya Sutskever, Kai Chen, Greg S. Corrado, Jeff Dean, Tomas Mikolov
 Regularized Spectral Clustering under the Degree-Corrected Stochastic Blockmodel Tai Qin, Karl Rohe
 Analyzing the Harmonic Structure in Graph-Based Learning Zhenguo Li, Shih-Fu Chang, Xiao-Ming Wu
 Recurrent linear models of simultaneously-recorded neural populations Biljana Petreska, Maneesh Sahani, Marius Pachitariu
 BIG & QUIC: Sparse Inverse Covariance Estimation for a Million Variables Inderjit Dhillon, ChoJui Hsieh, Russell Poldrack, Pradeep Ravikumar, Matyas A. Sustik
 The Fast Convergence of Incremental PCA Sanjoy Dasgupta, Yoav Freund, Akshay Balsubramani
 Multisensory Encoding, Decoding, and Identification Aurel A. Lazar, Yevgeniy Slutskiy
 Optimal integration of visual speed across different spatiotemporal frequency channels Alan Stocker, Matjaz Jogan
 Matrix factorization with binary components Martin Slawski, Matthias Hein, Pavlo Lutsik
 Learning to Pass Expectation Propagation Messages Daniel Tarlow, John Winn, Nicolas Heess
 Robust Low Rank Kernel Embeddings of Multivariate Distributions Le Song, Bo Dai
 Mu Li, Li Zhou, Zichao Yang, Aaron Li, Fei Xia, David Andersen and Alexander Smola.
Parameter Server for Distributed Machine Learning
We propose a parameter server framework to solve distributed machine learning problems. Both data and workload are distributed into client nodes, while server nodes maintain globally shared parameters, which are represented as sparse vectors and matrices. The framework manages asynchronous data communications between clients and servers. Flexible consistency models, elastic scalability and fault tolerance are supported by this framework. We present algorithms and theoretical analysis for challenging nonconvex and nonsmooth problems. To demonstrate the scalability of the proposed framework, we show experimental results on real data with billions of parameters.
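The push/pull pattern the abstract describes can be sketched in a few lines. This is an illustrative single-process toy (one server shard, made-up method names, a synchronous-looking loop with interleaved workers), not the paper's actual API:

```python
import threading
from collections import defaultdict

# Toy parameter server: the server holds a shared sparse parameter
# vector; each worker pulls the keys it needs, computes a local
# gradient on its own data partition, and pushes the update back.
class ParameterServer:
    def __init__(self):
        self._params = defaultdict(float)   # sparse: only touched keys stored
        self._lock = threading.Lock()

    def pull(self, keys):
        with self._lock:
            return {k: self._params[k] for k in keys}

    def push(self, grads, lr=0.1):
        # Updates are applied in whatever order they arrive,
        # mimicking asynchronous communication.
        with self._lock:
            for k, g in grads.items():
                self._params[k] -= lr * g

def worker(server, data):
    # Each worker sees only its own (key, target) pairs.
    for key, target in data:
        w = server.pull([key])[key]
        grad = w - target                   # gradient of 0.5 * (w - target)^2
        server.push({key: grad})

server = ParameterServer()
partitions = [[("a", 1.0)] * 50, [("b", -2.0)] * 50]   # two "client nodes"
threads = [threading.Thread(target=worker, args=(server, p)) for p in partitions]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(server.pull(["a", "b"]))   # each key converges toward its target
```

A real deployment would shard keys across many server nodes and batch the network traffic; the point here is only the pull/compute/push division of labor between clients and servers.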
PDF  Yarin Gal and Zoubin Ghahramani.
Pitfalls in the use of Parallel Inference for the Dirichlet Process
Recent work by Lovell, Adams, and Mansinghka [2012] and Williamson, Dubey, and Xing [2013] has suggested an alternative parametrisation for the Dirichlet process in order to derive non-approximate parallel MCMC inference for it. This approach to parallelisation has been picked up and implemented in several different fields [Chahuneau et al., 2013; Pan et al., 2013]. In this paper we show that the suggested approach is impractical due to an extremely unbalanced distribution of the data. We characterise the requirements of efficient parallel inference for the Dirichlet process and show that the proposed inference fails most of these conditions (while approximate approaches often satisfy most of them). We present both theoretical and experimental evidence for this, analysing the load balance of the inference and showing that it is independent of the size of the dataset and the number of nodes available in the parallel implementation. We end with preliminary suggestions of alternative research directions for efficient non-approximate parallel inference for the Dirichlet process.
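The load-imbalance argument can be checked with a quick simulation: cluster sizes under a Chinese restaurant process are heavily skewed, so a cluster-per-node parallelisation places a large constant fraction of the data on a single node. The concentration parameter `alpha` and the sample sizes below are assumed toy values:

```python
# Simulate CRP cluster sizes and measure the share of data held by the
# largest cluster, which a cluster-per-node scheme would put on one node.
import random

def crp_sizes(n, alpha, rng):
    sizes = []
    for i in range(n):
        # open a new table with prob alpha/(i+alpha), else join an
        # existing table with probability proportional to its size
        if rng.random() < alpha / (i + alpha):
            sizes.append(1)
        else:
            r = rng.random() * i
            acc = 0
            for j, s in enumerate(sizes):
                acc += s
                if r < acc:
                    sizes[j] += 1
                    break
    return sizes

rng = random.Random(0)
shares = []
for _ in range(20):
    sizes = crp_sizes(5000, alpha=1.0, rng=rng)
    shares.append(max(sizes) / 5000)
avg_share = sum(shares) / len(shares)
# the largest cluster typically holds a large constant fraction of the data,
# regardless of n -- the imbalance the paper points out
```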
PDF  Yingyu Liang, Maria-Florina Balcan and Vandana Kanchanapally.
Distributed PCA and k-Means Clustering
This paper proposes a distributed PCA algorithm with the theoretical guarantee that any good approximate solution for k-means clustering on the projected data is also a good approximation on the original data, while the projected dimension required is independent of the original dimension. When combined with the distributed coreset-based clustering approach in [3], this leads to an algorithm in which the number of vectors communicated is independent of both the size and the dimension of the original data. Our experimental results demonstrate the effectiveness of the algorithm.
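A serial, numpy-only caricature of the project-then-cluster pipeline (the distributed communication and coreset machinery are omitted; the blob data and the deterministic k-means initialization are our assumptions):

```python
# Project to d' dimensions via SVD-based PCA, then run Lloyd's k-means
# on the projected data; d' does not depend on the original dimension.
import numpy as np

def pca_project(X, d):
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T            # n x d projected data

def kmeans(X, k, iters=20):
    # deterministic init from evenly spaced points (a real system would
    # use k-means++ or random restarts)
    idx = np.linspace(0, len(X) - 1, k).astype(int)
    centers = X[idx].astype(float)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers

rng = np.random.default_rng(1)
# two well-separated blobs embedded in 50 dimensions
A = rng.normal(0, 1, (100, 50)); A[:, 0] += 20
B = rng.normal(0, 1, (100, 50)); B[:, 0] -= 20
X = np.vstack([A, B])
Z = pca_project(X, d=2)             # projected dimension independent of 50
labels, _ = kmeans(Z, k=2)
```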
PDF  Julien-Charles Lévesque, Christian Gagné and Robert Sabourin.
Ensembles of Budgeted Kernel Support Vector Machines for Parallel Large Scale Learning
In this work, we propose to combine multiple budgeted kernel support vector machines (SVMs) trained with stochastic gradient descent (SGD) in order to exploit large databases and parallel computing resources. The variance induced by the budget restrictions of the kernel SVMs is reduced through averaging of predictions, resulting in better generalization performance. The variance across trainings also yields diverse predictions, which helps explain the improved performance. Finally, the proposed method is intrinsically parallel, so parallel computing resources can be exploited in a straightforward manner.
PDF  Zhen Qin, Vaclav Petricek, Nikos Karampatziakis, Lihong Li and John Langford.
Efficient Online Bootstrapping for Large Scale Learning
Bootstrapping is a useful technique for estimating the uncertainty of a predictor, for example via confidence intervals for predictions. It is typically used on small to moderately sized datasets, due to its high computational cost. This work describes a highly scalable online bootstrapping strategy, implemented inside Vowpal Wabbit, that is several times faster than traditional strategies. Our experiments indicate that, in addition to providing a black-box-like method for estimating uncertainty, our implementation of online bootstrapping may also help train models with better prediction performance, due to model averaging.
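One standard way to make bootstrapping online, which the strategy above resembles (Oza and Russell's online bagging), is to weight each incoming example by an independent Poisson(1) draw per replicate, approximating sampling with replacement in one pass. The tiny mean estimator below is illustrative only, not the Vowpal Wabbit implementation:

```python
# Online bootstrap sketch: B replicates of a trivial model (a running
# mean), each weighting every example by an independent Poisson(1) draw.
import random

def poisson1(rng):
    # Knuth's method for Poisson with mean 1
    L = 0.36787944117144233          # exp(-1)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(42)
data_rng = random.Random(7)
data = [data_rng.gauss(5, 2) for _ in range(2000)]

B = 30
sums = [0.0] * B
counts = [0] * B
for x in data:
    for b in range(B):
        w = poisson1(rng)            # replicate-specific example weight
        sums[b] += w * x
        counts[b] += w
means = [s / c for s, c in zip(sums, counts)]
center = sum(means) / B
spread = (sum((m - center) ** 2 for m in means) / B) ** 0.5
# `spread` approximates the standard error of the mean estimate
```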
PDF  Arun Kumar, Nikos Karampatziakis, Paul Mineiro, Markus Weimer and Vijay Narayanan.
Distributed and Scalable PCA in the Cloud
Principal Component Analysis (PCA) is a popular technique with many applications. Recent randomized PCA algorithms scale to large datasets but face a bottleneck when the number of features is also large. We propose to mitigate this issue using a composition of structured and unstructured randomness within a randomized PCA algorithm. Initial experiments using a large graph dataset from Twitter show promising results. We demonstrate the scalability of our algorithm by implementing it both on Hadoop and on a more flexible platform named REEF.
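A numpy sketch of the randomized-PCA building block referred to above: a Halko-style range finder with plain Gaussian (unstructured) randomness. The structured/unstructured composition the paper proposes is not shown:

```python
# Randomized PCA: sketch the matrix with a random test matrix,
# orthonormalise the sketch, and solve the small projected problem.
import numpy as np

def randomized_pca(A, k, oversample=5, seed=0):
    rng = np.random.default_rng(seed)
    Omega = rng.normal(size=(A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal range basis
    B = Q.T @ A                           # small (k+p) x n problem
    _, s, Vt = np.linalg.svd(B, full_matrices=False)
    return s[:k], Vt[:k]

rng = np.random.default_rng(1)
# an exactly rank-2 test matrix, so the sketch captures the range exactly
A = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 50))
s, Vt = randomized_pca(A, k=2)
s_exact = np.linalg.svd(A, compute_uv=False)[:2]
```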
PDF  Nedim Lipka.
Towards Distributed Reinforcement Learning for Digital Marketing with Spark
A variety of problems in digital marketing can be modeled as Markov decision processes and solved by dynamic programming, with the goal of computing the policy that maximizes the expected discounted reward. Algorithms such as policy iteration require a state transition model and a reward model, which can be estimated from a given data set. In this paper, we compare the execution times for estimating the transition function in a MapReduce fashion as the data set grows large in the number of records and features. To this end, we create different-sized Spark and Hadoop clusters in the Amazon cloud computing environment. The in-memory cluster computing system Spark outperforms Hadoop, running up to 71% faster. Furthermore, we study the execution times of policy iteration on Spark clusters and show the reduction in execution time gained by increasing the number of instances in the cluster.
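Estimating the transition model is essentially a counting job; a minimal map-reduce-style sketch in plain Python (the states and actions are invented toy values, and no Spark API is shown):

```python
# Estimate P(s' | s, a) from (state, action, next_state) records by
# counting per key and normalising -- the shape of the MapReduce job.
from collections import Counter, defaultdict

records = [
    ("browse", "show_ad", "browse"),
    ("browse", "show_ad", "click"),
    ("browse", "show_ad", "browse"),
    ("click", "show_ad", "buy"),
]

# map: emit ((s, a, s'), 1); reduce: sum counts per key
counts = Counter(records)
totals = defaultdict(int)
for (s, a, s2), c in counts.items():
    totals[(s, a)] += c

# normalise to conditional transition probabilities P(s' | s, a)
P = {k: c / totals[(k[0], k[1])] for k, c in counts.items()}
```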
PDF  Tuukka Ruotsalo, Jaakko Peltonen, Manuel J. A. Eugster, Dorota Glowacka, Giulio Jacucci, Aki Reijonen and Samuel Kaski.
Lost in Publications? How to Find Your Way in 50 Million Scientific Documents
Researchers must navigate big data. Current scientific knowledge comprises 50 million published articles. How can a system help a researcher find relevant documents in her field? We introduce IntentRadar, an interactive search user interface and search engine that anticipates the user's search intents by estimating them from the user's interaction with the interface. The estimated intents are visualized on a radial layout that organizes potential intents as directions in the information space. IntentRadar assists users in directing their search by allowing feedback to be targeted on keywords that represent the potential intents. Users can provide feedback by manipulating the positions of the keywords on the radar. The system then learns and visualizes improved estimates and corresponding documents. IntentRadar has been shown to significantly improve users' task performance and the quality of retrieved information without compromising task execution time.
PDF  Michael Kane and Bryan Lewis.
cnidaria: A Generative Communication Approach to Scalable, Distributed Learning
This paper presents a scalable software framework that facilitates large-scale learning and numerical computing. Unlike existing MapReduce frameworks, our design is not limited to embarrassingly parallel computing challenges. The framework sits on top of existing storage infrastructures, and the results of a computation may be left on the cluster (a reduce step is not required). Unlike existing distributed numerical frameworks, the proposed framework is elastic and works with both dense and sparse data representations. This generality is achieved through a generative communication scheme whose expressions are either consumed by the distributed computing environment or used to move data, in a peer-to-peer (P2P) fashion, between nodes in a cluster or cloud. This approach integrates advances from both the cloud computing and distributed numerical computing communities and can be applied to a general class of learning challenges.
PDF  Anshumali Shrivastava and Ping Li.
Beyond Pairwise: Provably Fast Algorithms for Approximate k-Way Similarity Search
We go beyond the notion of pairwise similarity and study search problems with k-way similarity functions. In this paper, we focus on problems related to 3-way Jaccard similarity. We show that approximate R3way similarity search problems admit fast algorithms with provable guarantees, analogous to the pairwise case. Our analysis and speedup guarantees naturally extend to k-way resemblance. In the process, we extend the traditional framework of locality-sensitive hashing (LSH) to handle higher-order similarities, which could be of independent theoretical interest. The applicability of R3way search is demonstrated on the Google Sets application as well as in an application for improving retrieval quality.
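The 3-way resemblance R = |A ∩ B ∩ C| / |A ∪ B ∪ C| behind this abstract can be estimated with ordinary minwise hashing: under a random permutation, the minima of the three sets coincide exactly when the overall minimum of the union lies in the intersection, i.e. with probability R. A small sketch, with salted MD5 hashes standing in for random permutations:

```python
# Estimate 3-way resemblance by counting 3-way min-hash collisions.
import hashlib

def minhash(s, salt):
    # salted hash plays the role of one random permutation
    return min(hashlib.md5(f"{salt}:{x}".encode()).hexdigest() for x in s)

A = {"a", "b", "c", "d"}
B = {"b", "c", "d", "e"}
C = {"c", "d", "e", "f"}

n = 1000
hits = sum(minhash(A, i) == minhash(B, i) == minhash(C, i) for i in range(n))
est = hits / n
exact = len(A & B & C) / len(A | B | C)   # = 2/6
```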
PDF  Wei Dai, Jinliang Wei, Xun Zheng, Jin Kyu Kim, Seunghak Lee, Junming Yin, Qirong Ho and Eric Xing.
Petuum: A System for Iterative-Convergent Distributed ML
A major bottleneck to applying advanced ML programs at industrial scales is the migration of an academic implementation, often specialized for a small, well-controlled platform such as desktop PCs and small lab clusters, to a big, less predictable platform such as a corporate cluster or the cloud. This poses enormous challenges: how does one train huge models with billions of parameters on massive data, especially when substantial expertise is required to handle many low-level systems issues? We propose a new architecture of systems components that systematically addresses these challenges, thus providing a general-purpose distributed platform for Big Machine Learning. Our architecture specifically exploits the fact that many ML programs are fundamentally loss-function minimization problems, and that their iterative-convergent nature presents many unique opportunities to minimize loss, such as via dynamic variable scheduling and error-bounded consistency models for synchronization. Thus, we treat data, parameter, and variable blocks as computing units to be dynamically scheduled and updated in an error-bounded manner, with the goal of minimizing the loss function as quickly as possible.
PDF  Haiqin Yang, Junjie Hu, Michael Lyu and Irwin King.
Online Imbalanced Learning with Kernels
Imbalanced learning, or learning from imbalanced data, is a challenging problem in both academia and industry. Streaming imbalanced data are now common, raising the volume, velocity, and variety issues of learning from such data. To tackle these issues, online learning algorithms have been proposed that learn a linear classifier by maximizing the AUC score. However, these linear classifiers ignore the learning power of kernels. In this paper, we therefore propose online imbalanced learning with kernels (OILK) to exploit the nonlinearity and heterogeneity embedded in imbalanced data. Different from previous work, we optimize the AUC score to learn a nonlinear representation via the kernel trick. To reduce computational and storage costs, we also investigate different buffer update policies, including first-in-first-out (FIFO) and reservoir sampling (RS), to maintain a fixed budget on the number of support vectors. We demonstrate the properties of our proposed OILK through detailed experiments.
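The reservoir-sampling (RS) buffer policy mentioned above keeps a uniform sample of everything seen so far within a fixed budget. A minimal sketch of the classic update rule (Vitter's Algorithm R, not the paper's code):

```python
# Reservoir sampling: after n_seen items, every item remains in the
# buffer with probability budget / n_seen.
import random

def reservoir_update(buffer, item, n_seen, budget, rng):
    """Keep `buffer` a uniform sample of the first n_seen items."""
    if len(buffer) < budget:
        buffer.append(item)
    else:
        j = rng.randrange(n_seen)      # uniform in 0 .. n_seen-1
        if j < budget:
            buffer[j] = item           # replace with prob budget/n_seen

rng = random.Random(0)
budget, trials = 10, 2000
hits = 0
for _ in range(trials):
    buf = []
    for i in range(100):
        reservoir_update(buf, i, i + 1, budget, rng)
    hits += 0 in buf                   # first item survives w.p. budget/100
rate = hits / trials
```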
PDF  Alex Beutel, Abhimanu Kumar, Evangelos Papalexakis, Partha Pratim Talukdar, Christos Faloutsos and Eric Xing.
FLEXIFACT: Scalable Flexible Factorization of Coupled Tensors on Hadoop
Given multiple relational data sets that share a number of dimensions, how can we efficiently decompose our data into latent factors? Factorization of a single matrix or tensor has attracted much attention, as, e.g., in the Netflix challenge, with users rating movies. However, we often have additional side information, e.g., demographic data about the users in the Netflix example above. Incorporating this additional information leads to the coupled factorization problem. So far, it has been solved only for relatively small datasets. We provide a distributed, scalable method for decomposing matrices, tensors, and coupled data sets through stochastic gradient descent on a variety of objective functions. We offer the following contributions: (1) Versatility: our algorithm can perform matrix, tensor, and coupled factorization with flexible objective functions, including the Frobenius norm, the Frobenius norm with l1-induced sparsity, and nonnegative factorization. (2) Scalability: FLEXIFACT scales to unprecedented sizes in both data and model, with up to billions of parameters, and runs on standard Hadoop. (3) Convergence proofs showing that FLEXIFACT converges on the variety of objective functions, even with projections.
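A serial sketch of the SGD factorization at the core of this approach, for the plain-matrix Frobenius case (the Hadoop blocking and the tensor/coupled cases are omitted; dimensions, learning rate, and regularizer are toy values):

```python
# Factor a fully observed toy ratings matrix R ~ U V^T by stochastic
# gradient steps on individual entries.
import numpy as np

rng = np.random.default_rng(0)
U_true = rng.normal(size=(20, 3))
V_true = rng.normal(size=(15, 3))
R = U_true @ V_true.T                   # exactly rank-3 toy matrix

U = rng.normal(0, 0.1, size=(20, 3))
V = rng.normal(0, 0.1, size=(15, 3))
lr, lam = 0.05, 1e-4
entries = np.array([(i, j) for i in range(20) for j in range(15)])
for _ in range(150):
    rng.shuffle(entries)                # visit observed entries in random order
    for i, j in entries:
        err = R[i, j] - U[i] @ V[j]
        # simultaneous regularized update of both factor rows
        U[i], V[j] = (U[i] + lr * (err * V[j] - lam * U[i]),
                      V[j] + lr * (err * U[i] - lam * V[j]))
rmse = float(np.sqrt(np.mean((R - U @ V.T) ** 2)))
```

The distributed version partitions `entries` into blocks so that workers touching disjoint rows of `U` and `V` can take these steps in parallel.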
PDF  Faraz Makari Manshadi and Rainer Gemulla.
A Distributed Approximation Algorithm for Mixed Packing-Covering Linear Programs
Mixed packing-covering linear programs capture a simple but expressive subclass of linear programs. They commonly arise as linear programming relaxations of a number of important combinatorial problems, including various network design and generalized matching problems. In this paper, we propose an efficient distributed approximation algorithm for solving mixed packing-covering problems that requires a polylogarithmic number of passes over the input. Our algorithm is well-suited for parallel processing on GPUs, in shared-memory architectures, or on small clusters of commodity nodes. We report results of a case study on generalized bipartite matching problems.
PDF  Artem Sokolov and Stefan Riezler.
Task-driven Greedy Learning of Feature Hashing Functions
Randomly hashing multiple features into one aggregated feature is routinely used in large-scale machine learning tasks to both increase speed and decrease memory requirements, with little or no sacrifice in performance. In this paper we investigate whether using a learned (instead of a random) hashing function improves performance. We show experimentally that as the difference between the dimensionalities of the input space and the hashed space increases, learned hashes become increasingly useful compared to random hashing.
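The random baseline here is the standard hashing trick: index each feature by a hash of its name modulo the table size, with a signed variant so collisions cancel in expectation. A minimal sketch (the paper's learned variant is not shown):

```python
# Hashing trick: map named sparse features into a fixed-size vector.
import zlib

def hash_features(feats, n_bins):
    """feats: dict name -> value; returns a dense vector of size n_bins."""
    v = [0.0] * n_bins
    for name, val in feats.items():
        h = zlib.crc32(name.encode())
        sign = 1.0 if (h >> 31) & 1 == 0 else -1.0   # signed hashing
        v[h % n_bins] += sign * val
    return v

x = hash_features({"word=nips": 1.0, "word=big": 1.0, "bias": 1.0}, 16)
```

The same function applied to the same feature dict always yields the same vector, so the hashed representation needs no stored dictionary.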
PDF  Ahmed Elgohary, Ahmed Farahat, Mohamed Kamel and Fakhri Karray.
Approximate Nearest Centroid Embedding for Kernel $k$-Means
This paper proposes an efficient embedding method for scaling kernel k-means on cloud infrastructures. The embedding method approximates the computation of the nearest centroid to each data instance and accordingly eliminates the quadratic space and time complexities of the cluster assignment step in the kernel k-means algorithm. We show that the proposed embedding method is effective under memory and computing-power constraints, and that it achieves better clustering performance than other approximations of the kernel k-means algorithm.
PDF  Yisheng Liao, Alex Rubinsteyn, Russell Power and Jinyang Li.
Learning Random Forests on the GPU
Random Forests are a popular and powerful machine learning technique, with several fast multicore CPU implementations. Since many other machine learning methods have seen impressive speedups from GPU implementations, applying GPU acceleration to random forests seems like a natural fit. Previous attempts to use GPUs have relied on coarsegrained task parallelism and have yielded inconclusive or unsatisfying results. We introduce CudaTree, a GPU Random Forest implementation which adaptively switches between data and task parallelism. We show that, for larger datasets, this algorithm is faster than highly tuned multicore CPU implementations.
PDF  Shravan Narayanamurthy, Markus Weimer, Dhruv Mahajan, Tyson Condie, Sundararajan Sellamanickam and S. Sathiya Keerthi.
Towards ResourceElastic Machine Learning
PDF  Ignacio Arnaldo, Kalyan Veeramachaneni and Una-May O'Reilly.
Building Multiclass Nonlinear Classifiers with GPUs
Multiclass classification strategies that train independent binary classifiers become challenging when the goal is to retrieve nonlinear models from large datasets and the process requires several passes through the data. In such a scenario, the combined use of a search-and-score algorithm and GPUs allows us to obtain binary classifiers in reduced time. We demonstrate our approach by training a ten-class classifier over more than 400K exemplars following the exhaustive Error Correcting Output Code strategy, which decomposes into 511 binary problems.
PDF  John Canny and Huasha Zhao.
BIDMach: Large-scale Learning with Zero Memory Allocation
This paper describes recent work on the BIDMach toolkit for large-scale machine learning. BIDMach has demonstrated single-node performance that exceeds that of published cluster systems for many common machine-learning tasks. BIDMach makes full use of both CPU and GPU acceleration (through a sister library, BIDMat), and requires only modest hardware (commodity GPUs). One of the challenges of reaching this level of performance is the allocation barrier: while it is simple and expedient to allocate and recycle matrix (or graph) objects in expressions, this approach is too slow to match the arithmetic throughput possible on either GPUs or CPUs. In this paper we describe a caching approach that allows code with complex matrix (graph) expressions to run at massive scale, i.e., on multi-terabyte data, with zero memory allocation after initial startup. We present a number of new benchmarks that leverage this approach.
PDF  Shohei Hido, Satoshi Oda and Seiya Tokui.
Jubatus: An Open Source Platform for Distributed Online Machine Learning
Distributed computing is essential for handling very large datasets, and online learning is promising for learning from rapid data streams. However, how to combine them for scalable learning and prediction on big data streams remains an unresolved problem. We propose a general computational framework called loose model sharing for online and distributed machine learning. The key is to share only models, rather than data, between distributed servers. We also introduce Jubatus, an open-source software platform based on this framework. Finally, we describe the details of implementing classifier and nearest-neighbor algorithms, and discuss our experimental evaluations.
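The loose-model-sharing idea can be caricatured in a few lines: each server trains online on its own shard, and only the model parameters are periodically averaged (mixed) across servers; no raw data moves. The plain averaging rule below is our simplification of the mix operation, not Jubatus code:

```python
# Two servers fit a 1-d linear model y = w*x on disjoint shards, sharing
# only the scalar parameter w at periodic mix steps.

def local_sgd(w, shard, lr=0.1, epochs=5):
    for _ in range(epochs):
        for x, y in shard:
            w -= lr * (w * x - y) * x      # squared-loss SGD step
    return w

def mix(models):
    return sum(models) / len(models)       # share models, not data

# two shards drawn from the same underlying relation y = 2x
shard1 = [(1.0, 2.0), (2.0, 4.0)]
shard2 = [(3.0, 6.0), (0.5, 1.0)]
w1 = w2 = 0.0
for _ in range(10):
    w1, w2 = local_sgd(w1, shard1), local_sgd(w2, shard2)
    w1 = w2 = mix([w1, w2])                # periodic mix step
```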
PDF
CONTRIBUTED TALKS
POSTERS
Accepted Papers
 Gagan Goel, Afshin Nikzad, Adish Singla; Matching Workers' Expertise with Tasks: Incentives in Heterogeneous Crowdsourcing Markets.
 Hossein Azari Soufiani, William Z. Chen, David C. Parkes, Lirong Xia; Generalized Method-of-Moments for Rank Aggregation.
 Genevieve Patterson, Grant Van Horn, Serge Belongie, Pietro Perona, James Hays; Bootstrapping Fine-Grained Classifiers: Active Learning with a Crowd in the Loop.
 ChienJu Ho, Aleksandrs Slivkins, Jennifer Wortman Vaughan; Adaptive Contract Design for Crowdsourcing.
 Paul Ruvolo, Jacob Whitehill, Javier R. Movellan; Exploiting Commonality and Interaction Effects in Crowdsourcing Tasks Using Latent Factor Models.
 Ashwinkumar Badanidiyuru, Robert Kleinberg, Aleksandrs Slivkins; Bandits with Knapsacks: Dynamic procurement for crowdsourcing.
 Nicole Immorlica, Greg Stoddard, Vasilis Syrgkanis; Social Status and the Design of Optimal Badges.
 Adish Singla, Ilija Bogunovic, Gábor Bartók, Amin Karbasi, Andreas Krause; On Actively Teaching the Crowd to Classify.
Poster presentations
 Sparse Combinatorial Autoencoders (ID 2) Karthik Narayan, Pieter Abbeel
 Grounded Compositional Semantics for Finding and Describing Images with Sentences (ID 4) Richard Socher, Quoc Le, Christopher Manning, Andrew Ng
 Curriculum Learning for Handwritten Text Line Recognition (ID 5) Jérôme Louradour, Christopher Kermorvant
 A Deep and Tractable Density Estimator (ID 7) Benigno Uria, Iain Murray, Hugo Larochelle
 Multi-Column Deep Neural Networks for Offline Handwritten Chinese Character Classification (ID 11) Dan Ciresan, Juergen Schmidhuber
 End-to-end Phoneme Sequence Recognition using Convolutional Neural Networks (ID 12) Dimitri Palaz, Ronan Collobert, Mathew Magimai-Doss
 Scalable Wide Sparse Learning for Connectomics (ID 15) Jeremy Maitin-Shepard, Pieter Abbeel
 Is deep learning really necessary for word embeddings? (ID 16) Rémi Lebret, Joël Legrand, Ronan Collobert
 Recurrent Conditional Random Fields (ID 18) Kaisheng Yao, Baolin Peng, Geoffrey Zweig, Dong Yu, Xiaolong Li, Feng Gao
 Recurrent Convolutional Neural Networks for Scene Parsing (ID 20) Pedro Pinheiro, Ronan Collobert
 Backpropagation in Sequential Deep Belief Networks (ID 22) Galen Andrew, Jeff Bilmes
 Learning semantic representations for the phrase translation model (ID 23) Jianfeng Gao, Xiaodong He, Wen-tau Yih, Li Deng
 Event-driven Contrastive Divergence in Spiking Neural Networks (ID 25) Emre Neftci, Bruno Pedroni, Gert Cauwenberghs, Kenneth Kreutz-Delgado, Srinjoy Das
 Dynamics of learning in deep linear neural networks [supp] (ID 27) Andrew Saxe, James McClelland, Surya Ganguli
 Exploring Deep and Recurrent Architectures for Optimal Control (ID 28) Sergey Levine
 Analyzing noise in autoencoders and deep networks (ID 29) Ben Poole, Jascha Sohl-Dickstein, Surya Ganguli
 Structured Recurrent Temporal Restricted Boltzmann Machines (ID 30) Roni Mittelman, Benjamin Kuipers, Silvio Savarese, Honglak Lee
 Learning Deep Representations via Multiplicative Interactions between Factors of Variation (ID 31) Scott Reed, Honglak Lee
 Learning Input and Recurrent Weight Matrices in Echo State Networks (ID 32) Hamid Palangi, Li Deng, Rabab Ward
 Learning Sum-Product Networks with Direct and Indirect Variable Interactions (ID 33) Amirmohammad Rooshenas, Daniel Lowd
 Bidirectional Recursive Neural Networks for Token-Level Labeling with Structure (ID 34) Ozan Irsoy, Claire Cardie
 Estimating Dependency Structures for non-Gaussian Components (ID 38) Hiroaki Sasaki, Michael Gutmann, Hayaru Shouno, Aapo Hyvarinen
 Multimodal Neural Language Models (ID 42) Ryan Kiros, Ruslan Salakhutdinov, Richard Zemel
 Nondegenerate Priors for Arbitrarily Deep Networks (ID 43) David Duvenaud, Oren Rippel, Ryan Adams, Zoubin Ghahramani
 Learning Multilingual Word Representations using a Bag-of-Words Autoencoder (ID 44) Stanislas Lauly, Alex Boulanger, Hugo Larochelle
 Multilingual Deep Learning (ID 45) Sarath Chandar A P, Mitesh M. Khapra, Balaraman Ravindran, Vikas Raykar, Amrita Saha
 Learned-norm pooling for deep neural networks (ID 46) Caglar Gulcehre, Kyunghyun Cho, Razvan Pascanu, Yoshua Bengio
 Transition-based Dependency Parsing Using Recursive Neural Networks (ID 47) Pontus Stenetorp
 Sarah R. Allen, Lisa Hellerstein; Approximation algorithms for reducing classification cost in ensembles of classifiers.
 Yudong Chen, Jiaming Xu; Statistical-Computational Tradeoffs in Planted Models: The High-Dimensional Setting.
 Junkyu Lee, William Lam, Rina Dechter; Benchmark on DAOOPT and GUROBI with the PASCAL2 Inference Challenge Problems.
 Hastagiri P. Vanchinathan, Andreas Marfurt, Charles-Antoine Robelin, Donald Kossmann, Andreas Krause; Adaptively Selecting Valuable Diverse Sets via Gaussian Processes and Submodularity.
 Alon Milchgrub, Rina Dechter; On Minimal Tree-Inducing Cycle-Cutsets and Their Use in a Cutset-Driven Local Search.
 Adarsh Prasad, Stefanie Jegelka, Dhruv Batra; Submodular Maximization and Diversity in Structured Output Spaces (Supplement).
 K. S. Sesh Kumar, Francis Bach; Maximizing submodular functions using probabilistic graphical models.
 Kui Tang, Tony Jebara; Network Ranking With Bethe Pseudomarginals.
 Baharan Mirzasoleiman, Amin Karbasi, Andreas Krause, Rik Sarkar; Distributed Submodular Maximization: Identifying Representative Elements in Massive Data.
 Hidekazu Oiwa, Issei Sato, Hiroshi Nakagawa; Novel Sparse Modeling by L2 + L0 Regularization.
 Vitaly Feldman, Jan Vondrak; Approximation of Submodular and XOS Functions by Juntas with Applications to Learning.
Accepted Papers
 Evolving Groups of Political Interest in Social News
 Bayesian Nonparametric Random Graphs
 Sequential Monte Carlo Inference of MMSB for Dynamic Social Networks
 Variational Bayesian Inference Algorithms for Network Infinite Relational Model
Linear Bandits, Matrix Completion, and Recommendation Systems [pdf]
Efficient coordinate-descent for orthogonal matrices through Givens rotations [pdf][supplementary]
Improved Greedy Algorithms for Sparse Approximation of a Matrix in terms of Another Matrix [pdf]
Preconditioned Krylov solvers for kernel regression [pdf]
Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms [pdf][supplementary]
Dimension Independent Matrix Square using MapReduce [pdf]
 Active Learning of Intuitive Sound Qualities (Huang, Duvenaud, Arnold, Partridge, and Oberholtzer) [pdf]
There is often a mismatch between the highlevel goals an artist wants to express and what the parameters of a synthesizer allow them to control. To enable composers to directly adjust personalized highlevel qualities during sound synthesis, our system actively learns functions that map from the space of synthesizer control parameters to perceived levels of highlevel qualities.  Automatic Construction and NaturalLanguage Summarization of Additive Nonparametric Models (Lloyd, Duvenaud, Grosse, Tenenbaum, and Ghahramani) [pdf][supplement1][supplement2]
To complement recently introduced automatic modelconstruction and search methods, we demonstrate an automatic modelsummarization procedure. After building an additive nonparametric regression model, our method constructs a report which visualizes and explains in words the meaning and relevance of each component. These reports enable human modelchecking and the understanding of complex modeling assumptions and structure. We demonstrate this procedure on two timeseries, showing that the automatically constructed models identify clearly interpretable structures that can be automatically described in simple natural language.  Designing Constructive Machine Learning Models based on Generalied Linear Learning Techniques (Kordjamshidi and Moens) [pdf]
We propose a general framework for designing machine learning models that deal with constructing complex structures in the output space. The goal is to provide an abstraction layer to easily represent and design constructive learning models. The learning approach is based on generalized linear training techniques, and exploits techniques from combinatorial optimization to deal with the complexity of the underlying inference required in this type of models. This approach also allows to consider global structural characteristics and constraints over the output elements in an efficient training and prediction setting. The use case focuses on building spatial meaning representations from text to instantiate a virtual world.  Learning Graphical Concepts (Ellis, Dechter, Adams, and Tenenbaum) [pdf]
How can machine learning techniques be used to solve problems whose solutions are best represented as computer programs? For example, suppose a researcher wants to design a probabilistic graphical model for a novel domain. Searching the space of probabilistic models automatically is notoriously difficult, especially difficult when latent variables are involved. However, researchers seem able to easily adapt commonly used modeling motifs to new domains. In doing so, they draw on abstractions such as trees, chains, grids and plates to constrain and direct the kinds of models they produce. This suggests that before we ask machine learning algorithms to discover parsimonious models of new domains, we should develop techniques that enable our algorithms to automatically learn these ?graphical concepts? in much the same way that researchers themselves do, by seeing examples in the literature. One natural way to think of these graphical concepts is as programs that take sets of random variables and produce graphical models that relate them. In this work, we describe the CEC algorithm, which attempts to learn a distribution over programs by incrementally finding program components that commonly help to solve problems in a given domain, and we show preliminary results indicating that CEC is able to discover the graphical concepts that underlie many of the common graphical model structures.  The Constructive Learning Problem: An Efficient Approach for Hypergraphs (Costa and Sorescu) [pdf]
Discriminative systems that can deal with input graphs are known, however, generative/constructive approaches that can output (hyper)graphs belonging with high probability to a desired class, are less studied. Here we propose an approach that, differently from common graph grammars inference systems, is computationally efficient and robust to the presence of outliers in the training sample. We report experimental results in a denovo molecular synthesis problem. We show that we can construct compounds that, once added to the original training set can improve the performance of a binary classification predictor.  Analyzing Probabilistic Models Generated by EDAs for Simplified Protein Folding Problems (Santana, Mendiburu, and Lozano) [pdf]
Estimation of distribution algorithms (EDAs) are optimization methods that construct at each step a probabilistic graphical model (PGM) of the best evaluated solutions. The model serves as a concise representation of the regularities shared by the good solutions and can serve to unveil structural characteristics of the problem domain. In this paper we use the PGMs learned by EDAs in the optimization of 15, 575 instances of the hydrophobicpolar (HP) functional protein folding model to analyze the relationship between the information contained in the PGMs? structures and the quality of the EDA?s solutions.  Anticipating the Future By Constructing Human Activities using Object Affordances (Koppula and Saxena) [pdf]
An important aspect of human perception is anticipation and anticipating which activities will a human do next (and how to do them) in useful for many applications, for example, anticipation enables an assistive robot to plan ahead for reactive responses in the human environments. In this work, we present a constructive approach for generating various possible future human activities by reasoning about the rich spatialtemporal relations through object affordances. We represent each possible future using an anticipatory temporal conditional random field (ATCRF) where we sample the nodes and edges corresponding to future object trajectories and human poses from a generative model. We then represent the distribution over the potential futures using a set of constructed ATCRF particles. In extensive evaluation on CAD120 human activity RGBD dataset, for new subjects (not seen in the training set), we obtain an activity anticipation accuracy (defined as whether one of top three predictions actually happened) of 75.4%, 69.2% and 58.1% for an anticipation time of 1, 3 and 10 seconds respectively. 1  Learning GlobaltoLocal Discrete Components with Nonparametric Bayesian Feature Construction (Heo, Lee, and Zhang) [pdf]
Finding common latent components from data is an important step in many data mining applications. These latent variables are typically categorical and there are many sources of categorical variables, including dichotomous, nominal, ordinal, and cardinal values. Thus it is important to be able to represent the discrete components (categories) in a flexible way. Here we propose a nonparametric Bayesian approach to learning "plastic" discrete components by considering the uncertainty of the number of components with the Indian buffet processes (IBP). As observation models, we use the product of experts (PoE) to utilize sharper representation power and sparse overcompleteness. We apply the proposed method to optical handwritten digit datasets and demonstrate its capability of finding flexible globaltolocal components that can be used to describe and generate the observed digit images faithfully.  Racing Tracks Improvisation (Wang and Missura) [pdf][supplement]
Procedural content generation is a popular technique in the game development. One of its typical applications is generation of game levels. This paper presents a method to generate tracks for racing games, by viewing racing track generation as a discrete sequence prediction problem. To solve it we combine two techniques from music improvisation. We show that this method is capable of generating new racing tracks which appear to be interesting enough.  STONES: Stochastic Technique for Generating Songs (Kamp and Manea) [pdf]
We propose a novel approach for automatically constructing new songs from a set of given compositions that involves sampling a melody line as well as the corresponding harmonies given by chords. The song is sampled from a hierarchical Markov model that captures the implicit properties of good composed songs from a set of existing ones. We empirically show that songs generated by our approach are closer to music composed by humans than those of existing methods.  Constructing Cocktails from a Cocktail Map (Paurat, Garnett, and Gärtner) [pdf]
Consider a dataset that describes cocktails by the amounts of their ingredients, and a lower-dimensional embedding of it that can be considered a map of cocktails. The problem we tackle is to query an arbitrary point of interest in this lower-dimensional embedding and retrieve a newly constructed cocktail that embeds to the queried location. To do so, we formulate the task as a constrained optimization problem and consider the resulting ingredient mix as a 'hot' candidate. Starting from a basic formulation that merely demands the necessities of our problem be fulfilled, we incorporate additional desired conditions into the problem formulation and compare the resulting cocktail recipes.

Supervised graph summarization for structuring academic search results (Mirylenka and Passerini) [pdf]
In this paper we address the problem of visualizing the query results of academic search services. We suggest representing the search results as concise topic hierarchies, and propose a method for building such hierarchies through summarization of the intermediate large topic graphs. We describe a supervised learning technique for summarizing the topic graphs in the most informative way using sequential structured prediction, and discuss our ongoing work on the interactive acquisition of training examples.

Hybrid SRL with Optimization Modulo Theories (Teso, Sebastiani, and Passerini) [pdf]
Generally speaking, the goal of constructive learning can be stated as follows: given an example set of structured objects, generate novel objects with similar properties. From a statistical-relational learning (SRL) viewpoint, the task can be interpreted as a constraint satisfaction problem, i.e. the generated objects must obey a set of soft constraints whose weights are estimated from the data. Traditional SRL approaches rely on (finite) First-Order Logic (FOL) as a description language, and on MAX-SAT solvers to perform inference. Alas, FOL is unsuited for constructive problems where the objects contain a mixture of Boolean and numerical variables: it is difficult to express, e.g., linear arithmetic constraints within the language of FOL. In this paper we propose a novel class of hybrid SRL methods that rely on Satisfiability Modulo Theories (SMT), an alternative class of formal languages that allows one to describe, and reason over, mixed Boolean-numerical objects and constraints. The resulting methods, which we call Learning Modulo Theories, are formulated within the structured-output SVM framework, and employ a weighted SMT solver as an optimization oracle to perform efficient inference and discriminative max-margin weight learning. We also present a few examples of constructive learning applications enabled by our method.
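The Learning Modulo Theories abstract above casts generation as maximizing the total weight of satisfied soft constraints over mixed Boolean-numerical objects. As a toy illustration only (the authors use a weighted SMT solver; the variables, constraints, and weights below are entirely made up), the following sketch brute-forces a max-weight assignment over a small discretized search space:

```python
import itertools

def score(assignment, soft_constraints):
    """Sum the weights of the soft constraints satisfied by `assignment`."""
    return sum(w for pred, w in soft_constraints if pred(assignment))

def best_assignment(bool_vars, num_values, soft_constraints):
    """Enumerate all Boolean x numeric combinations; return the max-weight one.
    A real LMT system would delegate this optimization to a weighted SMT solver
    instead of enumerating a grid."""
    best, best_score = None, float("-inf")
    for bools in itertools.product([False, True], repeat=len(bool_vars)):
        for x in num_values:  # single numeric variable on a small grid
            a = dict(zip(bool_vars, bools))
            a["x"] = x
            s = score(a, soft_constraints)
            if s > best_score:
                best, best_score = a, s
    return best, best_score

# Illustrative soft constraints with made-up "learned" weights:
#   - if 'heavy' then x >= 5   (weight 2.0)
#   - x <= 8                   (weight 1.0)
#   - 'heavy' or 'fast'        (weight 0.5)
constraints = [
    (lambda a: (not a["heavy"]) or a["x"] >= 5, 2.0),
    (lambda a: a["x"] <= 8, 1.0),
    (lambda a: a["heavy"] or a["fast"], 0.5),
]

sol, total = best_assignment(["heavy", "fast"], range(0, 11), constraints)
```

Here all three constraints are jointly satisfiable, so the search returns an assignment with total weight 3.5; with conflicting constraints, the same machinery trades them off by weight, which is the behavior the learned weights are meant to control.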
 Varun Aggarwal, Shashank Srikant, and Vinay Shashidhar
Principles for using Machine Learning in the Assessment of Open Response Items: Programming Assessment as a Case Study
Sumit Basu, Chuck Jacobs and Lucy Vanderwende
Powergrading: a Clustering Approach to Amplify Human Effort for Short Answer Grading
Franck Dernoncourt, Choung Do, Sherif Halawa, Una-May O'Reilly, Colin Taylor, Kalyan Veeramachaneni and Sherwin Wu
MOOCVIZ: A Large Scale, Open Access, Collaborative, Data Analytics Platform for MOOCs
Jorge Diez, Oscar Luaces, Amparo Alonso-Betanzos, Alicia Troncoso and Antonio Bahamonde
Peer Assessment in MOOCs Using Preference Learning via Matrix Factorization
Stephen E. Fancsali
Data-driven causal modeling of “gaming the system” and off-task behavior in Cognitive Tutor Algebra
Damien Follet
A three-step classification algorithm to assist criteria grid assessment
Peter W. Foltz and Mark Rosenstein
Tracking Student Learning in a State-Wide Implementation of Automated Writing Scoring
Jose P. Gonzalez-Brenes, Yun Huang and Peter Brusilovsky
FAST: Feature-Aware Student Knowledge Tracing
Fang Han, Kalyan Veeramachaneni and Una-May O'Reilly
Analyzing student behavior during problem solving in MOOCs
Mohammad Khajah, Rowan M. Wing, Robert V. Lindsey and Michael C. Mozer
Incorporating Latent Factors Into Knowledge Tracing To Predict Individual Differences In Learning
Robert V. Lindsey, Jeff D. Shroyer, Harold Pashler and Michael C. Mozer
Improving students' long-term knowledge retention through personalized review
Yun-En Liu, Travis Mandel, Zoran Popovic and Emma Brunskill
Towards Automatic Experimentation of Educational Knowledge
Andras Lorincz, Gyongyver Molnar, Laszlo A. Jeni, Zoltan Toser, Attila Rausch and Jeffrey F. Cohn
Towards entertaining and efficient educational games
Travis Mandel, Yun-En Liu, Zoran Popovic, Sergey Levine and Emma Brunskill
Unbiased Offline Evaluation of Policy Representations for Educational Games
Sergiy Nesterko, Svetlana Dotsenko, Qiuyi Hu, Daniel Seaton, Justin Reich, Isaac Chuang, and Andrew Ho
Evaluating Geographic Data in MOOCs
Andy Nguyen, Christopher Piech, Jonathan Huang and Leonidas Guibas
Codewebs: Scalable Code Search for MOOCs
Zachary A. Pardos
Simulation study of a HMM based automatic resource recommendation system
Arti Ramesh, Dan Goldwasser, Bert Huang, Snigdha Chaturvedi, Hal Daume III and Lise Getoor
Modeling Learner Engagement in MOOCs using Probabilistic Soft Logic
Nihar B. Shah, Joseph K. Bradley, Abhay Parekh, Martin Wainwright and Kannan Ramchandran
A Case for Ordinal Peer Evaluation in MOOCs
Adish Singla, Ilija Bogunovic, Gabor Bartok, Amin Karbasi and Andreas Krause
On Actively Teaching the Crowd to Classify
Glenda S. Stump, Jennifer DeBoer, Jonathan Whittinghill and Lori Breslow
Development of a Framework to Classify MOOC Discussion Forum Posts: Methodology and Challenges
Weiyi Sun, Siwei Lyu, Hui Jin and Jianwei Zhang
Analyzing Online Learning Discourse using Probabilistic Topic Models
Joseph Jay Williams
Applying Cognitive Science to Online Learning
Joseph Jay Williams and Betsy Williams
Using Interventions to Improve Online Learning
Diyi Yang, Tanmay Sinha, David Adamson and Carolyn Penstein Rose
“Turn on, Tune in, Drop out”: Anticipating student dropouts in Massive Open Online Courses
Poster Session I
Yuxin Chen, Hiroaki Shioi, Cesar Antonio Fuentes Montesinos, Lian Pin Koh, Serge Wich, Andreas Krause.
Active Detection for Biodiversity Monitoring via Adaptive Submodularity.
Christopher R. Dance, Stephane Clinchant, Onno R. Zoeter.
Approximate Inference for a Non-Homogeneous Poisson Model of On-Street Parking. [pdf]
George Mathews, John Vial, Sanjeev Jha, Gregoire Mariethoz, Nickens Okello, Suhinthan Maheswararajah, Dom De Re, Michael Smith.
Bayesian Inference of the Hydraulic Properties of Deep Geological Formations.
Simon O’Callaghan, Alistair Reid, Lachlan McCalman, Edwin V. Bonilla, Fabio Ramos
Bayesian Joint Inversions for the Exploration and Characterization of Geothermal Targets. [pdf]
Jun Yu, Weng-Keen Wong, Steve Kelling.
Clustering Species Accumulation Curves to Identify Groups of Citizen Scientists with Similar Skill Levels. [pdf]
Kalyan Veeramachaneni, Teasha Feldman-Fitzthum, Una-May O'Reilly, Alfredo Cuesta-Infante.
Copula-Based Wind Resource Assessment. [pdf]
Danny Panknin, Tammo Krueger, Mikio Braun, Klaus-Robert Müller, Siegmund Duell.
Detecting changes in Wind Turbine Sensory Data. [pdf]
Shan Xue, Alan Fern, Daniel Sheldon.
Dynamic Resource Allocation for Optimizing Population Diffusion.
Nidhi Singh.
Green-Aware Workload Prediction for Non-stationary Environments.
Mingjun Zhong, Nigel Goddard, Charles Sutton.
Interleaved Factorial Non-Homogeneous Hidden Markov Models for Energy Disaggregation. [pdf]
Poster Session II
Tao Sun, Daniel Sheldon, Akshat Kumar.
Message Passing for Collective Graphical Models. [pdf]
Jun Yu, Rebecca A. Hutchinson, Weng-Keen Wong.
Modeling Misidentification of Bird Species by Citizen Scientists. [pdf]
Anna Ogawa, Akiko Takeda, Toru Namerikawa.
Photovoltaic Output Prediction Using Autoregression with Support Vector Machine. [pdf]
Rebecca A. Hutchinson, Thomas G. Dietterich.
Posterior Regularization for Occupancy Models.
Xiaojian Wu, Daniel Sheldon, Shlomo Zilberstein.
Stochastic Network Design for River Networks. [pdf]
Daniel Urieli, Peter Stone.
TacTex’13: An Adaptive Champion Power Trading Agent.
Bingsheng Wang, Haili Dong, Chang-Tien Lu.
Using Step Variant Convolutional Neural Networks for Energy Disaggregation. [pdf]
Angela Fernandez, Carlos M. Alaiz, Ana M. Gonzalez, Julia Diaz, Jose R. Dorronsoro
Local Anisotropic Diffusion Detection of Wind Ramps. [pdf]
Mahsa Ghafrianzadeh, Claire Monteleoni.
Climate Prediction via Matrix Completion. [pdf]
 8:20–8:40 Factorie
 8:40–9:00 pySPACE
 9:00–9:30 Coffee break
 9:30–10:30 Demos (15 min for highlights)
AFTERNOON SESSION (3:30–6:30)
 3:30–4:15 Invited speaker: Fernando Perez, IPython
 4:15–4:35 scikit-learn
 4:35–4:55 rOpenGov
Domain Adaptation as Learning with Auxiliary Information
Shai Ben-David, Ruth Urner
Sample Complexity of Sequential Multi-task Reinforcement Learning
Emma Brunskill, Lihong Li
Sequential Transfer in Multi-armed Bandit with Logarithmic Transfer Regret
Mohammad Gheshlaghi Azar, Alessandro Lazaric, Emma Brunskill
Class-wise Density-ratios for Covariate Shift
Yun-Qian Miao, Ahmed K. Farahat, Mohamed S. Kamel
Domain adaptation for sequence labeling using hidden Markov models
Edouard Grave, Guillaume Obozinski, Francis Bach
Retrieval of Experiments: Sequential Dirichlet Process Mixtures in Model Space
Ritabrata Dutta, Sohan Seth, Samuel Kaski
Multi-task Learning with Feature Selection for Groups of Related Tasks
Meenakshi Mishra, Jun Huan
Restricted Transfer Learning for Text Categorization
Rajhans Samdani, Gideon Mann
Transform-based Domain Adaptation for Big Data
Erik Rodner, Judy Hoffman, Trevor Darrell, Jeff Donahue, Kate Saenko
A PAC-Bayesian bound for Lifelong Learning
Anastasia Pentina, Christoph H. Lampert
Multi-task Bilinear Classifiers for Visual Domain Adaptation
Jiaolong Xu, Sebastian Ramos, Xu Hu, David Vazquez, Antonio M. Lopez
Tree-Based Ensemble Multi-Task Learning Method for Classification and Regression
Jaak Simm, Ildefons Magrans de Abril, Masashi Sugiyama
Domain Adaptation of Majority Votes via Perturbed Variation-based Label Transfer
Emilie Morvant
Multilinear Spectral Regularization for Kernel-based Multi-task Learning
Marco Signoretto, Johan A.K. Suykens
Reinforcement Learning with Multi-Fidelity Simulators
Sameer Singh, Sebastian Riedel, and Andrew McCallum. Anytime belief propagation using sparse domains.
W00085459.jpg was taken on December 02, 2013 and received on Earth December 04, 2013. The camera was pointing toward SATURN at approximately 710,353 miles (1,143,202 kilometers) away, and the image was taken using the MT2 and CL2 filters. This image has not been validated or calibrated.
Image Credit: NASA/JPL/Space Science Institute
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.