So the COLT conference started this morning in sweltering Paris. Many of the presentations have been featured in one fashion or another on Nuit Blanche. Here are the full proceedings:
Regular Papers
- On Consistent Surrogate Risk Minimization and Property Elicitation [abs] [pdf]
- Online Learning with Feedback Graphs: Beyond Bandits [abs] [pdf]
- Learning Overcomplete Latent Variable Models through Tensor Methods [abs] [pdf]
- Simple, Efficient, and Neural Algorithms for Sparse Coding [abs] [pdf]
- Label optimal regret bounds for online local learning [abs] [pdf]
- Efficient Learning of Linear Separators under Bounded Noise [abs] [pdf]
- Efficient Representations for Lifelong Learning and Autoencoding [abs] [pdf]
- Optimally Combining Classifiers Using Unlabeled Data [abs] [pdf]
- Minimax Fixed-Design Linear Regression [abs] [pdf]
- Escaping the Local Minima via Simulated Annealing: Optimization of Approximately Convex Functions [abs] [pdf]
- Bandit Convex Optimization: √T Regret in One Dimension [abs] [pdf]
- The entropic barrier: a simple and optimal universal self-concordant barrier [abs] [pdf]
- Optimum Statistical Estimation with Strategic Data Sources [abs] [pdf]
- On the Complexity of Learning with Kernels [abs] [pdf]
- Learnability of Solutions to Conjunctive Queries: The Full Dichotomy [abs] [pdf]
- Sequential Information Maximization: When is Greedy Near-optimal? [abs] [pdf]
- Efficient Sampling for Gaussian Graphical Models via Spectral Sparsification [abs] [pdf]
- Stochastic Block Model and Community Detection in Sparse Graphs: A spectral algorithm with optimal rate of recovery [abs] [pdf]
- On-Line Learning Algorithms for Path Experts with Non-Additive Losses [abs] [pdf]
- Truthful Linear Regression [abs] [pdf]
- A PTAS for Agnostically Learning Halfspaces [abs] [pdf]
- S2: An Efficient Graph Based Active Learning Algorithm with Application to Nonparametric Classification [abs] [pdf]
- Improved Sum-of-Squares Lower Bounds for Hidden Clique and Hidden Submatrix Problems [abs] [pdf]
- Contextual Dueling Bandits [abs] [pdf]
- Beyond Hartigan Consistency: Merge Distortion Metric for Hierarchical Clustering [abs] [pdf]
- Faster Algorithms for Testing under Conditional Sampling [abs] [pdf]
- Learning and inference in the presence of corrupted inputs [abs] [pdf]
- From Averaging to Acceleration, There is Only a Step-size [abs] [pdf]
- Variable Selection is Hard [abs] [pdf]
- Vector-Valued Property Elicitation [abs] [pdf]
- Competing with the Empirical Risk Minimizer in a Single Pass [abs] [pdf]
- A Chaining Algorithm for Online Nonparametric Regression [abs] [pdf]
- Escaping From Saddle Points — Online Stochastic Gradient for Tensor Decomposition [abs] [pdf]
- Learning the dependence structure of rare events: a non-asymptotic study [abs] [pdf]
- Thompson Sampling for Learning Parameterized Markov Decision Processes [abs] [pdf]
- Computational Lower Bounds for Community Detection on Random Graphs [abs] [pdf]
- Adaptive Recovery of Signals by Convex Optimization [abs] [pdf]
- Tensor principal component analysis via sum-of-square proofs [abs] [pdf]
- Fast Exact Matrix Completion with Finite Samples [abs] [pdf]
- Exp-Concavity of Proper Composite Losses [abs] [pdf]
- On Learning Distributions from their Samples [abs] [pdf]
- MCMC Learning [abs] [pdf]
- Online PCA with Spectral Bounds [abs] [pdf]
- Regret Lower Bound and Optimal Algorithm in Dueling Bandit Problem [abs] [pdf]
- Second-order Quantile Methods for Experts and Combinatorial Games [abs] [pdf]
- Hierarchical Label Queries with Data-Dependent Partitions [abs] [pdf]
- Algorithms for Lipschitz Learning on Graphs [abs] [pdf]
- Low Rank Matrix Completion with Exponential Family Noise [abs] [pdf]
- Bad Universal Priors and Notions of Optimality [abs] [pdf]
- Learning with Square Loss: Localization through Offset Rademacher Complexity [abs] [pdf]
- Achieving All with No Parameters: AdaNormalHedge [abs] [pdf]
- Lower and Upper Bounds on the Generalization of Stochastic Exponentially Concave Optimization [abs] [pdf]
- Correlation Clustering with Noisy Partial Information [abs] [pdf]
- Online Density Estimation of Bradley-Terry Models [abs] [pdf]
- First-order regret bounds for combinatorial semi-bandits [abs] [pdf]
- Norm-Based Capacity Control in Neural Networks [abs] [pdf]
- Cortical Learning via Prediction [abs] [pdf]
- Partitioning Well-Clustered Graphs: Spectral Clustering Works! [abs] [pdf]
- Batched Bandit Problems [abs] [pdf]
- Hierarchies of Relaxations for Online Prediction Problems with Evolving Constraints [abs] [pdf]
- Fast Mixing for Discrete Point Processes [abs] [pdf]
- Generalized Mixability via Entropic Duality [abs] [pdf]
- On the Complexity of Bandit Linear Optimization [abs] [pdf]
- An Almost Optimal PAC Algorithm [abs] [pdf]
- Minimax rates for memory-bounded sparse linear regression [abs] [pdf]
- Interactive Fingerprinting Codes and the Hardness of Preventing False Discovery [abs] [pdf]
- Convex Risk Minimization and Conditional Probability Estimation [abs] [pdf]
- Regularized Linear Regression: A Precise Analysis of the Estimation Error [abs] [pdf]
- Max vs Min: Tensor Decomposition and ICA with nearly Linear Sample Complexity [abs] [pdf]
- On Convergence of Emphatic Temporal-Difference Learning [abs] [pdf]
Open Problems
- Open Problem: Restricted Eigenvalue Condition for Heavy Tailed Designs [abs] [pdf]
- Open Problem: The landscape of the loss surfaces of multilayer networks [abs] [pdf]
- Open Problem: The Oracle Complexity of Smooth Convex Optimization in Nonstandard Settings [abs] [pdf]
- Open Problem: Online Sabotaged Shortest Path [abs] [pdf]
- Open Problem: Learning Quantum Circuits with Queries [abs] [pdf]
- Open Problem: Recursive Teaching Dimension Versus VC Dimension [abs] [pdf]
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.