On his blog, Sebastien put together a list of some of the accepted papers at COLT 2016. Go read his blog entry, I'll wait... Here is a subset that will be or has been featured here:
- The Power of Depth for Feedforward Neural Networks
- On the Expressive Power of Deep Learning: A Tensor Analysis
- Benefits of depth in neural networks
- Gradient Descent only Converges to Minimizers
- Dropping Convexity for Faster Semi-definite Optimization
- Online Sparse Linear Regression
- On the Approximability of Sparse PCA
- Noisy Tensor Completion via the Sum-of-Squares Hierarchy
- Online Learning with Low Rank Experts
- Information-theoretic thresholds for community detection in sparse networks
- First-order Methods for Geodesically Convex Optimization
- Cortical Computation via Iterative Constructions
- Asymptotic behavior of $\ell_q$-based Laplacian regularization in semi-supervised learning
- Highly-Smooth Zero-th Order Online Optimization
- Semidefinite Programs for Exact Recovery of a Hidden Community
- On the low-rank approach for semidefinite programs arising in synchronization and community detection
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle, and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.