Much has happened since the last Nuit Blanche in Review (July 2016). The big news this past August was the stellar line-up we got for a potential workshop at NIPS and the fact that it got declined (There won't be a #NIPS2016 workshop on "Mapping Machine Learning to Hardware"). My take is that there is a surge of interest in mapping ML and even CS algorithms as close as possible to ASICs. That means either that algorithms change because of the particular constraints of the silicon technology, or that new technologies can be identified as important for the future. This problem is as much an industrial one as an algorithmic one, so this type of workshop will eventually find its way into conferences like NIPS. In other words, it is not a question of "if" but "when". As a result, I created the MappingMLtoHardware tag, which features either algorithmic efforts to fit into hardware or hardware efforts tailored to particular classes of algorithms.

On another front, there is an exciting push to break away from the traditional backpropagation algorithm used for neural networks. Two different techniques were featured in [1] and [2] this month. In the in-depth section, we noted a few uses for Random Features. We also had a long Paris Machine Learning Newsletter for this past Summer. Enjoy !
Implementations
- “Why Should I Trust You?” Explaining the Predictions of Any Classifier - implementation -
- Randomized Matrix Decompositions using R - implementation -
- The Journal of Machine Learning Research currently points to an empty link for the implementation of Are Random Forests Truly the Best Classifiers?
We also created a new tag (Overviews) that features reviews, books, and other material that provide some context.
- Lecture Notes on Randomized Linear Algebra / Spectral Graph Methods by Michael Mahoney
- Reviews: Low-Rank Semidefinite Programming: Theory and Applications / A Primer on Reproducing Kernel Hilbert Spaces
- Three Highly Technical Reference Pages: Reinforcement Learning, Deep Vision, Recurrent Neural Networks, and a state-of-the-art page in object classification
In-depth:
- Fast brain decoding with random sampling and random projections
- Scaling up Vector Autoregressive Models With Operator-Valued Random Fourier Features
- How to Fake Multiply by a Gaussian Matrix
- Stable Reinforcement Learning with Autoencoders for Tactile and Visual Data
- Stacked Approximated Regression Machine: A Simple Deep Learning Approach
- Decoupled Neural Interfaces using Synthetic Gradients
- Functional Hashing for Compressing Neural Networks
- Densely Connected Convolutional Networks
- Full Resolution Image Compression with Recurrent Neural Networks
- Direct inference on compressive measurements using convolutional neural networks
- Fast Component Pursuit for Large-Scale Inverse Covariance Estimation
- Robust Extreme Multi-label Learning
- Model-Free Episodic Control
- Exact Recovery of Chaotic Systems from Highly Corrupted Data
- Fast Algorithms for Demixing Sparse Signals from Nonlinear Observations / Fast recovery from a union of subspaces
- Seeing the Forest from the Trees in Two Looks: Matrix Sketching by Cascaded Bilateral Sampling
- Secure Group Testing
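Several of the in-depth entries above rely on random features or random projections. As a reminder of the core idea, here is a minimal sketch (not from any of the papers listed, just the standard Rahimi-Recht construction) of random Fourier features approximating an RBF kernel with plain NumPy:

```python
import numpy as np

def random_fourier_features(X, n_features=2000, gamma=0.5, seed=0):
    """Map X to a random feature space whose inner products
    approximate the RBF kernel exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies sampled from the Fourier transform of the RBF kernel
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Compare the randomized approximation against the exact RBF kernel
X = np.random.default_rng(1).normal(size=(100, 10))
Z = random_fourier_features(X)
K_approx = Z @ Z.T
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq_dists)
err = np.abs(K_approx - K_exact).max()
```

The approximation error shrinks as the number of random features grows, which is what makes linear methods on top of these features a cheap stand-in for kernel machines.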
Hardware
Slides/videos:
CfP conference
Job:
Credit photo: NASA, APL, SwRI, 08-31-2016 Pluto's Methane Snowcaps on the Edge of Darkness
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.