
Friday, May 27, 2016

A proposal for a NIPS Workshop or Symposium on "Mapping Machine Learning to Hardware"

So now that the NIPS paper deadline is behind us, I am interested in organizing a workshop on how Machine Learning is being mapped to hardware. If you are interested, or know somebody who might be, please get in touch with me so we can improve on this proposal (and yes, the title can change too). Here is the first draft:

Dear Colleagues,

Mapping Machine Learning to Hardware

With the recent successes of Deep Learning, we are beginning to see a set of new specialized hardware dedicated to making some of these computations faster, more energy efficient, or both. These technologies rely either on CMOS (CPUs, GPUs, FPGAs, ASICs) or on more exotic substrates (Bio, Memristors, Quantum Chips, Photonics, etc…), each seeking to address a specific trade-off in mapping Machine Learning algorithms to a particular hardware technology.


Conversely, there has been considerable empirical effort at devising deep network architectures that can work with binary coefficients, so that they can be implemented efficiently on low-complexity hardware.
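
As a side note for readers less familiar with what "binary coefficients" mean in practice, here is a minimal, hypothetical sketch (not part of the proposal, and not any specific published architecture) of weight binarization in Python/NumPy: real-valued layer weights are quantized to {-1, +1}, so a forward pass reduces to sign flips and additions, which is what makes low-complexity hardware implementations attractive.

# Minimal sketch: quantize real-valued weights to {-1, +1} and use only
# the binarized weights in the forward pass. Names and shapes are illustrative.
import numpy as np

def binarize(weights):
    """Quantize real-valued weights to {-1, +1} via the sign function."""
    return np.where(weights >= 0, 1.0, -1.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # hypothetical real-valued layer weights
x = rng.normal(size=3)        # hypothetical input activations

Wb = binarize(W)
y = Wb @ x                    # forward pass using binary weights only
print(Wb)
print(y)
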


A somewhat related issue is the sensing community's recent interest in merging the first stages of sensing hardware with the first layers of models such as deep networks. This approach has the potential to change the way image reconstruction and signal processing are performed in the future.


This workshop will bring together researchers at the interface of machine learning, hardware implementation, sensing, physics and biology.


The goals of the workshop are:
  • to present how machine learning computations and algorithms are mapped and improved as a result of new hardware technologies and architectures;
  • to evaluate the different trade-offs currently investigated in these approaches;
  • to understand how sensors may change as a result of this mapping to Machine Learning algorithms;
  • to evaluate the constraints placed on recent deep learning architectures so as to reduce redundancy and enable a simpler mapping between computing hardware and models.
 
 
 
 