If you are coming to NIPS in Barcelona, there won't be a "Mapping Machine Learning to Hardware" workshop. We tried, but our proposal was not accepted.
As a reminder, this was our proposal:
- Yoshua Bengio, University of Montreal
- Ashok Veeraraghavan, Rice University
- Bill Dally, Stanford University and NVIDIA
- Julien Demouth, NVIDIA
- Eugenio Culurciello, Purdue University
- Joni Dambre, Ghent University
- Andrew Ng or Bryan Catanzaro, Baidu Research
- Pradeep Dubey, Intel Corporation
- Pete Warden, Google
- Olivier Temam, Google
- James E Smith, University of Wisconsin
- Dino Sejdinovic, Dejan Vukobratovic, Dusan Jakovetic, Dragana Bajovic, University of Oxford
With the recent success of Deep Learning and related techniques, we are beginning to see new specialized hardware, or extensions to existing architectures, dedicated to making training and inference computations faster, more energy efficient, or both. These technologies use either traditional CMOS on conventional von Neumann architectures, such as CPUs, or on accelerators such as DSPs, GPUs, FPGAs, and ASICs, or else novel and more exotic technologies still in the research phase, such as memristors, quantum chips, and optics/photonics. The overarching goal is to address a specific trade-off in mapping machine learning algorithms in general, and deep learning in particular, onto a specific underlying hardware technology.
Conversely, there has been considerable empirical effort at devising deep network architectures suited to efficient implementation on low-complexity hardware, via low-rank tensor factorizations, structured matrix approximations, reduced bit depths (down to binary coefficients), compression, and pruning, to name a few approaches. This also has implications for choosing the appropriate hardware technology for inference, with energy and latency as the primary design goals.
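As a concrete illustration of one of these approaches, here is a minimal sketch (not from the proposal) of low-rank factorization: a trained fully connected layer's weight matrix is approximated by a truncated SVD, trading a small accuracy loss for a large reduction in parameters and multiply-accumulates. The layer size and rank are arbitrary choices for the example.

```python
import numpy as np

# Stand-in for a trained fully connected layer's weight matrix.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))

# Truncated SVD keeps the top-r singular components, replacing one
# 512x512 matrix-vector product with two thinner ones (512xr and rx512).
r = 32
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]   # 512 x r, singular values folded in
B = Vt[:r, :]          # r x 512

x = rng.standard_normal(512)
y_full = W @ x
y_lowrank = A @ (B @ x)   # approximates y_full at a fraction of the cost

# Parameter count drops from 512*512 to 2*512*r.
params_full = W.size
params_lowrank = A.size + B.size
print(params_full, params_lowrank)  # 262144 32768
```

The same accounting motivates structured-matrix and pruning approaches: fewer stored coefficients means less memory traffic, which on most hardware dominates the energy cost of inference.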
These efforts are finding traction in the signal processing and sparse/compressive sensing communities, which seek to map the first layers of sensing hardware onto the first layers of models such as deep networks. This approach has the potential to change the way sensing hardware, image reconstruction, signal processing, and image understanding are performed in the future.
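The idea above can be sketched in a few lines (an assumption-laden toy, not any specific system from the proposal): compressive measurements y = Φx acquired in hardware are treated as a fixed linear first layer, and the downstream model consumes the measurements directly rather than a reconstructed signal. Dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1024   # signal dimension (e.g. pixels in an image patch)
m = 128    # number of hardware measurements, m << n

# Random Gaussian sensing matrix, playing the role of the optics/ADC front end.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

x = rng.standard_normal(n)   # stand-in for the sensed scene
y = Phi @ x                  # what the sensor actually delivers

# A downstream model consumes y directly; here a toy linear "second layer"
# maps measurements to a feature vector.
W = rng.standard_normal((64, m))
features = W @ y
print(y.shape, features.shape)  # (128,) (64,)
```

Because Φ is fixed by the hardware, training only adapts the layers after it, so the sensing and inference stages can be co-designed around the measurement budget m.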
This workshop aims to connect these seemingly disparate themes of co-design, architecture, algorithms, and signal processing, and to bring together researchers at the interface of machine learning, hardware implementation, sensing, physics, and biology to discuss the state of the art and the state of the possible.
The goals of the workshop are:
Besides the presentations made by the plenary speakers, there will be a poster session, lightning talks selected from the posters, and a roundtable at the conclusion of the workshop. The workshop will be taped, and presentation slides will be made available online. A white paper describing the talks and the discussions that took place during the workshop will be made available after the meeting.
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle, and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.