So the ICLR 2017 conference has garnered about 500 submissions, now in the open review process. Here are a few within the generic theme of "How do we change the architecture of Deep Learning models so that they better fit other metrics, such as a lighter architecture" or, more succinctly, Mapping ML to Hardware. I went through the submissions and looked at the titles and some of the abstracts, so I am surely missing a few (kind feedback is welcome). Anyway, tonight will be a reading Nuit Blanche for sure.
- Training Compressed Fully-Connected Networks with a Density-Diversity Penalty Shengjie Wang, Haoran Cai, Jeff Bilmes, William Noble
- Trained Ternary Quantization Chenzhuo Zhu, Song Han, Huizi Mao, William J. Dally
- Bit-Pragmatic Deep Neural Network Computing Jorge Albericio, Patrick Judd, Alberto Delmas, Sayeh Sharify, Andreas Moshovos
- The Incredible Shrinking Neural Network: New Perspectives on Learning Representations Through The Lens of Pruning Nikolas Wolfe, Aditya Sharma, Lukas Drude, Bhiksha Raj
- An Analysis of Deep Neural Network Models for Practical Applications Alfredo Canziani, Adam Paszke, Eugenio Culurciello
- Loss-aware Binarization of Deep Networks Lu Hou, Quanming Yao, James T. Kwok
- Deep Multi-task Representation Learning: A Tensor Factorisation Approach Yongxin Yang, Timothy M. Hospedales
- Coarse Pruning of Convolutional Neural Networks with Random Masks Sajid Anwar, Wonyong Sung
- Modularized Morphing of Neural Networks Tao Wei, Changhu Wang, Chang Wen Chen
- Short and Deep: Sketching and Neural Networks Amit Daniely, Nevena Lazic, Yoram Singer, Kunal Talwar
- SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer
- Support Regularized Sparse Coding and Its Fast Encoder Yingzhen Yang, Jiahui Yu, Pushmeet Kohli, Jianchao Yang, Thomas S. Huang
- ParMAC: distributed optimisation of nested functions, with application to binary autoencoders Miguel A. Carreira-Perpinan, Mehdi Alizadeh
- Do Deep Convolutional Nets Really Need to be Deep and Convolutional? Gregor Urban, Krzysztof J. Geras, Samira Ebrahimi Kahou, Ozlem Aslan, Shengjie Wang, Abdelrahman Mohamed, Matthai Philipose, Matt Richardson, Rich Caruana
- Deep Convolutional Neural Network Design Patterns Leslie N. Smith, Nicholay Topin
- Compact Embedding of Binary-coded Inputs and Outputs using Bloom Filters Joan Serrà, Alexandros Karatzoglou
- Sparsely-Connected Neural Networks: Towards Efficient VLSI Implementation of Deep Neural Networks Arash Ardakani, Carlo Condo, Warren J. Gross
- Ternary Weight Decomposition and Binary Activation Encoding for Fast and Compact Neural Network Mitsuru Ambai, Takuya Matsumoto, Takayoshi Yamashita, Hironobu Fujiyoshi
- Hadamard Product for Low-rank Bilinear Pooling Jin-Hwa Kim, Kyoung-Woon On, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang
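To give a flavor of the weight-quantization theme running through several of these submissions, here is a minimal sketch of ternary weight quantization with NumPy. This is an illustration only, not the training procedure of any paper above (in particular not Trained Ternary Quantization, which learns the scaling factors); the `0.7 * mean(|w|)` threshold is an assumption borrowed from common ternary-weight heuristics.

```python
import numpy as np

def ternarize(w, delta_ratio=0.7):
    """Map weights to {-alpha, 0, +alpha} (illustrative sketch, not TTQ).

    delta_ratio is a hypothetical threshold hyperparameter: weights whose
    magnitude falls below delta_ratio * mean(|w|) are zeroed out, and the
    survivors share a single scale alpha (mean magnitude of the survivors).
    """
    delta = delta_ratio * np.mean(np.abs(w))   # magnitude threshold
    mask = np.abs(w) > delta                   # which weights survive
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask           # values in {-alpha, 0, +alpha}

w = np.array([0.9, -0.05, 0.4, -0.8, 0.02])
q = ternarize(w)                               # at most 3 distinct values
```

With only three distinct weight values, each weight needs just 2 bits of storage plus one shared float per layer, which is the kind of memory and bandwidth saving these hardware-oriented submissions are after.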
Image of Saturn: Cassini raw image W00101892.jpg, taken 2016-11-04 17:39 (UTC) and received on Earth 2016-11-06 00:39 (UTC), camera pointing toward Saturn, MT3 and CL2 filters. This image has not been validated or calibrated; a validated/calibrated image will be archived with the NASA Planetary Data System.
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.