Two ML-related hardware papers today, woohoo!
Spintronic nano-devices for bio-inspired computing by Julie Grollier, Damien Querlioz, Mark D. Stiles
Bio-inspired hardware holds the promise of low-energy, intelligent and highly adaptable computing systems. Applications span automatic classification for big data management, unmanned vehicle control, and control of biomedical prostheses. However, one of the major challenges of fabricating bio-inspired hardware is building ultra-high-density networks out of complex processing units interlinked by tunable connections. Nanometer-scale devices exploiting spin electronics (or spintronics) can be a key technology in this context. In particular, magnetic tunnel junctions are well suited for this purpose because of their multiple tunable functionalities. One such functionality, non-volatile memory, can provide massive embedded memory in unconventional circuits, thus escaping the von Neumann bottleneck that arises when memory and processors are located separately. Other features of spintronic devices that could be beneficial for bio-inspired computing include tunable fast non-linear dynamics, controlled stochasticity, and the ability of single devices to change functions under different operating conditions. Large networks of interacting spintronic nano-devices can have their interactions tuned to induce complex dynamics such as synchronization, chaos, soliton diffusion, phase transitions, criticality, and convergence to multiple metastable states. A number of groups have recently proposed bio-inspired architectures that include one or several types of spintronic nano-devices. In this article we show how spintronics can be used for bio-inspired computing. We review the different approaches that have been proposed, the recent advances in this direction, and the challenges towards fully integrated spintronics-CMOS (complementary metal-oxide-semiconductor) bio-inspired hardware.
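The abstract mentions synchronization as one of the collective dynamics that networks of interacting nano-devices can be tuned to exhibit. A minimal illustration of this phenomenon is the Kuramoto model of coupled phase oscillators; the sketch below is a generic toy model, not a simulation of any spintronic device, and the coupling strength and frequency spread are illustrative values I chose, not parameters from the paper.

```python
import numpy as np

# Toy Kuramoto model: N all-to-all coupled phase oscillators, as a generic
# stand-in for a network of interacting nano-oscillators. K (coupling) and
# the natural-frequency spread are illustrative, not device parameters.
N, K, dt, steps = 50, 2.0, 0.01, 2000
rng = np.random.default_rng(1)
omega = rng.normal(0.0, 0.5, N)        # natural frequencies of the oscillators
theta = rng.uniform(0, 2 * np.pi, N)   # random initial phases

def order_parameter(theta):
    """|r| -> 1 means full phase synchronization, ~0 means incoherence."""
    return abs(np.exp(1j * theta).mean())

r0 = order_parameter(theta)  # should be small for random phases
for _ in range(steps):
    # Each oscillator is pulled toward the mean phase of the population:
    # d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
    theta += dt * (omega + K * coupling)

r1 = order_parameter(theta)
print(f"order parameter: before={r0:.2f}, after={r1:.2f}")
```

With the coupling well above the synchronization threshold, the order parameter rises from near zero toward one, which is the kind of tunable collective behavior the abstract alludes to.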
Precise deep neural network computation on imprecise low-power analog hardware by Jonathan Binas, Daniel Neil, Giacomo Indiveri, Shih-Chii Liu, Michael Pfeiffer
There is an urgent need for compact, fast, and power-efficient hardware implementations of state-of-the-art artificial intelligence. Here we propose a power-efficient approach for real-time inference, in which deep neural networks (DNNs) are implemented through low-power analog circuits. Although analog implementations can be extremely compact, they have been largely supplanted by digital designs, partly because of device mismatch effects due to fabrication. We propose a framework that exploits the power of deep learning to compensate for this mismatch by incorporating the measured variations of the devices as constraints in the DNN training process. This eliminates the need for mismatch-minimization strategies, such as the use of very large transistors, and allows circuit complexity and power consumption to be reduced to a minimum. Our results, based on large-scale simulations as well as a prototype VLSI chip implementation, indicate at least a 3-fold improvement in processing efficiency over current digital implementations.
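The core idea of the second abstract, training a network through its measured device variations so the learned weights absorb the mismatch, can be sketched with a toy example. This is my own minimal illustration, not the authors' framework: per-synapse mismatch is modeled as fixed multiplicative gains (a hypothetical simplification), and ordinary backpropagation runs through those fixed gains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical device mismatch: a fixed multiplicative gain per synapse,
# as if measured once on a fabricated chip (~20% spread). These gains are
# constants during training; they model the analog hardware.
G1 = rng.normal(1.0, 0.2, size=(2, 8))
G2 = rng.normal(1.0, 0.2, size=(8, 1))

# Trainable weights; the effective on-chip weight is W * G.
W1 = rng.normal(0.0, 0.5, size=(2, 8))
W2 = rng.normal(0.0, 0.5, size=(8, 1))

# Tiny XOR task, just to show the network can learn despite mismatch.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass through the mismatched effective weights W * G.
    h = np.tanh(X @ (W1 * G1))
    p = sigmoid(h @ (W2 * G2))
    # Backprop through the same fixed gains, so the trained weights
    # compensate for the mismatch instead of being degraded by it.
    dp = p - y                                   # grad of cross-entropy wrt logit
    dW2 = (h.T @ dp) * G2
    dh = (dp @ (W2 * G2).T) * (1 - h ** 2)       # tanh derivative
    dW1 = (X.T @ dh) * G1
    W2 -= lr * dW2 / len(X)
    W1 -= lr * dW1 / len(X)

loss = float(np.mean((p - y) ** 2))
print(f"final MSE with mismatch-aware training: {loss:.4f}")
```

The same training loop with the gains applied only at inference time (i.e., training an ideal network and then deploying it on mismatched hardware) would degrade accuracy; folding the measured gains into the training loop is what lets cheap, small-transistor analog circuits remain precise.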