EIPBN 2022

Plenary Speaker

HsinYu Sidney Tsai

IBM Research AI at Almaden Research Center in San Jose, CA

Analog Memory-Based Techniques for Accelerating Deep Neural Networks

Von Neumann-style information processing systems, in which a “memory” delivers instructions and operands to a dedicated “compute unit,” are the basis of modern computer architectures. With the help of Moore’s Law and Dennard scaling, the throughput of these compute units has increased dramatically over the past 50 years, far outpacing improvements in data communication between memory and compute. As a result, the “von Neumann bottleneck” now dominates system throughput and energy consumption, especially for Deep Neural Network (DNN) workloads. Non-von Neumann architectures, such as those that move computation to the edge of memory crossbar arrays, can significantly reduce the cost of data communication.

Crossbar arrays of resistive non-volatile memories (NVM) offer a novel solution for deep learning tasks by computing vector-matrix multiplications (VMM) directly within analog memory arrays [1]. The highly parallel structure and computation at the location of the data enable fast, energy-efficient multiply-accumulate operations, the workhorse of most deep learning algorithms. In this presentation, we will focus on our implementation of analog memory cells based on Phase-Change Memory (PCM) for inference [2-3]. Software-equivalent accuracy on various datasets has been achieved both in mixed software-hardware demonstrations and with fully on-chip VMM, despite the considerable imperfections of existing NVM devices, such as noise and variability [4]. We will also discuss NVM devices and new algorithms for on-chip training [5-6].
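
For intuition, the Python sketch below shows how a crossbar performs one VMM in a single parallel step: weights are mapped onto differential pairs of device conductances, inputs are applied as row voltages, and the column currents summed by Kirchhoff’s law are the multiply-accumulate results. The conductance range and noise magnitudes (G_MAX, PROG_NOISE, READ_NOISE) are assumed values chosen for illustration, not measured PCM parameters, and this is a conceptual sketch rather than the hardware implementation described in the talk.

import numpy as np

# Minimal simulation of analog vector-matrix multiplication (VMM) in a
# memory crossbar. Each weight is stored as a differential pair of
# conductances (g_plus - g_minus); a row voltage V across a cell of
# conductance G contributes a current G*V (Ohm's law), and Kirchhoff's
# current law sums contributions along each column, so every column
# delivers one multiply-accumulate result in a single step.

G_MAX = 25e-6        # assumed maximum device conductance (25 uS)
PROG_NOISE = 0.03    # assumed programming-noise std, as a fraction of G_MAX
READ_NOISE = 0.01    # assumed read-noise std, as a fraction of G_MAX
rng = np.random.default_rng(0)

def program_weights(W):
    """Map a weight matrix onto noisy differential conductance pairs."""
    scale = G_MAX / np.max(np.abs(W))
    g_plus = np.clip(W, 0.0, None) * scale    # positive weights
    g_minus = np.clip(-W, 0.0, None) * scale  # negative weights
    # Imperfect programming: each cell lands near, not at, its target.
    g_plus += rng.normal(0.0, PROG_NOISE * G_MAX, W.shape)
    g_minus += rng.normal(0.0, PROG_NOISE * G_MAX, W.shape)
    return np.clip(g_plus, 0.0, G_MAX), np.clip(g_minus, 0.0, G_MAX), scale

def analog_vmm(x, g_plus, g_minus, scale):
    """One analog VMM: row inputs x in, rescaled column currents out."""
    g_eff = g_plus - g_minus
    g_eff = g_eff + rng.normal(0.0, READ_NOISE * G_MAX, g_eff.shape)
    return (x @ g_eff) / scale  # column current sums, back in weight units

W = rng.normal(0.0, 1.0, (64, 10))  # example layer: 64 inputs, 10 outputs
x = rng.normal(0.0, 1.0, 64)        # one input activation vector
g_p, g_m, s = program_weights(W)
print("ideal digital result:", (x @ W)[:3])
print("noisy analog result: ", analog_vmm(x, g_p, g_m, s)[:3])

The differential pair is one common way to represent signed weights with unipolar conductances; the cited hardware uses more elaborate multi-device weight cells, but a single pair keeps the sketch short while still showing why DNN error tolerance lets the noisy analog result stand in for the exact product.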

References

[1] G. W. Burr et al., “Experimental demonstration and tolerancing of a large-scale neural network (165,000 synapses), using phase-change memory as the synaptic weight element,” IEDM Tech. Digest, 29.5 (2014).

[2] H. Tsai et al., “Inference of Long-Short-Term Memory networks at software-equivalent accuracy using 2.5M analog Phase Change Memory devices,” VLSI, T8-1 (2019).

[3] P. Narayanan et al., “Fully on-chip MAC at 14nm enabled by accurate row-wise programming of PCM-based weights and parallel vector-transport in duration-format,” VLSI, T1-T2 (2021).

[4] S. Ambrogio et al., “Reducing the impact of phase-change memory conductance drift on the inference of large-scale hardware neural networks,” IEEE International Electron Devices Meeting (IEDM), 6.1.1-6.1.4 (2019).

[5] S. Ambrogio et al., “Equivalent-Accuracy Accelerated Neural Network Training using Analog Memory,” Nature, 558 (7708), 60 (2018).

[6] T. Gokmen, M. J. Rasch, and W. Haensch, “The marriage of training and inference for scaled deep learning analog hardware,” IEEE International Electron Devices Meeting (IEDM), 22.3 (2019).

About HsinYu (Sidney)

HsinYu Sidney Tsai received her PhD from the Electrical Engineering and Computer Science department at the Massachusetts Institute of Technology in 2011 and joined IBM as a research staff member. Dr. Tsai currently works at the Almaden Research Center in San Jose, CA, applying PCM-based devices to neuromorphic computing. By leveraging the training capability and error tolerance of deep neural networks (DNNs), vector-matrix multiplication and network weight-update operations can be performed in constant time at low power in memory crossbar arrays. The group has demonstrated software-equivalent accuracy for a variety of classic DNNs and datasets, for both training and inference. Before joining the neuromorphic computing group, Sidney worked at the IBM T.J. Watson Research Center in Yorktown Heights, NY, where she developed next-generation lithography for circuit applications with directed self-assembly (DSA) and managed the Advanced Lithography group in the Microelectronics Research Laboratory.
