Education Class C2
Title: Neural Network Accelerator Design
Instructor: Yu Wang, Tsinghua University
Abstract: We have witnessed the rapid growth of Deep Neural Networks (DNNs) in the past decade. Deep neural network technology has made a great impact on almost every field of our lives, including autonomous driving, health care, smart cities, social networks, and so on. There are various kinds of DNNs, among which Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are the most popular. CNNs have popularized image classification and object detection, while RNNs are used in natural language processing, time-series applications, and sentiment analysis. Recently, Graph Neural Networks (GNNs) have excelled for their capability to generate high-quality node feature vectors (embeddings) using graph-based deep learning methods. GNNs are widely applied in recommendation systems, social networks, and biomedicine.
However, the high computation and storage complexity of neural network inference poses great difficulties for its application. Besides, the extreme sparsity of graph data brings great challenges to GNN computation on traditional general-purpose platforms, such as CPUs and GPUs. In the past seven years, both academia and industry have devoted substantial effort to designing Domain Specific Accelerators (DSAs) for DNN applications, so as to achieve low-power, high-performance deep neural network inference acceleration.
This talk will first introduce some basic concepts of DNN models, including CNNs, RNNs, and GNNs, from an algorithmic perspective. Secondly, the basic ideas and methodologies of designing DSAs for DNN applications will be introduced, with a focus on FPGA-based neural network accelerator designs. Thirdly, some design principles for accelerating GNNs on GPUs and FPGAs will be discussed. Finally, this talk will outline the history and development trends of DNN accelerators.
Bio: Dr. Yu Wang is a tenured professor in the Department of Electronic Engineering at Tsinghua University. He is now the Chair of the Department of Electronic Engineering and the Dean of the Institute for Electronics and Information Technology in Tianjin at Tsinghua University. He has published more than 70 journal papers (51 in IEEE/ACM journals) and 200 conference papers (15 DAC, 14 DATE, 7 ICCAD, 31 ASP-DAC, 9 FPGA) in the areas of EDA, FPGA, VLSI design, and embedded systems. He has received Best Paper Awards at ASP-DAC'19, FPGA'17, NVMSA'17, and ISVLSI'12, a Best Poster Award at HEART'12, and 10 Best Paper Nominations. He is a recipient of the ACM/SIGDA Meritorious Service Award in 2020, the Under-40 Innovators Award at DAC in 2018 (5 worldwide per year), and the IBM X10 Faculty Award in 2010 (one of 30 worldwide). He is the co-founder of Deephi Tech (a leading deep learning solution provider), which was acquired by Xilinx in 2018. He served as TPC Chair for ISVLSI 2018, Program Co-Chair for ICFPT 2019/2011, Finance Chair for ISLPED 2012-2016, Track Chair for DATE 2017-2019 and GLSVLSI 2018, General Chair Secretary for ASP-DAC 2020, Executive Committee Member for DAC 2020, and PC member for leading conferences in these areas. He also served as Co-Editor-in-Chief for ACM SIGDA E-News in 2017-2019, Associate Editor for the Journal of Circuits, Systems, and Computers in 2013-2020, and Special Issue Editor for the Microelectronics Journal in 2017-2019. Currently he serves as Associate Editor for IEEE TCAD, IEEE TCSVT, ACM TECS, ACM TODAES, IET Computers and Digital Techniques, and IEEE Embedded Systems Letters. He is a Senior Member of ACM and IEEE.