Education Class D2


Title: DNNs on FPGAs

Instructor: Jae-sun Seo, Arizona State University

Abstract: Deep neural networks (DNNs) have been successful in many practical applications, including image classification, object detection, and speech recognition. GPUs have been a popular hardware platform for DNN workloads, aided by highly parallel computation across a massive number of processing cores. However, due to their lack of reconfigurability and high power consumption, GPUs are not an ideal accelerator solution for DNN models, especially those with high sparsity or customized architectures. Application-specific integrated circuits (ASICs) typically achieve the highest energy efficiency, but their limited configurability carries a significant risk of premature obsolescence: with DNN algorithms evolving at a fast pace, ASIC designs will always lag behind the cutting edge due to their long design cycles. FPGAs therefore occupy a unique middle ground, with potentially higher throughput and efficiency than GPUs, while offering faster time-to-market and a potentially longer useful life than ASIC solutions.

In this lecture, we will present FPGA-based DNN accelerator designs and methodologies. We will first introduce the basics of DNN algorithms and the hardware requirements for their computation and memory. Next, we will present efficient FPGA acceleration schemes, including loop optimization of iterative DNN operations, parallel computation, dataflow and data reuse, minimization of memory access, low-precision quantization, and sparsity and pruning. The lecture will discuss the key design trade-offs in mapping DNN algorithms onto different FPGAs and how to optimize throughput, power, and energy efficiency for target applications.
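To make the loop-optimization and data-reuse schemes above concrete, the following is a minimal sketch of a convolution loop nest written in HLS-style C, showing loop tiling (to keep partial sums in on-chip buffers) and an unrollable inner loop (to expose parallel multiply-accumulate units that share one input read). The layer shape, tile size, and function name (conv_layer) are illustrative assumptions, not material from the lecture.

/* Illustrative sketch (not from the lecture): tiled convolution in
 * HLS-style C. All dimensions and names below are hypothetical. */

#define N_IN   64          /* input channels  (assumed layer shape) */
#define N_OUT  64          /* output channels */
#define DIM    32          /* output feature-map height/width */
#define K      3           /* kernel size */
#define T_OUT  16          /* output-channel tile: parallelism factor */

void conv_layer(const float in[N_IN][DIM + K - 1][DIM + K - 1],
                const float w[N_OUT][N_IN][K][K],
                float out[N_OUT][DIM][DIM])
{
    /* Tile the output channels so each tile's partial sums fit on-chip. */
    for (int oc0 = 0; oc0 < N_OUT; oc0 += T_OUT) {
        for (int y = 0; y < DIM; y++) {
            for (int x = 0; x < DIM; x++) {
                float acc[T_OUT] = {0.0f};   /* on-chip partial sums */
                for (int ic = 0; ic < N_IN; ic++) {
                    for (int ky = 0; ky < K; ky++) {
                        for (int kx = 0; kx < K; kx++) {
                            /* An HLS tool would unroll this innermost
                             * loop (e.g. #pragma HLS UNROLL) into T_OUT
                             * parallel MACs that all reuse the same
                             * input pixel read: the data-reuse scheme
                             * mentioned above. */
                            for (int t = 0; t < T_OUT; t++) {
                                acc[t] += w[oc0 + t][ic][ky][kx]
                                        * in[ic][y + ky][x + kx];
                            }
                        }
                    }
                }
                for (int t = 0; t < T_OUT; t++)
                    out[oc0 + t][y][x] = acc[t];
            }
        }
    }
}

The tile size T_OUT is the kind of design knob the lecture's trade-off discussion addresses: larger tiles mean more parallel MACs and higher throughput, at the cost of more DSP slices and on-chip buffer capacity on a given FPGA.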

Bio: Jae-sun Seo received the Ph.D. degree in electrical engineering from the University of Michigan in 2010. From 2010 to 2013, he was with the IBM T. J. Watson Research Center, where he worked on cognitive computing chips under the DARPA SyNAPSE project and on energy-efficient integrated circuits for high-performance processors. In 2014, he joined Arizona State University in the School of Electrical, Computer and Energy Engineering (ECEE), where he is now an Associate Professor. During the summer of 2015, he was a visiting faculty member at the Intel Circuits Research Lab. His research interests include efficient hardware design of machine learning and neuromorphic algorithms, and integrated power management. Dr. Seo was a recipient of the IBM Outstanding Technical Achievement Award (2012), the NSF CAREER Award (2017), and the Intel Outstanding Researcher Award (2021).