Event Details

MONDAY, October 16, 9:00am - 10:00am | Crystal
EVENT TYPE: KEYNOTE
Small Neural Nets Are Beautiful: Enabling Embedded Systems with Small Deep-Neural-Network Architectures
Speaker:
Kurt Keutzer - Univ. of California, Berkeley
Over the last 50 years, in the diverse areas of natural language processing, speech recognition, and computer vision, progress has been achieved through the orchestration of dozens of algorithms generally classified under the heading of "machine learning." In just the last five years, the best results on most problems in these areas have been provided by a single general approach: Deep Neural Networks (DNNs). Moreover, for many problems, such as object classification and object detection, DNNs enabled computer vision algorithms to offer an acceptable level of accuracy for the first time. Thus, in many application areas, broad algorithmic exploration is being replaced by the creation of a single DNN architecture. Compared to other software architectures, DNNs are quite simple: they consist of a feedforward pipe-and-filter structure (sketched in code after the questions below). Nevertheless, the particular organization of the DNN and the precise characterization of the computations in the filter elements are diverse enough to create a rich design space. In the creation of a DNN to solve a particular application problem there are two implicit questions:

1) What is the right DNN architecture?
2) How do we find the right DNN architecture?
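
To make the pipe-and-filter structure concrete, here is a minimal sketch in plain NumPy. The layer widths and the ReLU nonlinearity are illustrative assumptions, not details taken from the talk.

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def layer(x, W, b):
        # One "filter" stage in the pipe: an affine transform plus a nonlinearity.
        return relu(x @ W + b)

    def forward(x, params):
        # The "pipe": data flows through the filter stages in a fixed order.
        for W, b in params:
            x = layer(x, W, b)
        return x

    rng = np.random.default_rng(0)
    sizes = [784, 128, 64, 10]  # hypothetical layer widths
    params = [(rng.standard_normal((m, n)) * 0.01, np.zeros(n))
              for m, n in zip(sizes[:-1], sizes[1:])]
    outputs = forward(rng.standard_normal(784), params)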

Our prior work in embedded systems has led us to explore these questions in a couple of novel ways. First, for us "the right" DNN is one that offers acceptable accuracy and is capable of operating in real time within the power and energy constraints of its target embedded application. This focus has led us away from experimenting with DNN architectures with a large number (e.g., 60M) of model parameters, because their memory footprint makes them prohibitively expensive to deploy in many embedded systems. Instead, we opted to explore the other extreme: very small DNN architectures capable of fitting into even the smallest embedded systems.
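The memory-footprint concern is back-of-the-envelope arithmetic, illustrated below assuming 32-bit (4-byte) weights; the 60M figure is roughly AlexNet-scale, and the ~1.25M figure is roughly SqueezeNet-scale.

    def param_footprint_mb(num_params, bytes_per_param=4):
        # Parameter storage in megabytes, assuming 4-byte floating-point weights.
        return num_params * bytes_per_param / 1e6

    print(param_footprint_mb(60_000_000))  # ~240 MB: prohibitive for many embedded systems
    print(param_footprint_mb(1_250_000))   # ~5 MB: a small-DNN-scale budget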

In approaching the second question, "How do we find the right DNN architecture?", we sought to leverage decades of research on systematic design-space exploration of application-specific embedded microprocessors.
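As an illustration of what design-space exploration looks like in its simplest form, the sketch below enumerates a hypothetical space of architecture knobs and prunes it against an assumed parameter budget. The knobs, the cost model, and the budget are all placeholders, not the actual methodology from the talk.

    import itertools

    # Hypothetical design space: width multiplier and network depth.
    widths = [0.25, 0.5, 1.0]
    depths = [4, 8, 12]
    budget = 1_000_000  # assumed parameter budget for the target device

    def estimate_params(width, depth, base=50_000):
        # Crude stand-in cost model; a real flow would measure the actual net.
        return int(base * depth * width ** 2)

    candidates = [(w, d) for w, d in itertools.product(widths, depths)
                  if estimate_params(w, d) <= budget]
    # Each surviving (width, depth) point would then be trained and evaluated
    # for accuracy, latency, and energy on the target hardware.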

The first result of our efforts was SqueezeNet, a DNN targeted at the object-classification problem that achieves the same accuracy as the popular AlexNet DNN with a 50x reduction in the number of model parameters. In this talk, we will discuss our systematic approach to the creation of SqueezeNet and broadly survey diverse efforts on designing small DNNs targeted for embedded systems.
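For reference, the published SqueezeNet design is built from "Fire" modules: a 1x1 "squeeze" convolution feeding parallel 1x1 and 3x3 "expand" convolutions whose outputs are concatenated. Below is a PyTorch sketch; the channel counts in the example match the first Fire module described in the SqueezeNet paper.

    import torch
    import torch.nn as nn

    class Fire(nn.Module):
        # A 1x1 "squeeze" layer feeding parallel 1x1 and 3x3 "expand" layers,
        # whose outputs are concatenated along the channel dimension.
        def __init__(self, in_ch, squeeze_ch, expand1x1_ch, expand3x3_ch):
            super().__init__()
            self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
            self.expand1x1 = nn.Conv2d(squeeze_ch, expand1x1_ch, kernel_size=1)
            self.expand3x3 = nn.Conv2d(squeeze_ch, expand3x3_ch,
                                       kernel_size=3, padding=1)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            x = self.act(self.squeeze(x))
            return torch.cat([self.act(self.expand1x1(x)),
                              self.act(self.expand3x3(x))], dim=1)

    # Example: 96 input channels, 16 squeeze filters, 64 + 64 expand filters.
    m = Fire(96, 16, 64, 64)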

Biography: Kurt Keutzer is a Professor of Electrical Engineering and Computer Science at the University of California, Berkeley. Prior to joining UC Berkeley, he was Senior Vice President and Chief Technical Officer at Synopsys. The 50th Design Automation Conference awarded Kurt a number of honors, including "Top 10 Most Cited Author" and "Author of a Top 10 Most Cited Paper." He was also recognized as one of three people to have received four Best Paper Awards in the 50-year history of DAC. Kurt has been a Fellow of the IEEE since 1996. As an entrepreneur, Kurt has been an active angel investor and was among the first investors in both Coverity and Tensilica. Kurt recently co-founded DeepScale with Forrest Iandola.

Kurt’s current research interests are at the two ends of the computational spectrum of Deep Learning: accelerating the training of Deep Neural Nets using massively distributed computing, and designing and implementing Deep Neural Nets for embedded systems.