
Title: Efficient Processing of Deep Neural Networks

Author: Vivienne Sze

Publisher: Ingram

Genre: Software

Series: Synthesis Lectures on Computer Architecture

ISBN: 9781681738338


       Efficient Processing of Deep Neural Networks

      Vivienne Sze, Yu-Hsin Chen, and Tien-Ju Yang

      Massachusetts Institute of Technology

      Joel S. Emer

      Massachusetts Institute of Technology and Nvidia Research

       SYNTHESIS LECTURES ON COMPUTER ARCHITECTURE #50


       ABSTRACT

      This book provides a structured treatment of the key principles and techniques for enabling efficient processing of deep neural networks (DNNs). DNNs are currently widely used for many artificial intelligence (AI) applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this accuracy comes at the cost of high computational complexity. Therefore, techniques that enable efficient processing of deep neural networks to improve key metrics—such as energy efficiency, throughput, and latency—without sacrificing accuracy or increasing hardware costs are critical to enabling the wide deployment of DNNs in AI systems.

      The book includes background on DNN processing; a description and taxonomy of hardware architectural approaches for designing DNN accelerators; key metrics for evaluating and comparing different designs; features of DNN processing that are amenable to hardware/algorithm co-design to improve energy efficiency and throughput; and opportunities for applying new technologies. Readers will find a structured introduction to the field as well as formalization and organization of key concepts from contemporary work that provide insights that may spark new ideas.

       KEYWORDS

      deep learning, neural network, deep neural networks (DNN), convolutional neural networks (CNN), artificial intelligence (AI), efficient processing, accelerator architecture, hardware/software co-design, hardware/algorithm co-design, domain-specific accelerators

       Contents

       Preface

       Acknowledgments

       PART I Understanding Deep Neural Networks

       1 Introduction

       1.1 Background on Deep Neural Networks

       1.1.1 Artificial Intelligence and Deep Neural Networks

       1.1.2 Neural Networks and Deep Neural Networks

       1.2 Training versus Inference

       1.3 Development History

       1.4 Applications of DNNs

       1.5 Embedded versus Cloud

       2 Overview of Deep Neural Networks

       2.1 Attributes of Connections Within a Layer

       2.2 Attributes of Connections Between Layers

       2.3 Popular Types of Layers in DNNs

       2.3.1 CONV Layer (Convolutional)

       2.3.2 FC Layer (Fully Connected)

       2.3.3 Nonlinearity

       2.3.4 Pooling and Unpooling

       2.3.5 Normalization

       2.3.6 Compound Layers

       2.4 Convolutional Neural Networks (CNNs)

       2.4.1 Popular CNN Models

       2.5 Other DNNs

       2.6 DNN Development Resources

       2.6.1 Frameworks

       2.6.2 Models

       2.6.3 Popular Datasets for Classification

       2.6.4 Datasets for Other Tasks

       2.6.5 Summary

       PART II Design of Hardware for Processing DNNs

       3 Key Metrics and Design Objectives

       3.1 Accuracy

       3.2 Throughput and Latency
