Title: Efficient Processing of Deep Neural Networks
Author: Vivienne Sze
Publisher: Ingram
Genre: Software
Series: Synthesis Lectures on Computer Architecture
ISBN: 9781681738338
3.7 Interplay Between Different Metrics
4.1 Matrix Multiplication with Toeplitz
4.2 Tiling for Optimizing Performance
4.3 Computation Transform Optimizations
4.3.1 Gauss’ Complex Multiplication Transform
4.3.2 Strassen’s Matrix Multiplication Transform
4.3.3 Winograd Transform
4.3.4 Fast Fourier Transform
4.3.5 Selecting a Transform
4.4 Summary
5.1 Evaluation Metrics and Design Objectives
5.2 Key Properties of DNN to Leverage
5.3 DNN Hardware Design Considerations
5.4 Architectural Techniques for Exploiting Data Reuse
5.4.1 Temporal Reuse
5.4.2 Spatial Reuse
5.5 Techniques to Reduce Reuse Distance
5.6 Dataflows and Loop Nests
5.7 Dataflow Taxonomy
5.7.1 Weight Stationary (WS)
5.7.2 Output Stationary (OS)
5.7.3 Input Stationary (IS)
5.7.4 Row Stationary (RS)
5.7.5 Other Dataflows
5.7.6 Dataflows for Cross-Layer Processing
5.8 DNN Accelerator Buffer Management Strategies
5.8.1 Implicit versus Explicit Orchestration
5.8.2 Coupled versus Decoupled Orchestration
5.8.3 Explicit Decoupled Data Orchestration (EDDO)
5.9 Flexible NoC Design for DNN Accelerators
5.9.1 Flexible Hierarchical Mesh Network
5.10 Summary
6 Operation Mapping on Specialized Hardware
6.1 Mapping and Loop Nests
6.2 Mappers and Compilers
6.3 Mapper Organization
6.3.1 Map Spaces and Iteration Spaces
6.3.2 Mapper Search
6.3.3 Mapper Models and Configuration Generation
6.4 Analysis Framework for Energy Efficiency
6.4.1 Input Data Access Energy Cost
6.4.2 Partial Sum Accumulation Energy Cost
6.4.3 Obtaining the Reuse Parameters
6.5 Eyexam: Framework for Evaluating Performance
6.5.1 Simple 1-D Convolution Example
6.5.2 Applying the Performance Analysis Framework to the 1-D Example
6.6 Tools for Map Space Exploration
PART III Co-Design of DNN Hardware and Algorithms
7.1 Benefits of Reduced Precision
7.2 Determining the Bit Width
7.2.1 Quantization
7.2.2 Standard Components of the Bit Width
7.3 Mixed Precision: Different Precision for Different Data Types
7.4 Varying Precision: Change Precision for Different Parts of the DNN
7.5 Binary Nets
7.6 Interplay Between Precision and Other Design Choices
7.7 Summary of Design Considerations for Reducing Precision
8.1 Sources of Sparsity
8.1.1 Activation Sparsity
8.1.2 Weight Sparsity
8.2 Compression
8.2.1 Tensor Terminology
8.2.2 Classification of Tensor Representations
8.2.3 Representation of Payloads
8.2.4 Representation Optimizations