Computational Analysis and Deep Learning for Medical Care. Group of Authors

Network accuracy can be improved by increasing depth, but once the network converges, its accuracy saturates; adding further layers then degrades performance rapidly, which in turn results in higher training error. To solve the problem of the vanishing/exploding gradient, ResNet [6] was proposed with a residual learning framework that lets new layers fit a residual mapping. Once a model has converged, it is easier to push the residual to zero than to fit the full underlying mapping. The principles of ResNet are residual learning, identity mapping, and skip connections: the input of a block is carried past its convolutional layers and added to their output, after which non-linear activation (ReLU) and pooling are applied.
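The skip-connection idea above can be sketched in a few lines. This is a minimal illustration, not the book's code: the block's convolutional layers are stood in for by plain weight matrices on a flat input, and the function names (`relu`, `residual_block`) are ours. The point is only the shape of the computation, y = ReLU(F(x) + x), and that a zero residual F reduces the block to the identity mapping.

```python
import numpy as np

def relu(x):
    # Element-wise ReLU non-linearity.
    return np.maximum(0.0, x)

def residual_block(x, weight_layers):
    """Minimal residual block: y = ReLU(F(x) + x).

    `weight_layers` stands in for the block's stacked convolutional
    layers; each entry here is just a weight matrix applied to a flat
    input vector, purely to illustrate the skip connection.
    """
    out = x
    for i, w in enumerate(weight_layers):
        out = out @ w
        if i < len(weight_layers) - 1:
            out = relu(out)  # ReLU between the stacked layers
    # Identity shortcut: add the block's input before the final ReLU.
    return relu(out + x)

# With all-zero weights the residual F(x) is zero, so the block acts
# as the identity mapping on a non-negative input.
x = np.array([1.0, 2.0, 3.0])
zero_layers = [np.zeros((3, 3)), np.zeros((3, 3))]
print(residual_block(x, zero_layers))  # -> [1. 2. 3.]
```

This is why pushing the residual to zero is "easy": the optimizer only has to drive F toward zero, rather than learn an identity mapping through many non-linear layers.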

| Layer name | Input size | Filter / window | # Filters | Stride | Depth | # 1×1 | # 3×3 reduce | # 3×3 | # 5×5 reduce | # 5×5 | Pool proj | Padding | Output size | Params | Ops |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Convolution | 224 × 224 | 7 × 7 | 64 | 2 | 1 | | | | | | | 2 | 112 × 112 × 64 | 2.7M | 34M |
| Max pool | 112 × 112 | 3 × 3 | | 2 | 0 | | | | | | | 0 | 56 × 56 × 64 | | |
| Convolution | 56 × 56 | 3 × 3 | 192 | 1 | 2 | | 64 | 192 | | | | 1 | 56 × 56 × 192 | 112K | 360M |
| Max pool | 56 × 56 | 3 × 3 | 192 | 2 | 0 | | | | | | | 0 | 28 × 28 × 192 | | |
| Inception (3a) | 28 × 28 | | | | 2 | 64 | 96 | 128 | 16 | 32 | 32 | | 28 × 28 × 256 | 159K | 128M |
| Inception (3b) | 28 × 28 | | | | 2 | 128 | 128 | 192 | 32 | 96 | 64 | | 28 × 28 × 480 | 380K | 304M |
| Max pool | 28 × 28 | 3 × 3 | 480 | 2 | 0 | | | | | | | 0 | 14 × 14 × 480 | | |
| Inception (4a) | 14 × 14 | | | | 2 | 192 | 96 | 208 | 16 | 48 | 64 | | 14 × 14 × 512 | 364K | 73M |
| Inception (4b) | 14 × 14 | | | | 2 | 160 | 112 | 224 | 24 | … | | | | | |
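The output depth of each Inception row in the table is just the depth-wise concatenation of its four parallel branches. As a quick sanity check, the sketch below (our own illustration, with branch widths taken from the Inception (3a) row: 64 for 1×1, 128 for 3×3, 32 for 5×5, 32 for pool projection) models each branch output as a feature map and concatenates along the channel axis; real branches would of course be convolutions followed by ReLU.

```python
import numpy as np

# Branch output widths for Inception (3a), read from the table above.
branches = {"1x1": 64, "3x3": 128, "5x5": 32, "pool_proj": 32}

def inception_concat(h, w, branches):
    """Simulate the depth-wise concatenation of an Inception module.

    Each branch output is modelled as a feature map of shape
    (h, w, channels); the module output stacks them along the
    channel axis.
    """
    maps = [np.random.rand(h, w, c) for c in branches.values()]
    return np.concatenate(maps, axis=-1)

out = inception_concat(28, 28, branches)
print(out.shape)  # (28, 28, 256)
```

The resulting 64 + 128 + 32 + 32 = 256 channels match the 28 × 28 × 256 output size listed for Inception (3a).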