Paper: https://arxiv.org/abs/1905.11946
PyTorch code: https://github.com/lukemelas/EfficientNet-PyTorch
Main idea of the paper
The paper studies how model scaling affects performance. By adjusting network depth (number of layers), width (number of channels), and input image resolution, it derives better network models: compared with the originals, the scaled models have both fewer parameters and higher accuracy. The argument is built mainly around the figure below.
Traditional scaling methods usually adjust only one of these dimensions. For example, ResNet scales depth to obtain variants from ResNet-18 to ResNet-200, MobileNets scale the number of channels, and other work enlarges the input image to improve accuracy.
This paper instead scales an existing baseline network by jointly adjusting depth, width, and resolution to obtain better accuracy and speed. The design of the baseline network therefore matters as well; the paper's EfficientNet-B0 baseline is shown in the figure below.
The remaining work is to tune the three factors d, w, and r. For EfficientNet-B0, φ is fixed to 1 and a small grid search yields α = 1.2, β = 1.1, γ = 1.15. The scaling formula is:
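As stated in the paper's compound scaling method (the original figure is an image, so the rule is reproduced here for reference):
depth: d = α^φ
width: w = β^φ
resolution: r = γ^φ
subject to α · β² · γ² ≈ 2 and α ≥ 1, β ≥ 1, γ ≥ 1,
so that the total FLOPS grow roughly as 2^φ.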
EfficientNet-B0 model details (code)
Feature extraction in the model consists of the stem, the blocks, and the head; the resulting feature map is then fed to a fully connected layer for classification.
EfficientNet(
(_conv_stem): Conv2dStaticSamePadding(
3, 32, kernel_size=(3, 3), stride=(2, 2), bias=False
(static_padding): ZeroPad2d(padding=(0, 1, 0, 1), value=0.0)
)
(_bn0): BatchNorm2d(32, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_blocks): ModuleList(
(0): MBConvBlock(
(_depthwise_conv): Conv2dStaticSamePadding(
32, 32, kernel_size=(3, 3), stride=[1, 1], groups=32, bias=False
(static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0)
)
(_bn1): BatchNorm2d(32, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
32, 8, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
8, 32, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(16, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(1): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(96, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
96, 96, kernel_size=(3, 3), stride=[2, 2], groups=96, bias=False
(static_padding): ZeroPad2d(padding=(0, 1, 0, 1), value=0.0)
)
(_bn1): BatchNorm2d(96, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
96, 4, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
4, 96, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
96, 24, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(24, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(2): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(144, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
144, 144, kernel_size=(3, 3), stride=(1, 1), groups=144, bias=False
(static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0)
)
(_bn1): BatchNorm2d(144, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
144, 6, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
6, 144, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
144, 24, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(24, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(3): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(144, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
144, 144, kernel_size=(5, 5), stride=[2, 2], groups=144, bias=False
(static_padding): ZeroPad2d(padding=(1, 2, 1, 2), value=0.0)
)
(_bn1): BatchNorm2d(144, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
144, 6, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
6, 144, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
144, 40, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(40, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(4): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
40, 240, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(240, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
240, 240, kernel_size=(5, 5), stride=(1, 1), groups=240, bias=False
(static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0)
)
(_bn1): BatchNorm2d(240, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
240, 10, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
10, 240, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
240, 40, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(40, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(5): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
40, 240, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(240, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
240, 240, kernel_size=(3, 3), stride=[2, 2], groups=240, bias=False
(static_padding): ZeroPad2d(padding=(0, 1, 0, 1), value=0.0)
)
(_bn1): BatchNorm2d(240, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
240, 10, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
10, 240, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
240, 80, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(80, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(6): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
80, 480, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(480, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
480, 480, kernel_size=(3, 3), stride=(1, 1), groups=480, bias=False
(static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0)
)
(_bn1): BatchNorm2d(480, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
480, 20, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
20, 480, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
480, 80, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(80, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(7): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
80, 480, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(480, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
480, 480, kernel_size=(3, 3), stride=(1, 1), groups=480, bias=False
(static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0)
)
(_bn1): BatchNorm2d(480, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
480, 20, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
20, 480, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
480, 80, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(80, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(8): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
80, 480, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(480, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
480, 480, kernel_size=(5, 5), stride=[1, 1], groups=480, bias=False
(static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0)
)
(_bn1): BatchNorm2d(480, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
480, 20, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
20, 480, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
480, 112, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(112, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(9): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(672, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
672, 672, kernel_size=(5, 5), stride=(1, 1), groups=672, bias=False
(static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0)
)
(_bn1): BatchNorm2d(672, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
672, 28, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
28, 672, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
672, 112, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(112, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(10): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(672, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
672, 672, kernel_size=(5, 5), stride=(1, 1), groups=672, bias=False
(static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0)
)
(_bn1): BatchNorm2d(672, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
672, 28, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
28, 672, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
672, 112, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(112, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(11): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(672, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
672, 672, kernel_size=(5, 5), stride=[2, 2], groups=672, bias=False
(static_padding): ZeroPad2d(padding=(1, 2, 1, 2), value=0.0)
)
(_bn1): BatchNorm2d(672, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
672, 28, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
28, 672, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
672, 192, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(192, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(12): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
192, 1152, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(1152, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
1152, 1152, kernel_size=(5, 5), stride=(1, 1), groups=1152, bias=False
(static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0)
)
(_bn1): BatchNorm2d(1152, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
1152, 48, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
48, 1152, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
1152, 192, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(192, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(13): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
192, 1152, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(1152, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
1152, 1152, kernel_size=(5, 5), stride=(1, 1), groups=1152, bias=False
(static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0)
)
(_bn1): BatchNorm2d(1152, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
1152, 48, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
48, 1152, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
1152, 192, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(192, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(14): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
192, 1152, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(1152, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
1152, 1152, kernel_size=(5, 5), stride=(1, 1), groups=1152, bias=False
(static_padding): ZeroPad2d(padding=(2, 2, 2, 2), value=0.0)
)
(_bn1): BatchNorm2d(1152, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
1152, 48, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
48, 1152, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
1152, 192, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(192, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
(15): MBConvBlock(
(_expand_conv): Conv2dStaticSamePadding(
192, 1152, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn0): BatchNorm2d(1152, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_depthwise_conv): Conv2dStaticSamePadding(
1152, 1152, kernel_size=(3, 3), stride=[1, 1], groups=1152, bias=False
(static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0)
)
(_bn1): BatchNorm2d(1152, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
1152, 48, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
48, 1152, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
1152, 320, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(320, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
)
(_conv_head): Conv2dStaticSamePadding(
320, 1280, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn1): BatchNorm2d(1280, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_avg_pooling): AdaptiveAvgPool2d(output_size=1)
(_dropout): Dropout(p=0.2, inplace=False)
(_fc): Linear(in_features=1280, out_features=1000, bias=True)
(_swish): MemoryEfficientSwish()
)
Take a single 224x224 RGB image as an example; its tensor shape is [1, 3, 224, 224].
Forward pass through the stem. The swish activation is used heavily in the rest of the network; its name, MemoryEfficientSwish, indicates a memory-saving implementation (a sketch of how this is typically achieved follows the shape trace below).
(_conv_stem): Conv2dStaticSamePadding(
3, 32, kernel_size=(3, 3), stride=(2, 2), bias=False
(static_padding): ZeroPad2d(padding=(0, 1, 0, 1), value=0.0)
)
(_bn0): BatchNorm2d(32, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
Shape changes: [1,3,224,224]--pad-->[1,3,225,225]--conv-->[1,32,112,112]--bn0-->[1,32,112,112]--swish-->[1,32,112,112]
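A minimal sketch of a memory-efficient swish in the spirit of the repository's implementation: the backward pass recomputes sigmoid(x) from the saved input instead of caching the intermediate products. Class names mirror the repository, but treat the code as an illustration rather than a verbatim copy.

import torch
import torch.nn as nn

class SwishImplementation(torch.autograd.Function):
    # Swish whose backward recomputes sigmoid(x) rather than storing extra tensors.
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)              # keep only the input
        return x * torch.sigmoid(x)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        s = torch.sigmoid(x)                  # recomputed here, not stored in forward
        return grad_output * (s * (1 + x * (1 - s)))

class MemoryEfficientSwish(nn.Module):
    def forward(self, x):
        return SwishImplementation.apply(x)

# Quick check against the plain formulation x * sigmoid(x):
x = torch.randn(4, requires_grad=True)
print(torch.allclose(MemoryEfficientSwish()(x), x * torch.sigmoid(x)))  # True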
Forward pass through the blocks, taking the first MBConvBlock as an example:
(0): MBConvBlock(
(_depthwise_conv): Conv2dStaticSamePadding(
32, 32, kernel_size=(3, 3), stride=[1, 1], groups=32, bias=False
(static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0)
)
(_bn1): BatchNorm2d(32, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_se_reduce): Conv2dStaticSamePadding(
32, 8, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_se_expand): Conv2dStaticSamePadding(
8, 32, kernel_size=(1, 1), stride=(1, 1)
(static_padding): Identity()
)
(_project_conv): Conv2dStaticSamePadding(
32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn2): BatchNorm2d(16, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
)
For background on depthwise convolution, see this post: http://www.reibang.com/p/38dc74d12fcf?utm_source=oschina-app. A condensed sketch of the block's forward pass is given after the shape summary below.
After feature extraction through all the blocks, the output feature map has shape [1, 320, 7, 7].
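A minimal sketch of an MBConvBlock forward pass (expand → depthwise conv → squeeze-and-excitation → project, with an identity skip when shapes allow), assuming plain nn.Conv2d with explicit padding instead of the repository's Conv2dStaticSamePadding; the class name MBConvSketch is made up for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MBConvSketch(nn.Module):
    # Simplified MBConv: expand -> depthwise -> SE -> project (+ skip).
    def __init__(self, in_ch, out_ch, expand_ratio=6, kernel_size=3, stride=1, se_ratio=0.25):
        super().__init__()
        mid = in_ch * expand_ratio
        self.expand = None
        if expand_ratio != 1:                      # block (0) of B0 skips this step
            self.expand = nn.Sequential(
                nn.Conv2d(in_ch, mid, 1, bias=False),
                nn.BatchNorm2d(mid, eps=1e-3, momentum=0.01), nn.SiLU())
        self.depthwise = nn.Sequential(            # groups=mid -> one filter per channel
            nn.Conv2d(mid, mid, kernel_size, stride, kernel_size // 2, groups=mid, bias=False),
            nn.BatchNorm2d(mid, eps=1e-3, momentum=0.01), nn.SiLU())
        se_ch = max(1, int(in_ch * se_ratio))      # SE bottleneck sized from the input channels
        self.se_reduce = nn.Conv2d(mid, se_ch, 1)
        self.se_expand = nn.Conv2d(se_ch, mid, 1)
        self.project = nn.Sequential(              # linear bottleneck: no activation after BN
            nn.Conv2d(mid, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch, eps=1e-3, momentum=0.01))
        self.use_skip = (stride == 1 and in_ch == out_ch)

    def forward(self, x):
        out = self.expand(x) if self.expand is not None else x
        out = self.depthwise(out)
        se = F.adaptive_avg_pool2d(out, 1)         # squeeze: global average per channel
        se = torch.sigmoid(self.se_expand(F.silu(self.se_reduce(se))))
        out = out * se                             # excite: channel-wise re-weighting
        out = self.project(out)
        if self.use_skip:
            out = out + x                          # identity skip (drop connect omitted here)
        return out

x = torch.randn(1, 24, 56, 56)
print(MBConvSketch(24, 24, expand_ratio=6)(x).shape)  # torch.Size([1, 24, 56, 56])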
Design of the head
(_conv_head): Conv2dStaticSamePadding(
320, 1280, kernel_size=(1, 1), stride=(1, 1), bias=False
(static_padding): Identity()
)
(_bn1): BatchNorm2d(1280, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True)
(_swish): MemoryEfficientSwish()
Forward shape changes: [1,320,7,7]--head-->[1,1280,7,7]--bn1-->[1,1280,7,7]--swish-->[1,1280,7,7]
Pooling and the final fully connected layer
(_avg_pooling): AdaptiveAvgPool2d(output_size=1)
(_dropout): Dropout(p=0.2, inplace=False)
(_fc): Linear(in_features=1280, out_features=1000, bias=True)
Shape changes after conv_head: [1,1280,7,7]--avg_pool-->[1,1280,1,1]--view-->[1,1280]--dropout-->[1,1280]--fc-->[1,1000]. The final output for one image is [1, 1000], since there are 1000 classes to predict. This completes the forward pass of the network.
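For reference, a short usage snippet with the repository's public API (from_pretrained and extract_features), assuming the efficientnet_pytorch package is installed via pip:

import torch
from efficientnet_pytorch import EfficientNet

# Load pretrained B0 (downloads ImageNet weights on first use).
model = EfficientNet.from_pretrained('efficientnet-b0')
model.eval()

x = torch.randn(1, 3, 224, 224)            # one 224x224 RGB image
with torch.no_grad():
    logits = model(x)                       # stem -> blocks -> head -> pool -> dropout -> fc
print(logits.shape)                         # torch.Size([1, 1000])

# Features after the conv head (before pooling), matching the trace above:
feats = model.extract_features(x)
print(feats.shape)                          # torch.Size([1, 1280, 7, 7])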
The network uses two techniques against overfitting, dropout and drop connect; for the difference between them, see https://www.cnblogs.com/tornadomeet/p/3430312.html. A sketch of drop connect follows.
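A minimal sketch of drop connect as applied to the residual branches (stochastic depth: during training, a whole sample's residual branch is zeroed with probability p and the survivors are rescaled); the function name follows the repository's utility, but the code here is an illustration.

import torch

def drop_connect(inputs, p, training):
    # Randomly drop the entire residual branch per sample with probability p.
    if not training:
        return inputs
    keep_prob = 1.0 - p
    # One Bernoulli draw per sample, broadcast over the C, H, W dimensions.
    random_tensor = keep_prob + torch.rand(
        [inputs.shape[0], 1, 1, 1], dtype=inputs.dtype, device=inputs.device)
    binary_mask = torch.floor(random_tensor)    # 1 with probability keep_prob, else 0
    return inputs / keep_prob * binary_mask     # rescale so the expectation is unchanged

x = torch.randn(4, 16, 8, 8)
print(drop_connect(x, p=0.2, training=True).shape)  # torch.Size([4, 16, 8, 8])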
Finally, by adjusting α, β, γ and the parameter φ, the baseline network is scaled up to obtain EfficientNet-B1 through B7; the per-variant coefficients are collected in the snippet below. As for finding the best coefficients, there is little to do but keep searching, which presumes an amount of hardware and time that most of us do not have.
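For orientation, the (width, depth, resolution, dropout) settings per variant as listed in the reference implementation's parameter table (reproduced from memory here; check the repository for the authoritative values), plus how to build a variant by name:

from efficientnet_pytorch import EfficientNet

# (width_coefficient, depth_coefficient, input resolution, dropout rate) per variant.
params = {
    'efficientnet-b0': (1.0, 1.0, 224, 0.2),
    'efficientnet-b1': (1.0, 1.1, 240, 0.2),
    'efficientnet-b2': (1.1, 1.2, 260, 0.3),
    'efficientnet-b3': (1.2, 1.4, 300, 0.3),
    'efficientnet-b4': (1.4, 1.8, 380, 0.4),
    'efficientnet-b5': (1.6, 2.2, 456, 0.4),
    'efficientnet-b6': (1.8, 2.6, 528, 0.5),
    'efficientnet-b7': (2.0, 3.1, 600, 0.5),
}

# Any variant can be instantiated (untrained) by name:
model_b3 = EfficientNet.from_name('efficientnet-b3')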
References:
Depthwise convolution: http://www.reibang.com/p/38dc74d12fcf?utm_source=oschina-app
The difference between dropout and drop connect: https://www.cnblogs.com/tornadomeet/p/3430312.html
Commentary on EfficientNet: https://www.zhihu.com/question/326833457/answer/700322601