This article uses a small hands-on example to observe and explain what the PyTorch model instance methods model.modules(), model.named_modules(), model.children(), model.named_children(), model.parameters(), model.named_parameters(), and model.state_dict() return. The example is as follows:
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, num_class=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=6, kernel_size=3),
            nn.BatchNorm2d(6),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(in_channels=6, out_channels=9, kernel_size=3),
            nn.BatchNorm2d(9),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2)
        )
        self.classifier = nn.Sequential(
            nn.Linear(9*8*8, 128),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(128, num_class)
        )

    def forward(self, x):
        output = self.features(x)
        output = output.view(output.size()[0], -1)
        output = self.classifier(output)
        return output
model = Net()
The code above defines a network made up of two convolutional layers and two fully connected layers. Note that, going from the outside in, this Net has three levels:
Net:
----features:
------------Conv2d
------------BatchNorm2d
------------ReLU
------------MaxPool2d
------------Conv2d
------------BatchNorm2d
------------ReLU
------------MaxPool2d
----classifier:
------------Linear
------------ReLU
------------Dropout
------------Linear
The network Net itself is a subclass of nn.Module. It contains two further nn.Module subclasses built from Sequential containers, features and classifier, and each of those in turn contains many individual layers, all of which are also nn.Module subclasses. So from the outside in there are three levels in total.
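Before inspecting the model, here is a quick sanity check of the forward pass (a minimal sketch; the article does not state the input size, so a 38x38 input is assumed here because that is what makes the flattened size work out to 9*8*8 = 576):

x = torch.randn(1, 3, 38, 38)   # batch of 1, 3 channels, 38x38 spatial size (assumed)
out = model(x)                  # 38 -> 36 -> 18 -> 16 -> 8 spatially, 9 channels, flattened to 576
print(out.shape)                # torch.Size([1, 10])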
Now let's look at what each of these instance methods returns.
In [7]: model.named_modules()
Out[7]: <generator object Module.named_modules at 0x7f5db88f3840>
In [8]: model.modules()
Out[8]: <generator object Module.modules at 0x7f5db3f53c00>
In [9]: model.children()
Out[9]: <generator object Module.children at 0x7f5db3f53408>
In [10]: model.named_children()
Out[10]: <generator object Module.named_children at 0x7f5db80305e8>
In [11]: model.parameters()
Out[11]: <generator object Module.parameters at 0x7f5db3f534f8>
In [12]: model.named_parameters()
Out[12]: <generator object Module.named_parameters at 0x7f5d42da7570>
In [13]: model.state_dict()
Out[13]:
OrderedDict([('features.0.weight', tensor([[[[ 0.1200, -0.1627, -0.0841],
[-0.1369, -0.1525, 0.0541],
[ 0.1203, 0.0564, 0.0908]],
……
As you can see, except for model.state_dict(), which returns an OrderedDict, the other methods all return generators, i.e. iterables. Let's pull the values out with list comprehensions (a for loop over each generator) so we can inspect them further:
In [14]: model_modules = [x for x in model.modules()]
In [15]: model_named_modules = [x for x in model.named_modules()]
In [16]: model_children = [x for x in model.children()]
In [17]: model_named_children = [x for x in model.named_children()]
In [18]: model_parameters = [x for x in model.parameters()]
In [19]: model_named_parameters = [x for x in model.named_parameters()]
1. model.modules()
model.modules() iterates over all submodules of the model, where "submodule" means any nn.Module subclass. In this example that includes Net() itself, the features and classifier containers, and every nn.xxx layer they hold (the convolutions, pooling, ReLU, Linear, BatchNorm, Dropout, and so on); model.modules() visits all of them. Let's look at the list model_modules:
In [20]: model_modules
Out[20]:
[Net(
(features): Sequential(
(0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
(1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
(5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(6): ReLU(inplace)
(7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(classifier): Sequential(
(0): Linear(in_features=576, out_features=128, bias=True)
(1): ReLU(inplace)
(2): Dropout(p=0.5)
(3): Linear(in_features=128, out_features=10, bias=True)
)
),
Sequential(
(0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
(1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
(5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(6): ReLU(inplace)
(7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
),
Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1)),
BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True),
ReLU(inplace),
MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False),
Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1)),
BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True),
ReLU(inplace),
MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False),
Sequential(
(0): Linear(in_features=576, out_features=128, bias=True)
(1): ReLU(inplace)
(2): Dropout(p=0.5)
(3): Linear(in_features=128, out_features=10, bias=True)
),
Linear(in_features=576, out_features=128, bias=True),
ReLU(inplace),
Dropout(p=0.5),
Linear(in_features=128, out_features=10, bias=True)]
In [21]: len(model_modules)
Out[21]: 15
The list model_modules contains 15 elements: first the whole Net, then the features submodule followed by every layer inside features, and then the classifier submodule followed by every layer inside classifier. In other words, model.modules() recursively visits every module in the model.
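A typical use of model.modules() is applying something to every layer of a given type no matter how deeply it is nested, for example weight initialization (a sketch, not part of the original example):

for m in model.modules():
    if isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight, nonlinearity='relu')  # re-initialize conv weights
        if m.bias is not None:
            nn.init.zeros_(m.bias)
    elif isinstance(m, nn.BatchNorm2d):
        nn.init.ones_(m.weight)   # reset BN scale to 1
        nn.init.zeros_(m.bias)    # reset BN shift to 0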
2. model.named_modules()
As the name suggests, this is model.modules() with names attached. model.named_modules() returns not only every submodule of the model but also its name:
In [28]: len(model_named_modules)
Out[28]: 15
In [29]: model_named_modules
Out[29]:
[('', Net(
(features): Sequential(
(0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
(1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
(5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(6): ReLU(inplace)
(7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(classifier): Sequential(
(0): Linear(in_features=576, out_features=128, bias=True)
(1): ReLU(inplace)
(2): Dropout(p=0.5)
(3): Linear(in_features=128, out_features=10, bias=True)
)
)),
('features', Sequential(
(0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
(1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
(5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(6): ReLU(inplace)
(7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)),
('features.0', Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))),
('features.1', BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)),
('features.2', ReLU(inplace)),
('features.3', MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)),
('features.4', Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))),
('features.5', BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)),
('features.6', ReLU(inplace)),
('features.7', MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)),
('classifier',
Sequential(
(0): Linear(in_features=576, out_features=128, bias=True)
(1): ReLU(inplace)
(2): Dropout(p=0.5)
(3): Linear(in_features=128, out_features=10, bias=True)
)),
('classifier.0', Linear(in_features=576, out_features=128, bias=True)),
('classifier.1', ReLU(inplace)),
('classifier.2', Dropout(p=0.5)),
('classifier.3', Linear(in_features=128, out_features=10, bias=True))]
model.named_modules() also visits 15 elements, but now each one comes with a name. Apart from features and classifier, which were named explicitly in the model definition, the remaining names are generated automatically by PyTorch following a fixed rule (each layer's index inside its Sequential container, joined with dots). Getting the name along with the layer is useful when you want to modify specific layers by name. For example, if you had named your conv layers conv1, conv2, ... in the model definition, you could do:
for name, layer in model.named_modules():
    if 'conv' in name:
        print(name, layer)  # do something with the layer here, e.g. re-initialize or freeze it
Of course, even without the names, the same thing can be done with isinstance():
for layer in model.modules():
    if isinstance(layer, nn.Conv2d):
        print(layer)  # do something with the layer here
3. model.children()
If we split the network Net into levels from the outside in, features and classifier are the direct children of Net, while Conv2d, ReLU, BatchNorm2d and MaxPool2d are children of features, and Linear, Dropout and ReLU are children of classifier. model.modules() above traverses not only the model's children but also the children's children, i.e. every submodule.
model.children(), on the other hand, only traverses the model's direct children, which here are features and classifier.
In [22]: len(model_children)
Out[22]: 2
In [22]: model_children
Out[22]:
[Sequential(
(0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
(1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
(5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(6): ReLU(inplace)
(7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
),
Sequential(
(0): Linear(in_features=576, out_features=128, bias=True)
(1): ReLU(inplace)
(2): Dropout(p=0.5)
(3): Linear(in_features=128, out_features=10, bias=True)
)]
As expected, it visits only two elements: features and classifier.
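Because model.children() only yields these top-level blocks, it is handy for chopping a model at that level, e.g. keeping only the convolutional part as a feature extractor (a sketch; for this Net it simply drops the classifier, and the 38x38 input is the same assumption as above):

feature_extractor = nn.Sequential(*list(model.children())[:-1])  # everything except the last child
feats = feature_extractor(torch.randn(1, 3, 38, 38))
print(feats.shape)                                               # torch.Size([1, 9, 8, 8])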
4. model.named_children()
model.named_children() is model.children() with names: besides iterating over the model's direct children, it also returns each child's name:
In [23]: len(model_named_children)
Out[23]: 2
In [24]: model_named_children
Out[24]:
[('features', Sequential(
(0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
(1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(4): Conv2d(6, 9, kernel_size=(3, 3), stride=(1, 1))
(5): BatchNorm2d(9, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(6): ReLU(inplace)
(7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)),
('classifier', Sequential(
(0): Linear(in_features=576, out_features=128, bias=True)
(1): ReLU(inplace)
(2): Dropout(p=0.5)
(3): Linear(in_features=128, out_features=10, bias=True)
))]
Compared with model.children() above, model.named_children() additionally returns the names of the two children: features and classifier.
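The names make it easy to treat the top-level blocks differently, for example freezing the whole features block while leaving the classifier trainable (a sketch, not from the original example):

for name, child in model.named_children():
    if name == 'features':
        for param in child.parameters():
            param.requires_grad = False   # freeze every parameter under 'features'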
5. model.parameters()
This iterates over all of the model's parameters; a typical use is shown in the sketch after the listing below.
In [30]: len(model_parameters)
Out[30]: 12
In [31]: model_parameters
Out[31]:
[Parameter containing:
tensor([[[[ 0.1200, -0.1627, -0.0841],
[-0.1369, -0.1525, 0.0541],
[ 0.1203, 0.0564, 0.0908]],
……
[[-0.1587, 0.0735, -0.0066],
[ 0.0210, 0.0257, -0.0838],
[-0.1797, 0.0675, 0.1282]]]], requires_grad=True),
Parameter containing:
tensor([-0.1251, 0.1673, 0.1241, -0.1876, 0.0683, 0.0346],
requires_grad=True),
Parameter containing:
tensor([0.0072, 0.0272, 0.8620, 0.0633, 0.9411, 0.2971], requires_grad=True),
Parameter containing:
tensor([0., 0., 0., 0., 0., 0.], requires_grad=True),
Parameter containing:
tensor([[[[ 0.0632, -0.1078, -0.0800],
[-0.0488, 0.0167, 0.0473],
[-0.0743, 0.0469, -0.1214]],
……
[[-0.1067, -0.0851, 0.0498],
[-0.0695, 0.0380, -0.0289],
[-0.0700, 0.0969, -0.0557]]]], requires_grad=True),
Parameter containing:
tensor([-0.0608, 0.0154, 0.0231, 0.0886, -0.0577, 0.0658, -0.1135, -0.0221,
0.0991], requires_grad=True),
Parameter containing:
tensor([0.2514, 0.1924, 0.9139, 0.8075, 0.6851, 0.4522, 0.5963, 0.8135, 0.4010],
requires_grad=True),
Parameter containing:
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0.], requires_grad=True),
Parameter containing:
tensor([[ 0.0223, 0.0079, -0.0332, ..., -0.0394, 0.0291, 0.0068],
[ 0.0037, -0.0079, 0.0011, ..., -0.0277, -0.0273, 0.0009],
[ 0.0150, -0.0110, 0.0319, ..., -0.0110, -0.0072, -0.0333],
...,
[-0.0274, -0.0296, -0.0156, ..., 0.0359, -0.0303, -0.0114],
[ 0.0222, 0.0243, -0.0115, ..., 0.0369, -0.0347, 0.0291],
[ 0.0045, 0.0156, 0.0281, ..., -0.0348, -0.0370, -0.0152]],
requires_grad=True),
Parameter containing:
tensor([ 0.0072, -0.0399, -0.0138, 0.0062, -0.0099, -0.0006, -0.0142, -0.0337,
……
-0.0370, -0.0121, -0.0348, -0.0200, -0.0285, 0.0367, 0.0050, -0.0166],
requires_grad=True),
Parameter containing:
tensor([[-0.0130, 0.0301, 0.0721, ..., -0.0634, 0.0325, -0.0830],
[-0.0086, -0.0374, -0.0281, ..., -0.0543, 0.0105, 0.0822],
[-0.0305, 0.0047, -0.0090, ..., 0.0370, -0.0187, 0.0824],
...,
[ 0.0529, -0.0236, 0.0219, ..., 0.0250, 0.0620, -0.0446],
[ 0.0077, -0.0576, 0.0600, ..., -0.0412, -0.0290, 0.0103],
[ 0.0375, -0.0147, 0.0622, ..., 0.0350, 0.0179, 0.0667]],
requires_grad=True),
Parameter containing:
tensor([-0.0709, -0.0675, -0.0492, 0.0694, 0.0390, -0.0861, -0.0427, -0.0638,
-0.0123, 0.0845], requires_grad=True)]
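As mentioned above, model.parameters() is what you normally pass to an optimizer, and it is also convenient for counting parameters (a sketch; the optimizer settings are arbitrary):

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # all trainable parameters
num_params = sum(p.numel() for p in model.parameters())                 # total number of elements
print(num_params)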
6. model.named_parameters()
If you have followed along from the previous sections you can already guess: this iterates over the parameters together with their names, and each name ends in .weight or .bias so weights and biases can be told apart (see the sketch after the listing below for a typical use):
In [32]: len(model_named_parameters)
Out[32]: 12
In [33]: model_named_parameters
Out[33]:
[('features.0.weight', Parameter containing:
tensor([[[[ 0.1200, -0.1627, -0.0841],
[-0.1369, -0.1525, 0.0541],
[ 0.1203, 0.0564, 0.0908]],
……
[[-0.1587, 0.0735, -0.0066],
[ 0.0210, 0.0257, -0.0838],
[-0.1797, 0.0675, 0.1282]]]], requires_grad=True)),
('features.0.bias', Parameter containing:
tensor([-0.1251, 0.1673, 0.1241, -0.1876, 0.0683, 0.0346],
requires_grad=True)),
('features.1.weight', Parameter containing:
tensor([0.0072, 0.0272, 0.8620, 0.0633, 0.9411, 0.2971], requires_grad=True)),
('features.1.bias', Parameter containing:
tensor([0., 0., 0., 0., 0., 0.], requires_grad=True)),
('features.4.weight', Parameter containing:
tensor([[[[ 0.0632, -0.1078, -0.0800],
[-0.0488, 0.0167, 0.0473],
[-0.0743, 0.0469, -0.1214]],
……
[[-0.1067, -0.0851, 0.0498],
[-0.0695, 0.0380, -0.0289],
[-0.0700, 0.0969, -0.0557]]]], requires_grad=True)),
('features.4.bias', Parameter containing:
tensor([-0.0608, 0.0154, 0.0231, 0.0886, -0.0577, 0.0658, -0.1135, -0.0221,
0.0991], requires_grad=True)),
('features.5.weight', Parameter containing:
tensor([0.2514, 0.1924, 0.9139, 0.8075, 0.6851, 0.4522, 0.5963, 0.8135, 0.4010],
requires_grad=True)),
('features.5.bias', Parameter containing:
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0.], requires_grad=True)),
('classifier.0.weight', Parameter containing:
tensor([[ 0.0223, 0.0079, -0.0332, ..., -0.0394, 0.0291, 0.0068],
……
[ 0.0045, 0.0156, 0.0281, ..., -0.0348, -0.0370, -0.0152]],
requires_grad=True)),
('classifier.0.bias', Parameter containing:
tensor([ 0.0072, -0.0399, -0.0138, 0.0062, -0.0099, -0.0006, -0.0142, -0.0337,
……
-0.0370, -0.0121, -0.0348, -0.0200, -0.0285, 0.0367, 0.0050, -0.0166],
requires_grad=True)),
('classifier.3.weight', Parameter containing:
tensor([[-0.0130, 0.0301, 0.0721, ..., -0.0634, 0.0325, -0.0830],
[-0.0086, -0.0374, -0.0281, ..., -0.0543, 0.0105, 0.0822],
[-0.0305, 0.0047, -0.0090, ..., 0.0370, -0.0187, 0.0824],
...,
[ 0.0529, -0.0236, 0.0219, ..., 0.0250, 0.0620, -0.0446],
[ 0.0077, -0.0576, 0.0600, ..., -0.0412, -0.0290, 0.0103],
[ 0.0375, -0.0147, 0.0622, ..., 0.0350, 0.0179, 0.0667]],
requires_grad=True)),
('classifier.3.bias', Parameter containing:
tensor([-0.0709, -0.0675, -0.0492, 0.0694, 0.0390, -0.0861, -0.0427, -0.0638,
-0.0123, 0.0845], requires_grad=True))]
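As noted above, having the names lets you handle parameters selectively, for instance giving the classifier a larger learning rate than the convolutional part (a sketch; the hyper-parameters are placeholders):

feature_params = [p for n, p in model.named_parameters() if n.startswith('features')]
classifier_params = [p for n, p in model.named_parameters() if n.startswith('classifier')]
optimizer = torch.optim.SGD(
    [{'params': feature_params, 'lr': 0.001},   # smaller lr for the conv part
     {'params': classifier_params}],            # uses the default lr below
    lr=0.01, momentum=0.9)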
7. model.state_dict()
model.state_dict() directly returns the model's state as an OrderedDict. Unlike the methods above there is nothing to iterate lazily: it is already a dictionary mapping each parameter (and buffer, e.g. BatchNorm running statistics) name to its tensor. You can modify the model's layer parameters by editing the state_dict, which is particularly convenient for parameter pruning. For a more detailed look at state_dict, see my article PyTorch模型保存深入理解 (an in-depth look at saving PyTorch models).
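A minimal sketch of the usual save/load workflow, plus editing a parameter through the state_dict and loading it back (the filename 'net.pth' and the zeroed kernel are placeholders, the latter a toy stand-in for pruning):

torch.save(model.state_dict(), 'net.pth')        # save parameters and buffers
model.load_state_dict(torch.load('net.pth'))     # restore them

sd = model.state_dict()
sd['features.0.weight'][0].zero_()               # zero out the first conv kernel in place
model.load_state_dict(sd)                        # push the modified dict back into the model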