torch.flatten and torch.nn.Flatten are both used to flatten a multi-dimensional tensor. The differences are:
- torch.flatten is a function: it needs no instantiation before use and flattens from dimension 0 by default, which makes it the more general-purpose of the two.
torch.flatten(input, start_dim=0, end_dim=-1)
- torch.nn.Flatten is a class: it must be instantiated before use. Because it lives in the torch.nn module and is intended for flattening neural-network data, whose dimension 0 is usually the batch size (which should not be flattened), it flattens from dimension 1 by default.
class torch.nn.Flatten(start_dim=1, end_dim=-1)
A test program:
import torch
input_tensor = torch.randn(32, 4, 5, 5)
m = torch.nn.Flatten()                 # instantiate Flatten (flattens from dim 1 by default)
output1 = m(input_tensor)              # batch dimension is preserved
print(output1.shape)
output2 = torch.flatten(input_tensor)  # flattens from dim 0 by default
print(output2.shape)
Output:
torch.Size([32, 100])
torch.Size([3200])
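
Accordingly, torch.flatten(x, start_dim=1) reproduces nn.Flatten()'s default behavior, and end_dim can restrict flattening to a sub-range of dimensions. A minimal sketch (the tensor x here is illustrative):

import torch

x = torch.randn(32, 4, 5, 5)
# Passing start_dim=1 makes torch.flatten match nn.Flatten()'s default
print(torch.flatten(x, start_dim=1).shape)             # torch.Size([32, 100])
print(torch.nn.Flatten()(x).shape)                     # torch.Size([32, 100])
# end_dim limits the flattened range: here only dims 1 and 2 are merged
print(torch.flatten(x, start_dim=1, end_dim=2).shape)  # torch.Size([32, 20, 5])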
另外毅整,torch.nn.Flatten適合作為一個“神經(jīng)網(wǎng)絡層”,加入神經(jīng)網(wǎng)絡中绽左,范例:
def _create_fcs(self, split_size, num_boxes, num_classes):
    S, B, C = split_size, num_boxes, num_classes
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(1024 * S * S, 4096),
        # Usually, dropout is placed on the fully connected layers only.
        # A rule of thumb is to set the keep probability (1 - drop probability)
        # to 0.5 when dropout is applied to fully connected layers.
        # https://stackoverflow.com/questions/46841362/where-dropout-should-be-inserted-fully-connected-layer-convolutional-layer
        nn.Dropout(0.5),
        nn.LeakyReLU(0.1),
        # The predictions are encoded as an S × S × (B * 5 + C) tensor
        nn.Linear(4096, S * S * (B * 5 + C)),  # 7*7*(2*5+20) = 1470
    )
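
To sanity-check the head above, here is a minimal sketch. The wrapper class DetectionHead, its constructor defaults, and the backbone output shape (N, 1024, S, S) are assumptions for illustration; the 7*7*(2*5+20)=1470 comment above suggests S=7, B=2, C=20:

import torch
import torch.nn as nn

class DetectionHead(nn.Module):  # hypothetical wrapper, just to host _create_fcs
    def __init__(self, split_size=7, num_boxes=2, num_classes=20):
        super().__init__()
        self.fcs = self._create_fcs(split_size, num_boxes, num_classes)

    def _create_fcs(self, split_size, num_boxes, num_classes):
        S, B, C = split_size, num_boxes, num_classes
        return nn.Sequential(
            nn.Flatten(),
            nn.Linear(1024 * S * S, 4096),
            nn.Dropout(0.5),
            nn.LeakyReLU(0.1),
            nn.Linear(4096, S * S * (B * 5 + C)),
        )

    def forward(self, x):
        return self.fcs(x)

head = DetectionHead()
features = torch.randn(8, 1024, 7, 7)  # assumed backbone output: (N, 1024, S, S)
print(head(features).shape)            # torch.Size([8, 1470])

Because nn.Flatten() defaults to start_dim=1, the (8, 1024, 7, 7) feature map becomes (8, 50176) before the first nn.Linear, and the batch dimension passes through the whole head untouched.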