1. FastText
1.1 Model Architecture
The architecture of the FastText model is very similar to that of Word2vec's CBOW model, as the FastText architecture diagram shows. The FastText model consists of three layers: an input layer, a hidden layer, and an output layer. The input is word vectors, the output is a label, and the hidden layer is simply the average of the input word vectors.
- The input to CBOW is the context of a target word, while the input to FastText is a set of words and their n-gram features, which together represent a single document (a sketch of how the hashed n-gram features can be built follows this list)
- CBOW's input words are one-hot encoded, while FastText's input features use embedding encodings
- CBOW's output is the target word, while FastText's output is the category of the document
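FastText keeps the n-gram vocabulary bounded by hashing each n-gram into a fixed number of buckets and looking the bucket id up in an embedding table. Below is a minimal sketch of how such ids could be produced during preprocessing; the function name, hash constant, and bucket size here are illustrative assumptions, not the original preprocessing code.

def ngram_hash(tokens, t, n, buckets):
    # Hash the n token ids ending at position t into a bucket id (the constant is arbitrary).
    h = 0
    for tok in tokens[max(0, t - n + 1): t + 1]:
        h = (h * 14918087 + tok) % buckets
    return h

word_ids = [15, 7, 42, 3]        # token ids of one (padded) document
buckets = 250499                 # n_gram_vocab in the config (assumed value)
bigram_ids = [ngram_hash(word_ids, t, 2, buckets) for t in range(len(word_ids))]
trigram_ids = [ngram_hash(word_ids, t, 3, buckets) for t in range(len(word_ids))]
# bigram_ids and trigram_ids play the role of x[2] and x[3] in the model's forward pass below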
1.2 Model Implementation
import torch
import torch.nn as nn
import torch.nn.functional as F


class Model(nn.Module):
    def __init__(self, config):
        super(Model, self).__init__()
        self.embedding = nn.Embedding(config.n_vocab, config.embed, padding_idx=config.n_vocab - 1)
        self.embedding_ngram2 = nn.Embedding(config.n_gram_vocab, config.embed)
        self.embedding_ngram3 = nn.Embedding(config.n_gram_vocab, config.embed)
        self.dropout = nn.Dropout(config.dropout)
        self.fc1 = nn.Linear(config.embed * 3, config.hidden_size)
        # self.dropout2 = nn.Dropout(config.dropout)
        self.fc2 = nn.Linear(config.hidden_size, config.num_classes)

    def forward(self, x):
        out_word = self.embedding(x[0])            # word embeddings    [batch, seq_len, embed]
        out_bigram = self.embedding_ngram2(x[2])   # bigram embeddings  [batch, seq_len, embed]
        out_trigram = self.embedding_ngram3(x[3])  # trigram embeddings [batch, seq_len, embed]
        out = torch.cat((out_word, out_bigram, out_trigram), -1)  # [batch, seq_len, embed * 3]
        out = out.mean(dim=1)                      # average over the sequence dimension
        out = self.dropout(out)
        out = self.fc1(out)
        out = F.relu(out)
        out = self.fc2(out)                        # class logits [batch, num_classes]
        return out
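A minimal usage sketch, assuming a hypothetical config object that carries the fields the model reads. The input is a tuple whose elements 0, 2, and 3 are the word, bigram, and trigram id tensors; element 1 (sequence lengths) is not used by this forward pass.

import torch
from types import SimpleNamespace

# Hypothetical configuration values, chosen only for illustration.
config = SimpleNamespace(n_vocab=10000, embed=300, n_gram_vocab=250499,
                         dropout=0.5, hidden_size=256, num_classes=10)
model = Model(config)

batch_size, seq_len = 128, 32
words = torch.randint(0, config.n_vocab - 1, (batch_size, seq_len))      # word ids
lengths = torch.full((batch_size,), seq_len, dtype=torch.long)           # placeholder, unused by forward
bigrams = torch.randint(0, config.n_gram_vocab, (batch_size, seq_len))   # hashed bigram ids
trigrams = torch.randint(0, config.n_gram_vocab, (batch_size, seq_len))  # hashed trigram ids

logits = model((words, lengths, bigrams, trigrams))                      # [128, 10]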
2. TextCNN
2.1 Model Architecture
Compared with the CNNs used on images, TextCNN makes no change to the network structure. As the figure shows, TextCNN has only one convolution layer and one max-pooling layer, and the output is then fed into a softmax layer for n-way classification.
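To make the shapes concrete, here is a small self-contained sketch of one convolution-and-pool step, using illustrative sizes (batch 128, sequence length 32, embedding 300, 256 filters of width 3) and the [batch, channel, seq_len, embed] layout used in 2.2.

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(128, 1, 32, 300)                 # [batch, channel, seq_len, embed]
conv = nn.Conv2d(1, 256, kernel_size=(3, 300))   # each filter covers 3 words x the full embedding width
h = F.relu(conv(x)).squeeze(3)                   # [128, 256, 30]: one activation per window position
p = F.max_pool1d(h, h.size(2)).squeeze(2)        # [128, 256]: max over time, one value per filter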
2.2 Model Implementation
import torch
import torch.nn as nn
import torch.nn.functional as F


class Model(nn.Module):
    def __init__(self, config):
        super(Model, self).__init__()
        self.embedding = nn.Embedding(config.n_vocab, config.embed, padding_idx=config.n_vocab - 1)
        # one Conv2d per filter size k; each filter spans k words and the full embedding width
        self.convs = nn.ModuleList([nn.Conv2d(1, config.num_filters, (k, config.embed)) for k in config.filter_sizes])
        self.dropout = nn.Dropout(config.dropout)
        self.fc = nn.Linear(config.num_filters * len(config.filter_sizes), config.num_classes)

    def conv_and_pool(self, x, conv):
        x = F.relu(conv(x)).squeeze(3)             # [batch, num_filters, seq_len - k + 1]
        x = F.max_pool1d(x, x.size(2)).squeeze(2)  # max over time -> [batch, num_filters]
        return x

    def forward(self, x):
        out = self.embedding(x[0])                 # [batch, seq_len, embed]
        out = out.unsqueeze(1)                     # add a channel dimension -> [batch, 1, seq_len, embed]
        out = torch.cat([self.conv_and_pool(out, conv) for conv in self.convs], 1)
        out = self.dropout(out)
        out = self.fc(out)
        return out
3. TextRNN
3.1 Model Architecture
There are two common ways to use the BiLSTM outputs: take the hidden states of the forward and backward LSTMs at the last time step, concatenate them, and pass the result through a softmax layer for multi-class classification; or take the forward and backward hidden states at every time step, concatenate the two states at each step, average the concatenated states over all time steps, and then pass the result through a softmax layer for multi-class classification. The code in 3.2 implements the first strategy; a sketch of the second follows it.
3.2 Model Implementation
import torch.nn as nn


class Model(nn.Module):
    def __init__(self, config):
        super(Model, self).__init__()
        self.embedding = nn.Embedding(config.n_vocab, config.embed, padding_idx=config.n_vocab - 1)
        self.lstm = nn.LSTM(config.embed, config.hidden_size, config.num_layers,
                            bidirectional=True, batch_first=True, dropout=config.dropout)
        self.fc = nn.Linear(config.hidden_size * 2, config.num_classes)

    def forward(self, x):
        x, _ = x                       # x is (token_ids, seq_len); keep only the token ids
        out = self.embedding(x)        # [batch_size, seq_len, embed] = [128, 32, 300]
        out, _ = self.lstm(out)        # [batch_size, seq_len, hidden_size * 2]
        out = self.fc(out[:, -1, :])   # hidden state at the sentence's last time step
        return out
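The forward pass above uses the first strategy from 3.1 (the hidden state at the last time step). Below is a sketch of the second strategy, averaging the concatenated forward/backward states over all time steps; it is written as a drop-in replacement for the forward method and is an assumed variant, not part of the original code.

    def forward(self, x):              # variant: mean over all time steps
        x, _ = x
        out = self.embedding(x)        # [batch_size, seq_len, embed]
        out, _ = self.lstm(out)        # [batch_size, seq_len, hidden_size * 2]
        out = out.mean(dim=1)          # average the concatenated states over time
        return self.fc(out)            # [batch_size, num_classes]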
4. TextRCNN
4.1 Model Architecture
TextRCNN is quite similar to TextCNN: both represent the text as an embedding matrix and then apply a convolution-style operation to it. The difference is that in TextCNN each row of the text embedding matrix is just the vector of a single word, whereas in RCNN each row is the concatenation of the current word's vector and the embedded representations of its context.
4.2 Model Implementation
import torch
import torch.nn as nn
import torch.nn.functional as F


class Model(nn.Module):
    def __init__(self, config):
        super(Model, self).__init__()
        self.embedding = nn.Embedding(config.n_vocab, config.embed, padding_idx=config.n_vocab - 1)
        self.lstm = nn.LSTM(config.embed, config.hidden_size, config.num_layers,
                            bidirectional=True, batch_first=True, dropout=config.dropout)
        self.maxpool = nn.MaxPool1d(config.pad_size)   # pools over the (padded) sequence length
        self.fc = nn.Linear(config.hidden_size * 2 + config.embed, config.num_classes)

    def forward(self, x):
        x, _ = x
        embed = self.embedding(x)          # [batch_size, seq_len, embed] = [64, 32, 64]
        out, _ = self.lstm(embed)          # bidirectional context [64, 32, hidden_size * 2]
        out = torch.cat((embed, out), 2)   # word vector + context [64, 32, hidden_size * 2 + embed]
        out = F.relu(out)
        out = out.permute(0, 2, 1)         # [64, hidden_size * 2 + embed, 32]
        out = self.maxpool(out).squeeze()  # max over time -> [64, hidden_size * 2 + embed]
        out = self.fc(out)
        return out
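Note that nn.MaxPool1d(config.pad_size) assumes every input is padded to exactly pad_size time steps. The permute/max-pool pair then takes, for each feature, its maximum over the whole sentence, which is what collapses the sequence dimension before the classifier. A small shape trace with illustrative sizes:

import torch
import torch.nn as nn

batch, seq_len, embed, hidden = 64, 32, 64, 256          # illustrative sizes (seq_len == pad_size)
feat = torch.randn(batch, seq_len, hidden * 2 + embed)   # word vector + BiLSTM context at each position
feat = feat.permute(0, 2, 1)                             # [64, 576, 32]: features as channels, time as length
pooled = nn.MaxPool1d(seq_len)(feat).squeeze(2)          # [64, 576]: max of each feature over all positions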
5. BiLSTM_Attention
5.1 Model Architecture
Compared with the plain BiLSTM models previously used for text classification, the main difference in the BiLSTM+Attention model is that a structure called an Attention Layer is inserted after the BiLSTM layer and before the fully connected softmax classification layer.
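Concretely, the attention layer in the code below scores each time step with a learned vector w, normalizes the scores with a softmax, and returns the weighted sum of the BiLSTM states. A minimal standalone sketch of that computation, with shapes matching the comments in 5.2:

import torch
import torch.nn.functional as F

H = torch.randn(128, 32, 256)                # BiLSTM outputs [batch, seq_len, hidden_size * 2]
w = torch.randn(256)                         # learned scoring vector (a Parameter in the model)
alpha = F.softmax(torch.tanh(H) @ w, dim=1)  # one attention weight per time step  [128, 32]
r = (H * alpha.unsqueeze(-1)).sum(dim=1)     # weighted sum of the states           [128, 256]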
5.2 Model Implementation
import torch
import torch.nn as nn
import torch.nn.functional as F


class Model(nn.Module):
    def __init__(self, config):
        super(Model, self).__init__()
        self.embedding = nn.Embedding(config.n_vocab, config.embed, padding_idx=config.n_vocab - 1)
        self.lstm = nn.LSTM(config.embed, config.hidden_size, config.num_layers,
                            bidirectional=True, batch_first=True, dropout=config.dropout)
        self.tanh1 = nn.Tanh()
        self.w = nn.Parameter(torch.zeros(config.hidden_size * 2))   # attention scoring vector
        self.tanh2 = nn.Tanh()
        self.fc1 = nn.Linear(config.hidden_size * 2, config.hidden_size2)
        self.fc = nn.Linear(config.hidden_size2, config.num_classes)

    def forward(self, x):
        x, _ = x
        emb = self.embedding(x)    # [batch_size, seq_len, embed] = [128, 32, 300]
        H, _ = self.lstm(emb)      # [batch_size, seq_len, hidden_size * num_directions] = [128, 32, 256]
        M = self.tanh1(H)          # [128, 32, 256]
        alpha = F.softmax(torch.matmul(M, self.w), dim=1).unsqueeze(-1)  # attention weights [128, 32, 1]
        out = H * alpha            # [128, 32, 256]
        out = torch.sum(out, 1)    # weighted sum over time [128, 256]
        out = F.relu(out)
        out = self.fc1(out)
        out = self.fc(out)         # class logits [128, num_classes]
        return out
6. DPCNN
6.1 Model Architecture
The first layer is a text region embedding, which is simply a convolution over an n-gram block of text; the resulting feature maps serve as that block's embedding. This is followed by a stack of convolution blocks, each a combination of two convolution layers and a shortcut connection. Between convolution blocks, max-pooling with stride 2 is used for downsampling. The final pooling layer aggregates each document into a single vector (a trace of how the sequence length shrinks is given after the code in 6.2).
6.2 Model Implementation
import torch.nn as nn
import torch.nn.functional as F


class Model(nn.Module):
    def __init__(self, config):
        super(Model, self).__init__()
        self.embedding = nn.Embedding(config.n_vocab, config.embed, padding_idx=config.n_vocab - 1)
        self.conv_region = nn.Conv2d(1, config.num_filters, (3, config.embed), stride=1)  # region embedding
        self.conv = nn.Conv2d(config.num_filters, config.num_filters, (3, 1), stride=1)
        self.max_pool = nn.MaxPool2d(kernel_size=(3, 1), stride=2)   # stride-2 downsampling
        self.padding1 = nn.ZeroPad2d((0, 0, 1, 1))  # pad top and bottom
        self.padding2 = nn.ZeroPad2d((0, 0, 0, 1))  # pad bottom only
        self.relu = nn.ReLU()
        self.fc = nn.Linear(config.num_filters, config.num_classes)

    def forward(self, x):
        x = x[0]
        x = self.embedding(x)       # [batch_size, seq_len, embed]
        x = x.unsqueeze(1)          # [batch_size, 1, seq_len, embed]
        x = self.conv_region(x)     # [batch_size, 250, seq_len-3+1, 1]
        x = self.padding1(x)        # [batch_size, 250, seq_len, 1]
        x = self.relu(x)
        x = self.conv(x)            # [batch_size, 250, seq_len-3+1, 1]
        x = self.padding1(x)        # [batch_size, 250, seq_len, 1]
        x = self.relu(x)
        x = self.conv(x)            # [batch_size, 250, seq_len-3+1, 1]
        while x.size()[2] > 2:      # keep shrinking the sequence dimension
            x = self._block(x)
        x = x.squeeze()             # [batch_size, num_filters(250)]
        x = self.fc(x)
        return x

    def _block(self, x):
        x = self.padding2(x)
        px = self.max_pool(x)       # downsample: sequence length roughly halved
        x = self.padding1(px)
        x = F.relu(x)
        x = self.conv(x)
        x = self.padding1(x)
        x = F.relu(x)
        x = self.conv(x)
        x = x + px                  # shortcut connection
        return x
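To see the downsampling from 6.1 in numbers, here is a small trace of the sequence-length dimension through the forward pass above, assuming seq_len = 32 (so the length entering the while loop is 30); each _block roughly halves it until only one position remains and the tensor can be squeezed to [batch_size, num_filters].

seq_len = 32
lengths = [seq_len - 3 + 1]         # length after the region-embedding convolution (the padded conv pair preserves it)
while lengths[-1] > 2:              # same condition as the while loop in forward
    padded = lengths[-1] + 1        # padding2 adds one step at the bottom
    pooled = (padded - 3) // 2 + 1  # MaxPool2d(kernel_size=(3, 1), stride=2)
    lengths.append(pooled)          # the two convolutions inside _block keep this length unchanged
print(lengths)                      # [30, 15, 7, 3, 1]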
I'm new to NLP; everyone is welcome to exchange ideas, learn from each other, and grow together~~