Motivation for reading:
Noticed that Flair dominates the NER progress leaderboards. Note: the OntoNotes in those leaderboards is not the 89-class OntoNotes I work with, but the 18-class version that the industry pays more attention to.
Main contributions:
Contribution 1:
Contextual string embeddings: a language model built at the character level (character language model).
The training details don't seem worth dwelling on; character-level embeddings, that's about it.
Contribution 2:
A rather nice framework that integrates all kinds of embeddings.
PS: the really handy part is the StackedEmbeddings class, which saves a lot of code; details later.
It supports XLNet, FlairEmbeddings, BERT, ELMo, Word2vec and so on; see the link above for specifics.
Hands-on demo
In the code below, load automatically downloads the resource locally if it is missing. The annoying part is that the file cannot be reached without a VPN, ugh. Downloading it yourself is quick enough, though.
Download link reference 1: ner
Download link reference 2: onto-ner
Other download links can be constructed following the same pattern.
The tokenizer can also be set when constructing the Sentence; a minimal sketch follows.
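Before the demo, a minimal sketch of that tokenizer option, assuming the Flair 0.4-style use_tokenizer flag (which switches on the bundled segtok tokenizer); treat the exact argument name as an assumption to check against your version:
from flair.data import Sentence
# whitespace splitting only: the input text is assumed to be pre-tokenized
s_plain = Sentence('The grass is green .')
# let flair tokenize the raw string itself (segtok-based tokenizer)
s_tok = Sentence('The grass is green.', use_tokenizer=True)
print([token.text for token in s_plain])
print([token.text for token in s_tok])
Back to the demo: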
from flair.data import Sentence
from flair.models import SequenceTagger
# make a sentence (Sentence is a class: essentially a list of Token objects)
sentence = Sentence('The George Washington went to Washington .')
# load the NER tagger model
# tagger = SequenceTagger.load('ner')  # this form auto-downloads the default CoNLL-03 model
tagger = SequenceTagger.load('/home/huyufeng/flair/flair/checkpoints/en-ner-conll03-v0.4.pt')
# tagger = SequenceTagger.load('/home/huyufeng/flair/flair/checkpoints/en-ner-ontonotes-v0.4.pt')  # 18-class OntoNotes model; loading it here would overwrite the tagger above
# run NER over sentence
tagger.predict(sentence)
# print sentence with predicted tags
print(sentence.to_tagged_string())
#>>> The George <B-PER> Washington <E-PER> went to Washington <S-LOC> .
The above prints fairly terse information; below is the more complete output:
>>> print(sentence.to_dict(tag_type='ner'))
{"text": "The George Washington went to Washington .",
"labels": [],
"entities": [{"text": "George Washington",
"start_pos": 4, "end_pos": 21, "type": "PER",
"confidence": 0.9787668585777283},
{"text": "Washington",
"start_pos": 30, "end_pos": 40, "type": "LOC",
"confidence": 0.9987319111824036}]}
>>>
>>> for entity in sentence.get_spans('ner'):
... print(entity)
...
PER-span [2,3]: "George Washington"
LOC-span [6]: "Washington"
End of example. It shows how convenient flair is: a model can be deployed very quickly. Of course, the above is straight end-to-end sequence tagging, sentence in, tags out, no middleman... right.
-------------------- End of Part 1 --------------------
Constructing a dataset
Naturally, since the Sentence structure is fixed, datasets need to be built in that shape.
sentence = Sentence('The grass is green .')
print(sentence)
sentence[3].add_tag('ner', 'color')
print(sentence.to_tagged_string())
Embedding
Below, four embeddings are introduced: GloVe, ELMo, Flair and BERT.
from flair.embeddings import WordEmbeddings, ELMoEmbeddings, FlairEmbeddings, BertEmbeddings
from flair.data import Sentence
s = Sentence("it was filthy to do such dirty work")
root = "/home/huyufeng/elmo/dataset/"
weight_file_path = "elmo_2x2048_256_2048cnn_1xhighway_weights.hdf5"
options_file_path = "elmo_2x2048_256_2048cnn_1xhighway_options.json"
elmo_embedding = ELMoEmbeddings(options_file=root + options_file_path, weight_file=root + weight_file_path)
elmo_embedding.embed(s)
for token in s:
    print(token)
    print(token.embedding)
    print(token.embedding.size())
glove_path = "/home/huyufeng/flair/flair/checkpoints/glove.gensim.vectors.npy"  # note: gensim usually loads via the companion 'glove.gensim' file; the .vectors.npy sits alongside it
glove_embedding = WordEmbeddings(glove_path)
glove_embedding.embed(s)
for token in s:
    print(token)
    print(token.embedding)
    print(token.embedding.size())
flair_embedding_forward = FlairEmbeddings('news-forward')
flair_embedding_backward = FlairEmbeddings('news-backward')  # the backward model is needed for the stacking example below
flair_embedding_forward.embed(s)
for token in s:
    print(token)
    print(token.embedding)
    print(token.embedding.size())  # 3584 = 7*512
bert_path = "/home/huyufeng/glove/uncased_L-12_H-768_A-12"
bert_embedding = BertEmbeddings(bert_path)
bert_embedding.embed(s)
for token in s:
    print(token)
    print(token.embedding)
    print(token.embedding.size())  # 3072 = 3*1024
Stacking: embedding with several models simultaneously
Uses concatenation.
See the link for details; I'll fill this in once I actually use it.
A single class handles it, which is very convenient.
from flair.embeddings import ELMoEmbeddings, FlairEmbeddings, BertEmbeddings, StackedEmbeddings
# create a StackedEmbedding object that combines elmo and forward/backward flair embeddings
stacked_embeddings = StackedEmbeddings([
    elmo_embedding,
    bert_embedding,
    flair_embedding_forward,
    flair_embedding_backward,
])
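A quick usage sketch on a fresh Sentence (fresh so embeddings from the earlier .embed() calls don't mix into the vector); the per-token size should be the sum of the stacked embedding lengths:
s_new = Sentence("it was filthy to do such dirty work")
stacked_embeddings.embed(s_new)
for token in s_new:
    # each token vector is the concatenation of elmo + bert + flair forward/backward
    print(token.text, token.embedding.size())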
Document Embeddings
Word embeddings give a words * dim matrix.
Document embeddings give a single dim vector, i.e. one embedding for the whole sentence.
The tutorial offers two approaches, pooling and RNN:
Pooling: defaults to mean, with [mean, max, min] to choose from; usable out of the box.
RNN: defaults to a GRU, with ['GRU', 'LSTM'] available. Caveat: the GRU/LSTM here presumably needs to be trained before its output is meaningful. A sketch of choosing these options follows the code below.
from flair.embeddings import ELMoEmbeddings, FlairEmbeddings, BertEmbeddings, StackedEmbeddings, DocumentPoolEmbeddings, DocumentRNNEmbeddings
# create an example sentence
sentence = Sentence('The grass is green . And the sky is blue .')
sentence2 = Sentence('The grass is green . And the sky is blue .')
# initialize the document embeddings, mode = mean
document_embeddings = DocumentPoolEmbeddings([bert_embedding,
                                              flair_embedding_backward,
                                              flair_embedding_forward])
# embed the sentence with our document embedding
document_embeddings.embed(sentence)
print(sentence.get_embedding()) #7168
>>> tensor([-0.0132, -0.1393, 0.0427, ..., -0.0013, -0.0026, 0.0170],
grad_fn=<CatBackward>)
document_embeddings_rnn = DocumentRNNEmbeddings([bert_embedding,
                                                 flair_embedding_backward,
                                                 flair_embedding_forward])
# embed the sentence with our document embedding
document_embeddings_rnn.embed(sentence2)
print(sentence2.get_embedding()) #7296
>>> tensor([-0.0651, 0.6252, 0.2668, ..., -0.0013, -0.0026, 0.0170],
grad_fn=<CatBackward>)
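A hedged sketch of picking the pooling operation and the RNN type; the keyword names pooling and rnn_type follow my reading of the Flair API (older versions may differ), so verify against your installation:
from flair.data import Sentence
from flair.embeddings import DocumentPoolEmbeddings, DocumentRNNEmbeddings
# max-pooling instead of the default mean (assumed keyword: pooling)
document_embeddings_max = DocumentPoolEmbeddings([bert_embedding], pooling='max')
# an LSTM instead of the default GRU (assumed keyword: rnn_type); still needs training to be useful
document_embeddings_lstm = DocumentRNNEmbeddings([bert_embedding], rnn_type='LSTM')
sentence3 = Sentence('The grass is green .')
document_embeddings_max.embed(sentence3)
print(sentence3.get_embedding().size())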
Loading Training Data
Flair ships with all sorts of datasets; they look fun to play with.
Dataset | Description | Notes |
---|---|---|
'UD_ENGLISH' | Universal Dependencies treebank | |
'WIKINER_ENGLISH' | WikiNER | |
'NEWSGROUPS' | text classification | seems fairly close to my own task |
'IMDB' | ahem | personal interest |
Loading the data
import flair.datasets
corpus = flair.datasets.IMDB()
news_corpus = flair.datasets.NEWSGROUPS()
Inspecting the data
print(corpus)
>>> Corpus: 10183 train + 1131 dev + 7532 test sentences
print(len(corpus.test)) #[train, test, dev]
print(corpus.test[0])
print(corpus.test[0].to_tagged_string('pos'))
print(corpus.test[0].labels) # this is the text-classification label (a single label per document), unlike the NER tags above
Downsampling the data to 10%
downsampled_corpus = flair.datasets.IMDB().downsample(0.1)
Dataset statistics: gives a detailed per-class breakdown. See the appendix for the output.
stats = corpus.obtain_statistics()
print(stats)
Multi-Corpus
Not sure what this is for yet.
from flair.data import MultiCorpus
multi_corpus = MultiCorpus([english_corpus, german_corpus, dutch_corpus])  # the three corpora are assumed to be loaded already
Reading data that is not among the datasets listed above, in a column format like the link; a loading sketch follows the snippet:
George N B-PER
Washington N I-PER
went V O
to P O
Washington N B-LOC
pass
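A hedged sketch using ColumnCorpus, which as far as I know is how flair.datasets reads CoNLL-style column files; the folder and file names are placeholders made up for illustration:
from flair.datasets import ColumnCorpus
# map each column index to its content, matching the snippet above: text, pos, ner
columns = {0: 'text', 1: 'pos', 2: 'ner'}
column_corpus = ColumnCorpus('/path/to/data_folder', columns,
                             train_file='train.txt',
                             dev_file='dev.txt',
                             test_file='test.txt')
print(column_corpus)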
Reading CSV data that is not among the datasets listed above, format like [link]; a sketch follows:
pass
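A hedged sketch with CSVClassificationCorpus; the class name and the column_name_map argument are my recollection of the flair.datasets API, and the paths are placeholders:
from flair.datasets import CSVClassificationCorpus
# column 0 holds the label, column 1 holds the text
column_name_map = {0: 'label', 1: 'text'}
csv_corpus = CSVClassificationCorpus('/path/to/csv_folder',
                                     column_name_map,
                                     skip_header=True,
                                     delimiter='\t')
print(csv_corpus)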
Off topic
Looking up a function's call sites on GitHub actually feels a touch nicer than in VSCode; maybe it's time to switch to GitHub for code reading.
reference
Learning outline
- Tutorial 1: Basics
- Tutorial 2: Tagging your Text
- Tutorial 3: Embedding Words
- Tutorial 4: List of All Word Embeddings
- Tutorial 5: Embedding Documents
- Tutorial 6: Loading your own Corpus
- Tutorial 7: Training your own Models
- Tutorial 8: Optimizing your Models
- Tutorial 9: Training your own Flair Embeddings
Appendix: sentence classification
Sentence classification, tried hands-on while I'm at it; its output might come in handy later. Here is an IMDB pos-neg example (useful for opinion mining; besides sentiment there are also models for detecting offensive language, heh).
sentence = Sentence('France is the current world cup winner.')
# add a label to a sentence
sentence.add_label('sports')
# a sentence can also belong to multiple classes
sentence.add_labels(['sports', 'world cup'])
# you can also set the labels while initializing the sentence
sentence = Sentence('France is the current world cup winner.', labels=['sports', 'world cup'])
print(sentence)
for label in sentence.labels:
    print(label)
The above assigns labels to a sentence manually; below is proper sentence classification with a pretrained model.
from flair.models import TextClassifier
classifier = TextClassifier.load('/home/huyufeng/flair/flair/checkpoints/imdb-v0.4.pt')
s2 = Sentence('I feel bad about this movie')
s3 = Sentence('I feel really bad about characters\' suffering, It touched something deeply in my heart, absolutely, it is good')
# predict sentiment labels
classifier.predict([s2, s3])
# print sentence with predicted labels
>>> print(s2.labels)
[NEGATIVE (0.9519780874252319)]
>>> print(s3.labels)
[POSITIVE (0.9950523972511292)]
Appendix: model and embedding paths
For finding locations on my own machine: 246
Models
root = /home/huyufeng/flair/flair/checkpoints
Positive-Negative classifier: imdb-v0.4.pt
NER (OntoNotes): en-ner-ontonotes-v0.4.pt
NER (CoNLL-03): en-ner-conll03-v0.4.pt
Embedding paths
ELMO:
root = /home/huyufeng/elmo/dataset/
elmo_2x1024_128_2048cnn_1xhighway_weights.hdf5
elmo_2x1024_128_2048cnn_1xhighway_options.json
elmo_2x2048_256_2048cnn_1xhighway_weights.hdf5
elmo_2x2048_256_2048cnn_1xhighway_options.json
elmo_2x4096_512_2048cnn_2xhighway_5.5B_weights.hdf5
elmo_2x4096_512_2048cnn_2xhighway_5.5B_options.json
GLOVE
root = /home/huyufeng/flair/flair/checkpoints
glove.gensim.vectors.npy
FlairEmbeddings
FlairEmbeddings('news-forward')  # already downloaded; can be loaded directly
BertEmbeddings
/home/huyufeng/glove/uncased_L-12_H-768_A-12
Just point it at the folder. The annoying part is that bert_config.json must be renamed to config.json, which wasted half an hour.
Appendix: dataset paths
For finding locations on my own machine: 246
root = /home/huyufeng/flair/flair/checkpoints/DATASET
Dataset | Location |
---|---|
'WIKINER_ENGLISH' | aij-wikiner-en-wp3.bz2 |
'NEWSGROUPS' | 20news-bydate.tar.gz |
'IMDB' | aclImdb_v1.tar.gz |
import flair.datasets
corpus = flair.datasets.IMDB()
PS
Learned a code-organization trick: put an __init__.py in the package folder and import the submodules there,
then they can be imported directly from the package. A sketch follows the snippet below.
import flair.datasets
corpus = flair.datasets.UD_ENGLISH()
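A minimal sketch of that pattern with a hypothetical package mypkg (all names made up for illustration):
# mypkg/__init__.py
from . import datasets           # re-export the submodule
from .datasets import IMDB       # or re-export individual names
# user code elsewhere
import mypkg
corpus = mypkg.datasets.IMDB()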
Appendix: dataset statistics
{
"TRAIN": {
"dataset": "TRAIN",
"total_number_of_documents": 10183,
"number_of_documents_per_class": {
"rec.motorcycles": 535,
"comp.sys.mac.hardware": 513,
"comp.windows.x": 530,
"sci.electronics": 523,
"talk.politics.mideast": 526,
"misc.forsale": 533,
"talk.politics.guns": 487,
"soc.religion.christian": 535,
"rec.autos": 534,
"alt.atheism": 435,
"comp.os.ms-windows.misc": 529,
"sci.med": 535,
"rec.sport.baseball": 528,
"sci.crypt": 540,
"comp.graphics": 535,
"talk.religion.misc": 339,
"rec.sport.hockey": 535,
"comp.sys.ibm.pc.hardware": 533,
"talk.politics.misc": 418,
"sci.space": 540
},
"number_of_tokens_per_tag": {},
"number_of_tokens": {
"total": 3397464,
"min": 22,
"max": 13487,
"avg": 333.6407738387509
}
},
"TEST": {...
Appendix: why Flair
Unlike Facebook's FastText or even Google's AutoML Natural Language platform, doing text classification with Flair is still fairly low-level work. We get full control over how text is embedded and how training runs, through parameters such as the learning rate, batch size, anneal factor, loss function and optimizer choice... Getting the best performance requires tuning these hyperparameters. Flair provides a wrapper around Hyperopt, a well-known hyperparameter tuning library, which can be used for exactly that; a sketch follows.
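A hedged sketch of that Hyperopt wrapper, following my reading of Flair's Tutorial 8 (the module path flair.hyperparameter.param_selection and the parameter names are assumptions to double-check; corpus is a loaded classification corpus):
from hyperopt import hp
from flair.embeddings import FlairEmbeddings
from flair.hyperparameter.param_selection import SearchSpace, Parameter, TextClassifierParamSelector, OptimizationValue
# define the space of hyperparameters to search over
search_space = SearchSpace()
search_space.add(Parameter.EMBEDDINGS, hp.choice, options=[[FlairEmbeddings('news-forward')]])
search_space.add(Parameter.LEARNING_RATE, hp.choice, options=[0.05, 0.1, 0.2])
search_space.add(Parameter.MINI_BATCH_SIZE, hp.choice, options=[16, 32])
search_space.add(Parameter.DROPOUT, hp.uniform, low=0.0, high=0.5)
# each evaluation trains a small classifier and scores it on the dev set
param_selector = TextClassifierParamSelector(
    corpus, False, 'resources/hyperopt', 'lstm',
    max_epochs=10, training_runs=1,
    optimization_value=OptimizationValue.DEV_SCORE)
param_selector.optimize(search_space, max_evals=10)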
In this article, for simplicity, we used the default hyperparameters. With mostly default settings, our Flair model reached an f1-score of 0.973 after 10 training epochs.
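For reference, a hedged sketch of what training with defaults looks like, based on Flair's training tutorial (ModelTrainer, make_label_dictionary); the output path is a placeholder:
import flair.datasets
from flair.embeddings import FlairEmbeddings, DocumentRNNEmbeddings
from flair.models import TextClassifier
from flair.trainers import ModelTrainer
corpus = flair.datasets.IMDB()
label_dict = corpus.make_label_dictionary()
# document embedding that gets trained together with the classifier
document_embeddings = DocumentRNNEmbeddings([FlairEmbeddings('news-forward')])
classifier = TextClassifier(document_embeddings, label_dictionary=label_dict)
trainer = ModelTrainer(classifier, corpus)
trainer.train('resources/imdb-classifier', max_epochs=10)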
For comparison, we also trained text classification models with FastText and with the AutoML Natural Language platform. Running FastText with default parameters gave an f1-score of 0.883, so the Flair model outperforms it by a large margin. However, FastText needs only a few seconds to train, whereas our Flair model took 5 minutes.
We also compared against results from Google's AutoML Natural Language platform. The platform first needed 20 minutes just to parse the dataset. Training then took almost 3 hours to complete, but reached an f1-score of 99.211, slightly better than the model we trained ourselves.