gensim is a Python toolkit. I came across it in a paper I was reading, so I looked into it and wrote up these notes.
Introduction
gensim is a free Python NLP library that aims to extract semantic topics from documents automatically, as efficiently and painlessly as possible.
- Gensim is designed to process raw, unstructured digital text (plain text).
- Implements unsupervised algorithms such as Word2Vec, FastText, Latent Semantic Analysis (LSI/LSA, see LsiModel), and Latent Dirichlet Allocation (LDA, see LdaModel).
Installation
<!-- install with pip -->
pip install --upgrade gensim
Dependencies
- Python >= 2.7 (tested with versions 2.7, 3.5 and 3.6)
- NumPy >= 1.11.3
- SciPy >= 0.18.1
- Six >= 1.5.0
- smart_open >= 1.2.1
Usage
- Strings to vectors: assign each word an integer id, filter out stop words and low-frequency words, then represent each document by the ids of its words, turning strings into vectors.
>>> import logging
>>> logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
>>> from pprint import pprint
>>> from collections import defaultdict
>>> from gensim import corpora
>>> documents = ["Human machine interface for lab abc computer applications",
>>> "A survey of user opinion of computer system response time",
>>> "The EPS user interface management system",
>>> "System and human system engineering testing of EPS",
>>> "Relation of user perceived response time to error measurement",
>>> "The generation of random binary unordered trees",
>>> "The intersection graph of paths in trees",
>>> "Graph minors IV Widths of trees and well quasi ordering",
>>> "Graph minors A survey"]
>>> # remove common words and tokenize
>>> stoplist = set('for a of the and to in'.split())
>>> texts = [[word for word in document.lower().split() if word not in stoplist]
>>> for document in documents]
>>>
>>> # remove words that appear only once
>>> frequency = defaultdict(int)
>>> for text in texts:
>>> for token in text:
>>> frequency[token] += 1
>>>
>>> texts = [[token for token in text if frequency[token] > 1]
>>> for text in texts]
>>>
>>> pprint(texts)
[['human', 'interface', 'computer'],
['survey', 'user', 'computer', 'system', 'response', 'time'],
['eps', 'user', 'interface', 'system'],
['system', 'human', 'system', 'eps'],
['user', 'response', 'time'],
['trees'],
['graph', 'trees'],
['graph', 'minors', 'trees'],
['graph', 'minors', 'survey']]
>>> dictionary = corpora.Dictionary(texts)
>>> dictionary.save('/tmp/deerwester.dict') # store the dictionary, for future reference
>>> print(dictionary)
Dictionary(12 unique tokens)
>>> print(dictionary.token2id)
{'minors': 11, 'graph': 10, 'system': 5, 'trees': 9, 'eps': 8, 'computer': 0,
'survey': 4, 'user': 7, 'human': 1, 'time': 6, 'interface': 2, 'response': 3}
>>> new_doc = "Human computer interaction"
>>> new_vec = dictionary.doc2bow(new_doc.lower().split())
>>> print(new_vec) # the word "interaction" does not appear in the dictionary and is ignored
[(0, 1), (1, 1)]
>>> corpus = [dictionary.doc2bow(text) for text in texts]
>>> corpora.MmCorpus.serialize('/tmp/deerwester.mm', corpus) # store to disk, for later use
>>> print(corpus)
[(0, 1), (1, 1), (2, 1)]
[(0, 1), (3, 1), (4, 1), (5, 1), (6, 1), (7, 1)]
[(2, 1), (5, 1), (7, 1), (8, 1)]
[(1, 1), (5, 2), (8, 1)]
[(3, 1), (6, 1), (7, 1)]
[(9, 1)]
[(9, 1), (10, 1)]
[(9, 1), (10, 1), (11, 1)]
[(4, 1), (10, 1), (11, 1)]
For small corpora we can load all the text into memory at once and process it, but with large corpora loading everything up front wastes a lot of memory. Gensim can instead stream the data, loading documents only when they are needed:
>>> class MyCorpus(object):
>>> def __iter__(self):
>>> for line in open('mycorpus.txt'):
>>> # assume there's one document per line, tokens separated by whitespace
>>> yield dictionary.doc2bow(line.lower().split())
>>> corpus_memory_friendly = MyCorpus() # doesn't load the corpus into memory!
>>> print(corpus_memory_friendly)
<__main__.MyCorpus object at 0x10d5690>
>>> for vector in corpus_memory_friendly: # load one vector into memory at a time
... print(vector)
[(0, 1), (1, 1), (2, 1)]
[(0, 1), (3, 1), (4, 1), (5, 1), (6, 1), (7, 1)]
[(2, 1), (5, 1), (7, 1), (8, 1)]
[(1, 1), (5, 2), (8, 1)]
[(3, 1), (6, 1), (7, 1)]
[(9, 1)]
[(9, 1), (10, 1)]
[(9, 1), (10, 1), (11, 1)]
[(4, 1), (10, 1), (11, 1)]
The dictionary can likewise be built without loading the whole corpus at once, collecting token statistics while streaming:
>>> from six import iteritems
>>> # collect statistics about all tokens
>>> dictionary = corpora.Dictionary(line.lower().split() for line in open('mycorpus.txt'))
>>> # remove stop words and words that appear only once
>>> stop_ids = [dictionary.token2id[stopword] for stopword in stoplist
>>> if stopword in dictionary.token2id]
>>> once_ids = [tokenid for tokenid, docfreq in iteritems(dictionary.dfs) if docfreq == 1]
>>> dictionary.filter_tokens(stop_ids + once_ids) # remove stop words and words that appear only once
>>> dictionary.compactify() # remove gaps in id sequence after words that were removed
>>> print(dictionary)
Dictionary(12 unique tokens)
- Topic transformations
Transforming document vectors is the heart of Gensim. By mining the latent semantic structure hidden in the corpus, we end up with a compact, efficient vector representation of each text. As a first transformation, convert the sparse bag-of-words vectors from the previous section to TF-IDF:
>>> from gensim import models
>>> tfidf = models.TfidfModel(corpus) # step 1 -- initialize a model
2019-04-12 11:05:09,654 : INFO : collecting document frequencies
2019-04-12 11:05:09,655 : INFO : PROGRESS: processing document #0
2019-04-12 11:05:09,655 : INFO : calculating IDF weights for 9 documents and 11 features (28 matrix non-zeros)
>>> print(tfidf)
TfidfModel(num_docs=9, num_nnz=28)
>>> doc_bow = [(0, 1), (1, 1)] # test tf-idf
>>> print(tfidf[doc_bow])
[(0, 0.7071067811865476), (1, 0.7071067811865476)]
>>> corpus_tfidf = tfidf[corpus] # corpus tf-idf
>>> for doc in corpus_tfidf:
... print(doc)
...
[(0, 0.5773502691896257), (1, 0.5773502691896257), (2, 0.5773502691896257)]
[(0, 0.44424552527467476), (3, 0.44424552527467476), (4, 0.44424552527467476), (5, 0.3244870206138555), (6, 0.44424552527467476), (7, 0.3244870206138555)]
[(2, 0.5710059809418182), (5, 0.4170757362022777), (7, 0.4170757362022777), (8, 0.5710059809418182)]
[(1, 0.49182558987264147), (5, 0.7184811607083769), (8, 0.49182558987264147)]
[(3, 0.6282580468670046), (6, 0.6282580468670046), (7, 0.45889394536615247)]
[(9, 1.0)]
[(9, 0.7071067811865475), (10, 0.7071067811865475)]
[(9, 0.5080429008916749), (10, 0.5080429008916749), (11, 0.695546419520037)]
[(4, 0.6282580468670046), (10, 0.45889394536615247), (11, 0.6282580468670046)]
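The weights above can be checked by hand: by default, TfidfModel uses idf = log2(num_docs / docfreq) and then L2-normalizes each document vector. A sketch verifying the `[(0, 1), (1, 1)]` example, where both "computer" (id 0) and "human" (id 1) appear in 2 of the 9 documents:

```python
import math

num_docs = 9
docfreq = {0: 2, 1: 2}   # document frequency of 'computer' and 'human'
bow = [(0, 1), (1, 1)]   # the test document from above

# raw weights: tf * log2(N / df)
weights = [(tid, tf * math.log2(num_docs / docfreq[tid])) for tid, tf in bow]
# L2-normalize, as TfidfModel does by default
norm = math.sqrt(sum(w * w for _, w in weights))
vec = [(tid, w / norm) for tid, w in weights]
# both raw weights are equal, so each normalized weight is 1/sqrt(2) ≈ 0.7071
```

This reproduces the `[(0, 0.7071...), (1, 0.7071...)]` output without invoking gensim at all.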
- After building the TF-IDF model, compute LSI (Latent Semantic Indexing):
>>> lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics = 2)
2019-04-12 11:11:38,070 : INFO : using serial LSI version on this node
2019-04-12 11:11:38,071 : INFO : updating model with new documents
2019-04-12 11:11:38,072 : INFO : preparing a new chunk of documents
2019-04-12 11:11:38,073 : INFO : using 100 extra samples and 2 power iterations
2019-04-12 11:11:38,073 : INFO : 1st phase: constructing (12, 102) action matrix
2019-04-12 11:11:38,183 : INFO : orthonormalizing (12, 102) action matrix
2019-04-12 11:11:38,295 : INFO : 2nd phase: running dense svd on (12, 9) matrix
2019-04-12 11:11:38,330 : INFO : computing the final decomposition
2019-04-12 11:11:38,330 : INFO : keeping 2 factors (discarding 47.565% of energy spectrum)
2019-04-12 11:11:38,358 : INFO : processed documents up to #9
2019-04-12 11:11:38,379 : INFO : topic #0(1.594): 0.703*"trees" + 0.538*"graph" + 0.402*"minors" + 0.187*"survey" + 0.061*"system" + 0.060*"time" + 0.060*"response" + 0.058*"user" + 0.049*"computer" + 0.035*"interface"
2019-04-12 11:11:38,379 : INFO : topic #1(1.476): -0.460*"system" + -0.373*"user" + -0.332*"eps" + -0.328*"interface" + -0.320*"time" + -0.320*"response" + -0.293*"computer" + -0.280*"human" + -0.171*"survey" + 0.161*"trees"
>>> corpus_lsi = lsi[corpus_tfidf]
>>> lsi.print_topics(2)
2019-04-12 11:12:31,602 : INFO : topic #0(1.594): 0.703*"trees" + 0.538*"graph" + 0.402*"minors" + 0.187*"survey" + 0.061*"system" + 0.060*"time" + 0.060*"response" + 0.058*"user" + 0.049*"computer" + 0.035*"interface"
2019-04-12 11:12:31,602 : INFO : topic #1(1.476): -0.460*"system" + -0.373*"user" + -0.332*"eps" + -0.328*"interface" + -0.320*"time" + -0.320*"response" + -0.293*"computer" + -0.280*"human" + -0.171*"survey" + 0.161*"trees"
[(0, '0.703*"trees" + 0.538*"graph" + 0.402*"minors" + 0.187*"survey" + 0.061*"system" + 0.060*"time" + 0.060*"response" + 0.058*"user" + 0.049*"computer" + 0.035*"interface"'), (1, '-0.460*"system" + -0.373*"user" + -0.332*"eps" + -0.328*"interface" + -0.320*"time" + -0.320*"response" + -0.293*"computer" + -0.280*"human" + -0.171*"survey" + 0.161*"trees"')]
The output above shows that the corpus was decomposed into two latent topics, and that the first topic is strongly associated with "trees", "graph" and "minors". Finally, inspect the topic distribution of every document in the corpus:
>>> for doc in corpus_lsi:
... print(doc)
...
[(0, 0.0660078339609052), (1, -0.5200703306361847)]
[(0, 0.19667592859142907), (1, -0.7609563167700036)]
[(0, 0.08992639972446646), (1, -0.7241860626752505)]
[(0, 0.07585847652178296), (1, -0.6320551586003427)]
[(0, 0.10150299184980459), (1, -0.5737308483002947)]
[(0, 0.7032108939378302), (1, 0.16115180214026098)]
[(0, 0.8774787673119822), (1, 0.16758906864659778)]
[(0, 0.909862468681857), (1, 0.14086553628719395)]
[(0, 0.6165825350569285), (1, -0.05392907566389119)]
>>> for doc in documents:
... print(doc)
...
Human machine interface for lab abc computer applications
A survey of user opinion of computer system response time
The EPS user interface management system
System and human system engineering testing of EPS
Relation of user perceived response time to error measurement
The generation of random binary unordered trees
The intersection graph of paths in trees
Graph minors IV Widths of trees and well quasi ordering
Graph minors A survey
Besides LSI, there are also Random Projections (RP), Latent Dirichlet Allocation (LDA) and the Hierarchical Dirichlet Process (HDP), all of which extract latent topic models.
- Computing document similarity
Once every document has a topic vector, we can compute similarities between documents, and build on that for tasks such as text clustering and information retrieval. Gensim provides APIs for this class of tasks as well.
Take information retrieval as an example: given a query, the goal is to retrieve the documents from the collection with the highest topic similarity.
First, the query and the documents must be expressed in the same vector space (here, the LSI space):
# build the LSI model and map the query and the documents to LSI topic vectors
# before the transformation, corpus and query are both BOW vectors
lsi_model = models.LsiModel(corpus, id2word=dictionary, num_topics=2)
documents = lsi_model[corpus]
query_vec = lsi_model[query]
Next, initialize a similarity index with the document vectors to search against:
index = similarities.MatrixSimilarity(documents)
The similarity index can be persisted with save() and load():
index.save('/tmp/test.index')
index = similarities.MatrixSimilarity.load('/tmp/test.index')
Note that when there are many target documents, similarities.MatrixSimilarity tends to run out of memory. In that case, switch to similarities.Similarity; the two classes have essentially the same interface.
Finally, use the index object to compute the (cosine) similarity between the query and all documents:
sims = index[query_vec]
# returns an array of similarities; enumerate(sims) yields (doc_idx, sim) pairs