Gensim Learning Notes

gensim is a Python toolkit. I came across it in a paper I was reading, so I'm taking a look and writing up some notes

Introduction

gensim is a free Python NLP library designed to automatically extract semantic topics from documents, as efficiently and painlessly as possible

  • Gensim is designed to process raw, unstructured digital text (plain text).
  • It implements unsupervised algorithms such as Word2Vec, FastText, Latent Semantic Analysis (LSI/LSA, see LsiModel), and Latent Dirichlet Allocation (LDA, see LdaModel)

Installation

Official installation page

# install with pip
pip install --upgrade gensim

Dependency list

Usage

  • Strings to vectors: index each word, count frequencies and filter out stop words and rare words, then represent each document by its word ids, converting strings into vectors
>>> import logging
>>> logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
>>> from pprint import pprint
>>> from collections import defaultdict
>>> from gensim import corpora
>>> documents = ["Human machine interface for lab abc computer applications",
>>>              "A survey of user opinion of computer system response time",
>>>              "The EPS user interface management system",
>>>              "System and human system engineering testing of EPS",
>>>              "Relation of user perceived response time to error measurement",
>>>              "The generation of random binary unordered trees",
>>>              "The intersection graph of paths in trees",
>>>              "Graph minors IV Widths of trees and well quasi ordering",
>>>              "Graph minors A survey"]
>>> # remove common words and tokenize
>>> stoplist = set('for a of the and to in'.split())
>>> texts = [[word for word in document.lower().split() if word not in stoplist]
>>>          for document in documents]
>>>
>>> # remove words that appear only once
>>> frequency = defaultdict(int)
>>> for text in texts:
>>>     for token in text:
>>>         frequency[token] += 1
>>>
>>> texts = [[token for token in text if frequency[token] > 1]
>>>          for text in texts]
>>>
>>> pprint(texts)
[['human', 'interface', 'computer'],
 ['survey', 'user', 'computer', 'system', 'response', 'time'],
 ['eps', 'user', 'interface', 'system'],
 ['system', 'human', 'system', 'eps'],
 ['user', 'response', 'time'],
 ['trees'],
 ['graph', 'trees'],
 ['graph', 'minors', 'trees'],
 ['graph', 'minors', 'survey']]
>>> dictionary = corpora.Dictionary(texts)
>>> dictionary.save('/tmp/deerwester.dict')  # store the dictionary, for future reference
>>> print(dictionary)
Dictionary(12 unique tokens)
>>> print(dictionary.token2id)
{'minors': 11, 'graph': 10, 'system': 5, 'trees': 9, 'eps': 8, 'computer': 0,
'survey': 4, 'user': 7, 'human': 1, 'time': 6, 'interface': 2, 'response': 3}
>>> new_doc = "Human computer interaction"
>>> new_vec = dictionary.doc2bow(new_doc.lower().split())
>>> print(new_vec)  # the word "interaction" does not appear in the dictionary and is ignored
[(0, 1), (1, 1)]
>>> corpus = [dictionary.doc2bow(text) for text in texts]
>>> corpora.MmCorpus.serialize('/tmp/deerwester.mm', corpus)  # store to disk, for later use
>>> print(corpus)
[(0, 1), (1, 1), (2, 1)]
[(0, 1), (3, 1), (4, 1), (5, 1), (6, 1), (7, 1)]
[(2, 1), (5, 1), (7, 1), (8, 1)]
[(1, 1), (5, 2), (8, 1)]
[(3, 1), (6, 1), (7, 1)]
[(9, 1)]
[(9, 1), (10, 1)]
[(9, 1), (10, 1), (11, 1)]
[(4, 1), (10, 1), (11, 1)]

對于數(shù)據(jù)量小的文本粒氧,我們可以一次性的加載進內(nèi)存然后處理文本,但是對于數(shù)據(jù)量很大的時候节腐,直接加載會嚴重浪費內(nèi)存外盯,gensim可以分批加載,在使用的時候再將數(shù)據(jù)加載進來

>>> class MyCorpus(object):
>>>     def __iter__(self):
>>>         for line in open('mycorpus.txt'):
>>>             # assume there's one document per line, tokens separated by whitespace
>>>             yield dictionary.doc2bow(line.lower().split())
>>> corpus_memory_friendly = MyCorpus()  # doesn't load the corpus into memory!
>>> print(corpus_memory_friendly)
<__main__.MyCorpus object at 0x10d5690>
>>> for vector in corpus_memory_friendly:  # load one vector into memory at a time
...     print(vector)
[(0, 1), (1, 1), (2, 1)]
[(0, 1), (3, 1), (4, 1), (5, 1), (6, 1), (7, 1)]
[(2, 1), (5, 1), (7, 1), (8, 1)]
[(1, 1), (5, 2), (8, 1)]
[(3, 1), (6, 1), (7, 1)]
[(9, 1)]
[(9, 1), (10, 1)]
[(9, 1), (10, 1), (11, 1)]
[(4, 1), (10, 1), (11, 1)]

統(tǒng)計出現(xiàn)的單詞翼雀,構(gòu)建字典的時候也可以不用一次性加載全部文本

>>> from six import iteritems
>>> # collect statistics about all tokens
>>> dictionary = corpora.Dictionary(line.lower().split() for line in open('mycorpus.txt'))
>>> # remove stop words and words that appear only once
>>> stop_ids = [dictionary.token2id[stopword] for stopword in stoplist
>>>             if stopword in dictionary.token2id]
>>> once_ids = [tokenid for tokenid, docfreq in iteritems(dictionary.dfs) if docfreq == 1]
>>> dictionary.filter_tokens(stop_ids + once_ids)  # remove stop words and words that appear only once
>>> dictionary.compactify()  # remove gaps in id sequence after words that were removed
>>> print(dictionary)
Dictionary(12 unique tokens)
  • Topic vector transformations
    Transforming document vectors is the core of Gensim. By mining the latent semantic structure hidden in the corpus, we can ultimately derive a compact and efficient vector representation for each document.
    • Transforming vectors: first, convert the sparse vectors from the previous section to TF-IDF
>>> from gensim import models
>>> tfidf = models.TfidfModel(corpus)  # step 1 -- initialize a model
2019-04-12 11:05:09,654 : INFO : collecting document frequencies
2019-04-12 11:05:09,655 : INFO : PROGRESS: processing document #0
2019-04-12 11:05:09,655 : INFO : calculating IDF weights for 9 documents and 11 features (28 matrix non-zeros)
>>> print(tfidf)
TfidfModel(num_docs=9, num_nnz=28)
>>> doc_bow = [(0, 1), (1, 1)] # test tf-idf
>>> print(tfidf[doc_bow])
[(0, 0.7071067811865476), (1, 0.7071067811865476)]
>>> corpus_tfidf = tfidf[corpus] # corpus tf-idf
>>> for doc in corpus_tfidf:
...     print(doc)
...
[(0, 0.5773502691896257), (1, 0.5773502691896257), (2, 0.5773502691896257)]
[(0, 0.44424552527467476), (3, 0.44424552527467476), (4, 0.44424552527467476), (5, 0.3244870206138555), (6, 0.44424552527467476), (7, 0.3244870206138555)]
[(2, 0.5710059809418182), (5, 0.4170757362022777), (7, 0.4170757362022777), (8, 0.5710059809418182)]
[(1, 0.49182558987264147), (5, 0.7184811607083769), (8, 0.49182558987264147)]
[(3, 0.6282580468670046), (6, 0.6282580468670046), (7, 0.45889394536615247)]
[(9, 1.0)]
[(9, 0.7071067811865475), (10, 0.7071067811865475)]
[(9, 0.5080429008916749), (10, 0.5080429008916749), (11, 0.695546419520037)]
[(4, 0.6282580468670046), (10, 0.45889394536615247), (11, 0.6282580468670046)]
  • 構(gòu)建完TF-IDF后,計算LSI(latent Sematic Indexing)
>>> lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2)
2019-04-12 11:11:38,070 : INFO : using serial LSI version on this node
2019-04-12 11:11:38,071 : INFO : updating model with new documents
2019-04-12 11:11:38,072 : INFO : preparing a new chunk of documents
2019-04-12 11:11:38,073 : INFO : using 100 extra samples and 2 power iterations
2019-04-12 11:11:38,073 : INFO : 1st phase: constructing (12, 102) action matrix
2019-04-12 11:11:38,183 : INFO : orthonormalizing (12, 102) action matrix
2019-04-12 11:11:38,295 : INFO : 2nd phase: running dense svd on (12, 9) matrix
2019-04-12 11:11:38,330 : INFO : computing the final decomposition
2019-04-12 11:11:38,330 : INFO : keeping 2 factors (discarding 47.565% of energy spectrum)
2019-04-12 11:11:38,358 : INFO : processed documents up to #9
2019-04-12 11:11:38,379 : INFO : topic #0(1.594): 0.703*"trees" + 0.538*"graph" + 0.402*"minors" + 0.187*"survey" + 0.061*"system" + 0.060*"time" + 0.060*"response" + 0.058*"user" + 0.049*"computer" + 0.035*"interface"
2019-04-12 11:11:38,379 : INFO : topic #1(1.476): -0.460*"system" + -0.373*"user" + -0.332*"eps" + -0.328*"interface" + -0.320*"time" + -0.320*"response" + -0.293*"computer" + -0.280*"human" + -0.171*"survey" + 0.161*"trees"
>>> corpus_lsi = lsi[corpus_tfidf]
>>> lsi.print_topics(2)
2019-04-12 11:12:31,602 : INFO : topic #0(1.594): 0.703*"trees" + 0.538*"graph" + 0.402*"minors" + 0.187*"survey" + 0.061*"system" + 0.060*"time" + 0.060*"response" + 0.058*"user" + 0.049*"computer" + 0.035*"interface"
2019-04-12 11:12:31,602 : INFO : topic #1(1.476): -0.460*"system" + -0.373*"user" + -0.332*"eps" + -0.328*"interface" + -0.320*"time" + -0.320*"response" + -0.293*"computer" + -0.280*"human" + -0.171*"survey" + 0.161*"trees"
[(0, '0.703*"trees" + 0.538*"graph" + 0.402*"minors" + 0.187*"survey" + 0.061*"system" + 0.060*"time" + 0.060*"response" + 0.058*"user" + 0.049*"computer" + 0.035*"interface"'), (1, '-0.460*"system" + -0.373*"user" + -0.332*"eps" + -0.328*"interface" + -0.320*"time" + -0.320*"response" + -0.293*"computer" + -0.280*"human" + -0.171*"survey" + 0.161*"trees"')]

上面結(jié)果表明城须,該文檔被分為兩個潛話題分類蚤认,第一類話題和“trees”, “graph” ,“minors”關(guān)聯(lián)性比較大。最后來查看整個文檔中每個文件的話題分類

>>> for doc in corpus_lsi:
...     print(doc)
...
[(0, 0.0660078339609052), (1, -0.5200703306361847)]
[(0, 0.19667592859142907), (1, -0.7609563167700036)]
[(0, 0.08992639972446646), (1, -0.7241860626752505)]
[(0, 0.07585847652178296), (1, -0.6320551586003427)]
[(0, 0.10150299184980459), (1, -0.5737308483002947)]
[(0, 0.7032108939378302), (1, 0.16115180214026098)]
[(0, 0.8774787673119822), (1, 0.16758906864659778)]
[(0, 0.909862468681857), (1, 0.14086553628719395)]
[(0, 0.6165825350569285), (1, -0.05392907566389119)]
>>> for doc in documents:
...     print(doc)
...
Human machine interface for lab abc computer applications
A survey of user opinion of computer system response time
The EPS user interface management system
System and human system engineering testing of EPS
Relation of user perceived response time to error measurement
The generation of random binary unordered trees
The intersection graph of paths in trees
Graph minors IV Widths of trees and well quasi ordering
Graph minors A survey

Besides the LSI model, there are also Random Projections (RP), Latent Dirichlet Allocation (LDA), and the Hierarchical Dirichlet Process (HDP), all of which extract latent topic models

  • Computing document similarity
    Once each document has a topic vector, we can compute similarities between documents and carry out tasks such as text clustering and information retrieval. Gensim also provides API interfaces for this class of tasks

Take information retrieval as an example. For a given query, the goal is to retrieve the documents with the highest topic similarity from the text collection.
First, the query and the documents need to be represented in the same vector space (taking the LSI space as an example):

# 構(gòu)造LSI模型并將待檢索的query和文本轉(zhuǎn)化為LSI主題向量
# 轉(zhuǎn)換之前的corpus和query均是BOW向量
lsi_model = models.LsiModel(corpus, id2word=dictionary,          num_topics=2)
documents = lsi_model[corpus]
query_vec = lsi_model[query]

Next, we initialize a similarity object with the document vectors to be searched:

from gensim import similarities

index = similarities.MatrixSimilarity(documents)

The similarity matrix can also be persisted with the save() and load() methods:

index.save('/tmp/test.index')
index = similarities.MatrixSimilarity.load('/tmp/test.index')

Note that if there are many target documents, similarities.MatrixSimilarity often runs out of memory, since it holds the whole index as a dense matrix in RAM. In that case, switch to similarities.Similarity instead; the two classes keep essentially the same interface.
Finally, we use the index object to compute the (cosine) similarity between any query and all documents:

sims = index[query_vec]
# sims is an array of similarity scores; enumerate(sims) yields (doc_idx, sim) pairs