Author: Wen Jianhua, "Xiaowen's Data Journey", data analysis enthusiast, a pseudo-coder who doesn't want to be a coder. Blog: zhihu.com/c_188462686
First, a brief introduction to the jieba Chinese word segmentation package. jieba has three main segmentation modes:
Accurate mode: the default mode; it segments the text precisely and is suitable for text analysis;
Full mode: splits out every word that could possibly be formed, but the results may be ambiguous;
Search engine mode: on top of accurate mode, long words are split again, which is suitable for search engine indexing.
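To get a feel for the difference, here is a minimal sketch (the sample sentence is just an illustration drawn from the book's setting; the exact splits depend on jieba's dictionary and version):
import jieba
text = '哈利波特在霍格沃茨魔法學(xué)校學(xué)習(xí)魔法'
print('/'.join(jieba.cut(text, cut_all=False)))   # accurate mode (default)
print('/'.join(jieba.cut(text, cut_all=True)))    # full mode
print('/'.join(jieba.cut_for_search(text)))       # search engine mode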
Commonly used jieba statements:
Accurate-mode segmentation: jieba.cut(text, cut_all=False); with cut_all=True it switches to full mode
Load a custom dictionary: jieba.load_userdict(file_name)
Add a word: jieba.add_word(seg, freq, flag)
Delete a word: jieba.del_word(seg)
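The file passed to jieba.load_userdict is plain UTF-8 text with one entry per line in the form "word [frequency] [POS tag]", where frequency and tag are optional. A minimal sketch, with a hypothetical file name user_dict.txt:
# user_dict.txt contains one entry per line, e.g.
#   鄧布利多 100 nr
#   霍格沃茨 100 n
jieba.load_userdict('user_dict.txt')           # register the whole file at once
jieba.add_word('伏地魔', freq=100, tag='nr')   # or add a single word at runtime
jieba.del_word('地說')                          # remove a word that causes bad splits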
Harry Potter is a series of fantasy novels by the British author J. K. Rowling, telling the adventures of the protagonist Harry Potter during his seven years at Hogwarts School of Witchcraft and Wizardry. Below we put jieba into practice, using the intricate character relationships of Harry Potter as an example.
# Load the required packages
import numpy as np
import pandas as pd
import jieba, codecs
import jieba.posseg as pseg  # POS tagging module
from pyecharts import Bar, WordCloud
# Load the name list, stop words, and the custom dictionary
renmings = pd.read_csv('人名.txt',engine='python',encoding='utf-8',names=['renming'])['renming']
stopwords = pd.read_csv('mystopwords.txt',engine='python',encoding='utf-8',names=['stopwords'])['stopwords'].tolist()
book = open('哈利波特.txt',encoding='utf-8').read()
jieba.load_userdict('哈利波特詞庫.txt')
# Define a segmentation function
def words_cut(book):
    words = list(jieba.cut(book))
    stopwords1 = [w for w in words if len(w) == 1]  # treat single-character tokens as stop words
    seg = set(words) - set(stopwords) - set(stopwords1)  # filter out stop words for a cleaner segmentation
    result = [i for i in words if i in seg]
    return result
# First-pass segmentation
bookwords = words_cut(book)
renming = [i.split(' ')[0] for i in set(renmings)]  # keep only the name itself, dropping frequency and POS tag
nameswords = [i for i in bookwords if i in set(renming)]  # keep only character names
# Count word frequencies
bookwords_count = pd.Series(bookwords).value_counts().sort_values(ascending=False)
nameswords_count = pd.Series(nameswords).value_counts().sort_values(ascending=False)
bookwords_count[:100].index
After the first pass, most words are segmented correctly, but a small number of name-related tokens are still imprecise, such as '布利', '羅恩說', '伏地', '斯內(nèi)' and '地說', and names like '烏姆里奇' and '霍格沃茨' get split into two words.
# Add some custom words
jieba.add_word('鄧布利多',100,'nr')
jieba.add_word('霍格沃茨',100,'n')
jieba.add_word('烏姆里奇',100,'nr')
jieba.add_word('拉唐克斯',100,'nr')
jieba.add_word('伏地魔',100,'nr')
jieba.del_word('羅恩說')
jieba.del_word('地說')
jieba.del_word('斯內(nèi)')
# Second-pass segmentation
bookwords = words_cut(book)
nameswords = [i for i in bookwords if i in set(renming)]
bookwords_count = pd.Series(bookwords).value_counts().sort_values(ascending=False)
nameswords_count = pd.Series(nameswords).value_counts().sort_values(ascending=False)
bookwords_count[:100].index
After re-segmenting, the errors from the first pass have been corrected, so we can move on to the statistical analysis.
# Top-15 most frequent words
bar = Bar('出現(xiàn)最多的詞語TOP15',background_color = 'white',title_pos = 'center',title_text_size = 20)
x = bookwords_count[:15].index.tolist()
y = bookwords_count[:15].values.tolist()
bar.add('',x, y,xaxis_interval = 0,xaxis_rotate = 30,is_label_show = True)
bar
Among the top 15 most frequent words in the whole series we find Harry, Hermione, Ron, Dumbledore, wand, magic, Malfoy, Snape, Sirius, and so on.
Stringing these together, we can roughly reconstruct the gist of Harry Potter: accompanied by his friends Hermione and Ron, and helped and trained by the great wizard Dumbledore, Harry uses his wand and his magic to knock out the final boss, Voldemort. The books themselves are, of course, far more exciting than that.
# Top-20 character names
bar = Bar('主要人物Top20',background_color = 'white',title_pos = 'center',title_text_size = 20)
x = nameswords_count[:20].index.tolist()
y = nameswords_count[:20].values.tolist()
bar.add('',x, y,xaxis_interval = 0,xaxis_rotate = 30,is_label_show = True)
bar
Ranking the characters by number of appearances, Harry's status as the protagonist is unshakable: he appears over 13,000 more times than the runner-up, Hermione. That is hardly surprising, after all, the series is called Harry Potter, not Hermione Granger.
# Word cloud for the whole novel
name = bookwords_count.index.tolist()
value = bookwords_count.values.tolist()
wc = WordCloud(background_color = 'white')
wc.add("", name, value, word_size_range=[10, 200],shape = 'diamond')
wc
# Character relationship analysis
names = {}          # character -> number of appearances
relationships = {}  # character -> {co-occurring character -> count}
lineNames = []      # character names found on each line
with codecs.open('哈利波特.txt','r','utf8') as f:
    n = 0
    for line in f.readlines():
        n += 1
        print('正在處理第{}行'.format(n))
        poss = pseg.cut(line)
        lineNames.append([])
        for w in poss:
            if w.word in set(nameswords):
                lineNames[-1].append(w.word)
                if names.get(w.word) is None:
                    names[w.word] = 0
                    relationships[w.word] = {}
                names[w.word] += 1
# Count co-occurrences: two names appearing on the same line count as one interaction
for line in lineNames:
    for name1 in line:
        for name2 in line:
            if name1 == name2:
                continue
            if relationships[name1].get(name2) is None:
                relationships[name1][name2] = 1
            else:
                relationships[name1][name2] = relationships[name1][name2] + 1
# Build the node and edge tables for gephi
node = pd.DataFrame(columns=['Id','Label','Weight'])
edge = pd.DataFrame(columns=['Source','Target','Weight'])
for name, times in names.items():
    node.loc[len(node)] = [name, name, times]
for name, edges in relationships.items():
    for v, w in edges.items():
        if w > 3:  # keep only pairs that co-occur more than 3 times
            edge.loc[len(edge)] = [name, v, w]
After this processing we find that the same character appears under several different names, so we merge these aliases and re-aggregate, ending up with 88 nodes.
node.loc[node['Id']=='哈利','Id'] = '哈利波特'
node.loc[node['Id']=='波特','Id'] = '哈利波特'
node.loc[node['Id']=='阿不思','Id'] = '鄧布利多'
node.loc[node['Label']=='哈利','Label'] = '哈利波特'
node.loc[node['Label']=='波特','Label'] = '哈利波特'
node.loc[node['Label']=='阿不思','Label'] = '鄧布利多'
edge.loc[edge['Source']=='哈利','Source'] = '哈利波特'
edge.loc[edge['Source']=='波特','Source'] = '哈利波特'
edge.loc[edge['Source']=='阿不思','Source'] = '鄧布利多'
edge.loc[edge['Target']=='哈利','Target'] = '哈利波特'
edge.loc[edge['Target']=='波特','Target'] = '哈利波特'
edge.loc[edge['Target']=='阿不思','Target'] = '鄧布利多'
nresult = node.groupby(['Id','Label'], as_index=False)['Weight'].sum().sort_values('Weight', ascending=False)  # merge duplicate names and sum their counts
eresult = edge.groupby(['Source','Target'], as_index=False)['Weight'].sum().sort_values('Weight', ascending=False)  # merge duplicate edges created by the renaming above
nresult.to_csv('node.csv',index = False)
eresult.to_csv('edge.csv',index = False)
With the node and edge tables ready, we analyze the character relationships of Harry Potter in gephi:
(Node size represents how often a character appears; edge thickness represents how closely two characters interact.)
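If you want a quick preview in Python before switching to gephi, here is a minimal sketch using networkx and matplotlib (neither is part of the original workflow, and the size/width scaling factors are arbitrary; matplotlib also needs a CJK-capable font configured for the Chinese labels to render):
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
nodes = pd.read_csv('node.csv')
edges = pd.read_csv('edge.csv')
G = nx.Graph()
for _, r in nodes.iterrows():
    G.add_node(r['Id'], weight=r['Weight'])
for _, r in edges.iterrows():
    G.add_edge(r['Source'], r['Target'], weight=r['Weight'])
pos = nx.spring_layout(G, seed=42)  # force-directed layout, similar in spirit to gephi
sizes = [G.nodes[n]['weight'] * 0.05 for n in G]            # node size ~ appearance count
widths = [G[u][v]['weight'] * 0.01 for u, v in G.edges()]   # edge width ~ co-occurrence count
nx.draw(G, pos, node_size=sizes, width=widths, with_labels=True, font_size=6)
plt.show()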