In the previous Machine Learning article, I showed how to do some simple crawling of resources on the web. This time I'll cover processing the crawled data, i.e. Data Processing.
II. Data Processing
a. Chinese word-frequency statistics and word-cloud visualization
Tools: the jieba module for Chinese word segmentation — an excellent, simple, and open-source Chinese tokenizer — and the Python WordCloud module, which is full-featured and makes for strong visual output.
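Before the full script, the core idea behind keyword extraction can be sketched with the standard library alone: segment the text into tokens, drop stop words, and rank what remains by frequency. (jieba.analyse.extract_tags ranks by TF-IDF weight rather than raw counts; the token list and stop-word set below are made-up samples, standing in for what jieba.lcut might return.)

```python
from collections import Counter

# hypothetical tokens, as a segmenter like jieba.lcut() might return
tokens = ["韋小寶", "說", "了", "韋小寶", "和", "康熙", "韋小寶", "康熙", "說"]
stop_words = {"說", "了", "和"}  # toy stop-word list

# keep non-stop-word tokens and rank by raw frequency
counts = Counter(t for t in tokens if t not in stop_words)
print(counts.most_common(2))  # [('韋小寶', 3), ('康熙', 2)]
```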
The relevant code:
from imageio import imread  # scipy.misc.imread was removed in SciPy 1.2; imageio.imread is a drop-in replacement
import jieba
import jieba.analyse  # keyword extraction
from wordcloud import WordCloud
import matplotlib.pyplot as plt

file = open(r'./art/鹿鼎記.txt', encoding='utf-8', errors='ignore')
url = r'./art/stop_word.txt'
content = file.read()
file.close()
file_one = []
try:
    jieba.analyse.set_stop_words(url)  # filter out Chinese stop words
    tags = jieba.analyse.extract_tags(content, topK=160, withWeight=True)  # top 160 keywords with their weights
    for tag in tags:
        file_one.append([tag[0], tag[1] * 1000])  # collect into the file_one list
        print(tag[0] + '\t' + str(tag[1] * 1000))  # display
finally:
    print('OK')

patch = r'C:\Users\22109\Downloads\字體-方正蘭亭黑體.ttf'  # a Chinese font so the cloud can render Chinese text
dictionary = dict(file_one)  # list -> dict
bg_pic = imread(r'./art/img.png')
wc = WordCloud(
    # font
    font_path=patch,
    # background color
    background_color='white',
    # maximum number of words
    max_words=200,
    # mask image defining the cloud's shape
    mask=bg_pic,
    # largest font size
    max_font_size=200,
)
wc.generate_from_frequencies(dictionary)  # takes a word -> weight dict
plt.figure()
plt.imshow(wc)  # display with matplotlib
plt.axis('off')
plt.show()
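A note on the dict(file_one) step: WordCloud.generate_from_frequencies expects a mapping from word to weight, which is why the list of [word, weight] pairs is converted with dict(). A minimal check of that conversion (the words and weights here are made up):

```python
# hypothetical [word, weight] pairs in the same shape as file_one above
file_one = [["韋小寶", 120.5], ["康熙", 88.2], ["師父", 61.0]]
dictionary = dict(file_one)  # word -> weight mapping
print(dictionary["韋小寶"])  # 120.5
```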
The resulting word cloud:
Figure_1.png
b吸重、統(tǒng)計(jì)可視化處理
b-1互拾、直方圖比較
import jieba
import jieba.analyse
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib

def file_read(url, n):
    matplotlib.rcParams['font.family'] = 'sans-serif'
    matplotlib.rcParams['font.sans-serif'] = [u'SimHei']  # a font that can render Chinese labels
    matplotlib.rcParams['font.size'] = '10'
    content = open(url, 'r', encoding='utf-8', errors='ignore').read()  # read the parameter, not the global `links` as the original did
    file_one = []
    try:
        jieba.analyse.set_stop_words(r'D:\My Documents\Downloads\jinyong\停用詞.txt')
        jieba.load_userdict(r'D:\My Documents\Downloads\jinyong\dict.txt')  # custom dictionary of character names
        tags = jieba.analyse.extract_tags(content, topK=120, withWeight=True)
        for i in range(len(tags)):
            file_one.append(tags[i])
    finally:
        print('OK')
    dictionary = pd.DataFrame(file_one).iloc[0:n, :]  # keep the top n rows
    return dictionary

width, n = 0.4, 21
links = r'D:\My Documents\Downloads\jinyong\誅仙.txt'
dictionary = file_read(links, n)
plt.bar(range(len(dictionary[0])), dictionary[1] * 100, width=width, color=['r', 'g', 'y'])
for i in range(len(dictionary)):
    plt.text(i - width / 4 * 3, dictionary[1][i] * 100, dictionary[0][i])  # label each bar with its keyword
plt.show()
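The DataFrame step can be checked in isolation: pd.DataFrame on a list of (word, weight) tuples produces integer column labels, with column 0 holding the words and column 1 the weights — which is why the plotting code indexes dictionary[0] and dictionary[1]. A small sketch with made-up keywords and weights:

```python
import pandas as pd

# hypothetical (keyword, weight) tuples, as extract_tags(withWeight=True) returns
file_one = [("張小凡", 0.21), ("鬼厲", 0.20), ("陸雪琪", 0.15), ("碧瑤", 0.12)]
dictionary = pd.DataFrame(file_one).iloc[0:3, :]  # keep the top n=3 rows

print(list(dictionary[0]))  # ['張小凡', '鬼厲', '陸雪琪']
print((dictionary[1] * 100).tolist())  # weights scaled up for plotting
```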
The result (using Zhu Xian / 誅仙 as an example):
Zhu Xian word-frequency chart
As the chart shows, the protagonist halo is very strong. Matching the plot, Zhang Xiaofan and Gui Li appear with almost equal frequency, which fits the story arc of Zhu Xian (Xiaofan's turn to the dark side). Interestingly, the data show Lu Xueqi appearing more often than Bi Yao; anyone unfamiliar with the plot or the original novel would likely conclude that Lu Xueqi is the female lead. But if we crawled only the text before the battle of Qingyun, Bi Yao would presumably be far ahead...
Comparing a wuxia novel (鹿鼎記, The Deer and the Cauldron) against Zhu Xian:
The code:
import jieba
import jieba.analyse
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib

def file_read(url, n, width):
    matplotlib.rcParams['font.family'] = 'sans-serif'
    matplotlib.rcParams['font.sans-serif'] = [u'SimHei']
    matplotlib.rcParams['font.size'] = '8'
    n_files = len(url)  # number of novels being compared
    for j in range(n_files):
        content = open(url[j], 'r', encoding='utf-8', errors='ignore').read()
        file_one = []
        try:
            jieba.analyse.set_stop_words(r'D:\My Documents\Downloads\jinyong\停用詞.txt')
            jieba.load_userdict(r'D:\My Documents\Downloads\jinyong\dict.txt')
            tags = jieba.analyse.extract_tags(content, topK=120, withWeight=True)
            for i in range(len(tags)):
                file_one.append(tags[i])
        finally:
            print('OK')
        dictionary = pd.DataFrame(file_one).iloc[0:n, :]
        # the offset alternates sign with j, so the two series sit on opposite sides of each tick
        plt.bar(np.arange(len(dictionary[0])) - width / n_files * pow(-1, j), dictionary[1] * 100, width=width / n_files)
        for i in range(len(dictionary)):
            plt.text(i - width / n_files * pow(-1, j) - width / 4 * 3, dictionary[1][i] * 100, dictionary[0][i])
    plt.show()

width, n = 0.4, 21
links = r'D:\My Documents\Downloads\jinyong\誅仙.txt'
url = r'D:\My Documents\Downloads\jinyong\鹿鼎記.txt'
urls = [links, url]
file_read(urls, n, width)
The visualization:
Comparison chart
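The bar positioning in that script is worth unpacking: with two files, the offset -width/2 * pow(-1, j) shifts the first novel's bars one way and the second's the other way around each x tick, which is what puts the two series side by side. A standalone check of that arithmetic:

```python
width = 0.4
n_files = 2  # two novels being compared

# per-series x-offsets relative to each tick position
offsets = [-width / n_files * pow(-1, j) for j in range(n_files)]
print(offsets)  # [-0.2, 0.2]
```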