A Word Cloud of 前任3 (The Ex-File 3) Reviews
A small memorial to my own past.
This was supposed to be a purely technical piece, but it turned a little sentimental. I've had《體面》on single-track repeat for two weeks, falling asleep at one or two in the morning, sometimes after four. Maybe the present me just isn't good enough; all I can do is envy other people's five-year runs, their lifelong runs, while I can never go back. I don't want a relationship to be like a sheet of paper: crumpled, smoothed out, then crumpled again.
She was wonderful; I just wasn't good enough.
Two years of university, one of them long-distance at different schools. Every few days I went over to her campus; I came to know both campuses, and got used to days that had each other in them. She is still in school, while I started working before her. Thinking of her here........
Enough of that. On to the main topic.
Data source
(1) Reviews of 前任3 from Douban (crawled until Douban stopped serving pages; to be completed later).
The code:
# -*- coding: utf-8 -*-
# @Time : 2018/3/27 11:15
# @Author : 蛇崽
# @Email : 643435675@QQ.com
# @File : test_douban_qianren3.py (reviews of The Ex-File 3)
import csv
import time

import requests
from lxml import etree

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36',
    'Cookie': 'gr_user_id=ffdf2f63-ec37-49b5-99e8-0e0d28741172; bid=qh9RXgIGopg; viewed="26826540_24703171"; ap=1; ll="118172"; ct=y; _vwo_uuid_v2=8C5B24903B1D1D3886FE478B91C5DE97|7eac18658e7fecbbf3798b88cfcf6113; _pk_ref.100001.4cf6=%5B%22%22%2C%22%22%2C1522129522%2C%22https%3A%2F%2Fwww.baidu.com%2Flink%3Furl%3DdnHqCRiT1HlhToCp0h1cpdyV8rB9f_OfOvJhjRPO3p1jrl764LGvi7gbYSdskDMh%26wd%3D%26eqid%3De15db1bb0000e3cd000000045ab9b6fe%22%5D; _pk_id.100001.4cf6=4e61f4192b9486a8.1485672092.10.1522130672.1522120744.; _pk_ses.100001.4cf6=*'}


def get_html(current_url):
    # Be polite: pause between requests so Douban does not block us too soon.
    time.sleep(2)
    r = requests.get(current_url, headers=headers)
    r.raise_for_status()
    return etree.HTML(r.text)


def parse_html(content, writer):
    # One comment-item node per review on the page.
    links = content.xpath("//*[@class='comment-item']")
    for link in links:
        text = link.xpath("./div[@class='comment']/p/text()")[0].strip()
        author = link.xpath("./div[@class='comment']/h3/span[@class='comment-info']/a/text()")[0].strip()
        # Do not shadow the time module: use a distinct variable name.
        comment_time = link.xpath(
            "./div[@class='comment']/h3/span[@class='comment-info']/span[@class='comment-time ']/text()")[0].strip()
        is_useful = link.xpath("./div[@class='comment']/h3/span[@class='comment-vote']/span[@class='votes']/text()")[0]
        print('content:', text)
        print('time:', comment_time)
        print('is_useful:', is_useful)
        # Write all four fields so the row matches the header below.
        writer.writerow((author, comment_time, is_useful, text))


if __name__ == '__main__':
    with open('douban.txt', 'a+', encoding='utf-8', newline='') as csvf:
        writer = csv.writer(csvf)
        writer.writerow(('作者', '時間', '有用數(shù)', '內(nèi)容'))
        for page in range(0, 260, 20):
            url = ('https://movie.douban.com/subject/26662193/comments'
                   '?start={}&limit=20&sort=new_score&status=P&percent_type='.format(page))
            parse_html(get_html(url), writer)
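Douban only serves the first couple hundred comments to an anonymous session, which is why the crawl above runs "until it can crawl no more". A minimal sketch of a stop condition (`extract_comments` and `crawl_pages` are hypothetical helpers, and SAMPLE_PAGE is a hand-made fragment of the real page structure): stop as soon as a page yields no comment-item nodes.

```python
from lxml import etree

# A cut-down imitation of one review block on the Douban comments page.
SAMPLE_PAGE = """
<div class="comment-item">
  <div class="comment">
    <h3><span class="comment-info"><a>user1</a>
        <span class="comment-time ">2018-01-01</span></span>
        <span class="comment-vote"><span class="votes">12</span></span></h3>
    <p>好看</p>
  </div>
</div>
"""

def extract_comments(html_text):
    """Return the comment-item nodes found on one page."""
    tree = etree.HTML(html_text)
    return tree.xpath("//*[@class='comment-item']")

def crawl_pages(pages):
    """Collect nodes page by page, stopping at the first empty page.
    `pages` stands in for successive HTTP response bodies."""
    collected = []
    for html_text in pages:
        items = extract_comments(html_text)
        if not items:  # Douban returned an empty or login-wall page
            break
        collected.extend(items)
    return collected

# Two real pages followed by an empty one: crawling stops at the gap.
result = crawl_pages([SAMPLE_PAGE, SAMPLE_PAGE, "<html></html>", SAMPLE_PAGE])
print(len(result))  # 2
```

In the real script this check would replace the fixed `range(0, 260, 20)` loop bound.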
(2) Result screenshot:
(screenshot: contents of douban.txt)
Data analysis
(1) jieba word segmentation and a matplotlib word cloud
The code:
# encoding=utf-8
import jieba
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
from wordcloud import WordCloud

# Read the scraped reviews; binary mode sidesteps decoding errors,
# since jieba decodes the bytes itself.
text_from_file_with_apath = open('douban.txt', 'rb').read()

# Segment with jieba and join the tokens with spaces.
wordlist_after_jieba = jieba.cut(text_from_file_with_apath, cut_all=True)
wl_space_split = " ".join(wordlist_after_jieba)

# Build the word cloud from the segmented text.
font = r'C:\Windows\Fonts\simfang.ttf'  # a Chinese font, or CJK glyphs render as boxes
mask = np.array(Image.open('test_ciyun.jpg'))
wc = WordCloud(mask=mask, max_words=3000, collocations=False, font_path=font,
               width=5800, height=2400, margin=10,
               background_color='black').generate(wl_space_split)

plt.title("QR 3")
plt.imshow(wc)
plt.axis("off")
plt.show()
(2) Pitfalls:
1) To make Chinese display correctly, pass a Chinese font via font_path:
    font = r'C:\Windows\Fonts\simfang.ttf'
    WordCloud(font_path=font, ...)
2) The background image (mask) did not seem to take effect (as far as I can tell):
    mask = np.array(Image.open('test_ciyun.jpg'))
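A likely reason the mask seems to have no effect: wordcloud treats pure-white (255) pixels as "masked out" and draws words only on the non-white region, so an image without a white background fills the whole rectangle. A quick way to verify is a synthetic mask (a sketch; the 400×400 shape and circle radius are arbitrary):

```python
import numpy as np

# Build a circular mask: white (255) outside the circle is excluded,
# black (0) inside is where words may be drawn.
h, w = 400, 400
y, x = np.ogrid[:h, :w]
circle = (x - w / 2) ** 2 + (y - h / 2) ** 2 <= (w / 2 - 10) ** 2
mask = np.full((h, w), 255, dtype=np.uint8)  # all white = all masked out
mask[circle] = 0                             # black disc = drawable area
print(mask[0, 0], mask[h // 2, w // 2])  # 255 0
```

Passing this array as `mask=mask` to WordCloud should give a clearly circular cloud; if test_ciyun.jpg behaves differently, its background is probably not pure white.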
3) Character-encoding errors were avoided by reading the file in binary mode:
    text_from_file_with_apath = open('douban.txt', 'rb').read()
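Binary mode works because jieba decodes the bytes itself, but decoding explicitly gives you control over the encoding. A sketch of a fallback decoder (`decode_comments` is a hypothetical helper; the GBK fallback matters for files saved by Chinese-locale Windows tools):

```python
def decode_comments(raw: bytes) -> str:
    """Try UTF-8 first, then GBK; as a last resort, replace bad bytes."""
    for enc in ("utf-8", "gbk"):
        try:
            return raw.decode(enc)
        except UnicodeDecodeError:
            continue
    return raw.decode("utf-8", errors="replace")

sample = "前任3:愛情,分手".encode("utf-8")
text = decode_comments(sample)
print(text)
```

The decoded string can then be handed straight to `jieba.cut`.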
Analysis
The most prominent words in the cloud:
1 電影 (movie)
2 沒有 (don't have / no)
3 什么 (what)
4 你們 (you)
5 就是 (just / is)
6 愛情 (love), 男人 (men), 女人 (women)
7 自己 (oneself), 分手 (breakup)
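The ranking above is read off the word cloud by eye; `collections.Counter` can produce it directly from the jieba tokens. A sketch with hard-coded tokens standing in for `jieba.cut` output (note that several of the top hits above, such as 沒有 and 什么, are stopwords worth filtering before generating the cloud):

```python
from collections import Counter

# Tokens as they would come out of jieba.cut (hard-coded for the sketch).
tokens = ["電影", "沒有", "電影", "愛情", "分手", "的", "了", "電影", "愛情"]
stopwords = {"的", "了", "就是", "什么", "你們", "沒有"}

# Count only the meaningful tokens.
counts = Counter(t for t in tokens if t not in stopwords)
print(counts.most_common(3))  # [('電影', 3), ('愛情', 2), ('分手', 1)]
```

The same stopword set can be applied before `" ".join(...)` so the cloud highlights content words instead of function words.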
It's getting late........ off to sleep..............