Preface
Continuing the app-scraping series, today's target is Weibo's ranking list (the 24-hour chart). The fields collected are:
- user id
- user location
- user gender
- user follower count
- post text
- publish time
- repost, comment, and like counts
This article covers:
- Scraper code
- User analysis
- Post analysis
Scraper Code
```python
import requests
import json
import re
import time
import csv

headers = {
    'Host': 'api.weibo.cn',
    'Connection': 'keep-alive',
    'User-Agent': 'Weibo/29278 (iPhone; iOS 11.4.1; Scale/2.00)'
}

f = open('1.csv', 'w+', encoding='utf-8', newline='')
writer = csv.writer(f)
writer.writerow(['user_id', 'user_location', 'user_gender', 'user_follower', 'text',
                 'created_time', 'reposts_count', 'comments_count', 'attitudes_count'])

def get_info(url):
    res = requests.get(url, headers=headers)
    print(url)
    # Each card in the response embeds an "mblog" JSON object. The regex captures it
    # up to the "weibo_position" key, which cuts off the closing brace, so '}' is
    # appended back before parsing.
    datas = re.findall('"mblog":(.*?),"weibo_position"', res.text, re.S)
    for data in datas:
        json_data = json.loads(data + '}')
        user_id = json_data['user']['name']  # screen name, used as the id throughout
        user_location = json_data['user']['location']
        user_gender = json_data['user']['gender']
        user_follower = json_data['user']['followers_count']
        text = json_data['text']
        created_time = json_data['created_at']
        reposts_count = json_data['reposts_count']
        comments_count = json_data['comments_count']
        attitudes_count = json_data['attitudes_count']
        print(user_id, user_location, user_gender, user_follower, text,
              created_time, reposts_count, comments_count, attitudes_count)
        writer.writerow([user_id, user_location, user_gender, user_follower, text,
                         created_time, reposts_count, comments_count, attitudes_count])
    time.sleep(5)

if __name__ == '__main__':
    # The gsid/aid/s values below are tied to a logged-in app session and will
    # expire; capture fresh ones from the app if the requests start failing.
    urls = ['https://api.weibo.cn/2/cardlist?gsid=_2A252dh7LDeRxGeNM41oV-S_MzDSIHXVTIhUDrDV6PUJbkdANLVTwkWpNSf8_0j6hqTyDS0clYi-pzwDc2Kd8oj_d&wm=3333_2001&i=b9f7194&b=0&from=1088193010&c=iphone&networktype=wifi&v_p=63&skin=default&v_f=1&s=ef8eeeee&lang=zh_CN&sflag=1&ua=iPhone8,1__weibo__8.8.1__iphone__os11.4.1&ft=11&aid=01AuxGxLabPA7Vzz8ZXBUpkeJqWbJ1woycR3lFBdLhoxgQC1I.&moduleID=pagecard&scenes=0&uicode=10000327&luicode=10000010&count=20&extparam=discover&containerid=102803_ctg1_8999_-_ctg1_8999_home&fid=102803_ctg1_8999_-_ctg1_8999_home&lfid=231091&page={}'.format(str(i)) for i in range(1, 16)]
    for url in urls:
        get_info(url)
    f.close()
```
User Analysis
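The analysis below uses pandas. As a minimal sketch (assuming the `1.csv` written by the scraper above), load the data into the `df` used in the following snippets:

```python
import pandas as pd

# Read the CSV produced by the scraper; column names match its header row.
df = pd.read_csv('1.csv')
print(df.shape)
print(df.head())
```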
First, let's visualize a sample of the user ids; the ids in a larger font are the users who made the list twice (two appearances was the maximum in this run).
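The original post shows only the resulting image; here is one way such a cloud can be built, as a sketch assuming the `wordcloud` package and a Chinese-capable font (the font path is an assumption, adjust it for your system):

```python
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Size each user id by how often it appears on the chart (at most 2 in this run).
freq = df['user_id'].value_counts().to_dict()
wc = WordCloud(font_path='/System/Library/Fonts/PingFang.ttc',  # assumed font path
               background_color='white', width=800, height=400)
wc.generate_from_frequencies(freq)
plt.imshow(wc, interpolation='bilinear')
plt.axis('off')
plt.show()
```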
Next, we clean the location field and tally it; as it turns out, Beijing has the most users (the big Vs are all in Beijing). Weibo locations look like "北京 海淀區(qū)", so we keep only the first, province-level token:

```python
# Keep only the province-level part of the location, e.g. "北京 海淀區(qū)" -> "北京".
df['location'] = df['user_location'].str.split(' ').str[0]
```
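A sketch of the tally itself (the original post shows only the chart):

```python
import matplotlib.pyplot as plt

# Count appearances per province and plot the top ten.
location_counts = df['location'].value_counts()
print(location_counts.head(10))
location_counts.head(10).plot(kind='bar')
plt.show()
```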
Next, the gender ratio of the users: male users are the majority.
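A sketch of the ratio, assuming the API's 'm'/'f' gender codes and deduplicating users who appear twice:

```python
import matplotlib.pyplot as plt

# One row per user, then count 'm' vs 'f'.
gender_counts = df.drop_duplicates('user_id')['user_gender'].value_counts()
print(gender_counts)
gender_counts.plot(kind='pie', autopct='%.1f%%')
plt.show()
```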
Finally, the top ten big Vs on the list by follower count:
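A sketch of that ranking (again deduplicating users who made the list twice):

```python
# Ten distinct users with the largest follower counts.
top10 = (df.drop_duplicates('user_id')
           .nlargest(10, 'user_follower')[['user_id', 'user_follower']])
print(top10)
```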
Post Analysis
First, we process the timestamp data and extract the hour of day.
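A sketch of the extraction, assuming timestamps in the API's usual 'Tue May 31 17:46:55 +0800 2011' style (anything else, such as relative times, becomes NaT):

```python
import pandas as pd

# Parse created_at and pull out the hour of day.
df['created_dt'] = pd.to_datetime(df['created_time'],
                                  format='%a %b %d %H:%M:%S %z %Y',
                                  errors='coerce')
df['hour'] = df['created_dt'].dt.hour
print(df['hour'].value_counts().sort_index())
```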
Next, let's look at the ten posts with the most likes.
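A sketch of that ranking:

```python
# Ten posts with the highest like counts, plus who posted them.
top_liked = df.nlargest(10, 'attitudes_count')[['user_id', 'text', 'attitudes_count']]
print(top_liked)
```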
Finally, we draw a word cloud of the post text.
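A sketch of the cloud, assuming `jieba` for segmentation and the same assumed font path as above; the post text is first stripped of embedded markup and links:

```python
import re
import jieba
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Join all post text, drop HTML-ish tags and URLs, then segment into words.
raw = ' '.join(df['text'].astype(str))
raw = re.sub(r'<[^>]+>|http\S+', '', raw)
words = ' '.join(jieba.cut(raw))

wc = WordCloud(font_path='/System/Library/Fonts/PingFang.ttc',  # assumed font path
               background_color='white', width=800, height=400).generate(words)
plt.imshow(wc, interpolation='bilinear')
plt.axis('off')
plt.show()
```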