因?yàn)榧磳⒌侥臣夜久嬖嚸两荩W(wǎng)上對(duì)該公司的評(píng)價(jià)不好揖闸,所以我去查看了全部評(píng)論,突發(fā)奇想我明明會(huì)爬蟲了料身,干嘛還呆逼地10段10段地加載汤纸,所以有了下面的代碼,有缺陷存在芹血。贮泞。。
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import requests
from bs4 import BeautifulSoup
import pandas as pd
headers = {'Accept':'text/html, */*; q=0.01','Accept-Encoding':'gzip, deflate, sdch','Accept-Language':'zh-CN,zh;q=0.8','User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.75 Safari/537.36'}
cookies = {'Cookie':'aliyungf_tc=AQAAAD2MPjQrcwQAp4x2Dgdwc71am5e9; __c=1491732911; W_CITY_S_V=57; __g=-; isHasPushRecommentMessage=true; thirtyMinutes=true; isShowDownload=false; thirtyMinutesCount=2; pageType=2; ac="544397501@qq.com"; __t=ZPp3Vr6QMt1cLNx; __l=r=&l=%2Fgsr194222.html%3Fka%3Dpercent-review-list; __a=29429174.1491732911..1491732911.7.1.7.7; t=ZPp3Vr6QMt1cLNx; AB_T=abvb'}
url1 = 'http://www.kanzhun.com/gsrPage.json?companyId=194222&companyName=%E4%B8%AD%E6%95%B0%E9%80%9A&pageNum='
url2 = '&cityCode=&sortMethod=1&employeeStatus=0'
name2 = []      # accumulates the name field across all pages
score2 = []     # accumulates the score field across all pages
content2 = []   # accumulates the content field across all pages
question2 = []  # accumulates the question field across all pages
for i in range(1, 8):
    url = url1 + str(i) + url2
    response = requests.get(url, headers=headers, cookies=cookies)
    soup = BeautifulSoup(response.text, 'lxml')
    # reviewer names
    name = soup.find_all('p', class_='f_14 grey_99 dd_bot')
    for n in name:
        name2.append(n.get_text())
    # review scores
    score = soup.find_all('span', class_='grade')
    for s in score:
        score2.append(s.get_text())
    # review titles
    content = soup.find_all('h3', class_='question_title')
    for c in content:
        content2.append(c.get_text().replace('\n', ''))
    # review bodies / Q&A answers
    question = soup.find_all('p', class_='question_content')
    for q in question:
        question2.append(q.get_text().replace('\n', ''))
    # progress check: total items collected so far (the original printed
    # len(question1), the length of the last string, which was a bug)
    print(len(question2))
table = pd.DataFrame({'name':name2,'score':score2,'content':content2})
print(table)
A quick note on the code: Kanzhun loads its reviews via JS, so you need to capture the underlying requests. Screenshots of the steps follow.
Open Chrome, press F12 to bring up DevTools, click the Network tab, tick Preserve log, select the XHR filter, then reload the page. Scroll to the bottom and click "view more".
A number of URLs will now appear in the Name column. Look for the one that shows up again each time you load more (listmore in the screenshot). Open it and check its Request URL: it contains a "pageNum=" field, which holds the page number. Loop that field over a range of numbers and you can crawl every page. That is the whole packet-capture step.
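Instead of splicing the captured Request URL together by string concatenation as the script above does, the query string can be built from the `params` argument of requests. A minimal sketch, assuming the parameter names seen in the captured URL (`companyId`, `pageNum`, etc.):

```python
import requests

BASE = 'http://www.kanzhun.com/gsrPage.json'

def page_url(page_num, company_id=194222):
    # Let requests assemble and percent-encode the query string;
    # pageNum is the field found in the captured Request URL.
    params = {
        'companyId': company_id,
        'pageNum': page_num,
        'cityCode': '',
        'sortMethod': 1,
        'employeeStatus': 0,
    }
    # prepare() builds the final URL without sending the request
    return requests.Request('GET', BASE, params=params).prepare().url

print(page_url(1))
```

The same `params` dict can be passed straight to `requests.get(BASE, params=params, headers=headers, cookies=cookies)` inside the loop.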
最后,因?yàn)榭礈?zhǔn)網(wǎng)貌似改版了读整,里面有1玄坦、問答式;2绘沉、用戶自己的評(píng)論煎楣。該死的把text內(nèi)容全放在同一個(gè)class里面,這里看不懂的話自己看下源代碼就知道了车伞,所以我的代碼原本是打算將多個(gè)變量組成DataFrame择懂,方便以后分析的×砭粒可惜困曙,以上這個(gè)原因?qū)е?question"這個(gè)變量的長度超過了其它的變量,放不進(jìn)去谦去,所以只能放棄了慷丽。
PS: I don't yet know how to extract the parts hidden behind "view full text", so some reviews were only partially crawled. Something to learn next.