I've been learning data analysis recently, so I tried scraping the job postings on these two sites to use as analysis material. I ran into plenty of pitfalls along the way, so I'm writing them down here.
For the framework I went with Scrapy, which is fairly simple. I created two spider files, one for each site.
Let's look at BOSS Zhipin (BOSS直聘) first:
I had searched online for plenty of BOSS Zhipin examples and assumed it would be easy, that faking a request header would be enough... but once I got into it, I found it was nothing like that.
As usual, start by defining the data to be collected in items.py:
import scrapy

class PositionViewItem(scrapy.Item):
    # define the fields for your item here like:
    name: scrapy.Field = scrapy.Field()             # job title
    salary: scrapy.Field = scrapy.Field()           # salary
    education: scrapy.Field = scrapy.Field()        # education requirement
    experience: scrapy.Field = scrapy.Field()       # experience requirement
    jobjd: scrapy.Field = scrapy.Field()            # job ID
    district: scrapy.Field = scrapy.Field()         # district
    category: scrapy.Field = scrapy.Field()         # industry category
    scale: scrapy.Field = scrapy.Field()            # company size
    corporation: scrapy.Field = scrapy.Field()      # company name
    url: scrapy.Field = scrapy.Field()              # job posting URL
    createtime: scrapy.Field = scrapy.Field()       # publish time
    posistiondemand: scrapy.Field = scrapy.Field()  # job responsibilities
    cortype: scrapy.Field = scrapy.Field()          # company type
The above is the Item definition. Once you've worked out which values you need, each one is simply declared as a plain scrapy.Field() for now.
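The post jumps straight from the Item to the spider's attributes, so for context here is a minimal skeleton of the surrounding spider class; the class name and import lines are my assumptions, since the original omits them:

from typing import Dict
import scrapy
from scrapy.http import Request

class BossPositionSpider(scrapy.Spider):  # hypothetical class name
    # the name/url/cookies/headers attributes and the start_requests/parse
    # methods shown below all belong inside this class
    ...

The attributes shown next sit at the top of that class: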
    name: str = 'DA'
    # starting URL: the search page after entering BOSS Zhipin, querying for data-analysis jobs nationwide
    url: str = 'https://www.zhipin.com/c100010000/?query=%E6%95%B0%E6%8D%AE&page=10'
    cookies: Dict = {
        "__zp_stoken__": "bf79ElaZ4z7IK5JruWAX5j256l7CJf3k7Ag2A9mrsSPN%2FnLgjChK0LguCrB%2FtIEFMKdnysNhr4ilqIicjeHkCsCpBQ%3D%3D"
    }  # cookies
    headers: Dict = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0',
        'Referer': 'https://www.zhipin.com/web/common/security-check.html?seed=6gkgYHovIokVntQcwXUH9KW3%2FbEZsqfeaoCctIp1rE8%3D&name=f2d51032&ts=1571623520634&callbackUrl=%2Fjob_detail%2F%3Fquery%3D%25E6%2595%25B0%25E6%258D%25AE%25E5%2588%2586%25E6%259E%2590%26city%3D100010000%26industry%3D%26position%3D&srcReferer=https%3A%2F%2Fwww.zhipin.com%2Fjob_detail%2F%3Fquery%3D%25E6%2595%25B0%25E6%258D%25AE%25E5%2588%2586%25E6%259E%2590%26city%3D100010000%26industry%3D%26position%3D'
    }  # request headers
With these common parameters in place, define a start_requests method to issue the initial request:
    def start_requests(self) -> Request:
        # yield a Request that uses the default callback; the first argument is
        # the url defined above, followed by the headers and the cookies
        yield Request(self.url, headers=self.headers, cookies=self.cookies)
Scrapy's default callback is parse, so define a parse method to receive the response and extract its contents with XPath:
    def parse(self, response) -> None:
        if response.status == 200:
            # each div.job-primary block is one job posting
            PositionInfos = response.selector.xpath(r'//div[@class="job-primary"]')
            for positioninfo in PositionInfos:
                pvi = PositionViewItem()
                pvi['name'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/h3[@class="name"]/a/div[@class="job-title"]/text()').extract())
                pvi['salary'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/h3[@class="name"]/a/span[@class="red"]/text()').extract())
                pvi['education'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/p/text()').extract()[2])
                pvi['experience'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/p/text()').extract()[1])
                pvi['district'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/p/text()').extract()[0])
                pvi['corporation'] = ''.join(positioninfo.xpath(r'div[@class="info-company"]/div[@class="company-text"]/h3[@class="name"]/a/text()').extract())
                pvi['category'] = ''.join(positioninfo.xpath(r'div[@class="info-company"]/div[@class="company-text"]/p/text()').extract()[0])
                # the company <p> sometimes has only two text nodes, so fall
                # back to index 1 when index 2 is missing
                try:
                    pvi['scale'] = ''.join(positioninfo.xpath(r'div[@class="info-company"]/div[@class="company-text"]/p/text()').extract()[2])
                except IndexError:
                    pvi['scale'] = ''.join(positioninfo.xpath(r'div[@class="info-company"]/div[@class="company-text"]/p/text()').extract()[1])
                pvi['url'] = ''.join(positioninfo.xpath(r'div[@class="info-primary"]/h3[@class="name"]/a/@href').extract())
                yield pvi
            # follow the "next page" link if there is one
            nexturl = response.selector.xpath(r'//a[@ka="page-next"]/@href').extract()
            if nexturl:
                nexturl = urljoin(self.url, ''.join(nexturl))
                print(nexturl)
                yield Request(nexturl, headers=self.headers, cookies=self.cookies, callback=self.parse)
Calling .extract() on an XPath selector returns a list of everything the selector matched. If nothing matches, it returns an empty list, so indexing into it (as with extract()[2] above) raises an IndexError instead of giving you an empty value!
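If you'd rather avoid the try/except dance, Scrapy selectors also offer .get(), which returns None (or a default) instead of raising. A small sketch of that safer pattern, written to slot into the loop above; it is not what the original code uses:

# safer pattern: check the length once, then .get() never raises
p_texts = positioninfo.xpath(r'div[@class="info-primary"]/p/text()')
education = p_texts[2].get() if len(p_texts) > 2 else ''  # '' when the node is missing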
yield pvi hands the populated Item over to the pipelines, where the scraped data can be conveniently processed.
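The post doesn't show its pipeline, but for illustration, a minimal pipeline that writes each item to a CSV file might look like the sketch below; the class name, file path, and field order are my assumptions, and it has to be enabled in settings.py via ITEM_PIPELINES:

import csv

class PositionCsvPipeline:  # hypothetical pipeline, not from the original post
    FIELDS = ('name', 'salary', 'education', 'experience',
              'district', 'category', 'scale', 'corporation', 'url')

    def open_spider(self, spider):
        # open the output file once when the spider starts
        self.file = open('positions.csv', 'w', newline='', encoding='utf-8')
        self.writer = csv.writer(self.file)
        self.writer.writerow(self.FIELDS)

    def process_item(self, item, spider):
        # missing fields default to '' so a sparse item never raises KeyError
        self.writer.writerow([item.get(k, '') for k in self.FIELDS])
        return item

    def close_spider(self, spider):
        self.file.close()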
nexturl = response.selector.xpath(r'//a[@ka="page-next"]/@href').extract() grabs the link to the next page. That link then has to be merged with the source URL using urljoin from urllib.parse, because what gets scraped is not a complete URL but a relative path like
/c101010100/?query=%E6%95%B0%E6%8D%AE%E5%88%86%E6%9E%90&page=2, so urljoin is needed to combine the two. The merge rule works like this:
url = 'http://ip/', path = 'api/user/login'; urljoin(url, path) produces 'http://ip/api/user/login'.
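A quick runnable check of that behavior with the URLs from this spider:

from urllib.parse import urljoin

base = 'https://www.zhipin.com/c100010000/?query=%E6%95%B0%E6%8D%AE&page=10'
nexturl = '/c101010100/?query=%E6%95%B0%E6%8D%AE%E5%88%86%E6%9E%90&page=2'
print(urljoin(base, nexturl))
# -> https://www.zhipin.com/c101010100/?query=%E6%95%B0%E6%8D%AE%E5%88%86%E6%9E%90&page=2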
I thought that would be the end of it. But running the spider with scrapy crawl DA (the spider's name attribute), it turned out no data could be fetched: requests get 302-redirected straight to a security-check page.
Open Fiddler and look at the request flow:
You can see the whole query flow is faithfully mimicked: the address is requested directly first, then redirected to the security-check page, then switched back to the results page. It looks fine at first, but a closer inspection shows that the __zp_stoken__ in the cookies has changed:
So the picture is clear: after the security check runs, a new token is written back, and subsequent requests are validated against that latest token. It appears the token is encrypted and written back by a piece of JS. Someone on Zhihu has written up a way to decrypt it, but I don't know front-end development well enough, so I gave up...
The link to that write-up: https://zhuanlan.zhihu.com/p/83235220
The token can only be obtained by refreshing the page manually, and it usually survives just a few requests before expiring and needing to be fetched again. That said, even crawling by hand you only get about 10 pages of results; anything beyond that is hidden unless you log in, so it hardly matters.
A later attempt to simulate the browser with Selenium also failed.
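For reference, the kind of Selenium attempt meant here would look roughly like the sketch below; this is my reconstruction, not the post's code, and per the post it still did not get past the security check:

from selenium import webdriver

# drive a real browser so the security-check JS runs and sets __zp_stoken__ itself
driver = webdriver.Firefox()
driver.get('https://www.zhipin.com/c100010000/?query=%E6%95%B0%E6%8D%AE&page=1')
cookies = {c['name']: c['value'] for c in driver.get_cookies()}  # hand these to Scrapy
driver.quit()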
All in all, not much of a success; for now I can't really recommend this route...