First, a note on the data source: the data was scraped from BOSS直聘 (Boss Zhipin) job postings for the position "数据分析" (data analyst). The article analyzes the overall salary picture for data analysts, salary distributions across cities and across education levels, salary by years of work experience in Beijing and Shanghai, demand for data-analysis positions in Beijing, Shanghai, Guangzhou, and Shenzhen, and finishes with a word cloud of the industries the hiring companies belong to.
1. Data collection
2. Data cleaning and processing
3. Data analysis
Data collection
import json
import time

import pymongo
import pymysql
import requests
from fake_useragent import UserAgent
from lxml import etree
from requests import RequestException

mongo_url = 'localhost'
mongo_db = 'zhaopin'
ua = UserAgent()


class Boss(object):
    def __init__(self):
        self.url = 'https://www.zhipin.com/{}/?query=数据分析&page={}'
        self.headers = {'user-agent': ua.random,
                        'referer': 'https://www.zhipin.com/c101020100/?query=%E6%95%B0%E6%8D%AE%E5%88%86%E6%9E%90&page=1',
                        'cookie': ''}
        self.client = pymongo.MongoClient(mongo_url)
        self.db = self.client[mongo_db]
        # City name -> BOSS直聘 city code used in the URL path
        self.cityList = {'广州': 'c101280100', '北京': 'c101010100', '上海': 'c101020100',
                         '深圳': 'c101280600', '杭州': 'c101210100', '天津': 'c101030100',
                         '西安': 'c101110100', '苏州': 'c101190400', '武汉': 'c101200100',
                         '厦门': 'c101230200', '长沙': 'c101250100', '成都': 'c101270100',
                         '郑州': 'c101180100', '重庆': 'c101040100'}

    # def get_proxy(self):
    #     PROXY_POOL_URL = 'http://localhost:5555/random'
    #     try:
    #         response = requests.get(PROXY_POOL_URL)
    #         if response.status_code == 200:
    #             return response.text
    #     except ConnectionError:
    #         return None

    def get_one_page(self, url):
        """Fetch one listing page; return its HTML, or None on failure."""
        try:
            # proxy = self.get_proxy()
            # proxies = {'http': proxy}
            response = requests.get(url, headers=self.headers)
            if response.status_code == 200:
                return response.text
            return None
        except RequestException:
            print("Request failed")

    def parse_one_page(self, html):
        """Extract one job record per listing on the page."""
        html = etree.HTML(html)
        content = html.xpath("//li/div[@class='job-primary']")
        for con in content:
            pos_name = con.xpath(".//div[@class='job-title']/text()")[0]
            comp_name = con.xpath(".//div[@class='info-company']/div/h3/a/text()")[0]
            salary = con.xpath(".//h3/a/span/text()")[0]
            scale = con.xpath("./div[@class='info-company']//p/text()[last()]")[0]
            education = con.xpath("./div/p/text()[3]")[0]
            industry = con.xpath(".//div[@class='company-text']/p//text()")[0]
            workyear = con.xpath("./div[@class='info-primary']/p/text()")[1]
            location = con.xpath("./div[@class='info-primary']/p/text()")[0]
            item = {'pos_name': pos_name,
                    'comp_name': comp_name,
                    'salary': salary,
                    'scale': scale,
                    'education': education,
                    'industry': industry,
                    'workyear': workyear,
                    'location': location}
            yield item

    def write_to_file(self, item):
        with open('boss.txt', 'a', encoding='utf-8') as f:
            f.write(json.dumps(item, ensure_ascii=False) + '\n')

    def write_to_csv(self, item):
        with open('爬虫BOSS直聘.txt', 'a', encoding='utf-8') as file:
            line = str(item['pos_name']) + ',' + str(item['comp_name']) + ',' + str(item['salary']) + ',' + \
                   str(item['scale']) + ',' + str(item['education']) + ',' + str(item['industry']) + ',' + \
                   str(item['workyear']) + ',' + str(item['location']) + '\n'
            file.write(line)

    def save_to_mongo(self, item):
        if self.db['boss'].insert_one(item):
            print("save successfully")

    def save_to_mysql(self, item):
        conn = pymysql.connect(host='localhost', user='root', password='', db='test7',
                               port=3306, charset='utf8')
        cur = conn.cursor()
        insert_data = ("INSERT INTO boss(pos_name, comp_name, salary, scale, education, "
                       "industry, workyear, location) VALUES(%s, %s, %s, %s, %s, %s, %s, %s)")
        val = (item['pos_name'], item['comp_name'], item['salary'], item['scale'],
               item['education'], item['industry'], item['workyear'], item['location'])
        cur.execute(insert_data, val)
        conn.commit()
        cur.close()
        conn.close()

    def run(self):
        title = 'posName,companyName,salary,scale,education,industry,workyear,location\n'
        # Create the output file and write the CSV header
        with open('爬虫BOSS直聘.txt', 'w', encoding='utf-8') as file:
            file.write(title)
        # Scrape the first 10 result pages for each city
        for city in self.cityList.values():
            for page in range(1, 11):
                url = self.url.format(city, page)
                response = self.get_one_page(url)
                if not response:
                    continue
                for item in self.parse_one_page(response):
                    self.write_to_csv(item)
                time.sleep(3)


if __name__ == '__main__':
    boss = Boss()
    boss.run()
Data cleaning and processing
First, look at the scraped location field: it is far too detailed, so we keep only the first two characters, i.e. the city name.
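That truncation is a one-liner with pandas' string accessor. A minimal sketch (the sample rows below are illustrative, not taken from the scraped file):

```python
import pandas as pd

# Illustrative location values; the real data is read from 爬虫BOSS直聘.txt
df = pd.DataFrame({'location': ['北京 朝阳区 酒仙桥', '上海 浦东新区', '深圳 南山区']})

# Keep only the first two characters, i.e. the city name
df['city'] = df['location'].str.strip().str[:2]
print(df['city'].tolist())  # ['北京', '上海', '深圳']
```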
The salary format also has a problem: it is a range (e.g. "15k-25k"), so we use a function to clean each salary into a minimum, maximum, and average, which is easier to analyze.
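One possible implementation of that function, assuming the "15k-25k" range format shown on BOSS直聘 listings:

```python
import re

def split_salary(salary):
    """Split a salary range string such as '15k-25k' into
    (minimum, maximum, average), in units of k RMB per month."""
    low, high = (int(n) for n in re.findall(r'\d+', salary)[:2])
    return low, high, (low + high) / 2

print(split_salary('15k-25k'))  # (15, 25, 20.0)
```

Applied column-wise (e.g. with `df['salary'].apply(split_salary)`), this yields the three derived columns used in the charts below.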
Data analysis
Overall salary distribution
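A minimal sketch of how the overall distribution can be summarized with pandas. The numbers below are made-up average salaries (in k RMB), not the scraped results:

```python
import pandas as pd

# Hypothetical per-posting average salaries; in practice this is the
# 'average' column produced during cleaning
avg_salary = pd.Series([10.0, 12.5, 20.0, 17.5, 30.0])

stats = avg_salary.describe()
print(stats['mean'], stats['50%'])  # 18.0 17.5
# avg_salary.plot.hist() would draw the distribution chart
```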
Salary distribution across cities
Salary distribution across education levels
A closer look at the detailed hiring numbers
Now the distribution of required work experience in Beijing and Shanghai
Demand for data-analysis positions in Beijing, Shanghai, Guangzhou, and Shenzhen
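Counting postings per city is a `value_counts` away; the sample city column below is illustrative:

```python
import pandas as pd

# Illustrative city column extracted from the cleaned data
cities = pd.Series(['北京', '上海', '北京', '深圳', '广州', '北京', '上海'])

# Postings per city, sorted in descending order of demand
demand = cities.value_counts()
print(demand.index[0], demand['北京'])  # 北京 3
```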
Finally, a word-cloud analysis of the industries the hiring companies belong to
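The word cloud boils down to industry frequencies. A sketch with `collections.Counter` (the sample industry labels are illustrative); the resulting mapping can be handed to `wordcloud.WordCloud().generate_from_frequencies`:

```python
from collections import Counter

# Illustrative industry labels from the scraped postings
industries = ['互联网', '移动互联网', '电子商务', '互联网', '金融', '互联网', '电子商务']

freq = Counter(industries)
# freq can be passed to wordcloud.WordCloud(font_path='simhei.ttf')
#     .generate_from_frequencies(freq)  -- a CJK font_path is needed for Chinese text
print(freq.most_common(1))  # [('互联网', 3)]
```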
We can see that demand for data analysts is concentrated in the internet, mobile internet, e-commerce, and finance sectors, so applying to these fields gives a much higher chance of success.