I: Preface
Continuing my practice with the Scrapy framework, this time the target is the latest job postings on Shixiseng (shixiseng.com), including the position, posting date, salary, education requirements, job perks, and the full job description. The data is then saved to MongoDB and to a JSON file for later use. The crawler ran into quite a few problems along the way, such as passing values between different parse callbacks, combining XPath with regular expressions, and the varying page structure of the job description, all of which were solved one by one. Code: https://github.com/rieuse/ScrapyStudy
II: Runtime Environment
- IDE: PyCharm 2017
- Python 3.6
- pymongo 3.4.0
- scrapy 1.3.3
III: Page Analysis
1. First open the site shixiseng.com and click through to the full job listings; the page looks like the screenshot below.
Next, click "latest postings" and watch how the URL changes. After moving to the next page the URL becomes http://www.shixiseng.com/interns?t=zj&p=2, which tells us that the number after "p=" is the page index. We can therefore generate every listing page we want to crawl with a list comprehension:
start_urls = ['http://www.shixiseng.com/interns?t=zj&p={}'.format(n) for n in range(1, 501)]
2. With the listing URL understood, open the browser's developer tools and look at the link of each individual posting. The href is only a relative path without the domain, so we have to build the full detail-page URL ourselves:
links = response.xpath('//div[@class="po-name"]/div[1]/a/@href').extract()
for link in links:
    dlink = 'http://www.shixiseng.com' + link
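As a side note (not part of the original spider), Scrapy responses also provide urljoin(), which resolves the relative href without hard-coding the domain. A minimal sketch of the same loop inside the parse callback:

```python
# Hypothetical alternative to string concatenation: let the response
# resolve each relative href against its own URL.
for link in response.xpath('//div[@class="po-name"]/div[1]/a/@href').extract():
    dlink = response.urljoin(link)  # e.g. http://www.shixiseng.com/intern/...
```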
3. On the detail page, the data to be captured is shown in the figure below. It is extracted with XPath; a few points deserve attention.
Calling re_first() on a selector returns the first substring that matches the given regular expression, applied to the text the XPath selects. The row containing salary, location, education and time uses "|" as a separator, so where necessary it is enough to match only the Chinese characters (or the non-whitespace part).
item['name'] = response.xpath('//*[@id="container"]/div[1]/div[1]/div[1]/span[1]/span/text()').extract_first()
item['link'] = response.meta['link']
item['company'] = response.xpath('//*[@id="container"]/div[1]/div[2]/div[1]/p[1]/a/text()').extract_first()
item['place'] = response.xpath('//*[@id="container"]/div[1]/div[1]/div[2]/span[2]/@title').extract_first()
item['education'] = response.xpath('//*[@id="container"]/div[1]/div[1]/div[2]/span[3]/text()').re_first(r'[\u4e00-\u9fa5]+')
item['people'] = response.xpath('//*[@id="container"]/div[1]/div[2]/div[1]/p[2]/span/text()').extract()[1]
item['money'] = response.xpath('//*[@id="container"]/div[1]/div[1]/div[2]/span[1]/text()').re_first(r'[^\s]+')
item['week'] = response.xpath('//*[@id="container"]/div[1]/div[1]/div[2]/span[4]/text()').extract_first()
item['month'] = response.xpath('//*[@id="container"]/div[1]/div[1]/div[2]/span[5]/text()').extract_first()
item['lure'] = response.xpath('//*[@id="container"]/div[1]/div[1]/div[2]/p/text()').extract_first()
item['description'] = response.xpath('//div[@class="dec_content"]/*/text()|//div[@class="dec_content"]/text()').extract()
item['data'] = response.xpath('//*[@id="container"]/div[1]/div[1]/p[3]/text()').extract()
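These expressions are easiest to verify interactively. Below is a minimal sketch of how re_first() strips the "|" separator; the sample HTML is made up for illustration and is not taken from the real page:

```python
from scrapy.selector import Selector

# Fabricated snippet imitating the salary/education spans on the detail page.
sel = Selector(text='<span class="money">120-150/天 |</span><span class="edu">本科 |</span>')
print(sel.xpath('//span[@class="money"]/text()').re_first(r'[^\s]+'))         # 120-150/天
print(sel.xpath('//span[@class="edu"]/text()').re_first(r'[\u4e00-\u9fa5]+'))  # 本科
```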
IV: The Code
Having analyzed the page structure and worked out how to extract the data, we can now complete the task with the Scrapy framework. Full code: github.com/rieuse/ScrapyStudy
1. First create a new Scrapy project from the command line, then generate a spider inside it:
scrapy startproject ShiXiSeng
cd ShiXiSeng\ShiXiSeng\spiders
scrapy genspider shixi shixiseng.com
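After these commands the project should look roughly like the tree below (depending on the Scrapy version, middlewares.py may not be generated automatically and has to be created by hand):

```
ShiXiSeng/
├── scrapy.cfg
└── ShiXiSeng/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        └── shixi.py
```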
2. Open items.py in the ShiXiSeng package and replace it with the following code, which defines the fields we are going to scrape.
import scrapy
class ShixisengItem(scrapy.Item):
    name = scrapy.Field()
    link = scrapy.Field()
    company = scrapy.Field()
    people = scrapy.Field()
    place = scrapy.Field()
    education = scrapy.Field()
    money = scrapy.Field()
    week = scrapy.Field()
    month = scrapy.Field()
    lure = scrapy.Field()
    description = scrapy.Field()
    data = scrapy.Field()
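For reference, a Scrapy Item is filled and read like a dict, which is exactly what lets the pipelines below call dict(item). A hypothetical interactive check (the value is made up, not project code):

```python
from ShiXiSeng.items import ShixisengItem

item = ShixisengItem()
item['name'] = 'Python 實習生'   # made-up value for illustration
print(dict(item))                # {'name': 'Python 實習生'}
```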
3. Next configure middlewares.py. Together with the User-Agent related settings in settings.py, this lets the downloader pick a random UA for each request, which helps a bit against getting banned. Add the following code to the existing file; more entries can be appended to user_agent_list.
import random
from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware


class RotateUserAgentMiddleware(UserAgentMiddleware):
    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        ua = random.choice(self.user_agent_list)
        if ua:
            print(ua)
            request.headers.setdefault('User-Agent', ua)

    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    ]
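A quick way to sanity-check the middleware outside of a full crawl is to feed it a request directly. A minimal sketch, not part of the project (the spider argument is unused by this middleware, so None is fine):

```python
from scrapy.http import Request
from ShiXiSeng.middlewares import RotateUserAgentMiddleware

mw = RotateUserAgentMiddleware()
req = Request('http://www.shixiseng.com')
mw.process_request(req, spider=None)
print(req.headers.get('User-Agent'))   # one of the entries from user_agent_list
```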
4. To be clear about the goal: the scraped data is to be saved both to a MongoDB database and to a local JSON file, so pipelines.py needs to be set up as follows.
import json
import pymongo
from scrapy.conf import settings


class ShixisengPipeline(object):
    def __init__(self):
        self.client = pymongo.MongoClient(host=settings['MONGO_HOST'], port=settings['MONGO_PORT'])
        self.db = self.client[settings['MONGO_DB']]
        self.post = self.db[settings['MONGO_COLL']]

    def process_item(self, item, spider):
        postItem = dict(item)
        self.post.insert(postItem)
        return item


class JsonWriterPipeline(object):
    def __init__(self):
        self.file = open('shixi.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        line = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.file.write(line)
        return item

    def close_spider(self, spider):  # called by Scrapy when the spider finishes
        self.file.close()
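Note that from scrapy.conf import settings is the older, global way of reaching the settings. As an alternative sketch (not what this project uses), the same MongoDB pipeline could receive its settings through the standard from_crawler hook instead:

```python
import pymongo


class MongoPipelineSketch(object):
    """Hypothetical variant of ShixisengPipeline that avoids the global scrapy.conf."""

    def __init__(self, host, port, db, coll):
        self.client = pymongo.MongoClient(host=host, port=port)
        self.post = self.client[db][coll]

    @classmethod
    def from_crawler(cls, crawler):
        # Pull the same MONGO_* keys from the crawler's settings object.
        s = crawler.settings
        return cls(s.get('MONGO_HOST'), s.getint('MONGO_PORT'),
                   s.get('MONGO_DB'), s.get('MONGO_COLL'))

    def process_item(self, item, spider):
        self.post.insert(dict(item))
        return item
```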
5. settings.py also needs a few changes so that the pipelines and the related configuration are actually enabled and the data can be saved.
BOT_NAME = 'ShiXiSeng'
SPIDER_MODULES = ['ShiXiSeng.spiders']
NEWSPIDER_MODULE = 'ShiXiSeng.spiders'
# MongoDB configuration
MONGO_HOST = "127.0.0.1"  # host IP
MONGO_PORT = 27017  # port
MONGO_DB = "Shixiseng"  # database name
MONGO_COLL = "info"  # collection name
# Enable the item pipelines defined in pipelines.py
ITEM_PIPELINES = {
'ShiXiSeng.pipelines.JsonWriterPipeline': 300,
'ShiXiSeng.pipelines.ShixisengPipeline': 300,
}
# Enable the random User-Agent downloader middleware
DOWNLOADER_MIDDLEWARES = {
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
'ShiXiSeng.middlewares.RotateUserAgentMiddleware': 400,
}
ROBOTSTXT_OBEY = False  # do not obey the site's robots.txt rules
DOWNLOAD_DELAY = 1  # time to wait before downloading consecutive pages from the same site; throttles the crawl and eases the load on the server
COOKIES_ENABLED = False  # disable cookies
6. The most important part this time is shixi.py under the spiders directory.
# -*- coding: utf-8 -*-
import scrapy
from ShiXiSeng.items import ShixisengItem
from scrapy import Request


class ShixiSpider(scrapy.Spider):
    name = "shixi"
    allowed_domains = ["shixiseng.com"]
    start_urls = ['http://www.shixiseng.com/interns?t=zj&p={}'.format(n) for n in range(1, 501)]

    headers = {
        'Host': 'www.shixiseng.com',
        'Connection': 'keep-alive',
        'Cache-Control': 'max-age=0',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Upgrade-Insecure-Requests': '1',
        'Referer': 'http://www.shixiseng.com',
        'Accept-Encoding': 'gzip,deflate,sdch',
        'Accept-Language': 'zh-CN,zh;q=0.8,en;q=0.6'
    }

    def parse(self, response):
        links = response.xpath('//div[@class="po-name"]/div[1]/a/@href').extract()
        for link in links:
            dlink = 'http://www.shixiseng.com' + link
            yield Request(dlink, meta={'link': dlink}, headers=self.headers, callback=self.parser_detail)

    def parser_detail(self, response):
        item = ShixisengItem()
        item['name'] = response.xpath('//*[@id="container"]/div[1]/div[1]/div[1]/span[1]/span/text()').extract_first()
        item['link'] = response.meta['link']
        item['company'] = response.xpath('//*[@id="container"]/div[1]/div[2]/div[1]/p[1]/a/text()').extract_first()
        item['place'] = response.xpath('//*[@id="container"]/div[1]/div[1]/div[2]/span[2]/@title').extract_first()
        item['education'] = response.xpath('//*[@id="container"]/div[1]/div[1]/div[2]/span[3]/text()').re_first(r'[\u4e00-\u9fa5]+')
        item['people'] = response.xpath('//*[@id="container"]/div[1]/div[2]/div[1]/p[2]/span/text()').extract()[1]
        item['money'] = response.xpath('//*[@id="container"]/div[1]/div[1]/div[2]/span[1]/text()').re_first(r'[^\s]+')
        item['week'] = response.xpath('//*[@id="container"]/div[1]/div[1]/div[2]/span[4]/text()').extract_first()
        item['month'] = response.xpath('//*[@id="container"]/div[1]/div[1]/div[2]/span[5]/text()').extract_first()
        item['lure'] = response.xpath('//*[@id="container"]/div[1]/div[1]/div[2]/p/text()').extract_first()
        item['description'] = response.xpath('//div[@class="dec_content"]/*/text()|//div[@class="dec_content"]/text()').extract()
        item['data'] = response.xpath('//*[@id="container"]/div[1]/div[1]/p[3]/text()').extract()[0]
        yield item
① This involves passing values between different parse callbacks. The requests built from start_urls are handled first by parse(), whose job is to collect the link of every posting on the listing page. For each link it yields a Request(), and the meta parameter (a dict) is used to pass the link along to parser_detail(), which then does the final extraction of the data we want.
② XPath is combined with regular expressions to pull out exactly the data we want.
V: Summary
Running the spider fetches all of Shixiseng's latest postings, roughly 5,000 records in total, which are scraped quickly and saved to the MongoDB database and the local JSON file.
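After running scrapy crawl shixi from the project directory, the stored records can be checked with a few lines of pymongo. A minimal sketch assuming a local MongoDB instance and the settings above:

```python
import pymongo

client = pymongo.MongoClient('127.0.0.1', 27017)
coll = client['Shixiseng']['info']
print(coll.count())      # total number of postings stored
print(coll.find_one())   # one sample document
```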
All the code is on GitHub. If you find it useful, feel free to star and follow the repo; everyone is welcome to learn and discuss together: https://github.com/rieuse