Preface
When Scrapy crawls a site, the first run is usually a full crawl; after that, only incremental crawls are needed, and a crawl that was interrupted has to be resumed. In all of these cases only the URLs that have not yet been crawled should be fetched, while URLs that were already crawled should be skipped. To improve efficiency, each URL should therefore be checked before it is requested: if it has already been crawled, discard it; otherwise hand it to the scheduler, which schedules the download.
As the Scrapy architecture diagram shows, there are two important kinds of middleware: Downloader Middlewares and Spider Middlewares. Every request yielded with scrapy.Request() in a spider passes through the spider middlewares. The official documentation describes the process_spider_output(response, result, spider) method of scrapy.contrib.spidermiddleware.SpiderMiddleware as follows:
This method is called when the Spider has processed a response and returns its result.
So both the item objects and the Request objects yielded in the spider's parse method pass through this method, which makes it the right place for the incremental check: if a URL has already been crawled, drop it; otherwise hand it to the scheduler. This post implements incremental crawling with a spider middleware, without changing the original spider.
具體實(shí)現(xiàn)
步驟一衡便、
新建數(shù)據(jù)庫(kù)操作文件db.py實(shí)現(xiàn)的功能:
- mysql數(shù)據(jù)庫(kù)的配置信息
- 根據(jù)origin_url字段判斷url在數(shù)據(jù)庫(kù)中是否已經(jīng)存在
- 管道中使用的插入數(shù)據(jù)方法
- 為避免sql查詢時(shí)的數(shù)據(jù)庫(kù)連接的反復(fù)建立,使用單例模式
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
import logging

import pymysql


class DB_MySQL():
    '''Database helper class.'''
    HOST = 'localhost'
    DBNAME = 'hebei'
    USER = 'root'
    PASSWD = '123456'
    PORT = '3306'
    CHARSET = 'utf8'

    def __init__(self):
        self.conn = pymysql.connect(host=self.HOST, port=int(self.PORT), user=self.USER,
                                    passwd=self.PASSWD, db=self.DBNAME, charset=self.CHARSET)
        self.cur = self.conn.cursor()

    def insert(self, item):
        '''Insert one item (a dict-like object) into the news table.'''
        try:
            fields = list(item.keys())
            sql = 'insert into news(%s) values(%s)' % (','.join(fields), ','.join(['%s'] * len(fields)))
            self.cur.execute(sql, [item[x] for x in fields])
            self.conn.commit()
        except Exception as e:
            logging.error('MySQL insert failed: %s' % str(e))

    def url_is_exist(self, url):
        '''Return True if origin_url is already present in the news table.'''
        try:
            if self.cur.execute('select 1 from news where origin_url = %s limit 1', (url,)):
                return True
            return False
        except Exception as e:
            logging.error('MySQL origin_url existence check failed: ' + str(e))
            return False

    def close(self):
        self.cur.close()
        self.conn.close()


# Module-level single instance shared by the middleware and the pipeline.
db_mysql = DB_MySQL()
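A quick way to sanity-check db.py outside Scrapy; the URL and the title field below are made up for illustration:

```python
from db import db_mysql   # run as a plain script next to db.py

url = 'http://www.example.com/news/1.html'            # hypothetical URL
if not db_mysql.url_is_exist(url):
    db_mysql.insert({'title': 'demo', 'origin_url': url})
print(db_mysql.url_is_exist(url))                     # True once the row exists
```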
中間件實(shí)現(xiàn)
In process_spider_output(), each yielded object is first checked to see whether it is a Request; if it is, its url attribute is looked up in the database. If the URL already exists, the request is dropped by yielding None; otherwise the request is yielded on and handed to the scheduler.
from scrapy import signals
from scrapy.http import Request

from .db import db_mysql   # db.py from step 1, assumed to live in the same package


class HbPolicyNewsSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            if isinstance(i, Request):
                referer = i.headers[b'Referer'] if b'Referer' in i.headers.keys() else ''
                if db_mysql.url_is_exist(i.url):
                    spider.logger.debug('URL already crawled, dropping request: %s , referer: %s' % (i.url, referer))
                    yield None
                else:
                    spider.logger.debug('New URL, scheduling request: %s , referer: %s' % (i.url, referer))
                    yield i
            else:
                yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
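Step 1 mentioned that the insert method is used in the item pipeline. The post does not show that pipeline, but a minimal sketch of it (in pipelines.py; the class name and import path are assumptions) could look like this:

```python
from hb_policy_news.db import db_mysql   # assumed location of db.py in the project


class HbPolicyNewsPipeline(object):
    def process_item(self, item, spider):
        # Store every scraped item; the saved origin_url is what the spider
        # middleware checks against on the next incremental run.
        db_mysql.insert(dict(item))
        return item
```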
Enable the middleware in settings.py
SPIDER_MIDDLEWARES = {
    'hb_policy_news.middlewares.HbPolicyNewsSpiderMiddleware': 543,
}
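If the pipeline sketched above is used, it also needs to be registered; the path and priority here are assumptions:

```python
ITEM_PIPELINES = {
    'hb_policy_news.pipelines.HbPolicyNewsPipeline': 300,
}
```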