CrawlSpider is a subclass of Spider. The Spider class is designed to crawl only the pages listed in start_urls, whereas CrawlSpider defines rules (Rule objects) that provide a convenient mechanism for following links, so it is better suited to jobs that extract links from crawled pages and keep crawling them.
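As a rough sketch of what that looks like in practice (using the public practice site quotes.toscrape.com rather than the project built later in this article), a CrawlSpider usually only declares rules and callbacks; the link following itself is handled by the base class:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class QuotesCrawlSpider(CrawlSpider):
    name = 'quotes_crawl'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/']

    rules = (
        # Follow every pagination link and parse each page with parse_page
        Rule(LinkExtractor(allow=r'/page/\d+/'), callback='parse_page', follow=True),
    )

    def parse_page(self, response):
        for quote in response.css('div.quote'):
            yield {'text': quote.css('span.text::text').extract_first()}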
I. Analysing the CrawlSpider source code
Annotated excerpt (imports omitted):
class CrawlSpider(Spider):

    rules = ()

    def __init__(self, *a, **kw):
        super(CrawlSpider, self).__init__(*a, **kw)
        self._compile_rules()

    # parse() is called first for the responses returned for start_urls.
    # It hands each response to _parse_response(), with parse_start_url()
    # as the callback and the follow flag set to True, so parse() yields
    # both items and the follow-up Request objects.
    def parse(self, response):
        return self._parse_response(response, self.parse_start_url, cb_kwargs={}, follow=True)

    # Handles the responses returned for start_urls; override it as needed
    def parse_start_url(self, response):
        return []

    def process_results(self, response, results):
        return results

    # Extracts the links in the response that match any user-defined Rule
    # and wraps them in Request objects
    def _requests_to_follow(self, response):
        if not isinstance(response, HtmlResponse):
            return
        seen = set()
        # Extract all links in the response; matching any single rule is enough
        for n, rule in enumerate(self._rules):
            links = [l for l in rule.link_extractor.extract_links(response) if l not in seen]
            # Run the user-supplied process_links over the extracted links
            if links and rule.process_links:
                links = rule.process_links(links)
            # Add each link to the seen set and build a Request for it,
            # with _response_downloaded() as the callback
            for link in links:
                seen.add(link)
                # Build the Request; the callback defined in the Rule is applied
                # later, when _response_downloaded() calls _parse_response()
                r = Request(url=link.url, callback=self._response_downloaded)
                r.meta.update(rule=n, link_text=link.text)
                # Pass each Request through process_request(), which defaults
                # to the identity function, i.e. the Request is returned unchanged
                yield rule.process_request(r)

    # Handles responses downloaded for Rule-extracted links and returns
    # items as well as further Requests
    def _response_downloaded(self, response):
        rule = self._rules[response.meta['rule']]
        return self._parse_response(response, rule.callback, rule.cb_kwargs, rule.follow)

    # Parses a response with the given callback and yields Request or Item objects
    def _parse_response(self, response, callback, cb_kwargs, follow=True):
        # First check whether a callback is set (either a Rule callback or
        # parse_start_url). If so, the callback processes the response,
        # process_results post-processes its output, and each result is yielded.
        if callback:
            # When invoked from parse(), the results are follow-up Requests;
            # when invoked with a Rule callback, the results are usually Items
            cb_res = callback(response, **cb_kwargs) or ()
            cb_res = self.process_results(response, cb_res)
            for requests_or_item in iterate_spider_output(cb_res):
                yield requests_or_item
        # If following is enabled, extract further Requests with the defined Rules
        if follow and self._follow_links:
            # Yield each follow-up Request
            for request_or_item in self._requests_to_follow(response):
                yield request_or_item

    def _compile_rules(self):
        def get_method(method):
            if callable(method):
                return method
            elif isinstance(method, basestring):
                return getattr(self, method, None)

        self._rules = [copy.copy(r) for r in self.rules]
        for rule in self._rules:
            rule.callback = get_method(rule.callback)
            rule.process_links = get_method(rule.process_links)
            rule.process_request = get_method(rule.process_request)

    def set_crawler(self, crawler):
        super(CrawlSpider, self).set_crawler(crawler)
        self._follow_links = crawler.settings.getbool('CRAWLSPIDER_FOLLOW_LINKS', True)
二弹灭、 CrawlSpider爬蟲文件字段的介紹
1督暂、 CrawlSpider繼承于Spider類,除了繼承過來的屬性外(name穷吮、allow_domains)逻翁,還提供了新的屬性和方法:class scrapy.linkextractors.LinkExtractor
Link Extractors 的目的很簡單: 提取鏈接?每個(gè)LinkExtractor有唯一的公共方法是 extract_links(),它接收一個(gè) Response 對象酒来,并返回一個(gè) scrapy.link.Link 對象卢未。
Link Extractors要實(shí)例化一次,并且 extract_links 方法會(huì)根據(jù)不同的 response 調(diào)用多次提取鏈接?
class scrapy.linkextractors.LinkExtractor(
allow = (),
deny = (),
allow_domains = (),
deny_domains = (),
deny_extensions = None,
restrict_xpaths = (),
tags = ('a','area'),
attrs = ('href',),
canonicalize = True,
unique = True,
process_value = None
)
Main parameters:
① allow: only URLs matching this regular expression (or list of regular expressions) are extracted; if empty, everything matches.
② deny: URLs matching this regular expression (or list of regular expressions) are excluded and never extracted.
③ allow_domains: only links within these domains are extracted.
④ deny_domains: links within these domains are never extracted.
⑤ restrict_xpaths: XPath expressions that, together with allow, restrict the regions of the page links are extracted from.
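As promised above, a short usage sketch (the URL patterns and the pager XPath are placeholders, not taken from the project below): the LinkExtractor is created once and extract_links() is then called on each response.

from scrapy.linkextractors import LinkExtractor

# Instantiate once and reuse across responses
pagination_extractor = LinkExtractor(
    allow=(r'/page/\d+/',),                        # keep pagination-style URLs only
    deny=(r'/login',),                             # never extract login links
    restrict_xpaths=("//div[@class='pager']",),    # look inside the pager block only
)

def pagination_links(response):
    # extract_links() returns a list of scrapy.link.Link objects,
    # each carrying .url and .text attributes
    return [link.url for link in pagination_extractor.extract_links(response)]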
2. rules contains one or more Rule objects, each of which defines a specific behaviour for crawling the site. If several rules match the same link, the first rule defined in the tuple is the one used.
class scrapy.spiders.Rule(
link_extractor,
callback = None,
cb_kwargs = None,
follow = None,
process_links = None,
process_request = None
)
① link_extractor: a Link Extractor object that defines which links to extract.
② callback: the callback invoked for every link obtained through link_extractor; it receives the response as its first argument.
Note: when writing crawl rules, avoid using parse as the callback. CrawlSpider uses the parse method to implement its own logic, so overriding parse breaks the spider.
③ follow: a boolean specifying whether links extracted from a response by this rule should themselves be followed. If callback is None, follow defaults to True; otherwise it defaults to False.
④ process_links: the spider method (or callable) invoked with the list of links extracted by link_extractor; it is mainly used for filtering.
⑤ process_request: the spider method (or callable) invoked for every request extracted by this rule; it can be used to filter or modify requests (see the sketch after this list).
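A hedged sketch of how process_links and process_request can be combined in a Rule (the URL pattern, the filtering logic and the method names are invented for illustration; the single-argument process_request matches the source shown above):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule

def drop_ad_links(links):
    # process_links receives the list extracted by the LinkExtractor;
    # anything filtered out here is never requested
    return [link for link in links if 'ad.' not in link.url]

def tag_request(request):
    # process_request sees every Request built from this rule;
    # returning None instead would drop the request entirely
    request.meta['from_rule'] = 'detail'
    return request

detail_rule = Rule(
    LinkExtractor(allow=r'/detail/\d+\.html'),   # placeholder URL pattern
    callback='parse_detail',                     # a spider method name, never 'parse'
    follow=False,                                # the default once a callback is given
    process_links=drop_ad_links,
    process_request=tag_request,
)

A rule like this goes into the spider's rules tuple; string values such as 'parse_detail' are resolved to spider methods when the rules are compiled, while process_links and process_request also accept plain callables as shown here.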
3缤沦、Scrapy提供了log功能,可以通過 logging 模塊使用易稠「追希可以修改配置文件settings.py,任意位置添加下面兩行驶社,效果會(huì)清爽很多企量。
LOG_FILE = "TencentSpider.log"
LOG_LEVEL = "INFO"
Scrapy provides five logging levels:
① CRITICAL - critical errors
② ERROR - regular errors
③ WARNING - warning messages
④ INFO - informational messages
⑤ DEBUG - debugging messages
Logging is configured with the following settings in settings.py:
① LOG_ENABLED (default: True): enables logging.
② LOG_ENCODING (default: 'utf-8'): the encoding used for logging.
③ LOG_FILE (default: None): the file name, created in the current directory, to write log output to.
④ LOG_LEVEL (default: 'DEBUG'): the minimum level to log.
⑤ LOG_STDOUT (default: False): if True, all standard output (and standard error) of the process is redirected to the log; for example, the output of print "hello" will show up in the Scrapy log.
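For illustration, a settings.py fragment pulling these options together might look like this (the file name is just an example carried over from above):

# settings.py -- illustrative values
LOG_ENABLED = True                # logging is on (the default)
LOG_ENCODING = 'utf-8'
LOG_FILE = 'TencentSpider.log'    # write the log to a file instead of the console
LOG_LEVEL = 'INFO'                # suppress DEBUG noise
LOG_STDOUT = True                 # also redirect print output into the log

Inside a spider, messages can then be emitted with self.logger.info(...) (or through the standard logging module), and only records at INFO level or above end up in TencentSpider.log.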
III. A CrawlSpider case study
1. Create the project: scrapy startproject CrawlYouYuan
2. Create the spider file: scrapy genspider -t crawl youyuan youyuan.com
3. Project file walkthrough
items.py
The item model
import scrapy

class CrawlyouyuanItem(scrapy.Item):
    # Username
    username = scrapy.Field()
    # Age
    age = scrapy.Field()
    # Avatar image URL
    header_url = scrapy.Field()
    # Album image URLs
    images_url = scrapy.Field()
    # Inner monologue
    content = scrapy.Field()
    # Hometown
    place_from = scrapy.Field()
    # Education
    education = scrapy.Field()
    # Hobbies
    hobby = scrapy.Field()
    # Personal profile page URL
    source_url = scrapy.Field()
    # Source website
    source = scrapy.Field()
    # UTC time
    time = scrapy.Field()
    # Spider name
    spidername = scrapy.Field()
youyuan.py
The spider
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from CrawlYouYuan.items import CrawlyouyuanItem
import re

class YouyuanSpider(CrawlSpider):
    name = 'youyuan'
    allowed_domains = ['youyuan.com']
    start_urls = ['http://www.youyuan.com/find/beijing/mm18-25/advance-0-0-0-0-0-0-0/p1/']
    # The generated file only needs Rule entries added to rules; nothing else has to change.
    # Pattern for the list pages
    page_links = LinkExtractor(allow=(r"youyuan.com/find/beijing/mm18-25/advance-0-0-0-0-0-0-0/p\d+/"))
    # Pattern for the personal profile pages
    profile_links = LinkExtractor(allow=(r"youyuan.com/\d+-profile/"))

    rules = (
        # No callback, so follow defaults to True: list pages are only followed
        Rule(page_links),
        # With a callback, follow would default to False, so it is set to True explicitly
        Rule(profile_links, callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = CrawlyouyuanItem()
        # Username
        item['username'] = self.get_username(response)
        # Age
        item['age'] = self.get_age(response)
        # Avatar image URL
        item['header_url'] = self.get_header_url(response)
        # Album image URLs
        item['images_url'] = self.get_images_url(response)
        # Inner monologue
        item['content'] = self.get_content(response)
        # Hometown
        item['place_from'] = self.get_place_from(response)
        # Education
        item['education'] = self.get_education(response)
        # Hobbies
        item['hobby'] = self.get_hobby(response)
        # Personal profile page URL
        item['source_url'] = response.url
        # Source website
        item['source'] = "youyuan"
        yield item

    def get_username(self, response):
        username = response.xpath("//dl[@class='personal_cen']//div[@class='main']/strong/text()").extract()
        if len(username):
            username = username[0]
        else:
            username = "NULL"
        return username.strip()

    def get_age(self, response):
        age = response.xpath("//dl[@class='personal_cen']//dd/p/text()").extract()
        if len(age):
            age = re.findall(u"\d+歲", age[0])[0]
        else:
            age = "NULL"
        return age.strip()

    def get_header_url(self, response):
        header_url = response.xpath("//dl[@class='personal_cen']/dt/img/@src").extract()
        if len(header_url):
            header_url = header_url[0]
        else:
            header_url = "NULL"
        return header_url.strip()

    def get_images_url(self, response):
        images_url = response.xpath("//div[@class='ph_show']/ul/li/a/img/@src").extract()
        if len(images_url):
            images_url = ", ".join(images_url)
        else:
            images_url = "NULL"
        return images_url

    def get_content(self, response):
        content = response.xpath("//div[@class='pre_data']/ul/li/p/text()").extract()
        if len(content):
            content = content[0]
        else:
            content = "NULL"
        return content.strip()

    def get_place_from(self, response):
        place_from = response.xpath("//div[@class='pre_data']/ul/li[2]//ol[1]/li[1]/span/text()").extract()
        if len(place_from):
            place_from = place_from[0]
        else:
            place_from = "NULL"
        return place_from.strip()

    def get_education(self, response):
        education = response.xpath("//div[@class='pre_data']/ul/li[3]//ol[2]/li[2]/span/text()").extract()
        if len(education):
            education = education[0]
        else:
            education = "NULL"
        return education.strip()

    def get_hobby(self, response):
        hobby = response.xpath("//dl[@class='personal_cen']//ol/li/text()").extract()
        if len(hobby):
            hobby = ",".join(hobby).replace(" ", "")
        else:
            hobby = "NULL"
        return hobby.strip()
pipelines.py
The item pipeline
import json
import codecs

class CrawlyouyuanPipeline(object):

    def __init__(self):
        self.filename = codecs.open('content.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # Serialise each item as one JSON line
        line = json.dumps(dict(item), ensure_ascii=False)
        self.filename.write(line + '\n')
        return item

    # Scrapy calls close_spider() on pipelines when the spider finishes
    def close_spider(self, spider):
        self.filename.close()
settings.py
BOT_NAME = 'CrawlYouYuan'
SPIDER_MODULES = ['CrawlYouYuan.spiders']
NEWSPIDER_MODULE = 'CrawlYouYuan.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:56.0)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
ITEM_PIPELINES = {
'CrawlYouYuan.pipelines.CrawlyouyuanPipeline': 300,
}
begin.py
from scrapy import cmdline

# Launch the spider from an IDE instead of the command line
cmdline.execute('scrapy crawl youyuan'.split())
Before running the program, make sure the installed Scrapy and Twisted versions are compatible with each other.
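A quick way to check the installed versions (a small sketch, not one of the project files) is:

import scrapy
import twisted

# Compare these against the versions supported by your Scrapy release
print(scrapy.__version__)
print(twisted.__version__)

Running scrapy version -v on the command line prints the same information along with the versions of the other dependencies.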
Compared with the previous post on Scrapy framework basics and Spider crawlers, this article walks through the concrete steps of crawling with the Scrapy framework and analyses a complete spider example along the way, which shows how convenient and approachable crawling with Scrapy can be. In the next post I will cover distributed crawling with Scrapy; let's keep learning and discussing crawling techniques together.