This article covers CrawlSpiders and the use of logging. For more, see the Python學(xué)習(xí)指南 series.
CrawlSpiders
The following command quickly generates spider code from the CrawlSpider template:
scrapy genspider -t crawl tencent tencent.com
In the previous case we used regular expressions to build new URLs and pass them as Request parameters; this time we will try a different approach.
class scrapy.spiders.CrawlSpider
CrawlSpider is a subclass of Spider. The Spider class is designed to crawl only the pages in its start_urls list, while CrawlSpider defines a set of rules that provide a convenient mechanism for following links, which makes it better suited to extracting links from crawled pages and continuing the crawl from there.
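Before walking through the source below, here is a minimal sketch of what a CrawlSpider subclass looks like; the spider name and URL pattern are placeholders, not the Tencent example that comes later:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class DemoCrawlSpider(CrawlSpider):        # placeholder name
    name = 'demo'
    start_urls = ['http://www.example.com/']
    rules = (
        # follow pagination links and hand each page to parse_item
        Rule(LinkExtractor(allow=(r'page=\d+',)), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        self.logger.info('Got %s', response.url)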
Source code reference:
# Simplified excerpt from scrapy/spiders/crawl.py
import copy

from scrapy.http import Request, HtmlResponse
from scrapy.spiders import Spider
from scrapy.utils.spider import iterate_spider_output


class CrawlSpider(Spider):

    rules = ()

    def __init__(self, *a, **kw):
        super(CrawlSpider, self).__init__(*a, **kw)
        self._compile_rules()

    # parse() is called first on the responses for start_urls. It hands each
    # response to _parse_response() with parse_start_url() as the callback
    # and the follow flag set to True, so it yields both items and the
    # follow-up Requests.
    def parse(self, response):
        return self._parse_response(response, self.parse_start_url,
                                    cb_kwargs={}, follow=True)

    # Handles the responses for start_urls; override this in subclasses.
    def parse_start_url(self, response):
        return []

    def process_results(self, response, results):
        return results

    # Extracts the links in the response that match any user-defined Rule
    # and wraps each of them in a Request object.
    def _requests_to_follow(self, response):
        if not isinstance(response, HtmlResponse):
            return
        seen = set()
        # A link is accepted as soon as it matches any single rule.
        for n, rule in enumerate(self._rules):
            links = [l for l in rule.link_extractor.extract_links(response)
                     if l not in seen]
            # Run the user-supplied process_links over the extracted links.
            if links and rule.process_links:
                links = rule.process_links(links)
            # Record each link in the seen set and build a Request for it,
            # with _response_downloaded() as the callback.
            for link in links:
                seen.add(link)
                # _response_downloaded() later dispatches to the callback
                # defined in the matching Rule.
                r = Request(url=link.url, callback=self._response_downloaded)
                r.meta.update(rule=n, link_text=link.text)
                # Pass every Request through process_request(), which
                # defaults to identity, i.e. it returns the Request unchanged.
                yield rule.process_request(r)

    # Handles the responses for links extracted by a rule and returns the
    # resulting items and requests.
    def _response_downloaded(self, response):
        rule = self._rules[response.meta['rule']]
        return self._parse_response(response, rule.callback,
                                    rule.cb_kwargs, rule.follow)

    # Parses the response with the given callback and yields the resulting
    # Request or Item objects.
    def _parse_response(self, response, callback, cb_kwargs, follow=True):
        # First check whether a callback was set (it may be a rule callback
        # or parse_start_url). If so, run it on the response, then pass the
        # results through process_results().
        if callback:
            # When called from parse(), the callback yields Requests;
            # when it is a rule callback, it yields Items.
            cb_res = callback(response, **cb_kwargs) or ()
            cb_res = self.process_results(response, cb_res)
            for requests_or_item in iterate_spider_output(cb_res):
                yield requests_or_item
        # If the links should be followed, extract them with the defined
        # rules and yield the resulting Request objects.
        if follow and self._follow_links:
            for request_or_item in self._requests_to_follow(response):
                yield request_or_item

    def _compile_rules(self):
        def get_method(method):
            if callable(method):
                return method
            elif isinstance(method, basestring):
                return getattr(self, method, None)

        self._rules = [copy.copy(r) for r in self.rules]
        for rule in self._rules:
            rule.callback = get_method(rule.callback)
            rule.process_links = get_method(rule.process_links)
            rule.process_request = get_method(rule.process_request)

    def set_crawler(self, crawler):
        super(CrawlSpider, self).set_crawler(crawler)
        self._follow_links = crawler.settings.getbool(
            'CRAWLSPIDER_FOLLOW_LINKS', True)
CrawlSpider inherits from Spider; besides the inherited attributes (name, allowed_domains), it provides new attributes and methods:
LinkExtractors
class scrapy.linkextractors.LinkExtractor
The purpose of Link Extractors is simple: extract links.
Each LinkExtractor has a single public method, extract_links(), which takes a Response object and returns a list of scrapy.link.Link objects.
A LinkExtractor is instantiated once, and its extract_links method is then called repeatedly with different responses to extract links.
class scrapy.linkextractors.LinkExtractor(
    allow = (),
    deny = (),
    allow_domains = (),
    deny_domains = (),
    deny_extensions = None,
    restrict_xpaths = (),
    tags = ('a', 'area'),
    attrs = ('href',),
    canonicalize = True,
    unique = True,
    process_value = None
)
Main parameters:
- allow: only URLs matching the given regular expression (or list of regular expressions) are extracted; if empty, every link matches.
- deny: URLs matching the given regular expression (or list of regular expressions) are never extracted.
- allow_domains: only links on these domains are extracted.
- deny_domains: links on these domains are never extracted.
- restrict_xpaths: XPath expressions that work together with allow to filter links (see the sketch after this list).
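A minimal sketch of these parameters working together; the deny regex and the XPath below are illustrative assumptions, not values from the Tencent example:

from scrapy.linkextractors import LinkExtractor

page_lx = LinkExtractor(
    allow=(r'start=\d+',),                      # URL must match this regex
    deny=(r'/login',),                          # URL must not match this regex
    allow_domains=('hr.tencent.com',),          # stay on this domain
    restrict_xpaths=('//div[@class="main"]',)   # only search inside this node
)
# links = page_lx.extract_links(response)      # list of scrapy.link.Link objects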
rules
rules contains one or more Rule objects; each Rule defines a particular behaviour for crawling the site. If several rules match the same link, the first one defined in this attribute wins.
class scrapy.spiders.Rule(
    link_extractor,
    callback = None,
    cb_kwargs = None,
    follow = None,
    process_links = None,
    process_request = None
)
- link_extractor: a Link Extractor object that defines which links to extract.
- callback: for every link obtained from link_extractor, the function named by this parameter is called back; it receives the response as its first argument.

Note: avoid using parse as the callback when writing crawl rules. CrawlSpider implements its own logic in the parse method; if parse is overridden, the crawl spider stops working.

- follow: a boolean specifying whether links extracted from the response by this rule should be followed. If callback is None, follow defaults to True; otherwise it defaults to False.
- process_links: the name of a spider method called with the list of links obtained from link_extractor; mainly used for filtering (see the sketch after this list).
- process_request: the name of a spider method called for every request this rule extracts; used to filter requests.
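As a sketch of process_links, a rule whose links are filtered before requests are built; the spider name and the drop_ads helper are made-up placeholders:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class FilteredSpider(CrawlSpider):
    name = 'filtered'                     # placeholder name
    start_urls = ['http://hr.tencent.com/position.php']

    rules = (
        Rule(LinkExtractor(allow=(r'start=\d+',)),
             callback='parse_item',
             process_links='drop_ads',    # filter links before requesting
             follow=True),
    )

    def drop_ads(self, links):
        # keep only links whose URL does not mention 'ad'
        return [link for link in links if 'ad' not in link.url]

    def parse_item(self, response):
        self.logger.info('Crawled %s', response.url)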
Crawling rules
Continuing with the Tencent recruitment site, here is an example of using CrawlSpider together with rules:
1. First, start a shell session:
scrapy shell "http://hr.tencent.com/position.php?&start=0#a"
2. Import LinkExtractor and create an instance:
from scrapy.linkextractors import LinkExtractor
page_lx = LinkExtractor(allow=('position.php?&start=\d+'))
allow: one of the most important LinkExtractor parameters. It is a regular expression (or list of regular expressions); only URLs matching it are extracted. If it is omitted (or empty), every link matches.
deny: same usage as allow, except that matching URLs are excluded; it takes priority over allow. If omitted (or None), no links are excluded.
3. Call the instance's extract_links() method to query for matching links:
page_lx.extract_links(response)
4. Nothing is found:
[]
5. Mind the escape characters, fix the pattern, and match again:
page_lx = LinkExtractor(allow=('position\.php\?start=\d+'))
page_lx.extract_links(response)
The CrawlSpider version
# Extract links matching `http://hr.tencent.com/position.php?&start=\d+`
page_lx = LinkExtractor(allow=('start=\d+'))

rules = [
    # Extract matching links and parse them with the spider's parse method,
    # then follow them (without a callback, follow would default to True)
    Rule(page_lx, callback='parse', follow=True)
]
Important: the example above is wrong; the callback must never be 'parse'. Once more: CrawlSpider implements its own logic in the parse method, so overriding parse breaks the crawl spider.
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from cnblogSpider.items import TencentItem


class TencentcrawlSpider(CrawlSpider):
    name = 'tencentCrawl'
    allowed_domains = ['hr.tencent.com']
    start_urls = ['http://hr.tencent.com/position.php?&start=0#a']

    rules = (
        Rule(LinkExtractor(allow=(r'start=\d+')), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # Job rows alternate between the 'odd' and 'even' CSS classes
        content = response.xpath('//tr[@class="odd"]')
        content += response.xpath('//tr[@class="even"]')
        print(len(content))
        for each in content:
            item = TencentItem()
            name = each.xpath('./td[1]/a/text()').extract()[0]
            detailLink = each.xpath('./td[1]/a/@href').extract()[0]
            positionInfo = each.xpath('./td[2]/text()').extract()[0]
            peopleNumber = each.xpath('./td[3]/text()').extract()[0]
            workLocation = each.xpath('./td[4]/text()').extract()[0]
            publishTime = each.xpath('./td[5]/text()').extract()[0]
            item['name'] = name.encode('utf-8')
            item['detailLink'] = detailLink.encode('utf-8')
            item['positionInfo'] = positionInfo.encode('utf-8')
            item['peopleNumber'] = peopleNumber.encode('utf-8')
            item['workLocation'] = workLocation.encode('utf-8')
            item['publishTime'] = publishTime.encode('utf-8')
            yield item
Run it with: scrapy crawl tencentCrawl
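The spider imports TencentItem from cnblogSpider.items, which is not shown in this article; here is a minimal sketch of that items.py, assuming exactly the six fields filled in above:

# cnblogSpider/items.py (sketch inferred from the fields used by the spider)
import scrapy

class TencentItem(scrapy.Item):
    name = scrapy.Field()          # job title
    detailLink = scrapy.Field()    # link to the job detail page
    positionInfo = scrapy.Field()  # job category
    peopleNumber = scrapy.Field()  # number of openings
    workLocation = scrapy.Field()  # work location
    publishTime = scrapy.Field()   # publication date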
Logging
Scrapy provides logging support through Python's logging module.
Adding the following two lines anywhere in settings.py makes the output much cleaner:
LOG_FILE = 'TencentSpider.log'
LOG_LEVEL = 'INFO'
Log levels
Scrapy supports five logging levels:
- CRITICAL - critical errors
- ERROR - regular errors
- WARNING - warning messages
- INFO - informational messages
- DEBUG - debugging messages
Logging settings
Logging can be configured through the following settings in settings.py:
- LOG_ENABLED (default: True): enables logging.
- LOG_ENCODING (default: 'utf-8'): the encoding used for logging.
- LOG_FILE (default: None): the file name for logging output, created in the current directory.
- LOG_LEVEL (default: 'DEBUG'): the minimum level to log.
- LOG_STDOUT (default: False): if True, all standard output (and errors) of the process are redirected to the log; for example, print "hello" will show up in the Scrapy log.
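As a usage sketch, a spider can write to this log through its built-in logger (scrapy.Spider.logger); the spider below and its messages are illustrative only:

import scrapy

class LogDemoSpider(scrapy.Spider):
    name = 'logDemo'                      # placeholder name
    start_urls = ['http://hr.tencent.com/position.php']

    def parse(self, response):
        # The logger is named after the spider; with the settings above,
        # these messages land in TencentSpider.log.
        self.logger.info('Parsed %s', response.url)
        self.logger.debug('Hidden when LOG_LEVEL is INFO')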