The following command quickly generates a spider from the CrawlSpider template:
scrapy genspider -t crawl tencent tencent.com
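For reference, the spider generated by this command looks roughly like the following (the exact boilerplate varies slightly across Scrapy versions, and the r'Items/' pattern is only the template's placeholder):

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class TencentSpider(CrawlSpider):
    name = 'tencent'
    allowed_domains = ['tencent.com']
    start_urls = ['http://tencent.com/']

    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = {}
        # populate the item from the response here
        return item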
class scrapy.spiders.CrawlSpider
CrawlSpider is a subclass of Spider. The Spider class is designed only to crawl the pages in the start_urls list, while CrawlSpider adds a set of rules that provide a convenient mechanism for following links, which makes it the better fit for jobs that extract links from crawled pages and continue crawling them.
Source code reference (from an older, Python 2-era Scrapy release, annotated):
import copy
from scrapy.http import Request, HtmlResponse
from scrapy.spiders import Spider
from scrapy.utils.spider import iterate_spider_output

class CrawlSpider(Spider):

    rules = ()

    def __init__(self, *a, **kw):
        super(CrawlSpider, self).__init__(*a, **kw)
        self._compile_rules()

    # parse() is called first, to handle the responses for start_urls.
    # It hands each response to _parse_response(), registering
    # parse_start_url() as the callback and setting the follow flag to True,
    # so parse() ends up yielding both items and follow-up Request objects.
    def parse(self, response):
        return self._parse_response(response, self.parse_start_url, cb_kwargs={}, follow=True)

    # Handles the responses for start_urls; override this as needed.
    def parse_start_url(self, response):
        return []

    def process_results(self, response, results):
        return results

    # Extracts the links that match any user-defined Rule from the response
    # and yields Request objects built from them.
    def _requests_to_follow(self, response):
        if not isinstance(response, HtmlResponse):
            return
        seen = set()
        # Extract all links in the response; matching any single rule is
        # enough for a link to be accepted.
        for n, rule in enumerate(self._rules):
            links = [l for l in rule.link_extractor.extract_links(response) if l not in seen]
            # Run the links through the user-supplied process_links hook.
            if links and rule.process_links:
                links = rule.process_links(links)
            # Add each link to the seen set, build a Request for it, and set
            # _response_downloaded() as its callback.
            for link in links:
                seen.add(link)
                # The callback defined in the Rule is applied later, inside
                # _response_downloaded(), via the rule index stored in meta.
                r = Request(url=link.url, callback=self._response_downloaded)
                r.meta.update(rule=n, link_text=link.text)
                # Every Request goes through process_request(), which defaults
                # to the identity function and returns the Request unchanged.
                yield rule.process_request(r)

    # Handles the responses for links extracted by a Rule and returns items
    # as well as Requests.
    def _response_downloaded(self, response):
        rule = self._rules[response.meta['rule']]
        return self._parse_response(response, rule.callback, rule.cb_kwargs, rule.follow)

    # Parses a response with the given callback and yields the resulting
    # Request or item objects.
    def _parse_response(self, response, callback, cb_kwargs, follow=True):
        # First check whether a callback was set (it is either a Rule's
        # parsing callback or parse_start_url()). If so, apply it to the
        # response, hand the result to process_results(), and yield each
        # element of the resulting cb_res sequence.
        if callback:
            # When called from parse(), the callback typically yields Requests;
            # when it is a Rule callback, it typically yields items.
            cb_res = callback(response, **cb_kwargs) or ()
            cb_res = self.process_results(response, cb_res)
            for requests_or_item in iterate_spider_output(cb_res):
                yield requests_or_item
        # If following is enabled, use the defined Rules to extract and yield
        # the follow-up Request objects.
        if follow and self._follow_links:
            for request_or_item in self._requests_to_follow(response):
                yield request_or_item

    def _compile_rules(self):
        def get_method(method):
            if callable(method):
                return method
            elif isinstance(method, basestring):
                return getattr(self, method, None)

        self._rules = [copy.copy(r) for r in self.rules]
        for rule in self._rules:
            rule.callback = get_method(rule.callback)
            rule.process_links = get_method(rule.process_links)
            rule.process_request = get_method(rule.process_request)

    def set_crawler(self, crawler):
        super(CrawlSpider, self).set_crawler(crawler)
        self._follow_links = crawler.settings.getbool('CRAWLSPIDER_FOLLOW_LINKS', True)
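To connect the pieces above, here is a minimal sketch of a CrawlSpider subclass that plugs into these hooks; the domain, URL pattern, and drop_noise helper are assumptions made up for the illustration:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class NewsSpider(CrawlSpider):
    name = 'news'
    allowed_domains = ['example.com']   # assumed domain
    start_urls = ['http://example.com/']

    rules = (
        # _requests_to_follow() applies this rule to every HtmlResponse;
        # _compile_rules() resolves the string names below to bound methods.
        Rule(LinkExtractor(allow=r'/article/\d+'),
             callback='parse_article',
             process_links='drop_noise',
             follow=True),
    )

    # Called via parse() -> _parse_response() for the start_urls responses.
    def parse_start_url(self, response):
        self.logger.info('start page: %s', response.url)
        return []

    # Hypothetical process_links hook: filter out unwanted links.
    def drop_noise(self, links):
        return [l for l in links if 'logout' not in l.url]

    # Rule callback: _response_downloaded() routes matching responses here.
    def parse_article(self, response):
        yield {'url': response.url, 'title': response.css('title::text').get()}

Note that the machinery depends on parse() itself, so a CrawlSpider subclass should override parse_start_url() or use Rule callbacks rather than override parse(). Following can also be switched off globally via the CRAWLSPIDER_FOLLOW_LINKS setting read in set_crawler() above.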
LinkExtractors
class scrapy.linkextractors.LinkExtractor
The purpose of Link Extractors is simple: to extract links.
Every LinkExtractor has a single public method, extract_links(), which receives a Response object and returns a list of scrapy.link.Link objects.
A Link Extractor is meant to be instantiated once, and its extract_links method is then called repeatedly with different responses to extract links.
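As a minimal sketch of that usage pattern (the page body below is invented for the demonstration):

from scrapy.http import HtmlResponse
from scrapy.linkextractors import LinkExtractor

body = b'<html><body><a href="/news/1.html">news</a> <a href="/about">about</a></body></html>'
response = HtmlResponse(url='http://example.com/', body=body, encoding='utf-8')

extractor = LinkExtractor(allow=r'/news/')      # instantiated once
for link in extractor.extract_links(response):  # called once per response
    print(link.url, link.text)
# prints: http://example.com/news/1.html news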
class scrapy.linkextractors.LinkExtractor(
allow = (),
deny = (),
allow_domains = (),
deny_domains = (),
deny_extensions = None,
restrict_xpaths = (),
tags = ('a','area'),
attrs = ('href',),
canonicalize = True,
unique = True,
process_value = None
)
Main parameters (a usage sketch follows the list):
allow: URLs matching the regular expression(s) in this tuple are extracted; if empty, all URLs match.
deny: URLs matching this regular expression (or list of regular expressions) are always excluded from extraction.
allow_domains: domains whose links are allowed to be extracted.
deny_domains: domains whose links will never be extracted.
restrict_xpaths: XPath expressions that work together with allow to filter links, limiting extraction to matching regions of the page.
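As an illustration of how these parameters combine (the URL patterns and XPath below are assumptions, not taken from a real site):

from scrapy.linkextractors import LinkExtractor

extractor = LinkExtractor(
    allow=r'position_detail\.php',                # only detail-page URLs
    deny=r'\?print=1',                            # never follow printable versions
    allow_domains=('tencent.com',),               # stay on this domain
    restrict_xpaths=('//div[@class="list"]',),    # only look inside the list area
)
# inside a spider callback: links = extractor.extract_links(response)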