LinkExtractor
For link extraction, we saw earlier that a `Selector` can do the job, but `Selector` is best suited to cases where the links to crawl are few and follow a simple, fixed pattern. Scrapy provides another link-extraction mechanism, `scrapy.linkextractors.LinkExtractor`, which is better suited to crawling links across an entire site, and which only needs to be declared once to be used many times. Let's first look at the constructor parameters of `LinkExtractor`:
LinkExtractor(allow=(), deny=(), allow_domains=(), deny_domains=(), deny_extensions=None, restrict_xpaths=(), restrict_css=(), tags=('a', 'area'), attrs=('href', ), canonicalize=False, unique=True, process_value=None, strip=True)
Below we walk through the parameters, each illustrated with an example. First, build a sample response to extract from:
body = """
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<div class='scrapyweb'>
<p>Scrapy主站</p>
<a >Download</a>
<a >Doc</a>
<a >Resources</a>
</div>
<div class='scrapydoc'>
<p>Scrapy 開發(fā)文檔</p>
<a >Scrapy at a glance</a>
<a >Installation guide</a>
<a >Scrapy Tutorial</a>
<a >Docs in github</a>
</div>
</body>
</html>
""".encode('utf8')
response = scrapy.http.HtmlResponse(url='', body = body)
`allow`: a regular expression or a list of regular expressions; only links matching it are extracted. If not given, all links are extracted.
>>> pattern = r'/intro/\w+$'
>>> link_extractor = LinkExtractor(allow = pattern)
>>>
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='https://docs.scrapy.org/en/latest/intro/overview.html', text='Scrapy at a glance', fragment='', nofollow=False), Link(url='https://docs.scrapy.org/en/latest/intro/install.html', text='Installation guide', fragment='', nofollow=False), Link(url='https://docs.scrapy.org/en/latest/intro/tutorial.html', text='Scrapy Tutorial', fragment='', nofollow=False)]
`deny`: a regular expression or a list of regular expressions; the opposite of `allow`: links matching it are not extracted.
>>> pattern = r'/intro/\w+$'
>>> link_extractor = LinkExtractor(deny = pattern)
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='https://scrapy.org/download/', text='Download', fragment='', nofollow=False), Link(url='https://scrapy.org/doc/', text='Doc', fragment='', nofollow=False), Link(url='https://scrapy.org/resources/', text='Resources', fragment='', nofollow=False), Link(url='https://docs.scrapy.org/en/latest/intro/overview.html', text='Scrapy at a glance', fragment='', nofollow=False), Link(url='https://docs.scrapy.org/en/latest/intro/install.html', text='Installation guide', fragment='', nofollow=False), Link(url='https://docs.scrapy.org/en/latest/intro/tutorial.html', text='Scrapy Tutorial', fragment='', nofollow=False)]
`allow_domains`: a domain or list of domains; only links under these domains are extracted;
>>> allow_domain = 'docs.scrapy.org'
>>> link_extractor = LinkExtractor(allow_domains = allow_domain)
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='https://docs.scrapy.org/en/latest/intro/overview.html', text='Scrapy at a glance', fragment='', nofollow=False), Link(url='https://docs.scrapy.org/en/latest/intro/install.html', text='Installation guide', fragment='', nofollow=False), Link(url='https://docs.scrapy.org/en/latest/intro/tutorial.html', text='Scrapy Tutorial', fragment='', nofollow=False)]
`deny_domains`: a domain or list of domains; links under these domains are not extracted;
>>> deny_domain = 'docs.scrapy.org'
>>> link_extractor = LinkExtractor(deny_domains = deny_domain)
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='https://scrapy.org/download/', text='Download', fragment='', nofollow=False), Link(url='https://scrapy.org/doc/', text='Doc', fragment='', nofollow=False), Link(url='https://scrapy.org/resources/', text='Resources', fragment='', nofollow=False)]
`deny_extensions`: a string or list of strings; links whose URLs end with these file extensions are skipped. If not provided, the default list (`scrapy.linkextractors.IGNORED_EXTENSIONS`) is used;
>>> deny_extensions = 'html'
>>> link_extractor = LinkExtractor(deny_extensions = deny_extensions)
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='https://scrapy.org/download/', text='Download', fragment='', nofollow=False), Link(url='https://scrapy.org/doc/', text='Doc', fragment='', nofollow=False), Link(url='https://scrapy.org/resources/', text='Resources', fragment='', nofollow=False), Link(url='https://github.com/scrapy/scrapy/blob/1.5/docs/topics/media-pipeline.rst', text='Docs in github', fragment='', nofollow=False)]
`restrict_xpaths`: an XPath expression or a list of them; only links inside the regions matched by the XPath are extracted;
>>> xpath = '//div[@class="scrapyweb"]'
>>> link_extractor = LinkExtractor(restrict_xpaths=xpath)
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='https://scrapy.org/download/', text='Download', fragment='', nofollow=False), Link(url='https://scrapy.org/doc/', text='Doc', fragment='', nofollow=False), Link(url='https://scrapy.org/resources/', text='Resources', fragment='', nofollow=False)]
`restrict_css`: same as above, but using CSS selectors;
>>> css = 'div.scrapyweb'
>>> link_extractor = LinkExtractor(restrict_css=css)
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='https://scrapy.org/download/', text='Download', fragment='', nofollow=False), Link(url='https://scrapy.org/doc/', text='Doc', fragment='', nofollow=False), Link(url='https://scrapy.org/resources/', text='Resources', fragment='', nofollow=False)]
`tags`: a tag or list of tags; links are extracted only from these tags; defaults to `('a', 'area')`;
`attrs`: an attribute or list of attributes; links are taken from these attributes; defaults to `('href',)`;
>>> body = b"""<img src="http://p0.so.qhmsg.com/bdr/326__/t010ebf2ec5ab7eed55.jpg"/>"""
>>> response = scrapy.http.HtmlResponse(url='', body = body)
>>> tag = 'img'
>>> attr='src'
>>> link_extractor = LinkExtractor(tags=tag, attrs=attr, deny_extensions='')  # .jpg links are filtered out by default
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='http://p0.so.qhmsg.com/bdr/326__/t010ebf2ec5ab7eed55.jpg', text='', fragment='', nofollow=False)]
`process_value`: a callback applied to each extracted value; it must return either the processed link, or None to ignore the link; defaults to `lambda x: x`.
>>> from urllib.parse import urljoin
>>> def process(href):
...     return urljoin('http://example.com', href)
...
>>> body = b"""<a href="example.html"/>"""
>>> response = scrapy.http.HtmlResponse(url='', body = body)
>>> link_extractor = LinkExtractor(process_value = process)
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='http://example.com/example.html', text='', fragment='', nofollow=False)]
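The signature above also lists `canonicalize`, `unique`, and `strip`, which the examples so far haven't exercised. As a quick sketch (reusing the same session; the sample anchors are made up for illustration): `unique` filters out duplicate links and `strip` removes surrounding whitespace from the attribute value, so the two anchors below yield a single Link:

>>> body = b"""<a href=" https://scrapy.org/doc/ ">Doc</a><a href="https://scrapy.org/doc/">Doc</a>"""
>>> response = scrapy.http.HtmlResponse(url='', body=body)
>>> link_extractor = LinkExtractor()  # unique=True and strip=True are the defaults
>>> link_extractor.extract_links(response)
[Link(url='https://scrapy.org/doc/', text='Doc', fragment='', nofollow=False)]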
Now, using the spider from the second post (how to write a Spider), let's extract the next-page link with `LinkExtractor` to see how it is applied in Scrapy. The modified `quotes` spider looks like this:
# -*- coding: utf-8 -*-
import scrapy
from ..items import QuoteItem
from scrapy.linkextractors import LinkExtractor


class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/']
    link_extractor = LinkExtractor(allow=r'/page/\d+/', restrict_css='li.next')  # declare a LinkExtractor object once

    # def start_requests(self):
    #     url = "http://quotes.toscrape.com/"
    #     yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        quote_selector_list = response.css('body > div > div:nth-child(2) > div.col-md-8 div.quote')

        for quote_selector in quote_selector_list:
            quote = quote_selector.css('span.text::text').extract_first()
            author = quote_selector.css('span small.author::text').extract_first()
            tags = quote_selector.css('div.tags a.tag::text').extract()
            yield QuoteItem({'quote': quote, 'author': author, 'tags': tags})

        links = self.link_extractor.extract_links(response)  # extract the next-page link
        if links:
            yield scrapy.Request(links[0].url, callback=self.parse)
`LinkExtractor` is used most often in combination with `CrawlSpider`; the next section shows how.
CrawlSpider
Besides the basic `Spider` class, Scrapy provides a more powerful one: `CrawlSpider`. `CrawlSpider` is built on top of `Spider` and is designed for whole-site crawling; it is a great fit for structurally regular sites such as JD.com or Zhihu. `CrawlSpider` defines rules for following URLs on top of `LinkExtractor`, so if you plan to extract URLs from pages and keep crawling them, `CrawlSpider` is an excellent choice.
Let's first look at `scrapy.spiders.CrawlSpider`. `CrawlSpider` adds two new attributes:

- `rules`: a list of `Rule` objects that define how links are extracted and processed;
- `parse_start_url(response)`: parses the responses of the start URLs; returns an empty list by default; subclasses may override it and return Item objects, Request objects, or an iterable of either (a minimal sketch follows below).
Next, let's look at `Rule`:
scrapy.spiders.Rule(link_extractor, callback=None, cb_kwargs=None, follow=None, process_links=None, process_request=None)
- `link_extractor`: a LinkExtractor object;
- `callback`: the callback for responses downloaded from extracted links; it receives a Response object and returns Item/Request objects or an iterable of them (don't use `parse` as the callback: `CrawlSpider` uses the `parse` method to implement its own logic);
- `cb_kwargs`: a dict passed to the callback as its `**kwargs`;
- `follow`: whether to follow links found in responses matched by this rule; if `callback` is None, `follow` defaults to True, otherwise it defaults to False;
- `process_links`: a callable invoked on each list of links extracted by `link_extractor`, typically used to pre-process links (see the sketch after this list);
- `process_request`: a callable invoked on each `Request` built from an extracted link; it must return a `Request` object or None.
Now let's see how to use `CrawlSpider` by crawling http://books.toscrape.com/ to collect each book's title and price. The code looks like this:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class BookToscrapeSpider(CrawlSpider):
    name = 'book_toscrape'
    allowed_domains = ['books.toscrape.com']
    start_urls = ['http://books.toscrape.com/']

    rules = (
        Rule(LinkExtractor(allow=r'catalogue/[\w\-\d]+/index.html'), callback='parse_item', follow=False),  # detail pages, not followed
        Rule(LinkExtractor(allow=r'page-\d+.html')),  # next-page links, followed by default
    )

    def parse_item(self, response):
        title = response.css('div.product_main h1::text').extract_first()
        price = response.css('div.product_main p.price_color::text').extract_first()

        # simply return a dict
        yield {
            'title': title,
            'price': price,
        }
Run `scrapy crawl book_toscrape -o sell.csv` to get the title and price of every book on the site.
Summary
This post briefly introduced `LinkExtractor`, which extracts links that follow a fixed pattern, and then covered `CrawlSpider`, the spider class it is most often paired with; `CrawlSpider` is well suited to crawling sites with a fairly regular structure. In the next post we'll look at Scrapy middleware.