Scrapy 1.4 Official Documentation Summary, Part 1: Introduction & Installation
Scrapy 1.4 Official Documentation Summary, Part 2: Tutorial
Scrapy 1.4 Official Documentation Summary, Part 3: The Command-Line Tool
This is a summary of the official Tutorial (https://docs.scrapy.org/en/latest/intro/tutorial.html).
Four recommended Python learning resources:
- Dive Into Python 3
- Python Tutorial
- Learn Python The Hard Way
- this list of Python resources for non-programmers
Creating a project
Create a project with the command:
scrapy startproject tutorial
This generates the project's files and folders.
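A sketch of the typical layout that startproject creates (the exact set of files may differ slightly between Scrapy versions):
tutorial/
    scrapy.cfg            # deploy configuration file
    tutorial/             # the project's Python module
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider and downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # directory where your spiders live
            __init__.py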
Create a file named quotes_spider.py in the tutorial/spiders folder with the following code:
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
The start_requests method yields scrapy.Request objects. For each response that comes back, Scrapy instantiates a Response object and calls the callback bound to the request (parse here), passing the response as its argument.
Switch to the project's root directory and run the spider:
scrapy crawl quotes
Two files are created in the root directory: quotes-1.html and quotes-2.html.
An alternative is to define a start_urls class attribute holding the URLs; parse() is Scrapy's default callback, so it is called even when no callback is specified:
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
Extracting information
The best way to learn how Scrapy extracts data is to use the Scrapy shell. On Windows (e.g. the Win7 command prompt), run:
scrapy shell "http://quotes.toscrape.com/page/1/"
Or, in Git Bash, run (note the single quotes instead of double quotes):
scrapy shell 'http://quotes.toscrape.com/page/1/'
The shell starts and prints a banner listing the objects available in the session, the most important one being response.
Extract with CSS selectors:
>>> response.css('title')
[<Selector xpath='descendant-or-self::title' data='<title>Quotes to Scrape</title>'>]
To extract only the title's text:
>>> response.css('title::text').extract()
['Quotes to Scrape']
::text means extract the text only; without it, the result is:
>>> response.css('title').extract()
['<title>Quotes to Scrape</title>']
Because the returned object is a list, to extract just the first element use:
>>> response.css('title::text').extract_first()
'Quotes to Scrape'
Or use an index:
>>> response.css('title::text')[0].extract()
'Quotes to Scrape'
The former is better, since it avoids a potential IndexError when nothing matches.
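A quick sketch of the difference, run in the shell (no-such-element is just an illustrative selector that matches nothing):
>>> response.css('no-such-element::text').extract_first()   # no match: quietly returns None
>>> response.css('no-such-element::text')[0].extract()      # no match: raises an exception
Traceback (most recent call last):
  ...
IndexError: list index out of range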
Besides extract() and extract_first(), you can also use regular expressions:
>>> response.css('title::text').re(r'Quotes.*')
['Quotes to Scrape']
>>> response.css('title::text').re(r'Q\w+')
['Quotes']
>>> response.css('title::text').re(r'(\w+) to (\w+)')
['Quotes', 'Scrape']
A brief introduction to XPath
Scrapy also supports XPath:
>>> response.xpath('//title')
[<Selector xpath='//title' data='<title>Quotes to Scrape</title>'>]
>>> response.xpath('//title/text()').extract_first()
'Quotes to Scrape'
In fact, CSS selectors are converted to XPath under the hood, but XPath is more powerful; for example, it can select a link by the text it contains, such as a "next page" link (see the sketch below). For more, see "using XPath with Scrapy Selectors" in the docs.
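A quick sketch of such a text-based match, run in the shell against http://quotes.toscrape.com/page/1/ (this particular selector is illustrative, not from the official tutorial):
>>> # XPath can match on visible text, which plain CSS selectors cannot do
>>> response.xpath('//a[contains(text(), "Next")]/@href').extract_first()
'/page/2/'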
Extracting quotes and authors
Each quote on http://quotes.toscrape.com has the following HTML structure:
<div class="quote">
    <span class="text">“The world as we have created it is a process of our
    thinking. It cannot be changed without changing our thinking.”</span>
    <span>
        by <small class="author">Albert Einstein</small>
        <a href="/author/Albert-Einstein">(about)</a>
    </span>
    <div class="tags">
        Tags:
        <a class="tag" href="/tag/change/page/1/">change</a>
        <a class="tag" href="/tag/deep-thoughts/page/1/">deep-thoughts</a>
        <a class="tag" href="/tag/thinking/page/1/">thinking</a>
        <a class="tag" href="/tag/world/page/1/">world</a>
    </div>
</div>
Open the shell against the site:
$ scrapy shell "http://quotes.toscrape.com"
Extract the quote elements as a list of selectors:
response.css("div.quote")
To take just the first one:
quote = response.css("div.quote")[0]
Extract the quote text (stored in the title variable here), the author, and the tags:
>>> title = quote.css("span.text::text").extract_first()
>>> title
'“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”'
>>> author = quote.css("small.author::text").extract_first()
>>> author
'Albert Einstein'
The tags are a list of strings:
>>> tags = quote.css("div.tags a.tag::text").extract()
>>> tags
['change', 'deep-thoughts', 'thinking', 'world']
Now that extracting a single quote is clear, extract all of them:
>>> for quote in response.css("div.quote"):
...     text = quote.css("span.text::text").extract_first()
...     author = quote.css("small.author::text").extract_first()
...     tags = quote.css("div.tags a.tag::text").extract()
...     print(dict(text=text, author=author, tags=tags))
{'tags': ['change', 'deep-thoughts', 'thinking', 'world'], 'author': 'Albert Einstein', 'text': '“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”'}
{'tags': ['abilities', 'choices'], 'author': 'J.K. Rowling', 'text': '“It is our choices, Harry, that show what we truly are, far more than our abilities.”'}
... a few more of these, omitted for brevity
>>>
Extracting data in our spider
Use Python's yield keyword to produce the extracted data:
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
                'tags': quote.css('div.tags a.tag::text').extract(),
            }
Run the spider; the log looks like this:
2016-09-19 18:57:19 [scrapy.core.scraper] DEBUG: Scraped from <200 http://quotes.toscrape.com/page/1/>
{'tags': ['life', 'love'], 'author': 'André Gide', 'text': '“It is better to be hated for what you are than to be loved for what you are not.”'}
2016-09-19 18:57:19 [scrapy.core.scraper] DEBUG: Scraped from <200 http://quotes.toscrape.com/page/1/>
{'tags': ['edison', 'failure', 'inspirational', 'paraphrased'], 'author': 'Thomas A. Edison', 'text': "“I have not failed. I've just found 10,000 ways that won't work.”"}
Storing the scraped data
The simplest way is to use a feed export. To save as JSON:
scrapy crawl quotes -o quotes.json
To save as JSON Lines:
scrapy crawl quotes -o quotes.jl
To save as CSV:
scrapy crawl quotes -o quotes.csv
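Unlike a single JSON array, the JSON Lines format keeps one object per line, so repeated runs can append records without corrupting the file and the output is easy to process record by record. A minimal sketch of reading it back in Python (assuming the quotes.jl file produced above):
import json

# each line of quotes.jl is an independent JSON object
with open('quotes.jl', encoding='utf-8') as f:
    quotes = [json.loads(line) for line in f]

print(quotes[0]['author'])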
Following the next page
First, look at the markup of the next-page link:
<ul class="pager">
    <li class="next">
        <a href="/page/2/">Next <span aria-hidden="true">→</span></a>
    </li>
</ul>
Extract it:
>>> response.css('li.next a').extract_first()
'<a href="/page/2/">Next <span aria-hidden="true">→</span></a>'
To get just the href:
>>> response.css('li.next a::attr(href)').extract_first()
'/page/2/'
Use urljoin() to build the full URL and yield a request for the next page, so the spider follows the pages in a loop:
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
                'tags': quote.css('div.tags a.tag::text').extract(),
            }

        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)
A more concise way is response.follow, which, unlike scrapy.Request, accepts relative URLs directly, so no urljoin call is needed:
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('span small::text').extract_first(),
                'tags': quote.css('div.tags a.tag::text').extract(),
            }

        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
You can also pass a selector to response.follow directly instead of an extracted string:
for href in response.css('li.next a::attr(href)'):
    yield response.follow(href, callback=self.parse)
For <a> elements, response.follow uses their href attribute automatically, so this can be shortened further:
for a in response.css('li.next a'):
    yield response.follow(a, callback=self.parse)
The following spider extracts author information, using callbacks and automatic link following:
import scrapy


class AuthorSpider(scrapy.Spider):
    name = 'author'

    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # follow links to author pages
        for href in response.css('.author + a::attr(href)'):
            yield response.follow(href, self.parse_author)

        # follow pagination links
        for href in response.css('li.next a::attr(href)'):
            yield response.follow(href, self.parse)

    def parse_author(self, response):
        def extract_with_css(query):
            return response.css(query).extract_first().strip()

        yield {
            'name': extract_with_css('h3.author-title::text'),
            'birthdate': extract_with_css('.author-born-date::text'),
            'bio': extract_with_css('.author-description::text'),
        }
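A natural way to run it and collect the results (the output filename here is just an example):
scrapy crawl author -o authors.json
Even though many quotes share the same author, Scrapy filters out duplicate requests by default, so each author page is fetched only once.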
Using spider arguments
To pass arguments on the command line, just add -a:
scrapy crawl quotes -o quotes-humor.json -a tag=humor
This passes humor as the spider's tag attribute; the spider below uses it to crawl only http://quotes.toscrape.com/tag/humor:
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        url = 'http://quotes.toscrape.com/'
        tag = getattr(self, 'tag', None)
        if tag is not None:
            url = url + 'tag/' + tag
        yield scrapy.Request(url, self.parse)

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
            }

        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:
            yield response.follow(next_page, self.parse)
More examples
The quotesbot project at https://github.com/scrapy/quotesbot contains two spiders, one written with CSS selectors and one with XPath:
import scrapy


class ToScrapeCSSSpider(scrapy.Spider):
    name = "toscrape-css"
    start_urls = [
        'http://quotes.toscrape.com/',
    ]

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                'text': quote.css("span.text::text").extract_first(),
                'author': quote.css("small.author::text").extract_first(),
                'tags': quote.css("div.tags > a.tag::text").extract()
            }

        next_page_url = response.css("li.next > a::attr(href)").extract_first()
        if next_page_url is not None:
            yield scrapy.Request(response.urljoin(next_page_url))
import scrapy


class ToScrapeSpiderXPath(scrapy.Spider):
    name = 'toscrape-xpath'
    start_urls = [
        'http://quotes.toscrape.com/',
    ]

    def parse(self, response):
        for quote in response.xpath('//div[@class="quote"]'):
            yield {
                'text': quote.xpath('./span[@class="text"]/text()').extract_first(),
                'author': quote.xpath('.//small[@class="author"]/text()').extract_first(),
                'tags': quote.xpath('.//div[@class="tags"]/a[@class="tag"]/text()').extract()
            }

        next_page_url = response.xpath('//li[@class="next"]/a/@href').extract_first()
        if next_page_url is not None:
            yield scrapy.Request(response.urljoin(next_page_url))