1. Introduction to ImagesPipeline
Scrapy's ImagesPipeline class provides a convenient way to download and store images.
Features:
- Converts all downloaded images to a common format (JPG) and mode (RGB)
- Avoids re-downloading images that were downloaded recently
- Thumbnail generation
- Filtering images by size (a settings sketch for this and for thumbnail generation follows this list)
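Thumbnail generation and size filtering are driven purely by settings, not code. A minimal sketch for settings.py (the sizes and thresholds below are illustrative values, not part of this project):

# Generate two thumbnails per downloaded image; each key becomes a
# subdirectory under <IMAGES_STORE>/thumbs/
IMAGES_THUMBS = {
    'small': (50, 50),
    'big': (270, 270),
}
# Ignore images smaller than 110 x 110 pixels
IMAGES_MIN_HEIGHT = 110
IMAGES_MIN_WIDTH = 110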
2. ImagesPipeline Workflow
When the images pipeline (ImagesPipeline) is used, the typical workflow is as follows:
- In a spider, you scrape an item and put the URLs of its images into the image_urls field.
- The item is returned from the spider and enters the item pipeline.
- When the item reaches the ImagesPipeline, the URLs in image_urls are scheduled for download by Scrapy's scheduler and downloader (which means the scheduler and downloader middlewares are reused), with a higher priority, so they are processed before other pages are crawled. The item stays "locked" at this particular pipeline stage until the image downloads complete (or fail for some reason).
- When the images have finished downloading, another field (images) is populated with the results. It holds a list of dicts with information about each downloaded image, such as the local path, the original URL (taken from image_urls), and the image checksum. The images in the images list keep the same order as in image_urls; if an image fails to download, the error is logged and that image does not appear in images (see the sketch after this list).
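As a rough sketch (the URL, path, and checksum values are illustrative, not real crawl output), an item that has passed through the pipeline looks like this:

item = {
    'image_urls': ['https://img1.example.com/photos/p2370443040.jpg'],
    'images': [{
        'url': 'https://img1.example.com/photos/p2370443040.jpg',        # source URL from image_urls
        'path': 'full/3afec3b4765f8f0a07b78f98c07b83f013567a0a.jpg',      # path relative to IMAGES_STORE
        'checksum': '2b00042f7481c7b056c4b410d28f33cf',                   # MD5 checksum of the image contents
    }],
}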
3. Implementation Steps
Project directory structure:
To crawl images successfully, the following steps are required:
(1) Add the image_urls, images, and image_paths fields in items.py; the code is as follows:
import scrapy
from scrapy import Field


class DoubanImgsItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    image_urls = Field()
    images = Field()
    image_paths = Field()
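The field names are significant: by default the ImagesPipeline reads the URLs from image_urls and writes its results into images, while image_paths is an extra field that the custom pipeline in step (4) fills in itself. If different field names were preferred, they could be remapped in settings.py; a sketch showing the default values:

IMAGES_URLS_FIELD = 'image_urls'    # field the pipeline reads download URLs from
IMAGES_RESULT_FIELD = 'images'      # field the pipeline writes download results to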
(2) Set the required conditions and properties in settings.py; the code is as follows:
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
# Register the custom ImagesPipeline subclass defined in pipelines.py
ITEM_PIPELINES = {
    'douban_imgs.pipelines.DoubanImgDownloadPipeline': 300,
}
# Directory where downloaded images are stored
IMAGES_STORE = 'D:\\doubanimgs'
# Expiration in days: images fetched within the last 90 days are not re-downloaded
IMAGES_EXPIRES = 90
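If the custom request headers added in step (4) were not needed, the stock pipeline could be enabled here directly instead of a subclass; a minimal sketch:

ITEM_PIPELINES = {
    'scrapy.pipelines.images.ImagesPipeline': 1,
}

Note that the images pipeline relies on the Pillow library for image processing, so Pillow must be installed for either variant to work.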
(3) Write the spider code in spiders/download_douban.py:
# coding=utf-8
from scrapy.spiders import Spider
from scrapy import Request

from ..items import DoubanImgsItem


class download_douban(Spider):
    name = 'download_douban'
    default_headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate, sdch, br',
        'Accept-Language': 'zh-CN,zh;q=0.8,en;q=0.6',
        'Cache-Control': 'max-age=0',
        'Connection': 'keep-alive',
        'Host': 'www.douban.com',
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36',
    }

    def __init__(self, url='1638835355', *args, **kwargs):
        self.allowed_domains = ['douban.com']
        self.start_urls = [
            'http://www.douban.com/photos/album/%s/' % url]
        self.url = url
        # call the parent class constructor
        super(download_douban, self).__init__(*args, **kwargs)

    def start_requests(self):
        # issue the initial requests with the browser-like headers defined above
        for url in self.start_urls:
            yield Request(url=url, headers=self.default_headers, callback=self.parse)

    def parse(self, response):
        # collect every photo URL on the album page and hand them to the item
        list_imgs = response.xpath('//div[@class="photolst clearfix"]//img/@src').extract()
        if list_imgs:
            item = DoubanImgsItem()
            item['image_urls'] = list_imgs
            yield item
(4) Write the custom ImagesPipeline code in pipelines.py:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
from scrapy.pipelines.images import ImagesPipeline
from scrapy.exceptions import DropItem
from scrapy import Request


class DoubanImgsPipeline(object):
    def process_item(self, item, spider):
        return item


class DoubanImgDownloadPipeline(ImagesPipeline):
    default_headers = {
        'accept': 'image/webp,image/*,*/*;q=0.8',
        'accept-encoding': 'gzip, deflate, sdch, br',
        'accept-language': 'zh-CN,zh;q=0.8,en;q=0.6',
        'cookie': 'bid=yQdC/AzTaCw',
        'referer': 'https://www.douban.com/photos/photo/2370443040/',
        'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36',
    }

    def get_media_requests(self, item, info):
        # generate one download request per image URL, carrying the headers above
        for image_url in item['image_urls']:
            self.default_headers['referer'] = image_url
            yield Request(image_url, headers=self.default_headers)

    def item_completed(self, results, item, info):
        # keep only the paths of successfully downloaded images
        image_paths = [x['path'] for ok, x in results if ok]
        if not image_paths:
            raise DropItem("Item contains no images")
        item['image_paths'] = image_paths
        return item
In the custom ImagesPipeline code, the most important part is to override the two methods get_media_requests(self, item, info) and item_completed(self, results, item, info).
- get_media_requests(self, item, info):
The ImagesPipeline fetches the URLs specified in image_urls, and get_media_requests is where a Request is generated for each URL. For example:
for image_url in item['image_urls']:
    self.default_headers['referer'] = image_url
    yield Request(image_url, headers=self.default_headers)
- item_completed(self, results, item, info):
When the image downloads have finished, the results are passed to the item_completed() method as 2-tuples. Each 2-tuple has the form:
(success, image_info_or_failure)
The first element indicates whether the image was downloaded successfully; the second element is a dict describing the downloaded image (or the failure, if it was not). For example:
def item_completed(self, results, item, info):
    image_paths = [x['path'] for ok, x in results if ok]
    if not image_paths:
        raise DropItem("Item contains no images")
    item['image_paths'] = image_paths
    return item
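For reference, the results argument received above is shaped roughly as follows (each success dict is the same structure that ends up in item['images'], as sketched in section 2; the values are illustrative):

results = [
    # success: (True, dict with 'url', 'path' and 'checksum' keys)
    (True, {
        'url': 'https://img1.example.com/photos/p2370443040.jpg',
        'path': 'full/3afec3b4765f8f0a07b78f98c07b83f013567a0a.jpg',
        'checksum': '2b00042f7481c7b056c4b410d28f33cf',
    }),
    # failure: (False, failure), where failure describes the download error
]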
4. Crawl Results
Run the spider; once the downloads succeed, you will find the finished images in the storage path configured earlier (IMAGES_STORE = 'D:\\doubanimgs').
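For reference, a typical way to start the crawl from the project root is shown below; the -a option feeds the url argument accepted by the spider's __init__, overriding the default album id:

scrapy crawl download_douban
scrapy crawl download_douban -a url=1638835355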
5. Extension
By default, when images are downloaded with the ImagesPipeline, each file is named after the SHA1 hash of its image URL.
For example:
Image URL: http://www.example.com/image.jpg
SHA1 hash of the URL: 3afec3b4765f8f0a07b78f98c07b83f013567a0a
Resulting file name: 3afec3b4765f8f0a07b78f98c07b83f013567a0a.jpg (stored under the full/ subdirectory of IMAGES_STORE)
To change this behavior, see: "How to keep the original file name when downloading images with Scrapy's ImagesPipeline?" A short sketch of one common approach follows.
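As a rough sketch of that approach (not taken from the original article, the class name is illustrative, and it assumes a recent Scrapy version in which file_path() accepts the item keyword argument), the naming scheme can be changed by overriding file_path() in the pipeline, for example deriving the name from the last segment of the image URL:

from urllib.parse import urlparse

from scrapy.pipelines.images import ImagesPipeline


class KeepOriginalNamePipeline(ImagesPipeline):
    # Illustrative subclass: save images under their original URL file name.
    def file_path(self, request, response=None, info=None, *, item=None):
        # e.g. 'https://.../photos/p2370443040.jpg' -> 'full/p2370443040.jpg'
        # (beware of collisions if different URLs end with the same file name)
        name = urlparse(request.url).path.split('/')[-1]
        return 'full/%s' % name

The same override could equally be added to the DoubanImgDownloadPipeline class from step (4).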