First, a little teaser of the results:
(sample images: 66347-2.jpg, 66250-5.jpg, 66158-3.jpg, 68185.jpg, 68205.jpg)
As the saying goes, give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime. So let's walk through the code.
Project structure
(screenshot: project layout)
First, locate the girl-photo collection on the home page so we can grab its URL.
(screenshot: inspecting the element in DevTools)
Right-click the element and choose Copy → Copy XPath to copy the expression directly.
# Find the link to the girl-photo collection
url = response.xpath('//*[@id="home-collections"]/ul/li[4]/div/div[1]/a/@href').extract()[0]
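A quick robustness note: extract()[0] raises an IndexError when the XPath matches nothing (say, if the li position on the home page shifts). Scrapy's extract_first() is a drop-in alternative that returns None instead, which is easier to guard against:

# extract_first() returns None instead of raising when there is no match
url = response.xpath('//*[@id="home-collections"]/ul/li[4]/div/div[1]/a/@href').extract_first()
if url is None:
    return  # the layout changed; nothing to crawl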
Following that link, I wanted to collect all the URLs on the next page, but it turns out the images are loaded dynamically via Ajax.
So how do we get hold of the Ajax request that fetches the next page?
Press F12, open the Network tab, and filter by XHR to spot the request.
(screenshot: the XHR request in the Network tab)
Click it and scroll to the bottom to see the Form Data.
(screenshot: the Form Data panel)
That's exactly the data we need.
for i in range(76):  # the collection has 76 pages in total
    formdata = {
        'type': 'collection29',
        'paged': str(i + 1)  # paged starts at 1
    }
    # POST the Ajax form data; the callback crawls one level deeper
    yield scrapy.FormRequest(url, callback=self.parse_page, formdata=formdata)
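Before wiring this into the spider, it can help to replay the XHR outside Scrapy to confirm the form data is right. Here is a minimal standalone check using the requests library; the endpoint URL below is a placeholder, so substitute the request URL shown in your own Network tab:

import requests

# Hypothetical endpoint: copy the real request URL from the XHR entry in DevTools
AJAX_URL = 'https://www.peibanni.com/...'

resp = requests.post(AJAX_URL, data={'type': 'collection29', 'paged': '1'})
print(resp.status_code)
print(resp.text[:300])  # peek at the returned HTML fragment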
然后進入更深層次的挖掘,獲取到所有頁面上的URL
屏幕快照 2020-06-20 下午12.03.24.png
def parse_page(self, response):
    # Collect every gallery URL on the current page
    urls = response.xpath('//*[@id="main"]/div[1]/div/div/div[1]/a/@href').extract()
    for url in urls:
        # Follow each gallery link with another callback, one level deeper
        yield scrapy.Request(url, callback=self.parse_img)
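A side note: scrapy.Request needs absolute URLs, so this only works because the site returns absolute hrefs. response.follow (available since Scrapy 1.4) joins relative hrefs against the current page automatically, so a slightly more forgiving equivalent would be:

def parse_page(self, response):
    # response.follow accepts relative hrefs and builds the absolute request itself
    for href in response.xpath('//*[@id="main"]/div[1]/div/div/div[1]/a/@href').extract():
        yield response.follow(href, callback=self.parse_img)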
然后進入更深層次的挖掘擦囊,圖片標題的爬取,和每張圖片的爬取
屏幕快照 2020-06-20 下午12.07.23.png
def parse_img(self, response):
    item = BeautifulgirlItem()
    # Grab the title of the gallery
    name = response.xpath('//*[@id="post-single"]/h1/text()').extract()[0]
    print(name)
    item['name'] = name
    # Grab the URL of every image in the gallery
    img_urls = response.xpath('//*[@id="content-innerText"]/p/img/@src').extract()
    for img_url in img_urls:
        item['img_url'] = img_url
        print(img_url)
        yield item
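One caveat here: the loop mutates and re-yields the same item instance, so every yield shares state, and anything downstream that holds a reference can see its img_url overwritten by a later iteration. A safer pattern (a sketch, not what the original does) is to build a fresh item per image:

def parse_img(self, response):
    name = response.xpath('//*[@id="post-single"]/h1/text()').extract_first()
    for img_url in response.xpath('//*[@id="content-innerText"]/p/img/@src').extract():
        item = BeautifulgirlItem()  # a fresh item per image avoids shared mutable state
        item['name'] = name
        item['img_url'] = img_url
        yield item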
Now let's put the code together.
peibanni.py
# -*- coding: utf-8 -*-
import scrapy
from beautifulgirl.items import BeautifulgirlItem


class PeibanniSpider(scrapy.Spider):
    name = 'peibanni'
    allowed_domains = ['www.peibanni.com']
    start_urls = ['https://www.peibanni.com/']

    def parse(self, response):
        # Find the link to the girl-photo collection
        url = response.xpath('//*[@id="home-collections"]/ul/li[4]/div/div[1]/a/@href').extract()[0]
        for i in range(76):
            formdata = {
                'type': 'collection29',
                'paged': str(i + 1)
            }
            # POST the Ajax form data; the callback crawls one level deeper
            yield scrapy.FormRequest(url, callback=self.parse_page, formdata=formdata)

    def parse_page(self, response):
        # Collect every gallery URL on the current page
        urls = response.xpath('//*[@id="main"]/div[1]/div/div/div[1]/a/@href').extract()
        for url in urls:
            # Follow each gallery link, one level deeper
            yield scrapy.Request(url, callback=self.parse_img)

    def parse_img(self, response):
        item = BeautifulgirlItem()
        # Grab the gallery title
        name = response.xpath('//*[@id="post-single"]/h1/text()').extract()[0]
        print(name)
        item['name'] = name
        # Grab the URL of every image in the gallery
        img_urls = response.xpath('//*[@id="content-innerText"]/p/img/@src').extract()
        for img_url in img_urls:
            item['img_url'] = img_url
            print(img_url)
            yield item
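With the spider saved, it runs like any standard Scrapy project, from the project root:

scrapy crawl peibanni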
Next up is saving the image data.
In items.py:
import scrapy


class BeautifulgirlItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    name = scrapy.Field()     # title of the gallery
    img_url = scrapy.Field()  # URL of each image
In pipelines.py, subclass Scrapy's built-in ImagesPipeline to handle the downloads:
import scrapy
from scrapy.pipelines.images import ImagesPipeline

# class BeautifulgirlPipeline(object):
#     def process_item(self, item, spider):
#         return item


class BeautifulgirlPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # Runs once for every item: the spider yields an item to the engine,
        # the engine hands it to this pipeline, and the pipeline's built-in
        # methods then execute in turn for each request yielded here.
        yield scrapy.Request(url=item['img_url'], meta={'item': item})

    def file_path(self, request, response=None, info=None):
        item = request.meta['item']
        # Use the file name embedded in the URL so every image gets a unique path,
        # e.g. "https://www.peibanni.com/wp-content/uploads/2020/06/68215.jpg" -> 68215.jpg
        img_tail = request.url.split('/')[-1]
        path = u'{0}/{1}'.format(item['name'], img_tail)
        return path
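One thing to watch with this file_path: item['name'] comes straight from the page title, and a title containing a path separator such as '/' would silently create extra nested folders. A small defensive tweak sanitizes the name before using it as a directory (the unsafe character set below is my assumption, not from the original):

import re

def file_path(self, request, response=None, info=None):
    item = request.meta['item']
    # Replace characters that commonly break file and folder names (assumed unsafe set)
    safe_name = re.sub(r'[\\/:*?"<>|]', '_', item['name']).strip()
    img_tail = request.url.split('/')[-1]
    return u'{0}/{1}'.format(safe_name, img_tail)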
(screenshot: the downloaded images)
最后別忘記在setting.py中設置,
IMAGES_STORE不能拼錯,拼錯了就無法保存本地了批旺!
IMAGES_STORE = 'images'
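Note that Scrapy's ImagesPipeline also requires the Pillow library, so install it first (pip install Pillow) or the pipeline won't process images. And since 'images' is a relative path, it is resolved against the directory you launch the crawl from.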
The full settings.py:
# -*- coding: utf-8 -*-
# Scrapy settings for beautifulgirl project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'beautifulgirl'
SPIDER_MODULES = ['beautifulgirl.spiders']
NEWSPIDER_MODULE = 'beautifulgirl.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'beautifulgirl (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_3) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0.3 Safari/605.1.15',
}
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'beautifulgirl.middlewares.BeautifulgirlSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'beautifulgirl.middlewares.BeautifulgirlDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'beautifulgirl.pipelines.BeautifulgirlPipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
IMAGES_STORE = 'images'
If you think this write-up isn't half bad, give it a big thumbs-up; your support is what keeps me going!