Create the project: scrapy startproject yingke, then cd yingke
Create the spider: scrapy genspider live inke.cn
Analyze the response of http://www.inke.cn/hotlive_list.html, work out where the data sits in the response and what pattern it follows, then extract it with response.xpath() (the scrapy shell sketch after this list is one way to verify the expressions)
Clean, filter, and save the data in a pipeline
Implement paging by sending a request for the next page
Run the spider: scrapy crawl live
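A quick way to do the analysis in step 3 is to open the page in Scrapy's interactive shell and try the selectors by hand before putting them into the spider. The session below is only an illustrative sketch: the class names list_box, list_pic and list_user_name are the ones used in the spider code further down, and the page structure is assumed to still match them.

scrapy shell "http://www.inke.cn/hotlive_list.html?page=1"
>>> div_list = response.xpath("//div[@class='list_box']")
>>> len(div_list)   # number of live-stream entries found on the page
>>> div_list[0].xpath("./div[@class='list_pic']/a/img/@src").extract_first()
>>> div_list[0].xpath("./div[@class='list_user_info']/span[@class='list_user_name']/text()").extract_first()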
Note: this program saves the images to local disk directly inside the spider callback; normally you would yield the item with the yield keyword and save it in a pipeline (a minimal sketch of that approach is given at the end of this post).
# -*- coding: utf-8 -*-
import scrapy
import re


class LiveSpider(scrapy.Spider):
    name = 'live'
    allowed_domains = ['inke.cn']
    start_urls = ['http://www.inke.cn/hotlive_list.html?page=1']

    def parse(self, response):
        div_list = response.xpath("//div[@class='list_box']")
        for div in div_list:
            item = {}
            img_src = div.xpath("./div[@class='list_pic']/a/img/@src").extract_first()
            item["user_name"] = div.xpath(
                "./div[@class='list_user_info']/span[@class='list_user_name']/text()").extract_first()
            print(item["user_name"])
            yield scrapy.Request(  # request the cover image; the response is handled in parse_img
                img_src,
                callback=self.parse_img,
                meta={"item": item}
            )
        # next page
        now_page = re.findall("page=(.*)", response.request.url)[0]
        now_page = int(now_page)
        next_url = "http://www.inke.cn/hotlive_list.html?page={}".format(str(now_page + 1))
        yield scrapy.Request(
            next_url,
            callback=self.parse
        )

    def parse_img(self, response):
        user_name = response.meta["item"]["user_name"]
        # the images/ directory must exist before the crawl starts
        with open("images/{}.png".format(user_name), "wb") as f:
            f.write(response.body)
Running result:
(screenshot: 966412-20171108235525653-109786030.png)
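As the note above says, the conventional approach is to yield the item from the spider and let a pipeline do the cleaning, filtering, and saving. Below is a minimal sketch of such a pipeline; the class name YingkePipeline and the output file hotlive.jsonl are assumptions for illustration, not part of the original project.

# yingke/pipelines.py -- a minimal sketch, not the original author's code
import json

from scrapy.exceptions import DropItem


class YingkePipeline(object):
    def open_spider(self, spider):
        # output file name is an assumption; one JSON object per line
        self.file = open("hotlive.jsonl", "w", encoding="utf-8")

    def process_item(self, item, spider):
        # cleaning/filtering: drop records with no user name
        if not item.get("user_name"):
            raise DropItem("missing user_name")
        item["user_name"] = item["user_name"].strip()
        self.file.write(json.dumps(item, ensure_ascii=False) + "\n")
        return item

    def close_spider(self, spider):
        self.file.close()

To actually use it, the pipeline would have to be enabled in yingke/settings.py, e.g. ITEM_PIPELINES = {"yingke.pipelines.YingkePipeline": 300}, and the spider would have to yield item somewhere (for example from parse_img after the image is saved), since the version above only yields requests.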