This uses the basic spider template from Python's Scrapy framework.
Because the basic template does not follow links automatically, we have to yield Request objects ourselves to crawl pages recursively.
A few small issues come up while crawling and need to be handled:
1. The URL may contain Chinese characters (a sketch of handling this follows the list)
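A minimal sketch of one way to deal with that, assuming the troublesome segment just needs percent-encoding; the path here is hypothetical, and quote comes from the standard library:

from urllib.parse import quote

# Percent-encode the Chinese path segment; safe='/' keeps the path separators intact.
base = 'https://www.xxxcf.com/htm/'
path = '美女图片/1.htm'  # hypothetical path containing Chinese characters
url = base + quote(path, safe='/')
# -> https://www.xxxcf.com/htm/%E7%BE%8E%E5%A5%B3%E5%9B%BE%E7%89%87/1.htm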
Requirements analysis:
Top-level URLs: pages 0-10 need to be crawled:
https://www.xxxcf.com/htm/girllist10/2.htm (2.htm through 10.htm)
Second-level URLs:
Entering a top-level URL brings up a list page like this:
Each URL on that page then needs to be followed to obtain its bottom-level URL:
Entering a bottom-level URL:
The jpg image URL on this bottom-level page is the data we actually need to retrieve:
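The spider below imports FirstItem from first.items. A minimal items.py sketch, assuming the project is named first and the item only needs the link field that parse3 fills in:

# first/items.py -- minimal sketch
import scrapy

class FirstItem(scrapy.Item):
    link = scrapy.Field()  # holds the extracted jpg URL(s)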
import scrapy
from first.items import FirstItem
import urllib  # for percent-encoding Chinese URL segments (see the sketch above)
from scrapy.http import Request

class SkySpider(scrapy.Spider):
    name = "name"
    allowed_domains = ["xxxcf.com"]

    # Anti-crawling workaround: send a browser User-Agent with the request
    def start_requests(self):
        ua = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36'}
        yield Request('https://www.xxxcf.com/htm/girllist10/0.htm', headers=ua)
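One caveat: headers passed this way apply only to this one Request; the Requests yielded from parse and parse2 below go out without the browser User-Agent. A sketch of applying it project-wide instead, via Scrapy's standard USER_AGENT setting:

# first/settings.py -- sketch: send the browser User-Agent with every request
USER_AGENT = ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
              'AppleWebKit/537.36 (KHTML, like Gecko) '
              'Chrome/63.0.3239.84 Safari/537.36')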
The first parse method here enqueues a Request for every top-level URL; Scrapy's scheduler then calls parse2 on each response as it comes back:
    def parse(self, response):
        # note: range(1, 8) only covers 1.htm-7.htm; widen it to crawl all of 0-10 as described above
        for i in range(1, 8):
            url = 'https://www.xxxcf.com/htm/girllist10/' + str(i) + '.htm'
            yield Request(url, self.parse2)
In parse2 the response is the page behind one of the top-level URLs, so each of those pages is scraped again to collect its second-level URLs:
    def parse2(self, response):
        for sel in response.xpath('//li'):
            url2 = sel.xpath("a[@target='_blank']/@href").extract()
            for i in url2:
                # e.g. https://www.xxxcf.com/htm/girl10/2200.htm
                yield Request('https://www.xxxcf.com' + i, self.parse3)
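A side note: the string concatenation above works because the extracted hrefs are root-relative; Scrapy's response.urljoin is an equivalent alternative that also copes with other relative forms:

# equivalent to the concatenation in parse2, but robust to any relative href
yield Request(response.urljoin(i), self.parse3)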
In parse3 the response is the bottom-level page behind a second-level URL, so the jpg URLs are scraped from each of those pages:
    def parse3(self, response):
        for sel2 in response.xpath('//div'):
            item = FirstItem()
            # <div class="content"><br/><img src="https://com/girl/TuiGirl/110/01.jpg"/><br>
            item['link'] = sel2.xpath("img/@src").extract()
            yield item
That gives a continuous, multi-level crawl down through the URLs.
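The items above only carry image URLs. A minimal sketch of a pipeline that actually downloads them; the class name, save directory, and file naming are assumptions, while urlretrieve is standard-library:

# first/pipelines.py -- sketch: download each extracted jpg to disk
import os
import urllib.request

class FirstPipeline(object):
    def process_item(self, item, spider):
        os.makedirs('images', exist_ok=True)
        for url in item['link']:  # 'link' is a list of jpg URLs
            filename = os.path.join('images', url.split('/')[-1])
            urllib.request.urlretrieve(url, filename)
        return item

Enable it with ITEM_PIPELINES = {'first.pipelines.FirstPipeline': 300} in settings.py and run the spider with scrapy crawl name; note that these plain urllib downloads send a Python User-Agent, so the site's anti-crawling check may need the same workaround.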