According to the Scrapy official documentation (http://doc.scrapy.org/en/master/topics/practices.html#avoiding-getting-banned), the main strategies for keeping a Scrapy crawler from getting banned are:
- Rotate the user agent dynamically
- Disable cookies (or enable them only when needed)
- Set a download delay
- Use Google cache (not covered here)
- Use a pool of IP addresses (Tor project, VPNs, and proxy IPs)
- Use the third-party platform Crawlera to avoid blocking (not covered here)
Dynamically setting the user agent
# -*- coding: utf-8 -*-
import random

def get_headers():
    """Return a headers dict with a randomly chosen User-Agent."""
    useragent_list = [
        'Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
        'Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11',
        'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Maxthon/4.9.2.1000 Chrome/39.0.2146.0 Safari/537.36',
        'Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/532.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/532.3',
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5',
        'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.81 Safari/537.36',
    ]
    useragent = random.choice(useragent_list)
    header = {'User-Agent': useragent}
    return header
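Note that the RandomUserAgent downloader middleware shown later in this post pulls its pool from a USER_AGENTS list in settings.py rather than calling this function. A minimal sketch of that setting, reusing a few of the strings above:

# settings.py
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.81 Safari/537.36',
]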
Disabling/enabling cookies
Cookies (from Wikipedia): because HTTP is a stateless protocol, the server does not know what the user did on previous requests, which seriously hinders interactive web applications. In a typical online-shopping scenario, a user browses several pages and puts a box of biscuits and two drinks in the cart; at checkout, because of HTTP's statelessness, the server has no way of knowing what the user bought unless extra measures are taken. Cookies are one such "extra measure" for working around HTTP's statelessness: the server can set or read the information carried in cookies, and thereby maintain state across the user's session with the server.
Another typical use of cookies is logging in to a website. The site asks for a username and password, and often offers a "keep me logged in" checkbox. If it is checked, then on the next visit to the same site the user finds they are already logged in without entering anything. This is because, on the first login, the server sent a cookie containing the login credentials (some encrypted form of the username and password) to the user's disk. On the next visit, if that cookie has not expired, the browser sends it back, the server verifies the credentials, and the user is logged in without retyping them.
Use selenium + PhantomJS to simulate a browser login to Lagou and obtain the cookies:
# -*- coding: utf-8 -*-
import time
import random
from selenium import webdriver

def random_sleep_time():
    """Sleep 0-10 seconds at random to mimic human pauses."""
    time.sleep(random.randint(0, 10))

def get_headers_with_cookie():
    # Download PhantomJS, unpack it, and point executable_path at the binary
    driver = webdriver.PhantomJS(executable_path=r'D:\phantomjs-2.1.1-windows\bin\phantomjs.exe')
    url_login = 'https://passport.lagou.com/login/login.html'
    driver.get(url_login)
    driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[1]/input').clear()
    driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[1]/input').send_keys('username')  # replace with a valid account
    random_sleep_time()
    driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[2]/input').clear()
    driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[2]/input').send_keys('password')  # replace with a valid password
    random_sleep_time()
    driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[5]/input').click()
    random_sleep_time()
    cookies = '; '.join(item['name'] + '=' + item['value'] for item in driver.get_cookies())
    headers = get_headers()  # from the user-agent snippet above
    headers['cookie'] = cookies
    return headers
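As a usage sketch: assuming the login above succeeded and get_headers_with_cookie is importable from the spider module, the returned headers can be attached to a request (the spider name and start URL here are placeholders):

import scrapy

class LagouSpider(scrapy.Spider):
    name = 'lagou'  # placeholder name

    def start_requests(self):
        headers = get_headers_with_cookie()  # defined above; assumed importable here
        yield scrapy.Request('https://www.lagou.com/', headers=headers, callback=self.parse)

    def parse(self, response):
        self.logger.info('fetched %s', response.url)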
XPath parsing: the Copy XPath trick
Reference: 向右奔跑 - 009 - 使用XPath解析網(wǎng)頁(yè) (Using XPath to parse web pages). In Chrome DevTools, right-click the target element and choose Copy → Copy XPath to obtain absolute paths like the ones used above:
driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[1]/input')
driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[2]/input')
driver.find_element_by_xpath('/html/body/section/div[1]/form/div/div[5]/input')
Disabling cookies in Scrapy
Set in settings.py:
COOKIES_ENABLED = False
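If only certain spiders should run without cookies, Scrapy also supports a per-spider override through the custom_settings class attribute; a minimal sketch (the spider name is a placeholder):

import scrapy

class NoCookieSpider(scrapy.Spider):
    name = 'nocookie'  # placeholder
    custom_settings = {
        'COOKIES_ENABLED': False,  # overrides settings.py for this spider only
    }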
Proxy settings: PROXIES
Set in settings.py:
PROXIES = [
    {'ip_port': '111.11.228.75:80', 'user_pass': ''},
    {'ip_port': '120.198.243.22:80', 'user_pass': ''},
    {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
    {'ip_port': '101.71.27.120:80', 'user_pass': ''},
    {'ip_port': '122.96.59.104:80', 'user_pass': ''},
    {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
]
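Public proxies like these go stale quickly, so it is worth filtering the list before a crawl. A minimal liveness check, assuming the requests library is available (httpbin.org is just a convenient echo endpoint):

import requests

def alive_proxies(proxies, timeout=5):
    """Return only the proxies that can fetch a test URL within the timeout."""
    good = []
    for p in proxies:
        try:
            requests.get('http://httpbin.org/ip',
                         proxies={'http': 'http://' + p['ip_port']},
                         timeout=timeout)
            good.append(p)
        except requests.RequestException:
            pass  # dead or unreachable proxy; skip it
    return good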
Setting the download delay
Set in settings.py:
DOWNLOAD_DELAY = 3
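Two related settings are worth knowing: RANDOMIZE_DOWNLOAD_DELAY (enabled by default) multiplies each wait by a random factor between 0.5 and 1.5 so the request rhythm is less mechanical, and the AutoThrottle extension adapts the delay to the server's response times. For example, alongside DOWNLOAD_DELAY in settings.py:

RANDOMIZE_DOWNLOAD_DELAY = True  # default; actual waits vary between 1.5 and 4.5 seconds
AUTOTHROTTLE_ENABLED = True      # let Scrapy adjust the delay based on latency
AUTOTHROTTLE_START_DELAY = 3
AUTOTHROTTLE_MAX_DELAY = 10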
Create the middleware (middlewares.py):
import base64
import random

from myproject.settings import PROXIES  # adjust the import to your project package name


class RandomUserAgent(object):
    """Randomly rotate user agents based on a list of predefined ones."""

    def __init__(self, agents):
        self.agents = agents

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings.getlist('USER_AGENTS'))

    def process_request(self, request, spider):
        request.headers.setdefault('User-Agent', random.choice(self.agents))


class ProxyMiddleware(object):
    """Route each request through a random proxy from the PROXIES setting."""

    def process_request(self, request, spider):
        proxy = random.choice(PROXIES)
        request.meta['proxy'] = "http://%s" % proxy['ip_port']
        if proxy['user_pass']:  # an empty string means the proxy needs no auth
            encoded_user_pass = base64.b64encode(proxy['user_pass'].encode()).decode()
            request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
            print("**************ProxyMiddleware with auth************ " + proxy['ip_port'])
        else:
            print("**************ProxyMiddleware no auth************ " + proxy['ip_port'])
Configuring the downloader middlewares
Also in settings.py. Lower order values sit closer to the engine, so RandomUserAgent (1) and ProxyMiddleware (100) process each request before the built-in HttpProxyMiddleware (110):
DOWNLOADER_MIDDLEWARES = {
    # 'myproject.middlewares.MyCustomDownloaderMiddleware': 543,
    'myproject.middlewares.RandomUserAgent': 1,
    'myproject.middlewares.ProxyMiddleware': 100,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
    # before Scrapy 1.0 the built-in lived at 'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware'
}