The Taobao product-search crawler taught on 中国大学MOOC can no longer scrape anything, because Taobao has changed things on its end.
For example, all you get is the table header:
This is the problem I ran into while studying Professor 嵩天's code from the course.
The original code is as follows:
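The screenshot is not reproduced here, but the result is easy to describe: the script prints only the header row, with no product rows underneath, roughly like this:

序號    價格        商品名稱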
import requests
import re

def getHTMLText(url):
    # Fetch the page and return its text, or an empty string on any failure
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except:
        return ""

def parsePage(ilt, html):
    # Extract prices and titles from the JSON embedded in the search page
    try:
        plt = re.findall(r'"view_price":"[\d+.]*"', html)
        tlt = re.findall(r'"raw_title":".*?"', html)
        for i in range(len(plt)):
            price = eval(plt[i].split(':')[1])
            title = eval(tlt[i].split(':')[1])
            ilt.append([price, title])
    except:
        print("F")

def printGoodsList(ilt):
    tplt = "{:4}\t{:8}\t{:16}"
    print(tplt.format("序號", "價格", "商品名稱"))  # No. / price / product title
    count = 0
    for g in ilt:
        count = count + 1
        print(tplt.format(count, g[0], g[1]))

def main():
    goods = '書包'  # search keyword (backpack)
    depth = 2       # number of result pages to crawl
    start_url = "https://s.taobao.com/search?q=" + goods
    infoList = []
    for i in range(depth):
        try:
            url = start_url + '&s=' + str(44 * i)  # 44 items per result page
            html = getHTMLText(url)
            parsePage(infoList, html)
        except:
            continue
    printGoodsList(infoList)

main()
This code used to work for scraping Taobao product listings, but Taobao has since upgraded its anti-scraping measures, so a crawler can no longer stroll in and out as it pleases.
So, to crawl Taobao, you first have to teach your crawler to disguise itself.
Simply put, for this exercise we need to supply our own referer and cookie in the request headers; once the crawler wears that new identity, it can fetch the Taobao data we want.
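As a minimal sketch of what that disguise looks like (the referer and cookie below are placeholders, not working values; the steps that follow show where to copy the real ones from your browser):

import requests

# Placeholder headers -- replace referer and cookie with the values copied from your own browser
headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36',
    'referer': 'https://s.taobao.com/',      # placeholder
    'cookie': 'paste-your-own-cookie-here',  # placeholder
}
r = requests.get("https://s.taobao.com/search?q=書包", headers=headers, timeout=30)
print(r.status_code)  # a 200 with a stale cookie may still be a login page, so check the content too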
Here is how to do it in practice:
1. First open the Taobao page and search for 書包 (backpack).
Sometimes you must be logged in before you can search, sometimes not; either way it does not affect the crawler.
2. Then press F12 and work through the developer tools in this order: Network → All → right-click the search request → Copy → Copy as cURL (bash).
3. Paste what you copied into the "curl command" box at https://curl.trillworks.com/.
4. Copy the headers = {...} part from the "python requests" box; it looks like this:
headers = {
    'authority': 's.taobao.com',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36',
    'sec-fetch-dest': 'document',
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'sec-fetch-site': 'same-origin',
    'sec-fetch-mode': 'navigate',
    'sec-fetch-user': '?1',
    'referer': '***********',
    'accept-language': 'zh-CN,zh;q=0.9',
    'cookie': '***********',
}
(I have masked the referer and cookie values in the headers above; just paste in your own headers = {...} exactly as copied.)
5. Finally, work it into the original program, like this:
import requests
import re

def getHTMLText(url):
    try:
        # Headers copied from the browser via the curl converter;
        # the masked referer and cookie must be replaced with your own values
        header = {
            'authority': 's.taobao.com',
            'upgrade-insecure-requests': '1',
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36',
            'sec-fetch-dest': 'document',
            'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
            'sec-fetch-site': 'same-origin',
            'sec-fetch-mode': 'navigate',
            'sec-fetch-user': '?1',
            'referer': '**********',
            'accept-language': 'zh-CN,zh;q=0.9',
            'cookie': '***********',
        }
        r = requests.get(url, headers=header)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except:
        return ""

def parsePage(ilt, html):
    try:
        plt = re.findall(r'"view_price":"[\d+.]*"', html)
        tlt = re.findall(r'"raw_title":".*?"', html)
        for i in range(len(plt)):
            price = eval(plt[i].split(':')[1])
            title = eval(tlt[i].split(':')[1])
            ilt.append([price, title])
    except:
        print("F")

def printGoodsList(ilt):
    tplt = "{:4}\t{:8}\t{:16}"
    print(tplt.format("序號", "價格", "商品名稱"))
    count = 0
    for g in ilt:
        count = count + 1
        print(tplt.format(count, g[0], g[1]))

def main():
    goods = '書包'
    depth = 2
    start_url = "https://s.taobao.com/search?q=" + goods
    infoList = []
    for i in range(depth):
        try:
            url = start_url + '&s=' + str(44 * i)
            html = getHTMLText(url)
            parsePage(infoList, html)
        except:
            continue
    printGoodsList(infoList)

main()
Besides adding the header, also remember to change
r = requests.get(url, timeout=30)
into
r = requests.get(url, headers=header)
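(A small aside of my own: requests.get accepts both arguments, so you can keep the 30-second timeout while adding the headers.)

r = requests.get(url, headers=header, timeout=30)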
Run it.
Success!
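One final sanity check, my own addition on top of the modified getHTMLText above: parsePage depends on the "view_price" field being embedded in the HTML, so you can confirm the disguise worked before crawling multiple pages.

html = getHTMLText("https://s.taobao.com/search?q=書包")
print('"view_price"' in html)  # True: the product data is in the page and parsePage will find it
# False usually means Taobao returned a login/verification page, so copy a fresh cookie and try again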