Today's crawler scrapes product listings from a certain site. The tricky part is the page view count: not only does the Referer header have to be forged, but the count also cannot be scraped straight from the page, or it will come back as 0. The value is filled in by JavaScript; if you use Chrome, you can find the request that controls it under the Network tab:
http://jst1.58.com/counter?infoid={}
The view count is fetched via this infoid, which is part of the listing's URL, so it has to be extracted from the URL first.
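In isolation, that step might look like the following minimal sketch (the helper name fetch_views and the regex are illustrative additions, not part of the original code; headers is the same placeholder dict used below):

import re
import requests

def fetch_views(item_url, headers):
    # Listing URLs end in '<infoid>x.shtml'; capture the digits
    match = re.search(r'/(\d+)x\.shtml', item_url)
    if match is None:
        return None
    api = 'http://jst1.58.com/counter?infoid={}'.format(match.group(1))
    # The counter replies with a short name=value snippet,
    # so the count is whatever follows the last '='
    return requests.get(api, headers=headers).text.split('=')[-1]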
The full code is as follows:
from bs4 import BeautifulSoup
import requests
import time

# Placeholders: fill in your own User-Agent, Referer and Cookie
headers = {
    'User-Agent': 'xxxxx',
    'Referer': 'xxxxx',
    'Cookie': 'xxxxx'
}
# Build the URLs of the list pages to crawl
def get_pages_num(who_sells, page_num):
    # Use a separate loop variable so it does not shadow the page_num argument
    base_urls = ['http://cd.58.com/taishiji/{}/pn{}'.format(who_sells, page)
                 for page in range(1, page_num + 1)]
    return base_urls
# Collect the listing links from every list page
def get_links_from(who_sells, page_num):
    base_urls = get_pages_num(who_sells, page_num)
    links = []
    for url in base_urls:
        time.sleep(1)
        r = requests.get(url, headers=headers).text
        soup = BeautifulSoup(r, 'lxml')
        for link in soup.select('td.t > a'):
            # Drop the query string and keep only URLs of the fixed length
            # of a normal listing, which weeds out other kinds of links
            href = link.get('href').split('?')[0]
            if len(href) == 46:
                links.append(href)
    return links
# Fetch the view count from the counter API
def get_views(url):
    # The last URL segment is '<infoid>x.shtml'; split off the suffix
    # (str.strip would remove a *set* of characters, which is fragile)
    id_num = url.split('/')[-1].split('x.shtml')[0]
    api = 'http://jst1.58.com/counter?infoid={}'.format(id_num)
    js = requests.get(api, headers=headers)
    # The count is the text after the last '=' in the response
    views = js.text.split('=')[-1]
    return views
# Scrape the details of each listing
def get_item_info(who_sells=0, page_num=1):
    urls = get_links_from(who_sells, page_num)
    for url in urls:
        time.sleep(2)
        r = requests.get(url, headers=headers)
        soup = BeautifulSoup(r.text, 'lxml')
        title = soup.title.text
        price = soup.find_all('span', 'price c_f50')[0].text
        area = list(soup.select('.c_25d')[-1].stripped_strings)
        # Keep the posting date in its own variable instead of reusing 'data'
        date = soup.select('li.time')[0].text
        data = {
            'title': title,
            'price': price,
            'date': date,
            # Listings without an area only have one '.c_25d' element
            'area': ''.join(area) if len(soup.select('.c_25d')) == 2 else None,
            'cate': '個(gè)人' if who_sells == 0 else '商家',  # seller type from who_sells ('個(gè)人' = individual, '商家' = merchant)
            'views': get_views(url)
        }
        print(data)
get_item_info(page_num=3)
The function takes two parameters: who_sells selects the seller type (0 for individuals, 1 for merchants), and page_num sets how many list pages to crawl.
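For example, with the code above, the following calls scrape five pages of merchant listings and one page of individual listings respectively:

get_item_info(who_sells=1, page_num=5)  # merchants, first 5 list pages
get_item_info()                         # defaults: individual sellers, 1 page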