My first project completed on my own after a week of learning web scraping: scraping tablet listings from 58同城.
1、Determine the URL and scrape the required fields from the detail page
The detail page to scrape is http://zhuanzhuan.58.com/detail/762548881638506498z.shtml . From it we need the item's title, view count, price, and location. The following code extracts these fields and prints them:
import requests
from bs4 import BeautifulSoup

url = 'http://zhuanzhuan.58.com/detail/762548881638506498z.shtml'
wb_data = requests.get(url)
soup = BeautifulSoup(wb_data.text, 'lxml')

title = soup.title.text                       # item title
price = soup.select('span.price_now > i')     # current price
city = soup.select('.palce_li > span > i')    # location
browse = soup.select('.look_time')            # view count

data = {
    'title': title,
    'price': price[0].text,
    'city': city[0].text,
    'browse': browse[0].text
}
print(data)
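Note that if an item has been taken down or the page layout changes, select() can return an empty list and price[0].text will raise an IndexError. A small defensive sketch that continues from the snippet above (the helper name pick_text is my own addition, not part of the original code):

def pick_text(result_set, default=''):
    # Return the text of the first matched element, or a default when nothing matched.
    return result_set[0].text if result_set else default

data = {
    'title': title,
    'price': pick_text(price),
    'city': pick_text(city),
    'browse': pick_text(browse)
}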
2、Extract every item link from each listing page
First, look at how the site paginates its listings. In the URL http://bj.58.com/pbdn/pn2 the number 2 means page 2, so we can plug in different numbers to fetch the corresponding pages and then pull out every item link on each page:
urls = ['http://bj.58.com/pbdn/pn{}'.format(num) for num in range(1, 10)]

for url in urls:
    wb_data = requests.get(url)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    links = soup.select('tr.zzinfo > td.img > a')
    for link in links:
        # Drop the query string so only the clean detail-page URL is kept
        href = link.get('href').split('?')[0]
3、Scrape the item details from all pages
The steps above give us all the item links, so we can now scrape the details of each item. Wrapping each part of the code into a function and calling the functions in turn completes the job.
To make the requests look less like a crawler, browser headers (User-Agent and Cookie) are added and a delay is inserted between requests. The combined code below scrapes every item.
The full source code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2016/8/5 10:12
# @Author : flyme
# @Site :
# @File : homework1.py
# @Software: PyCharm Community Edition
import time
from bs4 import BeautifulSoup
import requests
import json
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36',
    'Cookie': 'f=n; f=n; id58=c5/nn1eVynx77ecIKaUkAg==; als=0; myfeet_tooltip=end; bj58_id58s="QWxzSUIwVjVyT210NDk2Nw=="; bdshare_firstime=1470115696241; bangbigtip2=1; 58home=sh; __utma=253535702.1191797781.1469434512.1470108831.1470209645.3; __utmz=253535702.1470209645.3.3.utmcsr=sh.58.com|utmccn=(referral)|utmcmd=referral|utmcct=/; bangtoptipclose=1; city=bj; ipcity=sh%7C%u4E0A%u6D77; sessionid=38925ed6-e5d5-4fad-bb41-047d705569a9; final_history=21972416366734%2C26851497575235%2C26727826953024%2C26097540789057%2C26062681492781; f=n; bj58_new_session=1; bj58_init_refer=""; bj58_new_uv=9; 58tj_uuid=1cc76a99-48fd-4337-b2c2-f0788d3b59c5; new_session=0; new_uv=11; utm_source=; spm=; init_refer='
}
# Get every item link on one listing page
def get_links(url):
    wb_data = requests.get(url, headers=headers)
    time.sleep(2)  # crawl delay between page requests
    soup = BeautifulSoup(wb_data.text, 'lxml')
    links = soup.select('tr.zzinfo > td.img > a')
    for link in links:
        href = link.get('href').split('?')[0]
        key = 'zhuanzhuan'
        # Regular items link to zhuanzhuan.58.com detail pages; skip promoted entries whose links differ
        if key in href:
            get_detail_info(href)
# Get the content of a detail page
def get_detail_info(url):
    wb_data = requests.get(url)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    title = soup.title.text
    price = soup.select('span.price_now > i')
    city = soup.select('.palce_li > span > i')
    browse = soup.select('.look_time')
    data = {
        'title': title,
        'price': price[0].text,
        'city': city[0].text,
        'browse': browse[0].text
    }
    print(data)
    save_to_text(data)
# Save a record to a text file, one JSON object per line
def save_to_text(content):
    content = json.dumps(content, ensure_ascii=False)
    with open('58.txt', 'a', encoding='utf-8') as f:
        f.write(content)
        f.write('\r\n')
urls = ['http://bj.58.com/pbdn/pn{}'.format(num) for num in range(1, 10)]
# Take the listing-page URLs out of the list one by one
for single_url in urls:
    # Pass each listing-page URL to get_links, which finds and processes its detail-page links
    get_links(single_url)
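Because save_to_text writes one JSON object per line, the saved file can be read back later, for example like this (a small sketch, not part of the original script; it only assumes the 58.txt file produced above):

import json

records = []
with open('58.txt', encoding='utf-8') as f:
    for line in f:
        line = line.strip()
        if line:  # skip any blank lines left by the '\r\n' separators
            records.append(json.loads(line))
print(len(records), 'items loaded')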
Summary:
1、When extracting the item details, pay attention to how each field is fetched: analyse the structure of the content you are scraping and use the simplest method that works.
2、When collecting the item links, watch out for promoted listings mixed into the results: compare how their links differ from the regular ones and filter the promoted entries out accordingly (see the sketch below).
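A minimal sketch of that filtering idea, pulled out of get_links into a reusable helper; it reuses the assumption made above that regular items link to zhuanzhuan.58.com detail pages while promoted entries use different URLs, and the helper name split_item_links is my own:

def split_item_links(soup):
    # Separate regular item links from promoted ones so the differences can be inspected.
    regular, promoted = [], []
    for link in soup.select('tr.zzinfo > td.img > a'):
        href = link.get('href').split('?')[0]
        (regular if 'zhuanzhuan' in href else promoted).append(href)
    return regular, promoted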