Web Scraping in Practice: Day 3
Task
Scrape the detail data from the used-tablet listings on 58.com Beijing (bj.58.com), including title, price, district, and so on.
Results
Scraped data from 1,750 listing pages in total and saved it to an .xls file.
Source code
import requests
import time
from bs4 import BeautifulSoup
info = []
urls = ['http://bj.58.com/pbdn/0/pn{}/?PGTID=0d305a36-0000-1e57-ae30-de136659c08c&ClickID=2'.format(i) for i in range(1, 51)]
def get_views(view_link):
    # wb_data.url == view_link
    wb_data = requests.get(view_link)
    # If the page were fetched with urllib.request.urlopen instead,
    # .text would not be needed here
    soup = BeautifulSoup(wb_data.text, 'lxml')
    # The first three promoted listings on 58.com use a different HTML structure
    # from the rest; they can simply be skipped, and the selectors below follow
    # the structure of the ordinary listing pages.
    # Strictly speaking, a regular expression would be more robust than strip()
    # for removing the newlines and whitespace around `cate`, since strip()
    # could miss pages with other stray characters; in practice it extracted
    # every page cleanly here.
    data = {
        'cate': soup.find_all(attrs={'class': 'crb_i'})[-1].get_text().strip('\n\r\t '),
        'title': soup.select('body > div.content > div > div.box_left > div.info_lubotu.clearfix > div.box_left_top > h1')[0].get_text(),
        'price': soup.select('body > div.content > div > div.box_left > div.info_lubotu.clearfix > div.info_massege.left > div.price_li > span > i')[0].get_text(),
        'region': soup.select('body > div.content > div > div.box_left > div.info_lubotu.clearfix > div.info_massege.left > div.palce_li > span > i')[0].get_text(),
        'look_time': soup.select('body > div.content > div > div.box_left > div.info_lubotu.clearfix > div.box_left_top > p > span.look_time')[0].get_text()
    }
    print(data)
    info.append(data)
def get_links(start_link):
    wb_data = requests.get(start_link)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    view_links = soup.select('#infolist > div.infocon > table > tbody > tr > td.t > a')
    return view_links
# Because of the try statement below, pages that do not match the selectors
# above are simply skipped -- a convenience, but also a drawback
for url in urls:
    view_links = get_links(url)
    for view_link in view_links:
        try:
            get_views(view_link['href'])
        except Exception:
            pass
        time.sleep(1)
# The "xls" file is really just tab-separated text, which Excel opens fine
with open('bj58.xls', 'w') as f:
    for i in info:
        for key in i:
            try:
                f.write(key)
                f.write('\t')
                f.write(i[key])
                f.write('\t')
            except Exception:
                break
        f.write('\n')
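The hand-rolled tab-separated writer above can also be expressed with the standard-library csv module, which writes one header row instead of repeating the keys in every line. A minimal sketch, assuming rows shaped like the `data` dict built in `get_views` (the sample values here are made up):

```python
import csv
import io

# Sample rows shaped like the `data` dict built in get_views()
info = [
    {'cate': 'tablet', 'title': 'iPad mini', 'price': '1500',
     'region': 'Chaoyang', 'look_time': '23'},
    {'cate': 'tablet', 'title': 'Surface', 'price': '2200',
     'region': 'Haidian', 'look_time': '7'},
]

fieldnames = ['cate', 'title', 'price', 'region', 'look_time']
buf = io.StringIO()  # stands in for open('bj58.xls', 'w', newline='')
writer = csv.DictWriter(buf, fieldnames=fieldnames, delimiter='\t')
writer.writeheader()   # one header row instead of key\tvalue pairs
writer.writerows(info)

print(buf.getvalue())
```

DictWriter also raises a clear error on unexpected keys, rather than silently truncating a row the way the bare `except ... break` does.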
Summary
- Watch out for pages that only look identical: the first three promoted listings on the index page have a different HTML structure from the ordinary listings that follow.
- try...except is very handy for error handling in a scraper; one page that fails to parse does not stop the whole crawl.
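A silent `except: pass`, as in the crawl loop above, hides which pages failed. A small variation records the failures for later inspection instead; `parse` here is a hypothetical stand-in for `get_views`, raising IndexError on a page that does not match the selectors:

```python
# Hypothetical parser standing in for get_views(): raises IndexError
# when a page does not match the expected structure.
def parse(page):
    return {'title': page['titles'][0]}

pages = [
    {'titles': ['iPad mini']},
    {'titles': []},            # malformed page: selector finds nothing
    {'titles': ['Kindle']},
]

results, failures = [], []
for n, page in enumerate(pages):
    try:
        results.append(parse(page))
    except (IndexError, KeyError) as exc:  # catch only the expected errors
        failures.append((n, repr(exc)))    # record instead of silently passing

print(results)   # the two well-formed pages
print(failures)  # [(1, "IndexError(...)")]
```

Catching narrow exception types also keeps genuine bugs (a typo, a network stack error) from being swallowed along with the expected parse failures.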