Experiments with the Requests, BS4, and Re Libraries
[TOC]
1. Targeted Crawler for Chinese University Rankings
Basic workflow: fetch the content, parse it, output the results.
Functional description:
• Input: the university-ranking URL
• Output: the ranking information printed to the screen (rank, university name, total score)
Technical route: requests-bs4
Analysis of the page source:
<tbody class="hidden_zhpm" style="text-align:center;">
<tr class="alt"><td>1</td>
<td><div align="left">清華大學(xué)</div></td>
<td>北京市</td><td>95.9</td><td class="hidden-xs need-hidden indicator5">100.0</td><td class="hidden-xs need-hidden indicator6" style="display:none;">97.90%</td><td class="hidden-xs need-hidden indicator7" style="display:none;">37342</td><td class="hidden-xs need-hidden indicator8" style="display:none;">1.298</td><td class="hidden-xs need-hidden indicator9" style="display:none;">1177</td><td class="hidden-xs need-hidden indicator10" style="display:none;">109</td><td class="hidden-xs need-hidden indicator11" style="display:none;">1137711</td><td class="hidden-xs need-hidden indicator12" style="display:none;">1187</td><td class="hidden-xs need-hidden indicator13" style="display:none;">593522</td></tr><tr><td>2</td>
As the excerpt shows, the ranking table sits under the <tbody> tag, and the <td> tags in each row hold the rank, name, location, and total score, so we only need to extract the contents of the corresponding tags using BS4's find_all function, as the standalone sketch below demonstrates.
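To see the extraction in isolation, here is a minimal sketch that parses just the sample row above (the sample string is trimmed from the page-source excerpt; the full script follows):

import bs4
from bs4 import BeautifulSoup

# Parse the sample row shown above and pull out the rank, name, and score cells.
sample = ('<tbody><tr class="alt"><td>1</td>'
          '<td><div align="left">清華大學(xué)</div></td>'
          '<td>北京市</td><td>95.9</td></tr></tbody>')
soup = BeautifulSoup(sample, 'html.parser')
for tr in soup.find('tbody').children:
    if isinstance(tr, bs4.element.Tag):       # skip NavigableString children such as newlines
        tds = tr.find_all('td')
        print(tds[0].string, tds[1].string, tds[3].string)   # 1 清華大學(xué) 95.9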
The main function:

import requests
import bs4
from bs4 import BeautifulSoup

def main():
    uinfo = []
    url = 'http://www.zuihaodaxue.cn/zuihaodaxuepaiming2016.html'
    html = getHTMLText(url)
    fillUnivList(uinfo, html)
    printUnivList(uinfo, 20)   # print the top 20 universities
The getHTMLText function:

def getHTMLText(url):
    # Fetch the page at the given URL; return its text, or '' on any failure.
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()                 # raise an exception for non-200 responses
        r.encoding = r.apparent_encoding     # guess the real encoding from the content
        return r.text
    except:
        return ''
The fillUnivList function:

def fillUnivList(ulist, html):
    soup = BeautifulSoup(html, 'html.parser')
    for tr in soup.find('tbody').children:       # iterate over the child nodes
        if isinstance(tr, bs4.element.Tag):      # keep only Tag children
            tds = tr('td')                       # shorthand for tr.find_all('td')
            # tds[2] is the province, so the total score is in tds[3]
            ulist.append([tds[0].string, tds[1].string, tds[3].string])
The printUnivList function:

def printUnivList(ulist, num):
    # chr(12288) is the full-width CJK space: used as the fill character, it
    # keeps the columns aligned because it is as wide as a Chinese character.
    tmp = '{0:^10}\t{1:{3}^10}\t{2:^10}'
    print(tmp.format("排名", "學(xué)校名稱", "分?jǐn)?shù)", chr(12288)))   # formatted header
    for i in range(num):
        u = ulist[i]
        print(tmp.format(u[0], u[1], u[2], chr(12288)))
    print("suc" + str(num))   # report how many records were printed
2. Targeted Crawler for Taobao Price Comparison
Functional description:
• Goal: fetch Taobao search result pages and extract product names and prices
• Contents:
  • the Taobao search interface (URL format)
  • handling pagination
Technical route: requests-bs4-re
Checking the site's robots.txt shows that it disallows all crawlers. Still, since this script only accesses the site a handful of times, at a human-like request volume, we do not strictly follow the protocol for this exercise.
In addition, because the Taobao pages use anti-crawling measures, getHTML sets the User-Agent header to that of a browser (see the sketch below).
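The getHTML helper is called throughout but never defined in these notes; here is a minimal sketch, assuming it mirrors getHTMLText from section 1 with a browser-style User-Agent added (the header value is an illustrative assumption):

import requests

def getHTML(url):
    # Assumed helper: like getHTMLText, but with a browser-style User-Agent
    # so that Taobao's anti-crawling check treats the request as a browser's.
    headers = {'user-agent': 'Mozilla/5.0'}   # any common browser UA string works
    try:
        r = requests.get(url, headers=headers, timeout=30)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except:
        return ''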
The main function:

def main():
    goods = 'sd卡'
    depth = 3                                             # number of result pages
    start_url = 'http://s.taobao.com/search?q=' + goods   # keyword search interface
    uli = []
    for i in range(depth):
        try:
            url = start_url + '&s=' + str(44 * i)   # each result page holds 44 items
            html = getHTML(url)
            fillList(uli, html)
        except:
            print('error')
            continue
    printList(uli)
The fillList function:

import re

def fillList(uli, html):
    # Build the list of product titles and prices from the page text.
    try:
        tlt = re.findall(r'\"raw_title\"\:\".*?\"', html)        # non-greedy (minimal) match
        plt = re.findall(r'\"view_price\"\:\"[\d\.]*\"', html)
        for i in range(len(tlt)):
            title = eval(tlt[i].split(':')[1])    # eval strips the surrounding quotes
            price = eval(plt[i].split(':')[1])
            uli.append([price, title])
    except:
        print('')   # swallow parse errors silently, as in the original
The printList function:

def printList(uli):
    # Print the product list as a numbered table.
    tplt = '{:4}\t{:8}\t{:16}'
    print(tplt.format('序號(hào)', '價(jià)格', '名稱'))
    count = 0
    for u in uli:
        count = count + 1
        print(tplt.format(count, u[0], u[1]))
3. Targeted Crawler for Stock Data
This experiment combines two sites: all stock codes are first scraped from the East Money list page, and each code is then queried through the Baidu Stocks interface, with the per-stock information written to a file.
The main function:

import re
import traceback
import requests
from bs4 import BeautifulSoup

def main():
    # Reuses the getHTML helper sketched in section 2.
    stock_list_url = 'http://quote.eastmoney.com/stocklist.html'
    stock_info_url = 'https://gupiao.baidu.com/stock/'
    output_file = 'D:/BaiduStockInfo.txt'
    slist = []
    print('start...')
    getStockList(slist, stock_list_url)
    getStockInfo(slist, stock_info_url, output_file)
The getStockList function:

def getStockList(lst, stockURL):
    html = getHTML(stockURL)
    soup = BeautifulSoup(html, 'html.parser')
    a = soup.find_all('a')                     # every link on the list page
    for i in a:
        try:
            href = i.attrs['href']
            lst.append(re.findall(r'[s][hz]\d{6}', href)[0])   # keep sh/sz + 6-digit codes
        except:
            continue                           # skip links without a usable code
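A quick sketch of what the regex keeps, assuming the list-page links embed codes like sh600000 (the sample href is illustrative):

import re

href = 'http://quote.eastmoney.com/sh600000.html'   # illustrative link format
print(re.findall(r'[s][hz]\d{6}', href))            # ['sh600000']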
The getStockInfo function:

def getStockInfo(lst, stockURL, fpath):
    for stock in lst:                            # iterate over all stock codes
        url = stockURL + stock + '.html'         # build the Baidu Stocks URL
        html = getHTML(url)
        try:
            if html == '':
                continue
            infoDict = {}
            soup = BeautifulSoup(html, 'html.parser')
            stockinfo = soup.find('div', attrs={'class': 'stock-bets'})
            name = stockinfo.find('a', attrs={'class': 'bets-name'})
            infoDict.update({'股票名稱': name.text.split()[0]})   # add the stock name
            keylist = stockinfo.find_all('dt')    # field labels
            valuelist = stockinfo.find_all('dd')  # field values, in matching order
            for i in range(len(keylist)):
                key = keylist[i].text
                value = valuelist[i].text
                infoDict[key] = value
            with open(fpath, 'a', encoding='utf-8') as f:
                f.write(str(infoDict) + '\n')
        except:
            traceback.print_exc()                 # print the error but keep going
            continue
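The key/value loop relies on the page pairing each <dt> label with the following <dd> value; here is a minimal sketch on a fabricated fragment (the HTML string and field names are assumptions):

from bs4 import BeautifulSoup

sample = ('<div class="stock-bets">'
          '<dt>今開</dt><dd>10.20</dd>'
          '<dt>成交量</dt><dd>12.5萬手</dd></div>')
soup = BeautifulSoup(sample, 'html.parser')
keys = soup.find_all('dt')        # labels
values = soup.find_all('dd')      # values, in matching order
for i in range(len(keys)):
    print(keys[i].text, ':', values[i].text)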