Collecting Web Page Data and Related Libraries
- In Python, we collect and download web data through networking libraries.
[Figure: overview of networking libraries]
- The urllib and requests libraries: both are commonly used libraries for the HTTP protocol.
- The BeautifulSoup library: a library for parsing XML and HTML documents.
from urllib import request

url = "http://www.baidu.com"
# open the URL with a 2-second timeout and print the decoded HTML
response = request.urlopen(url, timeout=2)
print(response.read().decode('utf-8'))
- In Python, conversion between strings and bytes is done with encode(), decode(), bytes() and str(). Python 3 strictly separates the string and bytes types: what actually travels over the network at the lowest level is bytes, while what we "read" at the application level is a string, and many low-level functions only accept bytes. This inevitably creates the need to convert between the two, and the functions just mentioned are what perform that conversion.
- To convert a string into bytes, use bytes() or encode():
s = '你好'
b1 = s.encode('utf-8')           # encode the string to bytes using UTF-8
b2 = bytes(s, encoding='utf-8')  # same result as the line above
- To convert bytes back into a string:
b1.decode('utf-8')         # decode the bytes back to a string
str(b2, encoding='utf-8')  # same result as the line above
- In short, encode()/bytes() go from string to bytes, and decode()/str() go back from bytes to string.
The two common request methods for web pages: GET and POST
http://httpbin.org/ is a website for testing HTTP requests.
from urllib import request
from urllib import parse

# GET request
getresponse = request.urlopen("http://httpbin.org/get", timeout=1)
print(getresponse.read().decode('utf-8'))

# POST request: urlencode the form data and convert it to bytes
data = bytes(parse.urlencode({'world': 'hello'}), encoding='utf-8')
postresponse = request.urlopen("http://httpbin.org/post", data=data)
print(postresponse.read().decode('utf-8'))
For network requests you should normally set a timeout; otherwise a stalled request will block forever. Here we set timeout=0.1 s to simulate a timeout and catch the resulting exception with try/except.
import socket
import urllib.error

try:
    response2 = request.urlopen("http://httpbin.org/get", timeout=0.1)
    print(response2.read().decode('utf-8'))
except urllib.error.URLError as e:
    # check whether the exception was caused by a socket timeout
    if isinstance(e.reason, socket.timeout):
        print("time out")
Simulating HTTP header information
When we request web content with urllib, the request is sometimes rejected. Websites add checks to stop users from harvesting data abusively, mainly by verifying whether the request comes from a normal browser, and the request headers sent by urllib differ from those sent by a browser. urllib identifies the client as "User-Agent": "Python-urllib/3.6", while a browser identifies itself with something like "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36".
So when making a request we can submit not only the URL but also a set of HTTP headers that imitate a browser.
from urllib import request, parse

url = 'http://httpbin.org/post'
headers = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
    # "Accept-Encoding": "gzip, deflate",
    "Accept-Language": "zh-CN,zh;q=0.8",
    "Connection": "close",
    "Cookie": "_gauges_unique_hour=1; _gauges_unique_day=1; _gauges_unique_month=1; _gauges_unique_year=1; _gauges_unique=1",
    "Referer": "http://httpbin.org/",
    "Upgrade-Insecure-Requests": "1",
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36 LBBROWSER"
}
# form data to POST, urlencoded and converted to bytes
form = {
    'name': 'value'
}
data = bytes(parse.urlencode(form), encoding='utf8')
# build a Request object that carries the simulated browser headers
req = request.Request(url=url, data=data, headers=headers, method='POST')
response = request.urlopen(req)
print(response.read().decode('utf-8'))
Note: in the example above the header "Accept-Encoding": "gzip, deflate" is commented out; otherwise the following error is raised:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
The Accept-Encoding header is sent by the browser to the server to declare which content encodings the client supports, i.e. whether the "text content" of the response may be compressed in transit; gzip and deflate are compression algorithms understood by both client and server. Why does the UnicodeDecodeError occur? Because when Python reads the page as UTF-8 by default, it finds gibberish (the body is still compressed) and decoding fails, just as a .txt file that has been compressed with RAR shows garbage when opened in Notepad. The conclusion is that if you advertise 'Accept-Encoding: gzip, deflate', you must decompress the response body yourself before decoding it with the page's character encoding.
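As a minimal sketch of that decompression step (assuming the httpbin.org/gzip endpoint, which returns a gzip-compressed body; this example is not part of the original text), the standard-library gzip module can decompress the body before decoding:
import gzip
from urllib import request

# explicitly allow a compressed response
req = request.Request("http://httpbin.org/gzip", headers={"Accept-Encoding": "gzip"})
response = request.urlopen(req)
raw = response.read()
# decompress only if the server actually sent a gzip-encoded body
if response.headers.get("Content-Encoding") == "gzip":
    raw = gzip.decompress(raw)
print(raw.decode('utf-8'))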
Basic usage of the Requests library
requests is a third-party library that wraps urllib and makes GET and POST requests more convenient.
- Installation
It is also installed via pip: pip install requests
- Making network requests with requests
import requests

# GET request
url = 'http://httpbin.org/get'
data = {'key': 'value', 'abc': 'xyz'}
# .get sends a GET request; a dict can be passed directly without any extra processing
response = requests.get(url, data)
print(response.text)

# POST request
url = 'http://httpbin.org/post'
data = {'key': 'hello', 'text': 'world'}
# .post sends a POST request
response = requests.post(url, data)
# the response can be read as text or parsed as JSON
print(response.json())
Using requests together with regular expressions to scrape image links
import requests
import re

content = requests.get('http://www.cnu.cc/discoveryPage/hot-人像').text
# print(content)
# The HTML to be matched looks like this:
# <div class="grid-item work-thumbnail">
# <a class="thumbnail" target="_blank">
# <div class="title">四十七</div>
# <div class="author">拍照的古德卡特</div>
# re.S (DOTALL mode) makes '.' also match newlines; the content we want to match
# spans several lines, so with re.S the trailing \n of each line is treated like
# an ordinary character instead of breaking the match
pattern = re.compile(r'<a href="(.*?)".*?title">(.*?)</div>', re.S)
results = re.findall(pattern, content)
print(results)
for result in results:
    url, name = result
    # '\s' matches whitespace characters: newlines, spaces, and so on
    print(url, re.sub(r'\s', '', name))
Installing and using Beautiful Soup
- Installation
Install it via pip:
pip install bs4
- Example
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a class="sister" id="link1">Elsie</a>,
<a class="sister" id="link2">Lacie</a> and
<a class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
# parse the document with the 'lxml' parser
soup = BeautifulSoup(html_doc, 'lxml')
# soup.prettify() pretty-prints the messy markup
print(soup.prettify())
# the <title> tag
# print(soup.title)
# the text inside the <title> tag
# print(soup.title.string)
# the first <p> tag
print(soup.p)
# the class attribute of the <p> tag; soup.p returns the first match
print(soup.p['class'])
# the first <a> tag
# print(soup.a)
# all <a> tags
# print(soup.find_all('a'))
# the tag whose id is "link3"
print(soup.find(id="link3"))
# the links of all <a> tags
# for link in soup.find_all('a'):
#     print(link.get('href'))
# all the text content in the document
# print(soup.get_text())
If you get the error
bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: lxml. Do you need to install a parser library?
fix it by installing lxml with pip install lxml.
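Alternatively (a small fallback sketch, not part of the original examples), BeautifulSoup can also use the parser that ships with the Python standard library, so the example runs even without lxml installed:
from bs4 import BeautifulSoup

# 'html.parser' is bundled with Python, so no extra package is required
soup = BeautifulSoup("<p class='title'><b>hello</b></p>", 'html.parser')
print(soup.p['class'])  # ['title']
print(soup.b.string)    # hello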
Using a crawler to scrape a news site
Scraping the content of the Baidu News page
from bs4 import BeautifulSoup
import requests

headers = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
    "Accept-Language": "zh-CN,zh;q=0.8",
    "Connection": "close",
    "Cookie": "_gauges_unique_hour=1; _gauges_unique_day=1; _gauges_unique_month=1; _gauges_unique_year=1; _gauges_unique=1",
    "Referer": "http://www.infoq.com",
    "Upgrade-Insecure-Requests": "1",
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36 LBBROWSER"
}

# url = 'http://www.infoq.com/cn/news'
url = 'http://news.baidu.com/'

# print the title and link of every entry in the hot-news block
def craw(url):
    response = requests.get(url, headers=headers)
    soup = BeautifulSoup(response.text, 'lxml')
    for hotnews in soup.find_all('div', class_='hotnews'):
        for news in hotnews.find_all('a'):
            print(news.text, end=' ')
            print(news.get('href'))

# fetch the news headlines
craw(url)
Using a crawler to scrape image links and download the images
from bs4 import BeautifulSoup
import requests
import os
import shutil

headers = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
    "Accept-Language": "zh-CN,zh;q=0.8",
    "Connection": "close",
    "Cookie": "_gauges_unique_hour=1; _gauges_unique_day=1; _gauges_unique_month=1; _gauges_unique_year=1; _gauges_unique=1",
    "Referer": "http://www.infoq.com",
    "Upgrade-Insecure-Requests": "1",
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36 LBBROWSER"
}

url = 'http://www.infoq.com/cn/presentations'
# collect the image URLs on the page and download each image
def craw3(url):
    response = requests.get(url, headers=headers)
    soup = BeautifulSoup(response.text, 'lxml')
    for pic_href in soup.find_all('div', class_='news_type_video'):
        for pic in pic_href.find_all('img'):
            imgurl = pic.get('src')
            dir = os.path.abspath('.')
            # os.path.basename() keeps only the trailing xx.jpg part of the image URL
            filename = os.path.basename(imgurl)
            imgpath = os.path.join(dir, filename)
            print('downloading %s' % imgurl)
            download_jpg(imgurl, imgpath)
# download an image
# The requests library wraps the low-level interface into a friendlier HTTP client,
# but it does not provide a dedicated file-download function. Downloading is done by
# setting the special parameter stream=True on the request: only the HTTP response
# headers are fetched and the connection is kept open, and the response body is not
# downloaded until Response.content (or Response.raw) is accessed.
def download_jpg(image_url, image_localpath):
    response = requests.get(image_url, stream=True)
    if response.status_code == 200:
        with open(image_localpath, 'wb') as f:
            # let requests decode the transfer encoding before copying the raw stream
            response.raw.decode_content = True
            shutil.copyfileobj(response.raw, f)

# pagination
j = 0
for i in range(12, 37, 12):
    url = 'http://www.infoq.com/cn/presentations' + str(i)
    j += 1
    print('page %d' % j)
    craw3(url)
Mainstream crawler frameworks
There are a great many Python crawler frameworks; the most popular are Scrapy and PySpider. Scrapy supports XPath and CSS selectors and, in my experience, is the nicer one to use; PySpider is simpler and quicker to pick up. Packages such as urllib2/urllib3 and selenium are also worth learning: a simple crawler can be written with urllib alone, while selenium can drive a real browser and fully execute JavaScript, though it runs more slowly. These are the packages most commonly used when writing crawlers.
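To give a feel for the selector style mentioned above, here is a minimal Scrapy spider sketch (the target site http://quotes.toscrape.com and the field names are illustrative assumptions, not part of the original examples):
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        # CSS selector: iterate over every quote block on the page
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                # XPath works on the same selector objects
                "author": quote.xpath("span/small/text()").get(),
            }
Saved as quotes_spider.py, it can be run with: scrapy runspider quotes_spider.py -o quotes.json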