1. Introduction to Requests
Requests is an HTTP library built on top of Python's urllib and released under the Apache2 License. It wraps the built-in modules at a high level, so Requests can easily carry out any browser-style operation.
Requests can simulate a browser's requests; compared with the older urllib library, it makes writing crawlers far quicker and more convenient.
2. How Crawlers Work
The basic crawler workflow:
Send a request:
Use an HTTP library to send a request to the target site, then wait for the target server to respond.
Get the response:
If the server responds normally, it returns a Response, which holds the retrieved page content; the body can be HTML, a JSON string, binary data, or other types.
Parse the content:
Parse HTML with regular expressions or an HTML-parsing library; convert JSON text into JSON objects for processing; save the binary data we need (images, videos).
Save the data:
The crawled and parsed content can be saved as plain text, stored in a database, and so on.
3. Requests Overview
requests
Requests method | Purpose |
---|---|
requests.get( ) | Retrieve data from the server |
requests.post( ) | Submit data to the server |
requests.put( ) | Replace the target document's content with data sent from the client |
requests.delete( ) | Ask the server to delete the specified page |
requests.head( ) | Request only the page's header information |
requests.options( ) | Query which HTTP methods the server supports |
requests.patch( ) | Submit a partial modification, corresponding to HTTP PATCH |
requests.connect( ) | Turn the connection into a transparent TCP/IP tunnel (HTTP CONNECT; not provided as a dedicated function in Requests) |
requests.trace( ) | Loop-back test to check whether the request was modified in transit (HTTP TRACE; likewise not a dedicated Requests function) |
requests.session( ).get( ) | Issue a request through a session object |
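The last row deserves a note: a Session object persists cookies, default headers, and the underlying TCP connection across requests. A small sketch (the header and cookie values are invented for the demo):

```python
import requests

# a Session keeps state that is reused by every request made through it
session = requests.Session()
session.headers.update({'User-Agent': 'my-crawler/0.1'})  # sent with every request
session.cookies.set('token', 'abc123')                    # also sent automatically

# each call through the session reuses this state (requires network):
# response = session.get('https://www.zhihu.com/explore')
```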
Request parameter | Meaning |
---|---|
url | The URL to request |
allow_redirects | Whether to follow redirects |
auth | HTTP authentication credentials |
cert | Path to an SSL client certificate file (or a cert/key pair) |
cookies | Dict of cookies to send to the given URL |
headers | Dict of HTTP headers to send to the given URL |
proxies | Dict mapping protocol to proxy URL |
stream | Whether to stream the response body rather than download it immediately |
timeout | Seconds to wait for the server to respond before giving up |
verify | Boolean or string controlling TLS certificate verification |
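Several of these parameters can be combined in one call, and you can inspect how Requests assembles them, without touching the network, by building a prepared request (the header value here is made up for illustration):

```python
from requests import Request

# build (but do not send) a GET with params and headers attached
req = Request('GET', 'https://www.baidu.com/s',
              params={'wd': 'python'},
              headers={'User-Agent': 'demo-agent'}).prepare()

print(req.url)                    # the query string is appended to the URL
print(req.headers['User-Agent'])  # demo-agent
```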
"""get請求"""
import requests
url = 'https://tse4-mm.cn.bing.net/th/id/OIP-C.w3cHPxIHKpLZodnlBoIZXgHaMx?w=182&h=314&c=7&o=5&dpr=1.45&pid=1.7'
response = requests.get(url)
print(res.status_code)
"""添加請求頭:header"""
headers = {'User-Agent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'}
response = requests.get('https://www.zhihu.com/explore',headers=headers)
print(response.status_code)
"""帶請求參數(shù)"""
params = {'wd':'python'}
response = requests.get('https://www.baidu.com/',params=params)
print(response.status_code)
"""代理設置"""
proxies = {'http':'http://127.0.0.1:9743',
'https':'https://127.0.0.1:9742',}
response = requests.get('https://www.taobao.com',proxies=proxies)
print(rsponse.status_code)
"""SSL證書驗證"""
response = requests.get('https://www.12306.cn',verify=False)
print(response.status_code)
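verify=False lets the call succeed on sites with certificate problems, but each such request then emits an InsecureRequestWarning. One common way to silence it, using the urllib3 package that Requests is built on:

```python
import urllib3
import requests

# verify=False skips TLS certificate checks; suppress the resulting warning
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
# response = requests.get('https://www.12306.cn', verify=False)  # requires network
```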
"""超時設置"""
from requests.exceptions import ReadTimeout
try:
response = requests.get("http://httpbin.org/get", timeout = 0.5)
print(response.status_code)
except ReadTimeout:
print('timeout')
"""認證設置"""
from requests.auth import HTTPBasicAuth
response = requests.get("http://120.27.34.24:9001/",auth=HTTPBasicAuth("user","123"))
print(response.status_code)
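HTTPBasicAuth can also be written as a plain (user, password) tuple; either way Requests turns it into an Authorization header, which a prepared request lets you check offline:

```python
from requests import Request

# same credentials as above, passed as a tuple; nothing is sent over the network
req = Request('GET', 'http://120.27.34.24:9001/', auth=('user', '123')).prepare()
print(req.headers['Authorization'])  # "Basic " + base64 of "user:123"
```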
"""post請求"""
import requests
import json
host = 'http://httpbin.org/'
endpoint = 'post'
url = ''.join([host,endpoint])
"""帶數(shù)據(jù)的post"""
data = {'key1':'value1','key2':'value2'}
response = requests.post(url,data=data)
print(response.status_code)
print(response.text)
"""帶headers的post"""
headers = {'User-Agent':'test request headers'}
response = requests.post(url,headers=headers)
print(response.status_code)
print(response.text)
"""帶json的post"""
data = {
'sites':[
{'name':'test','url':'www.test.com'},
{'name':'google','url':'www.google.com'},
{'name':'weibo','url':'www.weibo.com'}
]
}
response = requests.post(url,json=data)
print(response.status_code)
print(response.text)
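The difference between data= and json= lies in how the body is encoded and which Content-Type header is set; a prepared request shows this without sending anything:

```python
from requests import Request

url = 'http://httpbin.org/post'
form_req = Request('POST', url, data={'key': 'value'}).prepare()
json_req = Request('POST', url, json={'key': 'value'}).prepare()

print(form_req.headers['Content-Type'])  # application/x-www-form-urlencoded
print(json_req.headers['Content-Type'])  # application/json
print(form_req.body)                     # key=value
print(json_req.body)                     # the dict serialized as JSON
```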
"""帶參數(shù)的post"""
params = {'key1':'params1','key2':'params'}
response = requests.post(url,params=params)
print(response.status_code)
print(response.text)
"""文件上傳"""
files = {'file':open('fisrtgetfile.txt','rb')}
response = requests.post(url,files=files)
print(response.status_code)
print(response.text)
"""put請求"""
import requests
import json
url = 'http://127.0.0.1:8080'
header = {'Content-Type':'application/json'}
param = {'myObjectField':'hello'}
payload = json.dumps(param)
response = requests.put(url,data=payload,headers=headers)
"""head請求"""
import requests
response = requests.head('https://pixabay.com/zh/')
print(response.status_code)
"""delete請求"""
import requests
url = 'https://api.github.com/user/emails'
email = '2475757652@qq.com'
response = requests.delete(url,json=email,auth=('username','password'))
print(response.status_code)
"""options請求"""
import requests
import json
url = 'https://www.baidu.com/s'
response = requests.options(url)
print(response.status_code)
response
Response attribute | Purpose |
---|---|
response.text | Body decoded as text |
response.content | Body as raw bytes |
response.status_code | HTTP status code |
response.headers | Response headers |
response.cookies | Cookies set by the server |
response.cookies.get_dict( ) | Cookies as a dict |
response.cookies.items( ) | Cookies as a list of (name, value) pairs |
response.url | Final URL of the request |
response.history | Responses from any redirects that led here |
response.json( ) | Body parsed as JSON (raises an error if the body is not JSON) |
import requests
url = 'https://baike.baidu.com/item/%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB/5162711'
response = requests.get(url)
print(response.status_code)
print(response.text)
# print(response.json())  # would raise an error here: this page is HTML, not JSON
print(response.content)
print(response.headers)
print(response.cookies)
print(response.cookies.items())
print(response.url)
print(response.history)
print(response.cookies.get_dict())
"""爬蟲下載圖片"""
import requests
import matplotlib.pyplot as plt
url = 'https://cn.bing.com/images/search?view=detailV2&ccid=qr8JYj0b&id=6CEE679B0BCE19C94FB9C7595986720942C92261&thid=OIP.qr8JYj0bcms3xayruiZmnAHaJQ&mediaurl=https%3a%2f%2ftse1-mm.cn.bing.net%2fth%2fid%2fR-C.aabf09623d1b726b37c5acabba26669c%3frik%3dYSLJQglyhllZxw%26riu%3dhttp%253a%252f%252fp1.ifengimg.com%252f2019_02%252f95A41E54C3C8EB3B3B148A30CE716314B0AED504_w1024_h1280.jpg%26ehk%3d2EhKcVkSnCvT6uBfgisn%252fdwtghMXFWjGa5WgqEbBSPc%253d%26risl%3d%26pid%3dImgRaw&exph=1280&expw=1024&q=%e7%9f%b3%e5%8e%9f%e9%87%8c%e7%be%8e&simid=607996751665040666&FORM=IRPRST&ck=399D74E04F8507D6711ADC8F53A714D7&selectedIndex=0&ajaxhist=0&ajaxserp=0'
response = requests.get(url)
img = response.content
with open('shiyuanlimei.jpg','wb') as f:
f.write(img)
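For large files, the stream parameter from the table above avoids holding the whole body in memory; a sketch (the function name and chunk size are my own choices):

```python
import requests

def download_file(url, path, chunk_size=8192):
    """Stream the response to disk chunk by chunk instead of buffering it all."""
    with requests.get(url, stream=True, timeout=10) as response:
        response.raise_for_status()
        with open(path, 'wb') as f:
            for chunk in response.iter_content(chunk_size=chunk_size):
                f.write(chunk)  # each chunk is at most chunk_size bytes

# download_file(url, 'shiyuanlimei.jpg')  # requires network access
```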
HTTP status code class | Meaning |
---|---|
1** | Informational: the server received the request; the client should continue |
2** | Success: the request was received and processed successfully |
3** | Redirection: further action is needed to complete the request |
4** | Client error: the request contains a syntax error or cannot be fulfilled |
5** | Server error: an error occurred while the server was processing the request |
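In code you rarely compare status codes by hand: response.ok is True for codes below 400, and raise_for_status() raises HTTPError for the 4** and 5** classes. This can be shown offline by constructing a bare Response object (only for demonstration, not something real code would do):

```python
import requests

response = requests.models.Response()
response.status_code = 404  # simulate a client-error reply

print(response.ok)  # False: 4** and 5** codes are not "ok"
try:
    response.raise_for_status()
except requests.exceptions.HTTPError:
    print('client error')  # 404 falls in the 4** class
```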
As the examples above show, sending every kind of request with the Requests library is simple and straightforward. But Requests only covers the first two crawler steps, sending the request and getting the response; the later steps of parsing the content and saving the data call for another library: BeautifulSoup.
寫在最后
下一篇文章將為大家?guī)鞡eautifulSoup庫的超全面講解徒爹!
歡迎大家來到【人類之奴】個眾,獲取更多有趣有料的知識芋类!