The Requests Library
What is the Requests library?
Requests is an HTTP library written in Python, built on top of urllib and released under the Apache2 License. It is more convenient than urllib, saves a great deal of work, and fully covers the needs of HTTP testing. In one sentence: a simple, easy-to-use HTTP library implemented in Python.
Installing Requests
pip3 install requests
requests in detail
- A quick example
import requests
response = requests.get('https://www.baidu.com')
print(type(response)) #<class 'requests.models.Response'>
print(response.status_code) #200
print(type(response.text))#<class 'str'>
print(response.text)# the response body: the HTML of the returned page
print(response.cookies)#<RequestsCookieJar[<Cookie BDORZ=27315 for .baidu.com/>]>
- Various request methods
import requests
requests.post('http://httpbin.org/post')
requests.put('http://httpbin.org/put')
requests.delete('http://httpbin.org/delete')
requests.head('http://httpbin.org/get')
requests.options('http://httpbin.org/get')
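Each helper above is a thin wrapper around the generic requests.request(method, url, ...) call, so the same requests can also be written explicitly; a minimal sketch against the same httpbin test service:
import requests
# equivalent to requests.post('http://httpbin.org/post')
response = requests.request('POST', 'http://httpbin.org/post')
print(response.status_code) #200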
Making requests
1. Basic usage
import requests
#response = requests.get('http://www.baidu.com')
response = requests.get('http://httpbin.org/get')
print(response.text)
2. GET request with parameters
import requests
response = requests.get("http://httpbin.org/get?name=xiexie&age=22")
print(response.text)
Hand-writing the query string like this gets messy; a clearer approach is to pass the parameters as a dict through the params argument of requests.get:
import requests
data = {
    'name': 'xiexie',
    'age': 89
}
response = requests.get('http://httpbin.org/get',params=data)
print(response.text)
3. Parsing JSON
import requests
response = requests.get('http://httpbin.org/get')
print(response.text)
print(response.json())
print(type(response.json()))
This is especially common when dealing with AJAX-style endpoints that return JSON.
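response.json() is essentially a shortcut for running json.loads on the response text; a minimal sketch of the equivalence, using the same httpbin endpoint:
import requests
import json
response = requests.get('http://httpbin.org/get')
# both parse the same body; if the body is not valid JSON, response.json() raises a ValueError
print(response.json() == json.loads(response.text)) #True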
4. Fetching binary data
import requests
response = requests.get('https://github.com/favicon.ico')
print(type(response.text),type(response.content))
print(response.text)
print(response.content)
response.text is a str, while response.content is the raw binary body.
The binary body can be saved to a local file; this works equally well for images, video, and audio.
import requests
response = requests.get('http://github.com/favicon.ico')
with open('favicon.ico','wb') as f:
    f.write(response.content) # the with block closes the file automatically
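For large files, reading the whole body into memory via response.content can be wasteful. A hedged sketch using stream=True together with iter_content (the chunk size and output filename are arbitrary choices):
import requests
# stream the body instead of downloading it all at once
response = requests.get('https://github.com/favicon.ico', stream=True)
with open('favicon_stream.ico', 'wb') as f:
    for chunk in response.iter_content(chunk_size=1024):
        if chunk:
            f.write(chunk)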
5. Adding headers
For a crawler, headers matter a lot: play the part all the way through, or the server will recognize the client and block it.
import requests
response = requests.get('https://www.zhihu.com/explore')
print(response.text)
Without headers the server simply returns 400 Bad Request and nothing can be scraped; adding headers as below makes the request go through.
import requests
headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'
}
response = requests.get('https://www.zhihu.com/explore',headers=headers)
print(response.text)
6. Basic POST request: build the form data
import requests
data = {'name':'xiexie','age':33} # pass a dict of form fields
response = requests.post('http://httpbin.org/post',data=data)
print(response.text)
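Many APIs expect a JSON body rather than form data; a minimal sketch using the json parameter against the same httpbin endpoint (requests serializes the dict and sets the Content-Type header for you):
import requests
data = {'name': 'xiexie', 'age': 33}
# json= sends the dict as a JSON request body instead of form fields
response = requests.post('http://httpbin.org/post', json=data)
print(response.json()['json']) #{'name': 'xiexie', 'age': 33}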
The response object in detail
- Response attributes
import requests
response = requests.get('http://www.reibang.com')
print(type(response.status_code),response.status_code)
print(type(response.headers),response.headers)
print(type(response.cookies),response.cookies)
print(type(response.url),response.url)
print(type(response.history),response.history)
A status code check:
import requests
response = requests.get('http://jianshu.com')
# quit if the status code is anything other than 403, otherwise report the block
exit() if not response.status_code == 403 else print('Forbidden!')
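A more readable way to test status codes is the requests.codes lookup, or raise_for_status(), which turns 4xx/5xx responses into exceptions; a minimal sketch:
import requests
response = requests.get('http://www.reibang.com')
if response.status_code == requests.codes.ok:
    print('Request Successfully')
else:
    # raises requests.exceptions.HTTPError for 4xx/5xx responses
    response.raise_for_status()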
Advanced requests operations
1. File upload
import requests
files = {'file':open('favicon.ico','rb')}
response = requests.post('http://httpbin.org/post',files=files)
print(response.text)
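The value in the files dict can also be a tuple that supplies an explicit filename and content type; a hedged sketch (the filename and MIME type here are just illustrative):
import requests
# (filename, file object, content type) gives the server a proper filename for the upload
files = {'file': ('favicon.ico', open('favicon.ico', 'rb'), 'image/x-icon')}
response = requests.post('http://httpbin.org/post', files=files)
print(response.status_code)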
2. Getting cookies
import requests
response = requests.get('https://www.baidu.com')
print(response.cookies)# response.cookies is a RequestsCookieJar; items() yields (name, value) pairs
for key,value in response.cookies.items():
    print(key + "=" + value)
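Cookies can also be sent along with a request through the cookies parameter; a minimal sketch using httpbin's cookie echo endpoint:
import requests
cookies = {'number': '1234567'}
# the dict is turned into a Cookie header on the outgoing request
response = requests.get('http://httpbin.org/cookies', cookies=cookies)
print(response.text)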
3. Session persistence, used for simulating login.
Simulating login is a very common need.
import requests
requests.get('http://httpbin.org/cookies/set/number/1234567')
response = requests.get('http://httpbin.org/cookies')
print(response.text)
Here the first call sets a cookie, and the intent is that the second requests.get call returns it, but the returned cookies are actually empty. The two requests.get calls are independent of each other, like opening the page in two different browsers. To get back the cookie that was just set, the session has to be kept alive. The code below maintains a session, which is like opening both pages in the same browser.
import requests
s = requests.Session()
s.get('http://httpbin.org/cookies/set/number/1234567')
response = s.get('http://httpbin.org/cookies')
print(response.text)
Returned value:
{
    "cookies": {
        "number": "1234567"
    }
}
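A Session also keeps default headers across requests, which is convenient when every request should carry the same User-Agent; a hedged sketch (the header value is just an example):
import requests
s = requests.Session()
# headers set on the session are sent with every request made through it
s.headers.update({'user-agent': 'Mozilla/5.0'})
response = s.get('http://httpbin.org/headers')
print(response.text)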
Certificate verification
Sometimes an HTTPS site presents a certificate that fails verification, an SSL error is raised, and the program stops. To prevent this, use the verify parameter.
import requests
from requests.packages import urllib3
urllib3.disable_warnings()# calling disable_warnings() on the underlying urllib3 suppresses the warning messages
response = requests.get('https://www.12306.cn',verify=False)
print(response.status_code)
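Rather than switching verification off, verify can also point to a trusted CA bundle; a sketch assuming a hypothetical bundle at /path/to/ca-bundle.crt:
import requests
# verify takes a path to a CA bundle used to validate the server certificate
response = requests.get('https://www.12306.cn', verify='/path/to/ca-bundle.crt')
print(response.status_code)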
You can also supply a client certificate and key manually, which likewise avoids the error.
import requests
response = requests.get('https://www.12306.cn',cert=('/path/server.crt','/path/key')) # cert takes a (certificate, key) tuple
print(response.status_code)
Proxy settings
import requests
proxies = {
    "http": "http://127.0.0.1:9998",
    "https": "https://127.0.0.1:9998",
}
response = requests.get("https://www.taobao.com",proxies=proxies)
print(response.status_code)
Proxies with a username and password
import requests
proxies = {
    "http": "http://user:password@127.0.0.1:9998",
}
response = requests.get("https://www.taobao.com",proxies=proxies)
print(response.status_code)
How do you use a SOCKS proxy, the kind SSR provides?
Install: pip3 install 'requests[socks]'
import requests
proxies = {
    "http": "socks5://127.0.0.1:9998",
    "https": "socks5://127.0.0.1:9998",
}
response = requests.get("https://www.taobao.com",proxies=proxies)
print(response.status_code)
Timeout settings
import requests
try:
    response = requests.get("http://httpbin.org/get",timeout=1)
    print(response.status_code)
except requests.ReadTimeout:
    print('Timeout')
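timeout can also be a (connect, read) tuple so that establishing the connection and reading the response each get their own limit; a minimal sketch:
import requests
try:
    # 3 seconds to connect, 7 seconds to read the body
    response = requests.get('http://httpbin.org/get', timeout=(3, 7))
    print(response.status_code)
except requests.exceptions.Timeout:
    print('Timeout')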
Authentication settings
import requests
from requests.auth import HTTPBasicAuth
response = requests.get("http://httpbin.org/get",auth=HTTPBasicAuth('user','123'))
print(response.status_code)
A shorter form is to pass the credentials as a tuple
import requests
response = requests.get("http://httpbin.org/get",auth=('user','123'))
print(response.status_code)
Exception handling; handling exceptions carefully is also essential for a crawler.
import requests
from requests.exceptions import ReadTimeout,ConnectionError,RequestException
try:
    response = requests.get("http://httpbin.org/get",timeout=0.2)
    print(response.status_code)
except ReadTimeout:
    print('Timeout')
except ConnectionError:
    print("Con error")
except RequestException:
    print('Error')
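All requests exceptions inherit from RequestException, so a simple retry helper only needs to catch that base class; a hedged sketch (the retry count, timeout, and fetch helper name are arbitrary choices):
import requests
from requests.exceptions import RequestException

def fetch(url, retries=3):
    # try a few times before giving up; any requests-level error is caught here
    for attempt in range(retries):
        try:
            return requests.get(url, timeout=5)
        except RequestException as e:
            print('attempt', attempt + 1, 'failed:', e)
    return None

response = fetch('http://httpbin.org/get')
if response is not None:
    print(response.status_code)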