The urllib library is built into Python.
What is urllib
1. urllib.request: the request module
2. urllib.error: the exception handling module
3. urllib.parse: the URL parsing module
4. urllib.robotparser: the robots.txt parsing module
Usage
-
urlopen
urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, cadefault=False, context=None)
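Besides url, the signature accepts data (which switches the request to POST), timeout, and SSL-related parameters; context takes an ssl.SSLContext. A minimal sketch of passing a custom SSL context, assuming the default verification settings are what you want:
import ssl
import urllib.request

# Build a default SSL context and hand it to urlopen via the context parameter
context = ssl.create_default_context()
response = urllib.request.urlopen('https://www.python.org', context=context)
print(response.status)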
A GET request:
import urllib.request

response = urllib.request.urlopen('http://www.baidu.com')
print(response.read().decode('utf-8'))
A POST request:
import urllib.parse
import urllib.request

data = bytes(urllib.parse.urlencode({'word': 'hello'}), encoding='utf8')
response = urllib.request.urlopen('http://httpbin.org/post', data=data)
print(response.read())
With a timeout parameter:
response = urllib.request.urlopen('http://httpbin.org/get', timeout=1)
print(response.read())
Testing a very short timeout:
import socket
import urllib.request
import urllib.error

try:
    response = urllib.request.urlopen('http://httpbin.org/get', timeout=0.1)
except urllib.error.URLError as e:
    if isinstance(e.reason, socket.timeout):
        print('TIMEOUT')
else:
    print('It is OK!')
Checking the type of urlopen's return value:
import urllib.request

response = urllib.request.urlopen('https://www.python.org')
print(type(response))
Output: <class 'http.client.HTTPResponse'>
Response content--the status code and response headers
import urllib.request
response = urllib.request.urlopen('https://www.python.org')
print(response.status)
print(response.getheaders())
print(response.getheader('Server'))
print(response.read().decode('utf-8'))  # the response body, decoded as UTF-8
Output:
200
[('Server', 'nginx'), ('Content-Type', 'text/html; charset=utf-8'), ('X-Frame-Options', 'SAMEORIGIN'), ('x-xss-protection', '1; mode=block'), ('X-Clacks-Overhead', 'GNU Terry Pratchett'), ('Via', '1.1 varnish'), ('Content-Length', '48809'), ('Accept-Ranges', 'bytes'), ('Date', 'Sat, 18 Aug 2018 12:56:38 GMT'), ('Via', '1.1 varnish'), ('Age', '129'), ('Connection', 'close'), ('X-Served-By', 'cache-iad2128-IAD, cache-nrt6150-NRT'), ('X-Cache', 'HIT, HIT'), ('X-Cache-Hits', '2, 48'), ('X-Timer', 'S1534596999.663138,VS0,VE0'), ('Vary', 'Cookie'), ('Strict-Transport-Security', 'max-age=63072000; includeSubDomains')]
nginx
-
The Request object
For more complex requests, you can pass a Request object to urlopen; constructing a Request makes it easy to configure how the request is made.
import urllib.request
request = urllib.request.Request('https://python.org')
response = urllib.request.urlopen(request)
print(response.read().decode('utf-8'))
The output is the response body from requesting https://python.org.
Sending a POST request: build the Request with its constructor, then pass it to urlopen as the argument.
from urllib import request,parse
url = 'http://httpbin.org/post'
headers = {
'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)',
'Host':'httpbin.org'
}
form = {
    'name': 'Germey'
}
data = bytes(parse.urlencode(form), encoding='utf8')
req = request.Request(url=url,data=data,headers=headers,method='POST')
response = request.urlopen(req)
print(response.read().decode('utf-8'))
Again we send a POST request, building the Request with its constructor and passing it to urlopen, but this time the headers are not given in the constructor; they are added with request.add_header. If there are many key-value pairs to pass, you can call add_header repeatedly in a for loop (see the sketch after this example).
from urllib import request,parse
url = 'http://httpbin.org/post'
form = {
    'name': 'XieZ'
}
data = bytes(parse.urlencode(form), encoding='utf8')
req = request.Request(url=url,data=data,method='POST')
req.add_header('User-Agent', 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)')
response = request.urlopen(req)
print(response.read().decode('utf-8'))
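As noted above, when many headers need to be set, the same request can be built by looping over a dict and calling add_header once per key-value pair; a minimal sketch reusing the request from this example:
from urllib import parse, request

url = 'http://httpbin.org/post'
headers = {
    'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)',
    'Host': 'httpbin.org'
}
data = bytes(parse.urlencode({'name': 'XieZ'}), encoding='utf8')
req = request.Request(url=url, data=data, method='POST')
# Add each header key-value pair in a loop
for key, value in headers.items():
    req.add_header(key, value)
response = request.urlopen(req)
print(response.read().decode('utf-8'))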
handler -- the advanced side of urllib; advanced features such as proxies and cookies are all implemented by various handlers.
-
Proxies
import urllib.request

proxy_handler = urllib.request.ProxyHandler({
    'http': 'http://127.0.0.1:9743',
    'https': 'https://127.0.0.1:9743'
})
opener = urllib.request.build_opener(proxy_handler)
response = opener.open('http://www.baidu.com')
print(response.read())
-
Cookie--cookies can be used to preserve login session information
import http.cookiejar
import urllib.request

cookie = http.cookiejar.CookieJar()
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
for item in cookie:
    print(item.name + "=" + item.value)
Saving cookies to a file, so a crawler can later use them to log in to the site and stay logged in:
import http.cookiejar
import urllib.request

filename = "cookie.txt"
cookie = http.cookiejar.MozillaCookieJar(filename)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
cookie.save(ignore_discard=True, ignore_expires=True)
The cookies are now saved in the file. MozillaCookieJar, a subclass of CookieJar, writes Firefox's cookie format; there are other formats as well, such as LWPCookieJar. Whatever format you save in, load with that same format.
Using LWPCookieJar to save cookies to a file, then loading the cookies from that file and requesting a page:
import http.cookiejar
import urllib.request

filename = "cookie.txt"
cookie = http.cookiejar.LWPCookieJar(filename)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
cookie.save(ignore_discard=True, ignore_expires=True)

mycookie = http.cookiejar.LWPCookieJar()
mycookie.load('cookie.txt', ignore_discard=True, ignore_expires=True)
handler = urllib.request.HTTPCookieProcessor(mycookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
print(response.read().decode('utf-8'))
-
urllib's exception handling module
from urllib import request, error

try:
    response = request.urlopen('http://ljlhhljl.com/index.htm')
except error.URLError as e:
    print(e.reason)
Output: [Errno -2] Name or service not known
HTTPError has reason, code, and headers attributes.
from urllib import request, error

try:
    response = request.urlopen('http://www.sina.com.cn/99999.html')
except error.HTTPError as e:
    print(e.reason, e.code, e.headers, sep='\n')
    print('This is end of HTTPError\n')
except error.URLError as e:
    print(e.reason)
else:
    print('Request Successfully')
Output:
Not Found
404
Server: nginx
Date: Sun, 19 Aug 2018 22:06:25 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: close
Vary: Accept-Encoding
Age: 0
Via: http/1.1 ctc.nanjing.ha2ts4.77 (ApacheTrafficServer/6.2.1 [cMsSf ])
X-Cache: MISS.77
X-Via-CDN: f=edge,s=ctc.nanjing.ha2ts4.65.nb.sinaedge.com,c=61.171.236.224;f=Edge,s=ctc.nanjing.ha2ts4.77,c=202.102.94.65
X-Via-Edge: 1534716385494e0ecab3d7c5e66ca3150b8e6

This is end of HTTPError
-
The URL parsing module--urlparse and urlunparse
1. The urlparse function
from urllib.parse import urlparse

result = urlparse('http://www.baidu.com/index.html;user?id=5#comment')
print(type(result), result)
*Result: <class 'urllib.parse.ParseResult'> ParseResult(scheme='http', netloc='www.baidu.com', path='/index.html', params='user', query='id=5', fragment='comment')
2. The urlunparse function
from urllib.parse import urlunparse
data = ['http','www.baidu.com','index.html','user','a=6','comment']
print(urlunparse(data))
*Result: http://www.baidu.com/index.html;user?a=6#comment
urlunparse is the inverse of urlparse: it joins the component parts back into a single URL.
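A quick round trip shows the symmetry: the ParseResult returned by urlparse is a six-element tuple, so it can be passed straight back to urlunparse.
from urllib.parse import urlparse, urlunparse

result = urlparse('http://www.baidu.com/index.html;user?id=5#comment')
# Reassembling the parsed components yields the original URL
print(urlunparse(result))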
3. The urljoin function
from urllib.parse import urljoin
print(urljoin('http://www.baidu.com','FAQ.html'))
print(urljoin('http://www.baidu.com','https://lllll.com/FAQ.html'))
*Result: http://www.baidu.com/FAQ.html
https://lllll.com/FAQ.html
4. urlencode--converts a dict into GET request parameters; very commonly used
from urllib.parse import urlencode
params = {
'name':'xiezheng',
'age':23
}
base_url = 'http://www.baidu.com?'
url = base_url + urlencode(params)
print(url)
*Result: http://www.baidu.com?name=xiezheng&age=23
- The urllib.robotparser module, used to parse robots.txt files
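A minimal sketch of its use (the target site is only an example): RobotFileParser fetches robots.txt and answers whether a given user agent may crawl a URL.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url('http://www.baidu.com/robots.txt')
rp.read()  # download and parse robots.txt
# can_fetch reports whether the given user agent may fetch the URL
print(rp.can_fetch('*', 'http://www.baidu.com/index.html'))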