Table of Contents
- Python Web Crawler Study - Day 1
- Python Web Crawler Study - Day 2: Regular Expressions
- Python Web Crawler Study - Day 3: BeautifulSoup
- Python Web Crawler Study - Day 4: Extracting Content with lxml + XPath
- Python Web Crawler Study - Day 5: Selenium
- Python Web Crawler Study - Day 6: IP Pool
- Python Web Crawler Study - Day 7: Hands-on Project
Environment
Python 3.7
PyCharm 2018.3
1.1 Learning GET and POST Requests
Implementation with requests
The get and post functions of requests:
get(url, params=None, **kwargs)
Sends a GET request.
:param url: URL for the new :class:`Request` object.
:param params: (optional) Dictionary, list of tuples or bytes to send
in the query string for the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
post(url, data=None, json=None, **kwargs)
Sends a POST request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) json data to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
Example
import requests

# GET request
url = 'https://www.baidu.com'
response = requests.get(url)
print(response.text)
print('-------------- separator --------------')

# POST request
data = {
    "name": "aa",
    "school": 'linan'
}
response = requests.post(url, data=data)
print(response.text)
Result
GET result (screenshot: 02.png)
POST result (screenshot: 03.png)
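The post() docstring above also lists a json parameter for sending a JSON body. Below is a minimal sketch of using it and inspecting the response; the httpbin.org echo endpoint is my own assumption for illustration, not part of the original example.

import requests

# Sketch: the dict passed via json= is serialized into the request body as JSON.
url = 'https://httpbin.org/post'  # assumed echo endpoint, for illustration only
payload = {"name": "aa", "school": 'linan'}
response = requests.post(url, json=payload)
print(response.status_code)  # HTTP status code, e.g. 200
print(response.json())       # decode the JSON body of the response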
The module for handling JSON: json
The json module provides a simple way to encode and decode JSON data; its main functions are json.dumps() and json.loads().
import json

data = {
    "name": "aa",
    "school": 'linan'
}
# serialize the Python dict into a JSON string
json_str = json.dumps(data)
print(json_str)
# parse the JSON string back into a Python dict
data = json.loads(json_str)
print(data)
Result
{"name": "aa", "school": "linan"}
{'name': 'aa', 'school': 'linan'}
Explanation: the first line is the JSON string produced by json.dumps(), which uses double quotes as the JSON format requires; the second line is the Python dict returned by json.loads(), and print() shows it with single quotes because that is how Python's repr displays strings. The contents are identical; only the representation differs.
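To make the difference concrete, here is a small check reusing the variables from the example above (my own sketch):

# json.dumps() returns a str (JSON text), json.loads() returns a Python dict.
print(type(json_str))  # <class 'str'>
print(type(data))      # <class 'dict'>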
Implementation with urllib
Function
urlopen(url, data=None, timeout=<object object at 0x035B48B8>, *, cafile=None, capath=None, cadefault=False, context=None)
Open the URL url, which can be either a string or a Request object.
*data* must be an object specifying additional data to be sent to
the server, or None if no such data is needed. See Request for
details.
urllib.request module uses HTTP/1.1 and includes a "Connection:close"
header in its HTTP requests.
The optional *timeout* parameter specifies a timeout in seconds for
blocking operations like the connection attempt (if not specified, the
global default timeout setting will be used). This only works for HTTP,
HTTPS and FTP connections.
If *context* is specified, it must be a ssl.SSLContext instance describing
the various SSL options. See HTTPSConnection for more details.
The optional *cafile* and *capath* parameters specify a set of trusted CA
certificates for HTTPS requests. cafile should point to a single file
containing a bundle of CA certificates, whereas capath should point to a
directory of hashed certificate files. More information can be found in
ssl.SSLContext.load_verify_locations().
The *cadefault* parameter is ignored.
This function always returns an object which can work as a context
manager and has methods such as
* geturl() - return the URL of the resource retrieved, commonly used to
determine if a redirect was followed
* info() - return the meta-information of the page, such as headers, in the
form of an email.message_from_string() instance (see Quick Reference to
HTTP Headers)
* getcode() - return the HTTP status code of the response. Raises URLError
on errors.
For HTTP and HTTPS URLs, this function returns a http.client.HTTPResponse
object slightly modified. In addition to the three new methods above, the
msg attribute contains the same information as the reason attribute ---
the reason phrase returned by the server --- instead of the response
headers as it is specified in the documentation for HTTPResponse.
For FTP, file, and data URLs and requests explicitly handled by legacy
URLopener and FancyURLopener classes, this function returns a
urllib.response.addinfourl object.
Note that None may be returned if no handler handles the request (though
the default installed global OpenerDirector uses UnknownHandler to ensure
this never happens).
In addition, if proxy settings are detected (for example, when a *_proxy
environment variable like http_proxy is set), ProxyHandler is default
installed and makes sure the requests are handled through the proxy.
urlopen can be used for both GET and POST requests (passing data makes the request a POST).
Example
import urllib.request as ur
print('--------get')
# get
url = 'https://www.baidu.com'
res = ur.urlopen(url)
# read the first line of the response
firstline = res.readline()
print(firstline)
print('-------post')
# post
req = ur.Request(url=url, data=b'the first day of web crawler')
res_data = ur.urlopen(req)
res = res_data.read()
print(res)
Result
(screenshot: 04.png)
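The urlopen docstring above notes that the returned object works as a context manager and provides geturl(), info() and getcode(); here is a minimal sketch of using them (my own addition):

import urllib.request as ur

# Sketch: use the response as a context manager and call the helper methods
# described in the urlopen docstring.
with ur.urlopen('https://www.baidu.com') as res:
    print(res.getcode())  # HTTP status code, e.g. 200
    print(res.geturl())   # final URL, useful for spotting redirects
    print(res.info())     # response headers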
Example 2
import urllib.request
import urllib.parse
from urllib.error import URLError

basic_url = 'https://www.cnblogs.com/billyzh/p/5819957.html'

# convert from str to bytes
"".encode('utf-8')

# fetching URLs
# basic usage
# get_url_1
def get_url():
    response = urllib.request.urlopen('https://www.cnblogs.com/billyzh/p/5819957.html')
    html = response.read().decode('utf-8')
    print(html)

# HTTP is based on requests and responses: the client sends a request and the
# server returns a response. urllib.request mirrors this with the Request object,
# which represents the HTTP request you are making. In the simplest form you
# create a Request object for the URL you want to fetch, pass it to urlopen, and
# get back a response object for that URL. The response behaves like a file
# object, so you can call .read() on it.
# get_url_2
def get_url_2():
    request = urllib.request.Request('https://www.cnblogs.com/billyzh/p/5819957.html')
    response = urllib.request.urlopen(request)
    html = response.read().decode('utf-8')
    print(html)

# url_info
# inspect information about the response
def url_info():
    request = urllib.request.Request('https://www.cnblogs.com/billyzh/p/5819957.html')
    response = urllib.request.urlopen(request)
    info = response.info()
    url = response.geturl()
    print(info)
    print('---------url---------')
    print(url)
    html = response.read().decode('utf-8')
    # print(html)

# Data - POST
def data_post():
    url = 'http://www.someserver.com/cgi-bin/register.cgi'
    values = {}
    values['name'] = 'Alison'
    values['password'] = 'Alison'
    data = urllib.parse.urlencode(values).encode('utf-8')  # POST data must be bytes
    request = urllib.request.Request(url, data)
    response = urllib.request.urlopen(request)
    this_page = response.read().decode('utf-8')
    print(this_page)

# Data - GET (the parameters go into the query string instead of the request body)
def data_get():
    url = 'http://user.51sole.com/'
    values = {}
    values['txtUserName'] = '1'
    values['txtPwd'] = '1'
    query = urllib.parse.urlencode(values)
    request = urllib.request.Request(url + '?' + query)
    response = urllib.request.urlopen(request)
    this_page = response.read().decode('utf-8')
    print(this_page)
Sending a request after disconnecting from the network
This continues to use urllib.request.urlopen() from above.
After the request is sent, the result shown below appears.
(screenshot: 05.png)
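When there is no network connection, urlopen cannot complete the connection attempt and raises urllib.error.URLError. A minimal sketch of catching it (my own addition, not part of the original code):

import urllib.request
from urllib.error import URLError

# Sketch: catch the URLError raised when the connection attempt fails.
try:
    urllib.request.urlopen('https://www.baidu.com', timeout=5)
except URLError as e:
    print('request failed:', e.reason)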
Request headers
In plain terms, request headers tell the server being requested what kind of information, and in what format, should be sent. For lack of time, here are just the header fields from Baidu Baike that I consider most important:
Accept: the MIME types the browser can accept.
Accept-Charset: the character sets the browser can accept.
Accept-Language: the languages the browser prefers; used when the server can provide more than one language version.
Authorization: authorization credentials, usually sent in reply to a WWW-Authenticate header from the server.
Connection: whether a persistent connection is required.
Content-Length: the length of the request body.
Cookie: one of the most important request headers.
User-Agent: the browser type; very useful when the content a servlet returns depends on the browser.
…
How do you add request headers?
When crawling, if no request headers are added, the site may block the request; in that case we add request headers to disguise the crawler as a real browser. In Python this is done as follows.
import urllib.request
import urllib.parse
from urllib.error import URLError
from io import BytesIO
import gzip

basic_url = 'https://www.cnblogs.com/billyzh/p/5819957.html'  # same target URL as in Example 2

# Headers
# User-Agent header
def header_demo():
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.108 Safari/537.36'
    values = {}
    values['name'] = 'alison'
    values['passwd'] = '123'
    data = urllib.parse.urlencode(values).encode('utf-8')
    headers = {'user-agent': user_agent}
    headers['accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3'
    headers['accept-encoding'] = 'gzip, deflate, br'
    headers['accept-language'] = 'zh-CN,zh;q=0.9,en;q=0.8'
    headers['Connection'] = 'keep-alive'
    req = urllib.request.Request(basic_url, data, headers)
    response = urllib.request.urlopen(req)
    print('---------response.geturl')
    print(response.geturl())
    # data starting with b'\x1f\x8b\x08' is gzip-compressed, so it is decompressed here
    buff = BytesIO(response.read())
    f = gzip.GzipFile(fileobj=buff)
    page = f.read().decode('UTF-8')
    print('----------page')
    print(page)
    print('------------info')
    print(response.info())
    print('------------req.data')
    print(req.data)

To use it later, simply call the method directly.
PS: If you found this okay, decent, passable, or even not too bad, feel free to follow or give it a like. Thank you!