Buying data
- data companies
- data exchanges
Crawling data
- data acquisition
- data cleaning
- third-party frameworks: scrapy, scrapy-redis
- anti-crawling vs. anti-anti-crawling
Networking basics
HTTP protocol
HTTPS
Web crawlers
How crawling works: fetch data in bulk with a program --> the program simulates a browser and sends HTTP requests (GET, POST)
How HTTP works (B/S, i.e. browser/server architecture):
the client sends a request -- the server returns a response
URL
protocol, hostname, port, path
e.g.
https://www.baidu.com/s?ie=UTF-8&wd=scrapy%E6%A1%86%E6%9E%B6
the URL's query parameters: [ie=UTF-8&wd=scrapy%E6%A1%86%E6%9E%B6]
ie is a parameter name and UTF-8 its value (name and value are separated by =, different parameters are separated by &)
Note: %E6%A1%86%E6%9E%B6 is the percent-encoded (URL-encoded) form of the Chinese text "框架" -- an encoding, not encryption
Tools for encoding/decoding Chinese text in URLs
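A quick sketch of such a tool: the standard library's urllib.parse can percent-encode and decode Chinese text (the sample string is just the one from the URL above):

from urllib import parse

# Percent-encode Chinese text the way a browser encodes the wd= parameter.
print(parse.quote("scrapy框架"))                   # scrapy%E6%A1%86%E6%9E%B6
# Decode it back to readable text.
print(parse.unquote("scrapy%E6%A1%86%E6%9E%B6"))   # scrapy框架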
- Request
essentially sending one (formatted) string to the server
request line, request headers, blank line, data body - POST
which method to use is dictated by the server-side interface:
if the HTTP message carries a data body, the method is POST (used when more data has to be submitted)
parameters appended after the URL - submitted via GET (no request body)
(for convenience, crawlers usually do not send Accept-Encoding, so the response is transmitted uncompressed as plain text)
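A minimal sketch of the two submission styles with urllib; the Baidu search URL is the one from above, while http://httpbin.org/post is only an assumed test endpoint, not part of these notes:

import urllib.request
import urllib.parse

# GET: parameters live in the URL, there is no request body.
params = urllib.parse.urlencode({"ie": "UTF-8", "wd": "scrapy框架"})
get_request = urllib.request.Request("https://www.baidu.com/s?" + params)
print(get_request.get_method())    # GET

# POST: the same key=value string is sent as the request body (as bytes);
# the presence of a body is what switches urllib to the POST method.
data = urllib.parse.urlencode({"wd": "scrapy框架"}).encode("utf-8")
post_request = urllib.request.Request("http://httpbin.org/post", data=data)
print(post_request.get_method())   # POST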
Response:
status line, response headers, blank line, response body
status code (e.g. 200 OK, 404 Not Found)
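For reference, the status code and headers can be read off the object that urlopen returns; a small sketch, assuming the URL (a Tieba page picked only for illustration) is reachable:

import urllib.request

response = urllib.request.urlopen("http://tieba.baidu.com/f?kw=python")
print(response.status)                       # e.g. 200
print(response.getheader("Content-Type"))    # e.g. text/html; charset=UTF-8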
Example: download one page of a Tieba forum with urllib:

import urllib.request
import urllib.parse  # urlencode only accepts a dict (mapping), not a plain string

kw = {
    "kw": "北京航空航天大學(xué)"
}
url = "http://tieba.baidu.com/f?"
url += urllib.parse.urlencode(kw)  # append the percent-encoded query string

request = urllib.request.Request(url)
response = urllib.request.urlopen(request)
content = response.read().decode("utf-8")

with open("demo1.html", "a", encoding="utf-8") as f:
    f.write(content)
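A related aside on the anti-crawling point above: a crawler often also sets a browser-like User-Agent so its requests look less like a script. A minimal sketch, with the header string chosen only as an example:

import urllib.request

url = "http://tieba.baidu.com/f?kw=python"
# Request accepts a headers dict; a browser-like User-Agent is the simplest
# way to "simulate a browser" beyond just sending the HTTP request.
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
request = urllib.request.Request(url, headers=headers)
response = urllib.request.urlopen(request)
print(response.status)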
The same request, extended to crawl a range of pages:

import urllib.request
import urllib.parse  # urlencode only accepts a dict (mapping)

keyword = input("Tieba name: ")
kw = {
    "kw": keyword
}
startIndex = int(input("Start page: "))
endIndex = int(input("End page: "))

base_url = "http://tieba.baidu.com/f?" + urllib.parse.urlencode(kw)

for index in range(startIndex, endIndex + 1):
    pn = str((index - 1) * 50)        # each Tieba page lists 50 posts, so pn advances in steps of 50
    url = base_url + "&pn=" + pn      # build a fresh URL per page instead of appending to the same string
    request = urllib.request.Request(url)
    response = urllib.request.urlopen(request)
    content = response.read().decode("utf-8")
    with open(f"E:/py/demo{index}.html", "a", encoding="utf-8") as f:
        f.write(content)
XML -- XPath
Attribute filtering (see the sketch below):
- by id
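A small sketch of attribute filtering with lxml (assuming lxml is installed; the HTML snippet and the id value are made up for illustration):

from lxml import etree

html = """
<div>
  <ul id="thread_list">
    <li class="item">first post</li>
    <li class="item">second post</li>
  </ul>
</div>
"""

tree = etree.HTML(html)
# Attribute filter: only the <ul> whose id is "thread_list", then its <li> text.
titles = tree.xpath('//ul[@id="thread_list"]/li/text()')
print(titles)   # ['first post', 'second post']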
If an XPath expression is correct but Python does not return the expected content: anti-crawling measures
the content may be generated by JavaScript
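One quick way to tell whether the data is really in the HTML the server sends back (rather than being filled in by JavaScript after the page loads) is to search the raw response for text you can see in the browser; a rough sketch, with the URL and keyword as placeholders:

import urllib.request

url = "http://tieba.baidu.com/f?kw=python"   # placeholder URL
response = urllib.request.urlopen(url)
content = response.read().decode("utf-8")

# If text visible in the browser is absent from the raw HTML, it is probably
# rendered by JavaScript, and plain urllib + XPath will never find it.
print("python" in content)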