The earlier post "R Crawler Essentials — httr + POST-type Scraping (NetEase Cloud Classroom)" introduced how to use the httr package for POST-type request scraping; POST requests often appear together with asynchronous loading, which was covered in "R Crawler Essentials — Dynamic Async Loading". Today we turn to GET-type request scraping. For these requests, rvest, httr, or RCurl can all do the job, but httr and RCurl are the recommended choices. As noted in the last post, "R Crawler Essentials — Why Not Use rvest for Dynamic Pages?", avoid rvest for pages that restrict network requests, because it carries real risk. Apart from a small number of static pages with no request restrictions at all, most pages you will encounter today impose at least some limits on network requests. In many cases rvest can still successfully scrape a small amount of content from GET-type pages, but it is easily banned. To be safe, stick with httr or RCurl.
This post gives a short introduction to GET-type scraping with the httr package. First, the rough usage of httr's GET function: GET(url = NULL, config = list(), ..., handle = NULL). The most important part is the config-related arguments (which set the request headers, cookies, and query string). The parameters are documented as follows:
- url :the url of the page to retrieve
- config:Additional configuration settings such as http authentication (authenticate()), additional headers (add_headers()), cookies (set_cookies()) etc. See config() for full details and list of helpers. Further named parameters, such as query, path, etc, passed on to modify_url(). Unnamed parameters will be combined with config().
- handle: The handle to use with this request. If not supplied, will be retrieved and reused from the handle_pool() based on the scheme, hostname and port of the url. By default httr reuses the same handle for requests to the same scheme/host/port combination. This substantially reduces connection time, and ensures that cookies are maintained over multiple requests to the same host. See handle_pool() for more details.
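To see how these pieces fit together, here is a minimal sketch of a GET call using a few of the common config helpers; the URL is a stand-in for illustration, not one from this tutorial, and the header values are arbitrary:

```r
library(httr)

# Minimal GET request with common config helpers (illustrative URL)
resp <- GET("https://httpbin.org/get",
            add_headers(`user-agent` = "Mozilla/5.0"),  # extra request header
            query = list(page = 1),                     # appended as ?page=1
            timeout(10))                                # give up after 10 seconds

status_code(resp)  # HTTP status code of the response
```

Unnamed arguments such as add_headers(...) and timeout(...) are combined into the request config, while the named query argument is turned into the URL's query string.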
Next, to better understand how httr carries out a GET-type scrape, let's walk through the HelixLife course pages as an example.
Case study: HelixLife course pages
First open the HelixLife site (https://www.helixlife.cn/) and click on the featured courses. Right-click and choose Inspect to open the browser developer tools, switch to the Network panel, press Ctrl+R to refresh, and click the first entry: this page is served via a GET request. To fetch it, simply send a GET request to https://www.helixlife.cn/courses/boutique.
But the featured courses span 4 pages in total — how do we fetch all of them? Start by looking for a pattern in the page URLs. Clicking through the pages one by one, the Request URL shown under General in the developer tools changes as follows:
https://www.helixlife.cn/courses/boutique?page=1
https://www.helixlife.cn/courses/boutique?page=2
https://www.helixlife.cn/courses/boutique?page=3
https://www.helixlife.cn/courses/boutique?page=4
At the same time, the Query String Parameters change as follows:
page: 1
page: 2
page: 3
page: 4
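Since only the page number varies, the four requests can be generated programmatically instead of written out by hand — a small sketch:

```r
# Build the four page URLs from the fixed base and the varying page number
base_url <- "https://www.helixlife.cn/courses/boutique"
urls <- paste0(base_url, "?page=", 1:4)

# Equivalently, keep the base URL fixed and vary only the query parameter,
# which is the approach the scraping code below takes
payloads <- lapply(1:4, function(i) list(page = i))
```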
Before scraping, let's review the parameters this crawler depends on: Request URL, Request Method, and Status Code under General; Content-Type under Response Headers; Accept, Content-Type, Cookie, Referer, User-Agent, etc. under Request Headers; and everything under Query String Parameters.
- The Request URL and Request Method under General determine which resource is accessed and by which HTTP method.
- The Content-Type under Response Headers determines the format and encoding in which the returned data arrives.
- The fields under Request Headers (Accept, Content-Type, Cookie, Referer, User-Agent, etc.) describe your client browser. The Cookie in particular holds the login state cached locally after you sign in; submitting it helps keep the crawler from being rejected repeatedly. Not all of these fields need to be submitted.
- The Query String Parameters are also essential: for a GET request they carry the positioning information, such as which page to fetch.
How to do it in practice
1. Load the required R packages (install any missing ones first).
rm(list=ls())
library("httr")
library("magrittr")
library("rvest")
library("xml2")
library("stringr")
2. Construct the URL of the first page.
url <- c('https://www.helixlife.cn/courses/boutique?page=1')
3. Construct the request submission information, filled in from the Request Headers section of the developer tools.
mycookie <- 'Hm_lvt_4b46c1065cade82ef3fa0c6e05cb0f7a=1592799906; XSRF-TOKEN=eyJpdiI6IkEzcWcwZWlhWkJtOGlIWTZWSzZ3V3c9PSIsInZhbHVlIjoiMkxXV3pLbEx3R3lwY2x0SW1tdEh2VGxpSXNjejdDVTVGeFMwV1NrdHpZbkMxSlowcXlnc1J6cVFJdUV2V3dpQiIsIm1hYyI6ImEwZjhjOGZlNTBiMGZmNzdkNmViYTNkNzc4OTM3YzBmZWZlNmFhOTQ0OTAwN2JlNzYzNDQzNjY3MmMzODJmY2YifQ%3D%3D; _session=eyJpdiI6IktaMzFDYTRaOHh6UWM4RmFHYU9yM2c9PSIsInZhbHVlIjoielEzVmRzK0toVzE2Q1dWVjFwSWZZTXRKUEszd3Y5UU9vWFFxV0xUTXNPNk56WkNFcEZ5SHpHTFN0Zk9SNnFGTCIsIm1hYyI6ImU2Mjc2YWI2MGMxY2FkZDA2N2E4NGMwYmRiOTUzOGYwZjQ3NWY1MTg1ZjMyMzk0Zjg0Mjk5OGY1NGU5NTVkODMifQ%3D%3D; SERVERID=3f56386521b609ab6e34c0e5ca694901|1592802688|1592802016; Hm_lpvt_4b46c1065cade82ef3fa0c6e05cb0f7a=1592802695'
myheaders <- c('accept' ='text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'accept-encoding' = 'gzip, deflate, br',
'accept-language' = 'zh-CN,zh;q=0.9',
'user-agent' = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36',
'cookie' = mycookie)
4. Construct the query parameters, filled in from the Query String Parameters section. The code below uses the parameters for page 1.
mypayload <- list("page"= 1)
5. Fetch the first page with httr's GET function; which verb function to use is determined by the Request Method under General. The general calling pattern is:
GET(url,
    add_headers(.headers = <request headers of the target page>),
    set_cookies(.cookies = <your own cookies>),
    query = <Query String Parameters as a named list>,
    timeout(<maximum request time in seconds>),
    use_proxy(<proxy IP>), ...)
Following this usage, we can now send the real request; on success the server responds with the HTML of the course list page.
response <- GET(url = url, add_headers(.headers = myheaders), timeout(10), query = mypayload)
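Before handing the response to a parser, it is worth checking that the request actually succeeded — a quick sketch (the exact values depend on the live site):

```r
# Inspect the response before parsing it
status_code(response)                 # 200 means success
http_type(response)                   # e.g. "text/html"
headers(response)[["content-type"]]   # full Content-Type header
```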
At this point httr's GET function has successfully requested and retrieved the page. Next, hand the response to the rvest package for downstream parsing and extraction, where rvest is the stronger tool.
#read_html() reads in and parses the page content
web <- read_html(response, encoding ="utf-8")
#extract course names
course_names <- web %>% html_nodes("div.course-info-container a") %>% html_text()
#extract course levels
course_class <- web %>% html_nodes("span.course-rank") %>% html_text() %>% str_replace_all(" ","") %>% str_replace_all("\n","")
#extract viewer counts
course_people <- web %>% html_nodes("div.lean-num img") %>% html_attr("alt")
#extract course ratings
course_grade <- web %>% html_nodes("div.course-info-text span.f-l") %>% html_text() %>% str_replace_all(" ","") %>% str_replace_all("\n","") %>% str_replace_all("評分:","")
course_grade <- course_grade[seq(2,length(course_grade),2)]
#extract course prices
course_price <- web %>% html_nodes("div.course-price") %>% html_text() %>% str_replace_all("¥","")
#collect the above into a data frame
course <- data.frame(course_names,course_class,course_people,course_grade,course_price)
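The seq(2, length(x), 2) step used for the ratings keeps every second element, which is useful when a selector matches pairs of nodes and only the second of each pair is wanted. A toy sketch with made-up values:

```r
# Selector matched pairs of nodes; keep only every second match
x <- c("title1", "score1", "title2", "score2", "title3", "score3")
x[seq(2, length(x), 2)]  # keeps the 2nd, 4th, 6th elements
```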
6. Fetch all pages (4 in total) with a loop.
course_inf <- data.frame()
#the cookie and headers are constant, so define them once outside the loop
mycookie <- 'Hm_lvt_4b46c1065cade82ef3fa0c6e05cb0f7a=1592799906; XSRF-TOKEN=eyJpdiI6IkEzcWcwZWlhWkJtOGlIWTZWSzZ3V3c9PSIsInZhbHVlIjoiMkxXV3pLbEx3R3lwY2x0SW1tdEh2VGxpSXNjejdDVTVGeFMwV1NrdHpZbkMxSlowcXlnc1J6cVFJdUV2V3dpQiIsIm1hYyI6ImEwZjhjOGZlNTBiMGZmNzdkNmViYTNkNzc4OTM3YzBmZWZlNmFhOTQ0OTAwN2JlNzYzNDQzNjY3MmMzODJmY2YifQ%3D%3D; _session=eyJpdiI6IktaMzFDYTRaOHh6UWM4RmFHYU9yM2c9PSIsInZhbHVlIjoielEzVmRzK0toVzE2Q1dWVjFwSWZZTXRKUEszd3Y5UU9vWFFxV0xUTXNPNk56WkNFcEZ5SHpHTFN0Zk9SNnFGTCIsIm1hYyI6ImU2Mjc2YWI2MGMxY2FkZDA2N2E4NGMwYmRiOTUzOGYwZjQ3NWY1MTg1ZjMyMzk0Zjg0Mjk5OGY1NGU5NTVkODMifQ%3D%3D; SERVERID=3f56386521b609ab6e34c0e5ca694901|1592802688|1592802016; Hm_lpvt_4b46c1065cade82ef3fa0c6e05cb0f7a=1592802695'
myheaders <- c('accept' ='text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'accept-encoding' = 'gzip, deflate, br',
'accept-language' = 'zh-CN,zh;q=0.9',
'user-agent' = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36',
'cookie' = mycookie)
for (i in 1:4){
  url <- paste0('https://www.helixlife.cn/courses/boutique?page=', i)
  mypayload <- list("page" = i)
  response <- GET(url = url, add_headers(.headers = myheaders), query = mypayload, timeout(10))
  #read in and parse the page content
  web <- read_html(response, encoding ="utf-8")
  #extract course names
  course_names <- web %>% html_nodes("div.course-info-container a") %>% html_text()
  #extract course levels
  course_class <- web %>% html_nodes("span.course-rank") %>% html_text() %>% str_replace_all(" ","") %>% str_replace_all("\n","")
  #extract viewer counts
  course_people <- web %>% html_nodes("div.lean-num img") %>% html_attr("alt")
  #extract course ratings
  course_grade <- web %>% html_nodes("div.course-info-text span.f-l") %>% html_text() %>% str_replace_all(" ","") %>% str_replace_all("\n","") %>% str_replace_all("評分:","")
  course_grade <- course_grade[seq(2,length(course_grade),2)]
  #extract course prices
  course_price <- web %>% html_nodes("div.course-price") %>% html_text() %>% str_replace_all("¥","")
  #collect the above into a data frame and append it
  course <- data.frame(course_names,course_class,course_people,course_grade,course_price)
  course_inf <- rbind(course_inf,course)
}
#write the data to a csv file
write.csv(course_inf, file="course_inf.csv", row.names = FALSE)
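Before moving on, it is sensible to sanity-check the assembled table; the counts below refer to the run described in this post and will vary as the site changes:

```r
# Quick sanity check of the scraped table
nrow(course_inf)   # total courses collected across the four pages (52 in this run)
head(course_inf)   # first rows: names, levels, viewers, ratings, prices
str(course_inf)    # column types before any cleaning
```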
The final scrape yields 52 featured courses listed on the HelixLife site.
Now let's analyze this course information a bit further — for instance, how are the courses distributed across levels? There turn out to be four levels: 入門 (beginner), 初級 (elementary), 中級 (intermediate), and 高級 (advanced). Elementary courses are the most numerous and advanced courses the fewest. Evidently HelixLife's catalog leans toward newcomers, focusing on onboarding and fundamentals.
rm(list=ls())
course_inf <- read.csv("course_inf.csv", header = T,stringsAsFactors = F)
##clean the data: fix the column types
course_inf$course_people <- as.numeric(as.character(course_inf$course_people))
course_inf$course_grade <- as.numeric(as.character(course_inf$course_grade))
course_inf$course_price <- as.numeric(as.character(course_inf$course_price))
str(course_inf)
#distribution of courses across levels
course_class <- as.data.frame(sort(table(course_inf$course_class),decreasing = T))
library(ggplot2)
ggplot(course_class,aes(Var1,Freq)) + geom_bar(stat = "identity") +
labs(x = "Course level", y = "Number of courses") +
theme(panel.background=element_rect(fill='transparent')) +
geom_text(mapping = aes(label = Freq),size=4,vjust=-1,color = "black")
Course catalogs are usually designed deliberately: courses that serve the broadest audience get the most time and effort. From the analysis above, basic and beginner-leaning courses number as many as 41, about 80% of all courses. Are these in fact the ones users favor? As the next chart shows, beginner-level courses have been viewed by more than 26,000 people, well over half of the total. From these figures it is clear that beginner courses are both the largest and the most popular category among HelixLife's featured courses.
#total viewer counts per course level
rumen <- sum(course_inf[course_inf$course_class=="入門","course_people"])
chuji <- sum(course_inf[course_inf$course_class=="初級","course_people"])
zhongji <- sum(course_inf[course_inf$course_class=="中級","course_people"])
gaoji <- sum(course_inf[course_inf$course_class=="高級","course_people"])
tmp1 <- data.frame("入門"=rumen,"初級"= chuji,"中級" = zhongji, "高級" = gaoji)
tmp1 <- as.data.frame(t(tmp1))
tmp1$V2 <- factor(rownames(tmp1),levels = c("入門", "初級" , "中級", "高級"))
ggplot(tmp1,aes(V2,V1)) + geom_bar(stat = "identity") +
labs(x = "Course level", y = "Viewers") +
theme(panel.background=element_rect(fill='transparent')) +
geom_text(mapping = aes(label = V1),size=4,vjust=-1,color = "black")
The above is only a simple analysis; interested readers can tailor it to their own needs. Also note that anyone wanting to rerun this code must replace the cookies with their own, or the requests are likely to fail.
Previous posts
A handy use of R web scraping at work
R Crawler Essentials — Getting to Know HTML and CSS
R Crawler Essentials — Static vs. Dynamic Pages
R Crawler Essentials — Using the rvest Package
R Crawler Essentials — CSS + SelectorGadget
R Crawler Essentials — Chrome Developer Tools (F12)
R Crawler Essentials — The HTTP Protocol
R Crawler Essentials — httr + POST-type Scraping (NetEase Cloud Classroom)
R Crawler Essentials — Why Not Use rvest for Dynamic Pages?