I. Collecting the relevant information
1. Find the URL of the column to crawl
https://zhuanlan.zhihu.com/(column name)
For example:
https://zhuanlan.zhihu.com/lingkou-solution
2. Find the request URL, cookie, user-agent, and other information
Find the URL of the articles endpoint. The abbreviated address is:
https://zhuanlan.zhihu.com/api/columns/lingkou-solution/articles
The full address is as follows (the include parameter is URL-encoded: %5B*%5D decodes to [*] and %2C to a comma):
https://zhuanlan.zhihu.com/api/columns/lingkou-solution/articles?include=data%5B*%5D.admin_closed_comment%2Ccomment_count%2Csuggest_edit%2Cis_title_image_full_screen%2Ccan_comment%2Cupvoted_followees%2Ccan_open_tipjar%2Ccan_tip%2Cvoteup_count%2Cvoting%2Ctopics%2Creview_info%2Cauthor.is_following%2Cis_labeled%2Clabel_info
3. Analyze the API
Request the full address and analyze the data it returns.
As shown in Figure 1-3-1, the keys of the paging object already tell us a few things:
"paging": {
"is_end": false, // 是否是最后一頁(yè)
"totals": 39, // 該專欄文章總數(shù)
// 上一頁(yè)地址
"previous": "https://zhuanlan.zhihu.com/columns/lingkou-solution/articles?include=data%5B%2A%5D.admin_closed_comment%2Ccomment_count%2Csuggest_edit%2Cis_title_image_full_screen%2Ccan_comment%2Cupvoted_followees%2Ccan_open_tipjar%2Ccan_tip%2Cvoteup_count%2Cvoting%2Ctopics%2Creview_info%2Cauthor.is_following%2Cis_labeled%2Clabel_info&limit=10&offset=0",
"is_start": true, // 是否是第一頁(yè)
// 下一頁(yè)地址
"next": "https://zhuanlan.zhihu.com/columns/lingkou-solution/articles?include=data%5B%2A%5D.admin_closed_comment%2Ccomment_count%2Csuggest_edit%2Cis_title_image_full_screen%2Ccan_comment%2Cupvoted_followees%2Ccan_open_tipjar%2Ccan_tip%2Cvoteup_count%2Cvoting%2Ctopics%2Creview_info%2Cauthor.is_following%2Cis_labeled%2Clabel_info&limit=10&offset=10"
},
As shown in Figure 1-3-2, data is the list of articles, 10 per page; from each entry we need the id and the title. Note that next differs from previous only in its offset parameter, so you can walk the whole column by following paging['next'] until is_end becomes true.
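Before writing the full crawler, you can verify the endpoint with a single request. A minimal sketch, assuming the API still answers without login and using a generic user-agent string:

import requests

API = 'https://zhuanlan.zhihu.com/api/columns/lingkou-solution/articles'
resp = requests.get(API, headers={'user-agent': 'Mozilla/5.0'})
page = resp.json()
print(page['paging']['totals'])   # total articles in the column, e.g. 39
print(page['paging']['is_end'])   # False while more pages remain
for article in page['data']:      # 10 articles per page
    print(article['id'], article['title'])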
II. Start crawling
1. Code preparation
You need to install wkhtmltopdf + pdfkit.
wkhtmltopdf has to be downloaded from its official site; on Windows you also need to tell pdfkit where the executable lives:
https://wkhtmltopdf.org/downloads.html
config = pdfkit.configuration(wkhtmltopdf='path to wkhtmltopdf.exe')
pdfkit.from_url('target URL', 'output file', configuration=config)
pdfkit is a Python wrapper around this tool and can be installed via pip:
pip install pdfkit
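A one-off conversion is a quick smoke test for the whole toolchain. A sketch, assuming a default Windows install path for wkhtmltopdf.exe (adjust it to your machine; on macOS/Linux, where wkhtmltopdf is usually on PATH, the configuration argument can be omitted):

import pdfkit

# Assumed default install location; point this at your own wkhtmltopdf.exe.
config = pdfkit.configuration(wkhtmltopdf=r'C:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe')
pdfkit.from_url('https://example.com', 'test.pdf', configuration=config)

With both pieces working, these are the imports the script needs: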
import requests
from requests import RequestException
from bs4 import BeautifulSoup
import pdfkit
import os
import lxml  # imported only to make sure the parser backend used below is installed
import re
import time

# Directory of the current script, used later to locate the generated HTML files.
CURRENT_FILE_PATH = os.path.dirname(os.path.abspath(__file__))
2. Prepare the URL, headers, user-agent, etc. (the cookie below is session-specific; replace it with one copied from your own browser)
url = 'https://zhuanlan.zhihu.com/api/columns/lingkou-solution/articles?include=data%5B*%5D.admin_closed_comment%2Ccomment_count%2Csuggest_edit%2Cis_title_image_full_screen%2Ccan_comment%2Cupvoted_followees%2Ccan_open_tipjar%2Ccan_tip%2Cvoteup_count%2Cvoting%2Ctopics%2Creview_info%2Cauthor.is_following%2Cis_labeled%2Clabel_info'
cookie = '_xsrf=3bb33dbe-5749-4743-b897-e7aa515bf65a; _zap=53a6c2b5-1d4c-4a0e-81e3-8b5d56019c35; d_c0="AEChZA3T_g-PTn1jyfsKuj_apKrFA5GHFVs=|1567579015"; tgw_l7_route=66cb16bc7f45da64562a077714739c11'
user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'
headers = {'cookie': cookie, 'user-agent': user_agent}
3. Start crawling and fetch all the articles
def get_zhihu_data() -> list:
    array_list = []
    global url
    while True:
        try:
            resp = requests.get(url, headers=headers)
        except RequestException as error:
            print('get data error', error)
            break
        if resp.status_code != 200:
            print('get data status_code error')
            break
        j = resp.json()
        data = j['data']
        for article in data:
            print(article.get('id'), article.get('title'))
            info = {
                'id': article.get('id'),
                'title': article.get('title'),
            }
            array_list.append(info)
        paging = j.get('paging')
        if paging['is_end']:
            break
        # The "next" URL returned by the API points at zhuanlan.zhihu.com,
        # but the JSON endpoint lives under /api, so patch the path in.
        url = paging['next']
        url = url.replace('zhuanlan.zhihu.com', 'zhuanlan.zhihu.com/api')
        time.sleep(2)
        # Only the first page is crawled here; to crawl everything, remove this break.
        break
    return array_list
4. Visit each article's page and save it as a local HTML file
def save_data_html(array_list):
    index = 1
    for item in array_list:
        url = 'https://zhuanlan.zhihu.com/p/%s' % item['id']
        # Prefix a zero-padded index so the files sort in crawl order.
        name = f'{index:03}' + '-' + item['title']
        # Strip '/' so the title is a valid file name.
        name = name.replace('/', '')
        html = requests.get(url, headers=headers).text
        soup = BeautifulSoup(html, 'lxml')
        content = soup.prettify()
        # content = soup.find(class_='Post-Main Post-NormalMain').prettify()
        # Turn the lazy-load data-actualsrc attributes into real src attributes.
        content = content.replace('data-actual', '')
        # Demote article <h1> titles to <h2> so they sit below the PDF title.
        content = content.replace('h1>', 'h2>')
        content = re.sub(r'<noscript>.*?</noscript>', '', content)
        # Drop the inline base64 placeholder images.
        content = re.sub(r'src="data:image.*?"', '', content)
        # content = '<!DOCTYPE html><html><head><meta charset="utf-8"></head><body><h1>%s</h1>%s</body></html>' % (name, content)
        with open('%s.html' % name, 'w', encoding='utf-8') as f:
            f.write(content)
        index += 1
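The function only strips '/', which is enough on macOS/Linux. On Windows, characters such as \ : * ? " < > | are also illegal in file names; a slightly more thorough sanitizer (a hypothetical helper, not part of the original script) could look like this:

def safe_filename(title: str) -> str:
    # Remove every character that Windows forbids in file names (hypothetical helper).
    return re.sub(r'[\\/:*?"<>|]', '', title)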
III. Converting the HTML files to PDF
def cover_html_to_pdf():
    # Collect every .html file in the script directory, in name (= index) order.
    file_list = os.listdir(CURRENT_FILE_PATH)
    all_html_list = []
    for path in file_list:
        file_extension = os.path.splitext(path)[1]
        if file_extension == '.html':
            all_html_list.append(path)
    all_html_list.sort()
    print(all_html_list)
    # pdfkit accepts a list of input files and merges them into a single PDF.
    pdfkit.from_file(all_html_list, 'zhihu.pdf')
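Putting the three steps together, a minimal driver could look like this (a sketch of how the functions above fit together; the published script may wire them differently):

if __name__ == '__main__':
    articles = get_zhihu_data()   # step II-3: collect article ids and titles
    save_data_html(articles)      # step II-4: save each article as local HTML
    cover_html_to_pdf()           # step III: merge the HTML files into zhihu.pdf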
The converted result is shown in the figure below.
Full source code:
https://github.com/yangyu2010/Crossin-Day21/blob/master/Other/cross_zhihu.py