36kr is a site with anti-scraping measures, and the data it returns is ugly, frankly a pile of junk. This post just records the scraping process.
(一) Scraping environment
- win10
- python3
- scrapy
(二) Scraping process
(1) Entry point: the search page
(2) Data is loaded dynamically via JS; inspect the "next page" request:
(3) Returned data:
(4) Request URL
http://36kr.com/api//search/entity-search?page=4&per_page=40&keyword=机器人&entity_type=post&ts=1532794031142&_=1532848230039
Analysis: ts and the _ parameter after it are both timestamps and can be dropped; entity_type=post is required; the only parameter that needs to vary is page.
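As a sanity check, the request can be rebuilt without the timestamp parameters. A minimal sketch (the parameter names come from the captured URL above; the helper name is mine):

```python
from urllib.parse import urlencode

def build_search_url(keyword, page):
    """Build the 36kr search-API URL; the ts/_ timestamps are omitted."""
    params = {
        'page': page,            # the only parameter that varies per request
        'per_page': 40,
        'keyword': keyword,
        'entity_type': 'post',   # required by the API
    }
    return 'http://36kr.com/api//search/entity-search?' + urlencode(params)

url = build_search_url('机器人', 1)
```

urlencode also takes care of percent-encoding the Chinese keyword.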
(5) The list-page JSON; each item's id is the identifier needed to build the detail-page link
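Turning those ids into detail-page URLs can be sketched like this (the sample JSON is made up; only its data → items → id shape matches the real response):

```python
import json

# Made-up stand-in for the list-page response; only the shape matters.
sample = '{"code": 0, "data": {"items": [{"id": 5137751}, {"id": 5137752}]}}'
items = json.loads(sample)['data']['items']
# Each id plugs straight into the detail-page URL pattern.
detail_urls = ['http://36kr.com/p/{}.html'.format(item['id']) for item in items]
```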
(6) Detail-page data
Content to scrape:
Fields: title, author, date, summary, tags, content
Looking at the page source, all the data sits inside the script tag that defines var props
(7) Extract it with a regex and convert it to proper JSON (reason: a JSON object makes it easy to pull out individual fields, whereas slicing with regexes alone is clumsy, and some fields simply can't be reached that way)
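The extract-then-parse step can be sketched on a synthetic page (the HTML string is a made-up stand-in; the two regexes mirror the ones used in the spider source):

```python
import json
import re

# Made-up stand-in for a detail page; real pages inline a much larger object.
html = ('<script>var props={"detailArticle|post":'
        '{"title": "demo", "published_at": "2018-07-28"},'
        '"hotPostsOf30":[]}</script>')

script = re.findall(r'<script>var props=(.*?)</script>', html)[0]
# Grab everything between "detailArticle|post": and "hotPostsOf30, then
# strip the trailing comma so json.loads accepts it.
raw = re.findall(r'"detailArticle\|post":(.*?)"hotPostsOf30', script)[0]
article = json.loads(raw.rstrip(','))
```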
Source code:
# -*- coding: utf-8 -*-
# @Time : 2018/7/28 17:13
# @Author : 蛇崽
# @Email : 643435675@QQ.com
# @File : 36kespider.py
import json
import re
import time

import scrapy


class ke36Spider(scrapy.Spider):
    name = 'ke36'
    allowed_domains = ['www.36kr.com']
    start_urls = ['https://36kr.com/']

    def parse(self, response):
        print('start parse ------------------------- ')
        word = '机器人'  # search keyword: "robot"
        t = time.time()
        print('t', t)
        # walk the search API page by page; the ts/_ timestamp params are omitted
        for page in range(1, 200):
            burl = ('http://36kr.com/api//search/entity-search'
                    '?page={}&per_page=40&keyword={}&entity_type=post').format(page, word)
            yield scrapy.Request(burl, callback=self.parse_list, dont_filter=True)

    def parse_list(self, response):
        res = response.body.decode('utf-8')
        jdata = json.loads(res)
        code = jdata['code']
        timestamp = jdata['timestamp']
        timestamp_rt = jdata['timestamp_rt']
        # each item's id is enough to build the detail-page URL
        for item in jdata['data']['items']:
            m_id = item['id']
            b_url = 'http://36kr.com/p/{}.html'.format(m_id)
            yield scrapy.Request(b_url, callback=self.parse_detail, dont_filter=True)

    def parse_detail(self, response):
        res = response.body.decode('utf-8')
        # all article data is inlined in the <script>var props=...</script> tag
        content = re.findall(r'<script>var props=(.*?)</script>', res)
        temstr = content[0]
        # cut out the "detailArticle|post" object and drop the trailing comma
        # so it parses as valid JSON
        minfo = re.findall(r'"detailArticle\|post":(.*?)"hotPostsOf30', temstr)[0]
        minfo = minfo.rstrip(',')
        jdata = json.loads(minfo)
        published_at = jdata['published_at']
        username = jdata['user']['name']
        # article title; the original read jdata['user']['title'], which grabs a
        # field off the author object instead of the article
        title = jdata['title']
        extraction_tags = jdata['extraction_tags']
        content = jdata['content']
        print(published_at, username, title, extraction_tags)
        print('*' * 50)
        print(content)