Capitalizing the first letters of the English words in the title is the more standard style, but in actual Python usage they are all lowercase.
This article crawls the detail pages of all articles on the 伯樂在線 (blog.jobbole.com) website.
1. Web page persistence
1.1 Create the crawler project
Command to create the crawler project: scrapy startproject BoleSave2
Command to enter the project directory: cd BoleSave2
Command to create the spider file: scrapy genspider save blog.jobbole.com
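For reference, these two commands typically generate a project layout like the one below (the exact set of files can vary slightly with the Scrapy version):

BoleSave2/
    scrapy.cfg
    BoleSave2/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            save.py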
1.2 Edit save.py
Persisting the web pages only requires editing the spider file; the code of save.py is shown below.
The dirName variable in the saveWebPage function sets where the page files are saved, for example:
dirName = "d:/saveWebPage" saves the page files in the saveWebPage folder on drive D.
Adjust it to your own setup, but do not set it to the project folder itself, because PyCharm indexing a large number of new files inside the project will cause it to lag.
import scrapy
import os
import re


def reFind(pattern, sourceStr, nth=1):
    # Return the nth match of pattern in sourceStr, or 1 if there are fewer matches
    if len(re.findall(pattern, sourceStr)) >= nth:
        return re.findall(pattern, sourceStr)[nth - 1]
    else:
        return 1


def saveWebPage(response, id, prefix):
    # Persist one page to disk as an .html file
    dirName = "d:/saveWebPage2"
    if not os.path.isdir(dirName):
        os.mkdir(dirName)
    html = response.text
    fileName = "%s%05d.html" % (prefix, id)
    filePath = "%s/%s" % (dirName, fileName)
    with open(filePath, 'w', encoding="utf-8") as file:
        file.write(html)
    print("Page persisted in folder %s as file %s" % (dirName, fileName))


class SaveSpider(scrapy.Spider):
    name = 'save'
    allowed_domains = ['blog.jobbole.com']
    start_urls = ['http://blog.jobbole.com/all-posts/']

    def parse(self, response):
        # Read the last page number from the pagination links, then request every directory page
        pageNum = response.xpath("//a[@class='page-numbers']/text()")[-1].extract()
        for i in range(1, int(pageNum) + 1):
            url = "http://blog.jobbole.com/all-posts/page/{}/".format(i)
            yield scrapy.Request(url, callback=self.parse1)

    def parse1(self, response):
        page_id = int(reFind("\d+", response.url))
        saveWebPage(response, page_id, 'directory')
        # Collect the detail-page links and hand them to the next-level parse function
        article_list = response.xpath("//div[@class='post floated-thumb']")
        count = 0
        for article in article_list:
            url = article.xpath("div[@class='post-meta']/p/a[1]/@href").extract_first()
            count += 1
            article_id = (page_id - 1) * 20 + count
            yield scrapy.Request(url, self.parse2, meta={'id': article_id})

    def parse2(self, response):
        saveWebPage(response, response.meta['id'], 'detail')
1.3 Edit settings.py
Change the number of concurrent requests: uncomment the CONCURRENT_REQUESTS variable and set its value to 96.
CONCURRENT_REQUESTS = 96
1.4 Run results
Run command: scrapy crawl save
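The spider can also be started from a Python script instead of the command line; below is a minimal sketch using Scrapy's CrawlerProcess, assuming it is run from inside the BoleSave2 project directory so that the project settings can be found:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# Load the project's settings.py, then schedule and run the 'save' spider
process = CrawlerProcess(get_project_settings())
process.crawl('save')
process.start()  # blocks until the crawl finishes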
There are 559 directory pages and 11172 detail pages, 11731 pages in total.
The number of persisted files is also 11731, which shows that page persistence is complete.
The figure below shows that the start time and end time differ by 12 minutes, so persisting the 11731 pages took 12 minutes.
Persistence speed: 977 pages/minute, i.e. 16.29 pages/second.
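These rates follow directly from the totals above; a quick check in Python:

# 559 directory pages + 11172 detail pages persisted in 12 minutes
pages = 559 + 11172              # 11731
minutes = 12
print(pages / minutes)           # ≈ 977.6 pages per minute
print(pages / (minutes * 60))    # ≈ 16.3 pages per second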
2. Parsing the 伯樂在線 article detail pages
The 11731 page files have been packed into a single compressed archive. Download link: https://pan.baidu.com/s/19MDHdwrqrSRTEgVWA9fMzg password: x7nk
2.1 Create the crawler project
Command to create the crawler project: scrapy startproject BoleParse2
Command to enter the project directory: cd BoleParse2
Command to create the spider file: scrapy genspider parse blog.jobbole.com
2.2 Import the project into PyCharm
The location of the button for importing a project is shown in the figure below:
Select the project folder and click OK, as shown in the figure below:
The structure of the project folder is shown in the figure below:
2.3 Write items.py
There are 12 fields in total: article id (id), title (title), publish time (publishTime), category (category), digest (digest), image link (imgUrl), detail page link (detailUrl), original source (originalSource), content (content), upvote count (favourNumber), bookmark count (collectNumber), and comment count (commentNumber).
import scrapy
from scrapy import Field


class Boleparse2Item(scrapy.Item):
    id = Field()
    title = Field()
    publishTime = Field()
    category = Field()
    digest = Field()
    imgUrl = Field()
    detailUrl = Field()
    originalSource = Field()
    content = Field()
    favourNumber = Field()
    collectNumber = Field()
    commentNumber = Field()
2.4 Write parse.py
The parse function parses a directory page, fills 7 of the fields into the item, and passes the item to the next-level parse function through the request's meta.
The parse1 function parses a detail page: item = response.meta['item'] retrieves the partially filled item, the remaining 5 fields are parsed from the page, and finally yield item hands the item to the pipeline for processing.
Note: change the value of the dirName variable to the directory where the pages were saved.
import scrapy
import re
from ..items import Boleparse2Item


def reFind(pattern, sourceStr, nth=1):
    # Return the nth match of pattern in sourceStr, or '' if there are fewer matches
    if len(re.findall(pattern, sourceStr)) >= nth:
        return re.findall(pattern, sourceStr)[nth - 1]
    else:
        return ''


class ArticleSpider(scrapy.Spider):
    name = 'parse'
    dirName = "E:/saveWebPage2"
    # The start URLs are the 559 directory pages saved on disk
    start_urls = []
    for i in range(1, 560):
        fileName = "directory%05d.html" % i
        filePath = "file:///%s/%s" % (dirName, fileName)
        start_urls.append(filePath)

    def parse(self, response):
        def find(xpath, pNode=response):
            # Return the first match of the XPath, or '' if there is none
            if len(pNode.xpath(xpath)):
                return pNode.xpath(xpath).extract()[0]
            else:
                return ''

        article_list = response.xpath("//div[@class='post floated-thumb']")
        pattern = self.dirName + "/directory(\d+).html"
        page_id_str = reFind(pattern, response.url)
        page_id = int(page_id_str)
        count = 0
        for article in article_list:
            count += 1
            item = Boleparse2Item()
            item['id'] = (page_id - 1) * 20 + count
            item['title'] = find("div[@class='post-meta']/p[1]/a/@title", article)
            pTagStr = find("div[@class='post-meta']/p", article)
            item['publishTime'] = re.search("\d+/\d+/\d+", pTagStr).group(0)
            item['category'] = find("div[@class='post-meta']/p/a[2]/text()", article)
            item['digest'] = find("div[@class='post-meta']/span/p/text()", article)
            item['imgUrl'] = find("div[@class='post-thumb']/a/img/@src", article)
            item['detailUrl'] = find("div[@class='post-meta']/p/a[1]/@href", article)
            # Open the corresponding saved detail page and finish parsing there
            fileName = "detail%05d.html" % item['id']
            nextUrl = "file:///%s/%s" % (self.dirName, fileName)
            yield scrapy.Request(nextUrl, callback=self.parse1, meta={'item': item})

    def parse1(self, response):
        def find(xpath, pNode=response):
            if len(pNode.xpath(xpath)):
                return pNode.xpath(xpath).extract()[0]
            else:
                return ''

        item = response.meta['item']
        item['originalSource'] = find("//div[@class='copyright-area']"
                                      "/a[@target='_blank']/@href")
        item['content'] = find("//div[@class='entry']")
        item['favourNumber'] = find("//h10/text()")
        # The span text looks like " 2 收藏"; strip the Chinese label and whitespace
        item['collectNumber'] = find("//div[@class='post-adds']"
                                     "/span[2]/text()").strip("收藏").strip()
        commentStr = find("//a[@href='#article-comment']/span")
        # The comment count is embedded in text such as "3 評論"
        item['commentNumber'] = reFind("(\d+)\s評論", commentStr)
        yield item
2.5 Write pipelines.py
A database connection pool is used to make inserting data into the database more efficient.
Two things in the code below need to be adjusted to your setup: 1. the database name; 2. the password used to connect to the database.
Set the database character set: default charset=utf8mb4 creates the table with utf8mb4 as its default encoding, because the inserted text may contain characters that use 4-byte UTF-8 encodings.
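The distinction matters because MySQL's legacy utf8 charset stores at most 3 bytes per character, while characters such as emoji need 4 bytes in UTF-8; a quick check in Python:

# Typical Chinese characters take 3 bytes in UTF-8; emoji take 4 and need utf8mb4 in MySQL
print(len("中".encode("utf-8")))   # 3
print(len("??".encode("utf-8")))   # 4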
item['content'] = my_b64encode(item['content']) base64-encodes the page content to prevent insert exceptions (see section 3 for the cause).
from twisted.enterprise import adbapi
import pymysql
import time
import os
import base64


def my_b64encode(content):
    # Base64-encode the content string so it can be inserted safely
    byteStr = content.encode("utf-8")
    encodeStr = base64.b64encode(byteStr)
    return encodeStr.decode("utf-8")


class Boleparse2Pipeline(object):
    def __init__(self):
        self.params = dict(
            dbapiName='pymysql',
            cursorclass=pymysql.cursors.DictCursor,
            host='localhost',
            db='bole',
            user='root',
            passwd='...your password',
            charset='utf8mb4',
        )
        self.tableName = "article_details"
        self.dbpool = adbapi.ConnectionPool(**self.params)
        self.startTime = time.time()
        self.dbpool.runInteraction(self.createTable)

    def createTable(self, cursor):
        # Drop and recreate the table with utf8mb4 as its default charset
        drop_sql = "drop table if exists %s" % self.tableName
        cursor.execute(drop_sql)
        create_sql = "create table %s(id int primary key," \
                     "title varchar(200),publishtime varchar(30)," \
                     "category varchar(30),digest text," \
                     "imgUrl varchar(200),detailUrl varchar(200)," \
                     "originalSource varchar(500),content mediumtext," \
                     "favourNumber varchar(20)," \
                     "collectNumber varchar(20)," \
                     "commentNumber varchar(20)) " \
                     "default charset = utf8mb4" % self.tableName
        cursor.execute(create_sql)
        self.dbpool.connect().commit()

    def process_item(self, item, spider):
        # Hand the insert to the connection pool so it runs in a worker thread
        self.dbpool.runInteraction(self.insert, item)
        return item

    def insert(self, cursor, item):
        try:
            if len(item['imgUrl']) >= 200:
                item.pop('imgUrl')
            item['content'] = my_b64encode(item['content'])
            fieldStr = ','.join(['`%s`' % k for k in item.keys()])
            valuesStr = ','.join(['"%s"' % v for v in item.values()])
            insert_sql = "insert into %s(%s) values(%s)" \
                         % (self.tableName, fieldStr, valuesStr)
            cursor.execute(insert_sql)
            print("Inserted record %d into the MySQL database" % item['id'])
        except Exception as e:
            # Log failed inserts to a timestamped file in the Log folder
            if not os.path.isdir("Log"):
                os.mkdir("Log")
            filePath = "Log/" + time.strftime('%Y-%m-%d-%H-%M.log')
            with open(filePath, 'a+') as file:
                datetime = time.strftime('%Y-%m-%d %H:%M:%S')
                logStr = "%s log: exception while inserting record %d\nreason:%s\n"
                file.write(logStr % (datetime, item['id'], str(e)))

    def close_spider(self, spider):
        print("Total running time: %.2f seconds" % (time.time() - self.startTime))
2.6 Edit settings.py
BOT_NAME = 'BoleParse2'
SPIDER_MODULES = ['BoleParse2.spiders']
NEWSPIDER_MODULE = 'BoleParse2.spiders'
ROBOTSTXT_OBEY = False
CONCURRENT_REQUESTS = 96
CONCURRENT_ITEMS = 200
ITEM_PIPELINES = {
    'BoleParse2.pipelines.Boleparse2Pipeline': 300,
}
2.7 Run results
Run command: scrapy crawl parse
The figure above shows that inserting the data took 420 seconds in total, i.e. 25 rows/second, or 1558 rows/minute.
The figure above also shows that the inserted data occupies 679.5 MB of disk space in total; there are 11172 rows, and every row was inserted successfully.
3. Finding the cause of the insert exceptions
Command to check the character sets in MySQL: show variables like "character%"
The content contains the character combination \" (a backslash-escaped double quote), which causes the SQL syntax error.
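The root cause is that the INSERT statement is built by string formatting, so a double quote inside content terminates the quoted value early. An alternative to base64-encoding the content is to let the driver do the quoting with placeholders; below is a minimal sketch of the insert method rewritten this way (same table and item fields as above):

def insert(self, cursor, item):
    # %s placeholders let pymysql escape quotes and backslashes inside the values,
    # instead of interpolating the raw strings into the SQL text
    fieldStr = ','.join('`%s`' % k for k in item.keys())
    placeholders = ','.join(['%s'] * len(item))
    insert_sql = "insert into %s(%s) values(%s)" % (self.tableName, fieldStr, placeholders)
    cursor.execute(insert_sql, list(item.values()))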