Having more or less finished the earlier material, I now have another site to scrape: http://moku.kaibei.com/categories/7
Analysis showed that only the 7 in the URL changes, so I reworked my earlier list-reading method to take that value as a parameter. For the page count I simply hard-coded 999: when a page doesn't exist, the site returns a content-box div saying there are no works yet, so I search for that div and break out of the loop once it shows up.
Compared with last time, the only addition is a few lines that create the image folder automatically when it doesn't exist; the rest is much the same.
def kaibei_list(self, item_type):
    # resume from the last page recorded in the plan table
    vs = MySQLHelper()
    query = '%s%s%s' % ("SELECT plan.page FROM plan WHERE plan.item = ", item_type, " ORDER BY plan.time DESC LIMIT 0, 1")
    text = vs.queryAll(query)
    vs.close()
    try:
        start_page = int(text[0]['page']) + 1
    except Exception:
        start_page = 1
    for x in range(start_page, 999):
        print(str(item_type) + "_page_" + str(x))
        url = 'http://moku.kaibei.com/categories/' + str(item_type) + '/?p=' + str(x) + '&sort_by=&order=desc'
        f = request.urlopen(url)
        html = f.read().decode('utf-8')  # decode the bytes so Selector and re get a str
        # ---- check whether we've gone past the last page; if so, break
        html1 = html.replace(" ", "").replace("\r\n", "").strip()
        imglist = re.findall(r'content-box', html1)
        if len(imglist) == 1:
            break
        urls = Selector(text=html).xpath('/html/body/div[5]/div/div/ul[1]/li/a//@href').extract()
        img_src = Selector(text=html).xpath('/html/body/div[5]/div/div/ul[1]/li/a/img//@data-original').extract()
        img_title = Selector(text=html).xpath('/html/body/div[5]/div/div/ul[1]/li/a/img//@alt').extract()
        vs = MySQLHelper()
        curr_time = int(time.time())
        for a in range(len(urls)):
            query = '%s%s%s' % ("SELECT list.id FROM list WHERE list.url = '", urls[a], "' LIMIT 0, 1")  # skip rows we already have
            if vs.query(query) == 0:
                img_src_item = img_src[a]
                url_item = urls[a]
                title_item = img_title[a].replace("'", "")  # strip quotes so the hand-built SQL doesn't break
                file_url = os.getcwd()
                random_str = self.random_str()  # random string, used in the image filename
                img_path = '%s%s%s' % (file_url, '/image/', item_type)
                # create the image directory if it doesn't exist
                if not os.path.exists(img_path):
                    os.mkdir(img_path)
                    print('mkdir ---- ' + img_path)
                img_name = '%s%s%s%s%s%s%s' % (file_url, '/image/', item_type, '/', curr_time, random_str, '.jpg')
                img_sql_url = '%s%s%s%s%s%s' % ('/image/', item_type, '/', curr_time, random_str, '.jpg')
                data = {
                    'name': title_item,
                    'type': item_type,
                    'url': url_item,
                    'img_url': img_sql_url,
                    'down': '',
                    'baidu_down': '',
                    'status': 0
                }
                request.urlretrieve(img_src_item, img_name)  # download the image
                vs.insert('list', data)  # write the record to the database
                vs.commit()
        page_data = {
            'item': item_type,
            'time': curr_time,
            'page': x
        }
        vs.insert('plan', page_data)
        vs.commit()
        vs.close()
        print(str(item_type) + "_page end _" + str(x))
        time.sleep(1)
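The random_str helper isn't shown here. Since random and string appear in the import list further down, a minimal sketch of what it might look like (the length of 8 is my assumption, the real helper may differ):

def random_str(self, length=8):
    # hypothetical helper: a filename-safe random suffix for image names
    return ''.join(random.choice(string.ascii_lowercase + string.digits) for _ in range(length))

Incidentally, on Python 3 the exists-check plus os.mkdir pair above can be collapsed into os.makedirs(img_path, exist_ok=True), which also creates the intermediate image/ directory on the first run.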
When running it I start several processes: one is passed 3 and another 9, since I don't need the other categories for now. I also only start two of the workers that read the download link from each detail page, because the list side still has to download images and is slow; start more and they'd just sit there sleeping.
def main_spider(self):
    p = Process(target=self.kaibei_list, args=(3,))
    p.start()
    a = Process(target=self.kaibei_list, args=(9,))
    a.start()
    # start two workers that read the detail pages for download links
    for x in range(0, 2):
        worker = Process(target=self.down_url_spider)
        worker.start()
        time.sleep(1)
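If the parent process is meant to wait for its workers instead of returning right away, one variant (a sketch using the same Process API) keeps the handles and joins them:

def main_spider(self):
    procs = [Process(target=self.kaibei_list, args=(3,)),
             Process(target=self.kaibei_list, args=(9,))]
    procs += [Process(target=self.down_url_spider) for _ in range(2)]
    for p in procs:
        p.start()
        time.sleep(1)
    for p in procs:
        p.join()  # block until every worker exits

Note that since down_url_spider loops forever, join() here would wait indefinitely; in practice you'd add a stop condition first.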
This site is amusing: the download links that are supposed to cost money sit right in the JS source, gated only by client-side JS. I assumed I could just download and use them, but that turned out to be naive. The files do download, but the templates are still encrypted and only usable with their USB security key (U盾) and the professional edition.
def down_url_spider(self):
    while True:
        vs = MySQLHelper()
        query = 'SELECT list.id, list.url, list.`status` FROM list WHERE list.`status` = 0 LIMIT 0, 1'  # fetch one row whose status is 0
        text = vs.queryAll(query)
        try:
            item_id = text[0]['id']
            print('down_url_spider ' + str(item_id))
        except Exception:
            # no pending rows yet, close up and retry shortly
            vs.close()
            time.sleep(3)
        else:
            # first set the row's status to 1 so other workers know it's claimed
            querys = "%s%s%s" % ("UPDATE `list` SET `status`= 1 WHERE (`id`='", item_id, "')")
            vs.query(querys)
            vs.commit()
            url = 'http://moku.kaibei.com' + text[0]['url']
            f = request.urlopen(url)
            html = f.read().decode('utf-8')  # decode the bytes so Selector gets a str
            down_url = Selector(text=html).xpath('/html/body/div[5]/div/script[3]').extract()
            script_text = str(down_url[0]).replace(" ", "").replace("\r\n", "").strip()
            baidu_down = re.findall(r'pan\.baidu\.com/s/\w+', script_text)
            baidu_down = baidu_down[0]
            querys = "%s%s%s%s%s" % ("UPDATE `list` SET `status`= 2, `baidu_down`= '", baidu_down, "' WHERE (`id`='", item_id, "')")
            vs.query(querys)
            vs.commit()
            vs.close()
            time.sleep(3)
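One subtlety with running two down_url_spider workers: both can SELECT the same status-0 row before either one flips it to 1, and the same page then gets fetched twice. A common fix is to claim the row inside a transaction with a locking read. A minimal sketch, assuming a pymysql connection and an InnoDB table (MySQLHelper's internals aren't shown, so this is illustrative):

import pymysql

def claim_one(conn):
    # Atomically claim one pending row; returns (id, url) or None.
    # FOR UPDATE locks the row, so a second worker blocks until we
    # commit, then re-reads and skips it because status is already 1.
    with conn.cursor() as cur:
        cur.execute("SELECT id, url FROM list WHERE status = 0 LIMIT 1 FOR UPDATE")
        row = cur.fetchone()
        if row is None:
            conn.commit()
            return None
        cur.execute("UPDATE list SET status = 1 WHERE id = %s", (row[0],))
        conn.commit()
        return row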
Even if some of this was wasted effort, I'll chalk it up as practice and keep improving the code.
Maybe life is like that too: sometimes you try very hard and get nothing for it, but don't be discouraged. It may only be useless for now, and once it connects with something else later it might take off.
I'm on Python 3 and used the following modules:
from multiprocessing import Process
import docs.settings as settings
from urllib import request
from scrapy.selector import Selector
from common.MySQLHelper import MySQLHelper
from lxml import etree
import random
import string
import time
import re
import os
You can see in the database that all the records have been pulled down.
The images have all been downloaded as well.
One problem did come up during the run: when a title contained a ' character, the database write failed, so I just stripped the ' out. For a practice project that's fine, but in a serious project this kind of thing must not happen — it's the basic mechanism behind SQL injection.
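The cleaner fix for the quote problem (and for injection generally) is not stripping characters but letting the driver escape values through a parameterized query. A sketch with pymysql; the connection parameters and sample values are made up, and the columns match the data dict used above:

import pymysql

conn = pymysql.connect(host='localhost', user='spider', password='***',
                       db='spider', charset='utf8mb4')
with conn.cursor() as cur:
    # the driver fills each %s and escapes quotes itself,
    # so a title like "it's" inserts cleanly
    cur.execute(
        "INSERT INTO list (name, type, url, img_url, down, baidu_down, status) "
        "VALUES (%s, %s, %s, %s, %s, %s, %s)",
        ("it's a title", 3, '/items/123', '/image/3/x.jpg', '', '', 0))
conn.commit()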