A brief outline of the scraping approach:
- Searched Google for the One Piece manga and picked 風(fēng)之動(dòng)漫 (manhua.fzdm.com) as the target site, e.g. http://manhua.fzdm.com/2/846/index_1.html
- Observed the URL pattern: the 846 segment is the chapter number, and the number in index_1.html is the page within that chapter
- Inspected the image tag on the page and worked out a CSS selector that uniquely identifies the comic image (a quick single-page check is sketched right after this list)
- Used requests to GET each page and BeautifulSoup to parse the response
- Originally planned to store the image URLs in MongoDB, but since I am not yet comfortable with MongoDB I simply kept them in a Python list for now
- Used urllib to download the images to local disk
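Before running the full script, the URL pattern and the CSS selector can be sanity-checked on a single page. A minimal sketch, reusing the selector from the script below (if the site layout has changed, the selector will need to be adjusted):

```python
from bs4 import BeautifulSoup
import requests

# Chapter 846, page 1 -- the URL pattern is /2/<chapter>/index_<page>.html
url = 'http://manhua.fzdm.com/2/846/index_1.html'
wb_data = requests.get(url)
soup = BeautifulSoup(wb_data.text, 'lxml')

# Print whatever the selector matches; ideally exactly one image URL per page
for img in soup.select('div#mh > li > a > img'):
    print(img.get('src'))
```

If this prints exactly one image URL, the same logic can be looped over pages and chapters, which is what the script below does.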
from bs4 import BeautifulSoup
import requests
import time
import pymongo
import urllib.request
import os
path = '/Users/meixuhong/OnePiece/'
# ================================== Set up the database ====================================
client = pymongo.MongoClient('localhost',27017)
onepiece = client['onepiece']
onepiece_pic = onepiece['onepiece_pic']
# ================================== Scrape multiple pages ==================================
def parseMultiplePages(chapter, page_num):
    img_urls = []
    for page in range(1, page_num + 1):
        time.sleep(4)  # pause between requests to avoid hammering the server
        wb_data = requests.get('http://manhua.fzdm.com/2/{}/index_{}.html'.format(chapter, page))
        soup = BeautifulSoup(wb_data.text, 'lxml')
        imgs = soup.select('div#mh > li > a > img')
        for img in imgs:
            data = {
                'img': img.get('src')
            }
            print(data)
            # onepiece_pic.insert_one(data)
            img_urls.append(data['img'])
    print('img_urls is a list as:', img_urls)
    return img_urls
# e.g. the first 16 pages of chapter 837:
# parseMultiplePages(837,16)
# ================================== Download and name the images ==================================
def dl_images(chapter, img_urls):
    # == Create the chapter directory if it does not exist yet ==
    subPath = path + str(chapter) + '/'
    isExists = os.path.exists(subPath)
    if not isExists:
        print('create the path: {}...'.format(subPath))
        os.mkdir(subPath)
    else:
        print('the path already exists ...')
    # == Directory handling done ==
    for i in range(1, len(img_urls) + 1):
        # Download each file with urllib.request.urlretrieve(url, file_path_name),
        # prefixing the page index so the files sort in reading order
        file_name = subPath + str(i) + '_' + img_urls[i - 1].split('/')[-1]
        urllib.request.urlretrieve(img_urls[i - 1], file_name)
        print('\n{} downloaded and has been named as {}.\n'.format(img_urls[i - 1], file_name))
# ================================== Download multiple chapters ==================================
def dl_chapters(chapter_from_, chapter_to_):
    for i in range(chapter_from_, chapter_to_ + 1):
        # fetch up to 18 pages for each chapter
        dl_images(i, parseMultiplePages(i, 18))

dl_chapters(800, 848)
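As mentioned at the top, the original plan was to keep the image URLs in MongoDB rather than in a list. A minimal sketch of how that could look, reusing the onepiece_pic collection set up at the start of the script (the helper names and document fields are my own choice, not part of the original script):

```python
# Store each scraped URL as one document ...
def save_img_url(chapter, page, img_url):
    onepiece_pic.insert_one({'chapter': chapter, 'page': page, 'img': img_url})

# ... and read a chapter's URLs back in page order when downloading
def load_img_urls(chapter):
    return [doc['img'] for doc in onepiece_pic.find({'chapter': chapter}).sort('page', 1)]
```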
The script only aims at getting the job done and does not try to do anything more; from now on, when a new One Piece chapter comes out, I won't have to hunt around for sources and wait. It is enough for my personal needs.
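Since the whole point is to grab new chapters as they come out, a natural follow-up is to skip chapters that already have a local folder and only download the missing ones. A rough sketch building on the functions above (dl_new_chapters and the 18-page limit are my additions; note that a partially downloaded folder would also be skipped):

```python
# Only fetch chapters that do not have a local folder yet
def dl_new_chapters(chapter_from_, chapter_to_):
    for i in range(chapter_from_, chapter_to_ + 1):
        if os.path.exists(path + str(i) + '/'):
            print('chapter {} already downloaded, skipping ...'.format(i))
            continue
        dl_images(i, parseMultiplePages(i, 18))

# e.g. dl_new_chapters(849, 860)
```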