This example uses the Douban Movie Top250 page. The Top250 list is really a multi-page scrape, which is nothing to be afraid of; start by writing the scraper for the first page.
```python
from bs4 import BeautifulSoup
import requests
import time

url = 'https://movie.douban.com/top250?start=0&filter='
wb_data = requests.get(url)
soup = BeautifulSoup(wb_data.text, 'lxml')
# CSS selectors for each movie's poster image, title, and rating
imgs = soup.select('#content div.pic > a > img')
titles = soup.select('#content div.info > div.hd > a > span')
rates = soup.select('#content span.rating_num')
for img, title, rate in zip(imgs, titles, rates):
    data = {
        'img': img.get('src'),
        'title': title.get_text(),
        'rate': rate.get_text()
    }
    print(data)
```
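To see what those three selectors pick up without hitting Douban itself, here is a minimal sketch run against a hand-written HTML fragment that imitates the page's structure (the fragment and its values are made up for illustration; `html.parser` is used so no extra parser is needed):

```python
from bs4 import BeautifulSoup

# A tiny hand-made fragment imitating Douban's markup (values are invented)
html = '''
<div id="content">
  <div class="pic"><a href="#"><img src="poster1.jpg"></a></div>
  <div class="info"><div class="hd"><a href="#"><span>Movie One</span></a></div></div>
  <span class="rating_num">9.7</span>
</div>
'''
soup = BeautifulSoup(html, 'html.parser')
imgs = soup.select('#content div.pic > a > img')
titles = soup.select('#content div.info > div.hd > a > span')
rates = soup.select('#content span.rating_num')
for img, title, rate in zip(imgs, titles, rates):
    data = {'img': img.get('src'), 'title': title.get_text(), 'rate': rate.get_text()}
    print(data)
```

Running this prints one dictionary with the image `src` and the text of the title and rating nodes, which is exactly the shape the real scraper produces per movie.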
OK资昧,做完一個之后其實(shí)工作完成了大半,接下來稍微修改即可荆忍。
[Screenshots: the URLs of pages 1 and 2 of Douban Top250]
The two screenshots above show the links for pages 1 and 2 of Douban Top250. It is easy to see that only the number after `start` changes: it is the offset at which each page begins, and every page loads 25 movies (0, 25, 50, ...). With this pattern we can build the complete set of page URLs using a list comprehension, replacing the `url` line above as follows.
```python
urls = ['https://movie.douban.com/top250?start={}&filter='.format(i) for i in range(0, 250, 25)]
```
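A quick sanity check on that comprehension: `range(0, 250, 25)` yields the ten offsets 0, 25, ..., 225, so ten page URLs come out, one per page of 25 movies:

```python
urls = ['https://movie.douban.com/top250?start={}&filter='.format(i)
        for i in range(0, 250, 25)]
print(len(urls))  # 10 pages in total
print(urls[0])    # offset 0  -> https://movie.douban.com/top250?start=0&filter=
print(urls[1])    # offset 25 -> https://movie.douban.com/top250?start=25&filter=
```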
Then wrap the scraping code in a function and call it in a `for` loop over the URLs. The final code is as follows.
```python
from bs4 import BeautifulSoup
import requests
import time

urls = ['https://movie.douban.com/top250?start={}&filter='.format(i) for i in range(0, 250, 25)]

def get_attractions(url, data=None):
    wb_data = requests.get(url)
    time.sleep(2)  # wait 2 seconds between requests
    soup = BeautifulSoup(wb_data.text, 'lxml')
    imgs = soup.select('#content div.pic > a > img')
    titles = soup.select('#content div.info > div.hd > a > span')
    rates = soup.select('#content span.rating_num')
    if data is None:
        for img, title, rate in zip(imgs, titles, rates):
            data = {
                'img': img.get('src'),
                'title': title.get_text(),
                'rate': rate.get_text()
            }
            print(data)

for single_url in urls:
    get_attractions(single_url)
```
Here Python's time module is introduced: its sleep() method suspends the calling thread, and we use it to make the crawler pause two seconds between requests, which helps prevent sites from banning our IP for requesting too frequently.
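As a minimal, offline illustration of how sleep() spaces out the loop (no real requests are made here; the 0.1-second delay is just for the demo):

```python
import time

start = time.time()
for i in range(3):
    # a real crawler would call requests.get(url) at this point
    time.sleep(0.1)  # pause before the next iteration
elapsed = time.time() - start
print('elapsed: {:.2f}s'.format(elapsed))  # at least 0.3s for 3 iterations
```

Three iterations with a 0.1-second pause each take at least 0.3 seconds; with the tutorial's 2-second pause, the ten Douban pages take at least 20 seconds, a deliberate trade of speed for politeness.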