A simple image-site scraper written in Python.
We will build the scraper in three parts:
get the page URLs - parse each page's HTML - download the images
Each page's URL looks like this: www.91doutu.com/category/qq表情包/page/1
so we can use a for loop to generate the URLs of the pages we want to crawl.
BASE_PAGE_URL = 'http://www.91doutu.com/category/qq%E8%A1%A8%E6%83%85%E5%8C%85/page/'
for i in range(1, 11):
    print(BASE_PAGE_URL + str(i))
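The percent-encoded part of BASE_PAGE_URL is just the UTF-8 encoding of the category name qq表情包. A minimal sketch showing the two are equivalent, using the standard library's urllib.parse:

```python
from urllib.parse import quote, unquote

category = 'qq表情包'
encoded = quote(category)  # percent-encode the UTF-8 bytes of the Chinese text
base = 'http://www.91doutu.com/category/' + encoded + '/page/'
print(encoded)                                    # qq%E8%A1%A8%E6%83%85%E5%8C%85
print(unquote('qq%E8%A1%A8%E6%83%85%E5%8C%85'))   # qq表情包
```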
That gives us all the page_urls we need.
Next, step two:
parse the page's HTML and pull out the parts we need.
# encoding: utf-8
import requests
from bs4 import BeautifulSoup

response = requests.get('http://www.91doutu.com/category/qq%E8%A1%A8%E6%83%85%E5%8C%85')
content = response.content
soup = BeautifulSoup(content, 'lxml')
img_list = soup.find_all('img', attrs={'class': 'thumb'})
for img in img_list:
    # the real image URL is in data-src (the site lazy-loads images)
    print(img['data-src'])
That gives us the image URLs we need.
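The find_all call matches <img> tags whose class includes thumb and then reads the lazy-load URL out of the data-src attribute. A self-contained sketch on a toy HTML snippet (the snippet is made up for illustration, not taken from the live page; html.parser is used so lxml is not required):

```python
from bs4 import BeautifulSoup

html = '''
<div class="post">
  <img class="thumb" data-src="http://example.com/a.gif" src="placeholder.png">
  <img class="thumb" data-src="http://example.com/b.gif" src="placeholder.png">
  <img class="avatar" src="me.png">
</div>
'''
soup = BeautifulSoup(html, 'html.parser')
# only the thumb images are selected; the avatar img is skipped
urls = [img['data-src'] for img in soup.find_all('img', attrs={'class': 'thumb'})]
print(urls)  # ['http://example.com/a.gif', 'http://example.com/b.gif']
```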
Step three - downloading.
One function takes care of it.
First split the URL and take the last element of the list as our filename, then download the file into the images directory.
# encoding: utf-8
import os
import urllib.request

import requests
from bs4 import BeautifulSoup

def download_image(url):
    split_list = url.split('/')
    filename = split_list.pop()           # last URL segment as the filename
    os.makedirs('images', exist_ok=True)  # make sure the target dir exists
    path = os.path.join('images', filename)
    urllib.request.urlretrieve(url, filename=path)

response = requests.get('http://www.91doutu.com/category/qq%E8%A1%A8%E6%83%85%E5%8C%85')
content = response.content
soup = BeautifulSoup(content, 'lxml')
img_list = soup.find_all('img', attrs={'class': 'thumb'})
for img in img_list:
    url = img['data-src']
    download_image(url)
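Splitting on '/' and popping the last element is the same as taking the basename of the URL path. A stdlib-only sketch of that filename logic (the example URL is hypothetical), with the bonus that urlsplit drops any query string before the basename is taken:

```python
import posixpath
from urllib.parse import urlsplit

def filename_from_url(url):
    # urlsplit().path strips any ?query=... part;
    # posixpath.basename keeps only the last path segment
    return posixpath.basename(urlsplit(url).path)

print(filename_from_url('http://www.91doutu.com/uploads/2017/08/funny.gif'))  # funny.gif
```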
The complete code:
# encoding: utf-8
# _PlugName_ = Spider_Img
# __Author__ = Search__
# @Time : 2017/8/29
# __Refer___ = http://www.reibang.com/u/d743d12d1d77
import os
import urllib.request

import requests
from bs4 import BeautifulSoup

BASE_PAGE_URL = 'http://www.91doutu.com/category/qq%E8%A1%A8%E6%83%85%E5%8C%85/page/'
PAGE_URL_LIST = []
for x in range(7, 10):
    url = BASE_PAGE_URL + str(x)
    PAGE_URL_LIST.append(url)

def download_image(url):
    split_list = url.split('/')
    filename = split_list.pop()           # last URL segment as the filename
    os.makedirs('images', exist_ok=True)  # create the output dir if needed
    path = os.path.join('images', filename)
    urllib.request.urlretrieve(url, filename=path)

def get_page(page_url):
    response = requests.get(page_url)
    content = response.content
    soup = BeautifulSoup(content, 'lxml')
    img_list = soup.find_all('img', attrs={'class': 'thumb'})
    for img in img_list:
        url = img['data-src']
        download_image(url)

def main():
    for page_url in PAGE_URL_LIST:
        get_page(page_url)

if __name__ == "__main__":
    main()
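main() fetches the pages one at a time. If the site tolerates it, the per-page work can be spread over a thread pool instead; a sketch using concurrent.futures, where do_page is a stand-in placeholder worker (a real one would call get_page and hit the network):

```python
from concurrent.futures import ThreadPoolExecutor

BASE_PAGE_URL = 'http://www.91doutu.com/category/qq%E8%A1%A8%E6%83%85%E5%8C%85/page/'
page_urls = [BASE_PAGE_URL + str(x) for x in range(7, 10)]

def do_page(page_url):
    # stand-in for get_page(); here it just returns the page number
    return page_url.rsplit('/', 1)[-1]

# map preserves input order even though workers run concurrently
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(do_page, page_urls))
print(results)  # ['7', '8', '9']
```

Keep max_workers small for a site like this; hammering it with many parallel requests is a quick way to get blocked.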