Using what we learned in the previous two sections, I wrote a simple crawler that scrapes movie names, directors, covers, and genre information from Maoyan Movies.
Approach:
First take a look at http://maoyan.com/robots.txt to see what the site allows.
The information we want lives at http://maoyan.com/films. Using the page-number element at the bottom of that page (inspected with Firebug), we can collect every film link contained in each page, follow each link to the film's detail page, and scrape what we need there. I scraped the film name, director, genre, and cover path, and saved them to a CSV file.
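The pagination and link-filtering logic described above can be sketched offline before touching the network. The hrefs below are hypothetical stand-ins for what Firebug shows in the page's `<a>` tags, not real Maoyan data:

```python
import re

base_url = 'http://maoyan.com/films'

# Build the paginated list URLs: each page shows 30 films, selected by ?offset=
page_urls = [base_url + '?offset=' + str(30 * i) for i in range(3)]

# Hypothetical hrefs as they might appear in the page's <a> tags
hrefs = ['/films/1234', '/board/4', '/films/5678', None, '/films/1234']

seen = []
for href in hrefs:
    # Keep only links pointing at film detail pages, skipping duplicates
    if href is not None and re.search('films', href) and href not in seen:
        seen.append(href)

print(page_urls[1])  # http://maoyan.com/films?offset=30
print(seen)          # ['/films/1234', '/films/5678']
```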
Modules used:
- requests module
- lxml module
- re module
- cssselect module
- csv module
Downloading the pages (the code is commented):
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import requests
import lxml.html
import re
import sys
reload(sys)                      # Python 2 only: allow setting a
sys.setdefaultencoding("utf-8")  # default encoding of utf-8


class Downloader:
    def __init__(self, url, scrape_callback=None):
        self.url = url
        self.scrape_callback = scrape_callback

    def get_download_url(self):
        '''Collect the URL of every film detail page.'''
        url_list = []
        # for i in range(1, 22803):  # full crawl; use range(1) while testing
        for i in range(1):
            url = self.url + '?offset=' + str(30 * i)
            response = requests.get(url)
            if response.status_code == 200:
                html = response.text
                tree = lxml.html.fromstring(html)
                # XPath: get every <a> tag under a <div>
                links = tree.xpath('//div/a')
                for link in links:
                    # Read the href attribute of the <a> tag
                    url_str = link.get('href')
                    if url_str is not None:
                        # Only hrefs containing 'films' point at the content we want
                        if re.search('films', url_str) is not None:
                            # Skip duplicate URLs
                            if url_str not in url_list:
                                url_list.append(url_str)
        return url_list

    def download(self):
        '''Fetch each detail page and scrape the film name and other fields.'''
        # Get the URLs of all detail pages
        url_list = self.get_download_url()
        for url_num in url_list:
            # Inspecting the element shows hrefs like '/films/2333'; appending
            # the trailing number to the base URL gives the detail-page URL
            urlpage = self.url + '/' + str(url_num).split('/')[-1]
            response = requests.get(urlpage)
            # Continue only if the request succeeded
            if response.status_code == 200:
                html = response.text
                tree = lxml.html.fromstring(html)
                # If a scrape_callback was supplied, save the scraped row
                if self.scrape_callback:
                    # Pull out the fields we want with CSS selectors
                    l_img = tree.cssselect('div.avater-shadow > img.avater')[0].get('src')
                    l_filmname = tree.cssselect('div.movie-brief-container > h3.name')[0].text_content()
                    l_type = tree.cssselect('div.movie-brief-container > ul > li.ellipsis')[0].text_content()
                    l_director = tree.cssselect('div.info > a.name')[0].text_content()
                    row = [l_filmname, l_director, l_type, l_img]
                    self.scrape_callback(row)
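The href-to-detail-URL step in download() can be checked in isolation; '/films/2333' is just the sample format mentioned in the code comment:

```python
# Turn a relative href like '/films/2333' into the full detail-page URL,
# mirroring the split('/')[-1] trick used in download()
base_url = 'http://maoyan.com/films'
href = '/films/2333'
detail_url = base_url + '/' + href.split('/')[-1]
print(detail_url)  # http://maoyan.com/films/2333
```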
Saving the data: write the scraped information to a CSV file
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import csv
import downloader
import sys
reload(sys)                      # Python 2 only, as above
sys.setdefaultencoding("utf-8")


class ScrapeCallback:
    '''Save the scraped rows to a CSV file.'''
    def __init__(self):
        fileobj = open("maoyan.csv", 'w')
        self.writer = csv.writer(fileobj)
        self.films = ('film_name', 'director', 'film_type', 'film_cover')
        self.writer.writerow(self.films)

    def __call__(self, row):
        self.writer.writerow(row)


if __name__ == '__main__':
    url = 'http://maoyan.com/films'
    d = downloader.Downloader(url, scrape_callback=ScrapeCallback())
    d.download()
Running the script produces a maoyan.csv file. Opening it directly in Excel shows garbled characters, so convert the file from UTF-8 to ANSI first.
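An alternative to converting the file by hand: write the CSV with a UTF-8 byte-order mark (the 'utf-8-sig' codec in Python 3), which Excel recognizes and decodes correctly. A minimal sketch, using a hypothetical file name 'maoyan_demo.csv':

```python
import csv
import io

# Writing with the 'utf-8-sig' codec prepends a BOM so Excel detects UTF-8
with io.open('maoyan_demo.csv', 'w', encoding='utf-8-sig', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['film_name', 'director', 'film_type', 'film_cover'])
    writer.writerow([u'霸王別姬', u'陳凱歌', u'劇情', 'cover.jpg'])

# The file now starts with the UTF-8 BOM bytes EF BB BF
with open('maoyan_demo.csv', 'rb') as f:
    head = f.read(3)
print(head == b'\xef\xbb\xbf')  # True
```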