Hands-on Scrapy with Python

This article covers:
1. An introduction to the Scrapy framework and how data flows through it
2. Installing Scrapy and the MongoDB database
3. Creating a Scrapy scraping project
4. Parsing data in a Scrapy project
5. Getting a Scrapy project past anti-scraping measures
6. Storing the scraped data in different formats
Scraping target:
This article walks through scraping and cleaning the Douban movie Top 250 ranking data to demonstrate how to use Python and Scrapy.

image

Prerequisites:
1. Some Python basics
2. Some Linux administration basics: compiling and installing software, the yum package manager, etc.
3. Basic database administration skills (create, read, update, delete)
4. Familiarity with XPath syntax and XPath browser plugins

Code download: Python爬蟲(chóng)框架Scrapy入門(mén)與實(shí)踐
Note:
The following values in middlewares.py must be replaced with valid credentials:
request.meta['proxy'] = 'http-cla.abuyun.com:9030'
proxy_name_pass = b'H622272STYB666BW:F78990HJSS7'
If you have not purchased a proxy, disable that middleware for testing by commenting it out in settings.py:
DOWNLOADER_MIDDLEWARES = {
    # 'douban.middlewares.my_proxy': 543,
}

Operation 1: Create a project scrapy_douban with PyCharm CE

Before creating the project, set up the required environment and software:
Environment setup and installation
A: Install Anaconda (bundles a Python environment, Conda, and many dependency packages such as numpy and pandas):
Download link 1: Anaconda download 1
Download link 2 (recommended in China): Tsinghua University open-source mirror, Anaconda download

Package selection: there are separate packages for Mac, Windows, and Linux; pick the one that matches your machine.
For example, on my Mac: Anaconda3-5.2.0-MacOSX-x86_64-1.pkg

image

Download the IDE: PyCharm

Its logo:

image

Create the project: the Python interpreter option selected below creates a new directory for managing third-party packages, so you may need to install required packages into it manually later.

image

Once created, the project is generated and its environment initialized automatically; then you can start writing code:

image
Operation 2: Change into your project path and initialize a project

(The steps below were done on macOS; other systems may differ slightly.)
Change into your project path:
cd /Users/niexiaobo/Documents/PythonFile/scrapy_douban
and initialize a project named douban:
scrapy startproject douban

The terminal output looks like this:

niexiaobodeMacBook-Pro:~ niexiaobo$ cd /Users/niexiaobo/Documents/PythonFile/scrapy_douban 
niexiaobodeMacBook-Pro:scrapy_douban niexiaobo$ scrapy startproject douban
New Scrapy project 'douban', using template directory '/anaconda3/lib/python3.6/site-packages/scrapy/templates/project', created in:
    /Users/niexiaobo/Documents/PythonFile/scrapy_douban/douban

You can start your first spider with:
    cd douban
    scrapy genspider example example.com
niexiaobodeMacBook-Pro:scrapy_douban niexiaobo$ 

image
Operation 3: Edit the settings.py file:
ROBOTSTXT_OBEY = False
# download delay
DOWNLOAD_DELAY = 0.5

Operation 4: Generate the spider skeleton:
niexiaobodeMacBook-Pro:scrapy_douban niexiaobo$ cd douban/
niexiaobodeMacBook-Pro:douban niexiaobo$ ls
douban      scrapy.cfg
niexiaobodeMacBook-Pro:douban niexiaobo$ cd douban/
niexiaobodeMacBook-Pro:douban niexiaobo$ cd spiders/
niexiaobodeMacBook-Pro:spiders niexiaobo$ scrapy genspider douban_spider movie.douban.com
Created spider 'douban_spider' using template 'basic' in module:
  douban.spiders.douban_spider
niexiaobodeMacBook-Pro:spiders niexiaobo$ ls
__init__.py     __pycache__     douban_spider.py
niexiaobodeMacBook-Pro:spiders niexiaobo$ 

image

Scraping target URL: https://movie.douban.com/top250

image
Operation 5: Edit the data model file items.py to define the fields you want to scrape (rank, title, description, rating, and so on):
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy

class DoubanItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()

    # rank on the list
    serial_number = scrapy.Field()
    # movie title
    movie_name = scrapy.Field()
    # introduction (director, year, genre)
    introduce = scrapy.Field()
    # star rating
    star = scrapy.Field()
    # number of ratings
    evaluate = scrapy.Field()
    # one-line description (quote)
    describle = scrapy.Field()
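
A DoubanItem behaves like a dict restricted to the declared fields, which makes it easy to sanity-check in a REPL (a quick sketch; run it from the project root so the douban package is importable):

from douban.items import DoubanItem

item = DoubanItem()
item['movie_name'] = '肖申克的救贖'   # assigning a declared field works
print(item)                            # {'movie_name': '肖申克的救贖'}
# item['foo'] = 1 would raise KeyError, since 'foo' is not a declared field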

Operation 6: Edit the spider file douban_spider.py:

Before:

# -*- coding: utf-8 -*-
import scrapy

class DoubanSpiderSpider(scrapy.Spider):
    name = 'douban_spider'
    allowed_domains = ['movie.douban.com']
    start_urls = ['http://movie.douban.com/']

    def parse(self, response):
        pass

After:

# -*- coding: utf-8 -*-
import scrapy

class DoubanSpiderSpider(scrapy.Spider):
    # name of the spider
    name = 'douban_spider'
    # domains the spider is allowed to crawl
    allowed_domains = ['movie.douban.com']
    # start URLs handed to the scheduler
    start_urls = ['http://movie.douban.com/top250']

    def parse(self, response):
        # print the raw response
        print(response.text)

Operation 7: Run the Scrapy project:

Open a terminal and run the following command from the spiders directory: scrapy crawl douban_spider

niexiaobodeMacBook-Pro:spiders niexiaobo$ scrapy crawl douban_spider

The command returns:

2018-07-10 10:36:18 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: douban)
2018-07-10 10:36:18 [scrapy.utils.log] INFO: Versions: lxml 4.2.1.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.0, w3lib 1.19.0, Twisted 18.4.0, Python 3.6.5 |Anaconda, Inc.| (default, Apr 26 2018, 08:42:37) - [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)], pyOpenSSL 18.0.0 (OpenSSL 1.0.2o  27 Mar 2018), cryptography 2.2.2, Platform Darwin-16.7.0-x86_64-i386-64bit
2018-07-10 10:36:18 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'douban', 'DOWNLOAD_DELAY': 0.5, 'NEWSPIDER_MODULE': 'douban.spiders', 'SPIDER_MODULES': ['douban.spiders']}
2018-07-10 10:36:18 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2018-07-10 10:36:18 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.Do
.
.
2018-07-10 10:36:18 [scrapy.core.engine] DEBUG: Crawled (403) <GET http://movie.douban.com/top250> (referer: None)
2018-07-10 10:36:18 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <403 http://movie.douban.com/top250>: HTTP status code is not handled or not allowed
.
.
 'log_count/DEBUG': 2,
 'log_count/INFO': 8,
 'memusage/max': 51515392,
 'memusage/startup': 51515392,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2018, 7, 10, 2, 36, 18, 577140)}
2018-07-10 10:36:18 [scrapy.core.engine] INFO: Spider closed (finished)

The output above contains an error:

2018-07-10 10:36:18 [scrapy.core.engine] DEBUG: Crawled (403) <GET http://movie.douban.com/top250> (referer: None)
2018-07-10 10:36:18 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <403 http://movie.douban.com/top250>: HTTP status code is not handled or not allowed

We need to go back to the project's settings.py and set USER_AGENT, otherwise the request will not get through.
What should it be set to?

Operation 8: Set the USER_AGENT request header

Open the page in a browser, press F12 to open the developer tools, switch to the Network tab, refresh the page, find the "top250" request, and click it:

image

In the request headers you will find the User-Agent value (copy it):

image

User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:61.0) Gecko/20100101 Firefox/61.0

In PyCharm CE, open settings.py and set USER_AGENT:

image
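
The screenshot shows the value going into settings.py; for reference, the line looks like this (using the User-Agent string copied above; yours may differ by browser and OS):

# settings.py -- identify the crawler as a regular browser
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:61.0) Gecko/20100101 Firefox/61.0'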

Open a terminal and re-run the command from the spiders directory: scrapy crawl douban_spider

niexiaobodeMacBook-Pro:spiders niexiaobo$ scrapy crawl douban_spider

If the returned log contains a pile of HTML, the run succeeded:

...
<div class="pic">
                    <em class="">1</em>
                    <a >
                        <img width="100" alt="肖申克的救贖" src="https://img3.doubanio.com/view/photo/s_ratio_poster/public/p480747492.jpg" class="">
                    </a>
                </div>
                <div class="info">
                    <div class="hd">
                        <a  class="">
                            <span class="title">肖申克的救贖</span>
                                    <span class="title">&nbsp;/&nbsp;The Shawshank Redemption</span>
                                <span class="other">&nbsp;/&nbsp;月黑高飛(港)  /  刺激1995(臺(tái))</span>
                        </a>

                            <span class="playable">[可播放]</span>
                    </div>
                    <div class="bd">
                        <p class="">
                            導(dǎo)演: 弗蘭克·德拉邦特 Frank Darabont&nbsp;&nbsp;&nbsp;主演: 蒂姆·羅賓斯 Tim Robbins /...<br>
                            1994&nbsp;/&nbsp;美國(guó)&nbsp;/&nbsp;犯罪 劇情
                        </p>

                        <div class="star">
                                <span class="rating5-t"></span>
                                <span class="rating_num" property="v:average">9.6</span>
                                <span property="v:best" content="10.0"></span>
                                <span>1062864人評(píng)價(jià)</span>
                        </div>

                            <p class="quote">
                                <span class="inq">希望讓人自由。</span>
                            </p>
                    </div>
                </div>
...

A side note: I installed Python via Anaconda, which bundles most common modules. If you compiled Python from source and modules are missing, the run may fail.

image

If it fails, for example as below, where the instructor in the course was missing sqlite3:

image

In that case install sqlite:

Run as administrator: sudo yum -y install sqlite*
then type your password and press Enter.

image

After the installation succeeds, Python has to be recompiled with sqlite enabled.
Go to your Python source directory and configure the build:
./configure --prefix='your install path' --with-ssl

image
Operation 9: So far we have been running the spider from the terminal; for convenience, let's run it from inside PyCharm CE.

First create a launcher file, for example main.py:
Then write the following in main.py:

from scrapy import cmdline
# run the spider; the unfiltered page output is printed
cmdline.execute('scrapy crawl douban_spider'.split())

Right-click and run it; the output is the same as in the terminal.

Operation 10: Now open the spider file douban_spider.py for further work:
# -*- coding: utf-8 -*-
import scrapy

class DoubanSpiderSpider(scrapy.Spider):
    # name of the spider
    name = 'douban_spider'
    # domains the spider is allowed to crawl
    allowed_domains = ['movie.douban.com']
    # start URLs handed to the scheduler
    start_urls = ['http://movie.douban.com/top250']

    def parse(self, response):
        movie_list = response.xpath("//div[@class='article']//ol[@class='grid_view']/li")
        for i_item in movie_list:
            print(i_item)

Here response.xpath("//div[@class='article']//ol[@class='grid_view']/li") is the XPath-based parsing method; the string in the parentheses is XPath syntax.

(Derived from the structure of the scraped page, it selects every li tag under the ol with class grid_view, inside the div with class article.)
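
If you want to experiment with the XPath before putting it in the spider, Scrapy's interactive shell is handy (a quick sketch; run it inside the project so settings.py, including the USER_AGENT from Operation 8, applies, otherwise Douban returns 403):

scrapy shell "https://movie.douban.com/top250"
# then, at the >>> prompt:
movie_list = response.xpath("//div[@class='article']//ol[@class='grid_view']/li")
len(movie_list)   # 25 -- one Selector per movie entry on the page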

image

An example:

image

Back to the spider: once douban_spider.py is edited as above, run main.py:

2018-07-10 14:31:51 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://movie.douban.com/top250> from <GET http://movie.douban.com/top250>
2018-07-10 14:31:52 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://movie.douban.com/top250> (referer: None)
<200 https://movie.douban.com/top250>
<Selector xpath="//div[@class='article']//ol[@class='grid_view']/li" data='<li>\n            <div class="item">\n    '>
<Selector xpath="//div[@class='article']//ol[@class='grid_view']/li" data='<li>\n            <div class="item">\n    '>
<Selector xpath="//div[@class='article']//ol[@class='grid_view']/li" data='<li>\n            <div class="item">\n    '>
<Selector xpath="//div[@class='article']//ol[@class='grid_view']/li" data='<li>\n            <div class="item">\n    '>
<Selector xpath="//div[@class='article']//ol[@class='grid_view']/li" data='<li>\n            <div class="item">\n    '>
...

Operation 11: The run returns the Selector objects we selected

Next we drill down further to extract the detailed fields.
Continue modifying the code:
1: Import the model file: from douban.items import DoubanItem
   (i.e., from items.py in the douban package directory, import the DoubanItem model)
2: Modify the loop:

        for i_item in movie_list:
            douban_item = DoubanItem()
            douban_item['serial_number'] = i_item.xpath(".//div[@class='item']//em/text()").extract_first()
            print(douban_item)

Explanation:

1. DoubanItem() instantiates the model.
2. douban_item['serial_number'] sets the model field serial_number.
3. i_item.xpath(".//div[@class='item']//em/text()") filters further within the current result; the leading "." makes the query relative to the current node, and the trailing text() extracts its text content.
4. extract_first() takes the first value of the filtered results (see the small example below).
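
The difference between extract() and extract_first() is easy to see on a tiny sample (a sketch using scrapy.selector.Selector, which is what response.xpath() builds on):

from scrapy.selector import Selector

sel = Selector(text="<ol><li><em>1</em></li><li><em>2</em></li></ol>")
print(sel.xpath("//em/text()").extract())        # ['1', '2'] -- every match
print(sel.xpath("//em/text()").extract_first())  # '1' -- first match, or None if nothing matched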

The modified douban_spider.py file:

# -*- coding: utf-8 -*-
import scrapy
from douban.items import DoubanItem

class DoubanSpiderSpider(scrapy.Spider):
    # name of the spider
    name = 'douban_spider'
    # domains the spider is allowed to crawl
    allowed_domains = ['movie.douban.com']
    # start URLs handed to the scheduler
    start_urls = ['http://movie.douban.com/top250']

    def parse(self, response):
        movie_list = response.xpath("//div[@class='article']//ol[@class='grid_view']/li")
        for i_item in movie_list:
            douban_item = DoubanItem()
            douban_item['serial_number'] = i_item.xpath(".//div[@class='item']//em/text()").extract_first()
            print(douban_item)

Run main.py (as shown below, the rank is extracted successfully):

2018-07-10 15:06:13 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://movie.douban.com/top250> from <GET http://movie.douban.com/top250>
2018-07-10 15:06:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://movie.douban.com/top250> (referer: None)
{'serial_number': '1'}
{'serial_number': '2'}
{'serial_number': '3'}
{'serial_number': '4'}
{'serial_number': '5'}
{'serial_number': '6'}
{'serial_number': '7'}
...

Operation 12: Flesh out douban_spider.py (parse the detailed fields):
# -*- coding: utf-8 -*-
import scrapy
from douban.items import DoubanItem

class DoubanSpiderSpider(scrapy.Spider):
    # name of the spider
    name = 'douban_spider'
    # domains the spider is allowed to crawl
    allowed_domains = ['movie.douban.com']
    # start URLs handed to the scheduler
    start_urls = ['http://movie.douban.com/top250']

    def parse(self, response):
        movie_list = response.xpath("//div[@class='article']//ol[@class='grid_view']/li")
        for i_item in movie_list:
            douban_item = DoubanItem()
            douban_item['serial_number'] = i_item.xpath(".//div[@class='item']//em/text()").extract_first()
            douban_item['movie_name'] = i_item.xpath(".//div[@class='info']/div[@class='hd']/a/span[1]/text()").extract_first()
            # the description paragraph lives under div.bd (see the HTML sample above)
            descs = i_item.xpath(".//div[@class='info']//div[@class='bd']/p[1]/text()").extract()
            for i_desc in descs:
                # strip all whitespace inside the description line
                i_desc_str = "".join(i_desc.split())
                douban_item['introduce'] = i_desc_str

            douban_item['star'] = i_item.xpath(".//span[@class='rating_num']/text()").extract_first()
            douban_item['evaluate'] = i_item.xpath(".//div[@class='star']//span[4]/text()").extract_first()
            douban_item['describle'] = i_item.xpath(".//p[@class='quote']/span/text()").extract_first()
            print(douban_item)

Run main.py again; it returns:

2018-07-10 15:29:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://movie.douban.com/top250> (referer: None)
{'describle': '希望讓人自由。',
 'evaluate': '1062864人評(píng)價(jià)',
 'movie_name': '肖申克的救贖',
 'serial_number': '1',
 'star': '9.6'}
{'describle': '風(fēng)華絕代。',
 'evaluate': '774612人評(píng)價(jià)',
 'movie_name': '霸王別姬',
 'serial_number': '2',
 'star': '9.5'}
{'describle': '怪蜀黍和小蘿莉不得不說(shuō)的故事。',
 'evaluate': '991246人評(píng)價(jià)',
 'movie_name': '這個(gè)殺手不太冷',
 'serial_number': '3',
 'star': '9.4'}
...

Operation 13: The yield command and the Scrapy framework

Now replace the last line of code,
print(douban_item)
with
yield douban_item

This pushes each result into the Item Pipeline for processing (the diagram below shows how Scrapy works):

image
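
Incidentally, yield is what makes parse() a generator: the Scrapy engine pulls items from it one at a time and hands each to the Item Pipeline, instead of waiting for a complete list. A minimal illustration of the mechanism:

def parse_demo():
    for n in range(1, 4):
        yield {'serial_number': n}   # each item is handed over as soon as it is produced

for item in parse_demo():            # roughly what the engine does with parse()
    print(item)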
Operation 14: Continue editing our spider file douban_spider.py

So far we have only scraped the current page; next we add "next page" handling and walk through all the links.
As shown below, we need to follow the <a href="">...</a> inside the <span class="next"> tag.

image
Operation 15: Follow the "next page" links to fetch all the data

Edit douban_spider.py again:

# -*- coding: utf-8 -*-
import scrapy
from douban.items import DoubanItem

class DoubanSpiderSpider(scrapy.Spider):
    # name of the spider
    name = 'douban_spider'
    # domains the spider is allowed to crawl
    allowed_domains = ['movie.douban.com']
    # start URLs handed to the scheduler
    start_urls = ['http://movie.douban.com/top250']

    def parse(self, response):
        movie_list = response.xpath("//div[@class='article']//ol[@class='grid_view']/li")
        for i_item in movie_list:
            douban_item = DoubanItem()
            douban_item['serial_number'] = i_item.xpath(".//div[@class='item']//em/text()").extract_first()
            douban_item['movie_name'] = i_item.xpath(".//div[@class='info']/div[@class='hd']/a/span[1]/text()").extract_first()
            descs = i_item.xpath(".//div[@class='info']//div[@class='bd']/p[1]/text()").extract()
            for i_desc in descs:
                i_desc_str = "".join(i_desc.split())
                douban_item['introduce'] = i_desc_str

            douban_item['star'] = i_item.xpath(".//span[@class='rating_num']/text()").extract_first()
            douban_item['evaluate'] = i_item.xpath(".//div[@class='star']//span[4]/text()").extract_first()
            douban_item['describle'] = i_item.xpath(".//p[@class='quote']/span/text()").extract_first()
            yield douban_item
        # parse the next page
        next_link = response.xpath("//span[@class='next']/link/@href").extract()
        if next_link:
            next_link = next_link[0]
            yield scrapy.Request("https://movie.douban.com/top250" + next_link, callback=self.parse)

Explanation:
1. After each for loop finishes, fetch the next page's link: next_link.
2. The last page has no next link, so we have to check for it.
3. Next page URL concatenation: clicking page 2 gives https://movie.douban.com/top250?start=25&filter=, which is exactly https://movie.douban.com/top250 joined with the href value inside the <span class="next"> element (a more robust variant is sketched below).
4. callback=self.parse: handle the response with parse again.
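
Hard-coding the URL prefix works for this site, but response.urljoin() does the same concatenation without the literal string (a sketch; the behavior here is equivalent):

# parse the next page, resolving the relative href against the current URL
next_link = response.xpath("//span[@class='next']/link/@href").extract_first()
if next_link:
    yield scrapy.Request(response.urljoin(next_link), callback=self.parse)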

Run main.py (you can see the last entry, rank 250, gets loaded):

image
Operation 16: Save the data to a JSON file or a CSV file

From the douban directory run: scrapy crawl douban_spider -o movielist.json
or
From the douban directory run: scrapy crawl douban_spider -o movielist.csv

niexiaobodeMacBook-Pro:douban niexiaobo$ scrapy crawl douban_spider -o movielist.json

niexiaobodeMacBook-Pro:douban niexiaobo$ scrapy crawl douban_spider -o movielist.csv

Saved successfully:

...
{'describle': '一部能引人思考的科幻勵(lì)志片。',
 'evaluate': '92482人評(píng)價(jià)',
 'movie_name': '千鈞一發(fā)',
 'serial_number': '249',
 'star': '8.7'}
2018-07-10 17:29:47 [scrapy.core.scraper] DEBUG: Scraped from <200 https://movie.douban.com/top250?start=225&filter=>
{'describle': '獻(xiàn)給所有外婆的電影。',
 'evaluate': '50187人評(píng)價(jià)',
 'movie_name': '愛(ài)·回家',
 'serial_number': '250',
 'star': '9.0'}
2018-07-10 17:29:47 [scrapy.core.engine] INFO: Closing spider (finished)
2018-07-10 17:29:47 [scrapy.extensions.feedexport] INFO: Stored json feed (250 items) in: movielist.json
2018-07-10 17:29:47 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 3862,
 'downloader/request_count': 11,
 'downloader/request_method_count/GET': 11,
 'downloader/response_bytes': 128522,
 'downloader/response_count': 11,
 'downloader/response_status_count/200': 10,
 'downloader/response_status_count/301': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 7, 10, 9, 29, 47, 88010),
 'item_scraped_count': 250,
 'log_count/DEBUG': 262,
 'log_count/INFO': 8,
 'memusage/max': 51916800,
 'memusage/startup': 51916800,
 'request_depth_max': 9,
 'response_received_count': 10,
 'scheduler/dequeued': 11,
 'scheduler/dequeued/memory': 11,
 'scheduler/enqueued': 11,
 'scheduler/enqueued/memory': 11,
 'start_time': datetime.datetime(2018, 7, 10, 9, 29, 40, 675082)}
2018-07-10 17:29:47 [scrapy.core.engine] INFO: Spider closed (finished)

ls shows both movielist.json and movielist.csv:

niexiaobodeMacBook-Pro:douban niexiaobo$ ls
__init__.py items.py    middlewares.py  movielist.json  settings.py
__pycache__ main.py     movielist.csv   pipelines.py    spiders

Check the saved results:
On a Mac, Numbers opens the CSV fine (if Excel shows mojibake, convert the file's encoding to UTF-8 with BOM first, then open it).
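
Alternatively, Scrapy can write the BOM itself so Excel decodes the file correctly (a settings.py sketch; FEED_EXPORT_ENCODING has been supported since Scrapy 1.2):

# settings.py -- export UTF-8 with BOM so Excel opens the CSV without mojibake
FEED_EXPORT_ENCODING = 'utf-8-sig'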

image
Operation 17: Store the data in the MongoDB database (pymongo)

First check whether pymongo is installed:
Open a terminal
and type

python

then press Enter.
Type:

import pymongo

and press Enter.

If it is not installed, you will get an error:

...
 No module named 'pymongo'

To install pymongo,
run the command:

pip install pymongo

and press Enter to install.
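
The same check fits in one line, and prints the driver version as a bonus:

python -c "import pymongo; print(pymongo.version)"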

Once it is installed, the next step is to write the storage code.
Go into the project
and edit the settings.py file:
(1) Uncomment the following block in settings.py:

ITEM_PIPELINES = {
   'douban.pipelines.DoubanPipeline': 300,
}

(2) Add the database settings at the end of settings.py:

Start the database service first.

host: your server's IP address
port: MongoDB's default port
db_name: database name
db_collection: collection name

# MongoDB connection settings
mongo_host = '172.16.0.0'
mongo_port = 27017
mongo_db_name = 'douban'
mongo_db_collection = 'douban_movie'

Edit your pipelines.py file as follows:

# -*- coding: utf-8 -*-
import pymongo
from douban.settings import mongo_host, mongo_port, mongo_db_name, mongo_db_collection

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

class DoubanPipeline(object):
    def __init__(self):
        host = mongo_host
        port = mongo_port
        dbname = mongo_db_name
        sheetname = mongo_db_collection
        # connect to MongoDB and keep a handle to the target collection
        client = pymongo.MongoClient(host=host, port=port)
        mydb = client[dbname]
        self.post = mydb[sheetname]
    def process_item(self, item, spider):
        data = dict(item)
        self.post.insert_one(data)  # insert_one() replaces the deprecated insert() in pymongo 3.x
        return item
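
A note on the pipeline above: opening the MongoClient in __init__ works, but Scrapy also provides open_spider/close_spider hooks, which make it possible to close the connection when the crawl ends (a sketch under the same settings, not the tutorial's code):

class DoubanPipeline(object):
    def open_spider(self, spider):
        # called once when the spider starts
        self.client = pymongo.MongoClient(host=mongo_host, port=mongo_port)
        self.post = self.client[mongo_db_name][mongo_db_collection]

    def process_item(self, item, spider):
        self.post.insert_one(dict(item))
        return item

    def close_spider(self, spider):
        # called once when the spider finishes; release the connection
        self.client.close()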

Run main.py and the data is stored in the database.

Operation 18: Writing an IP proxy middleware (masking the crawler's IP address)

Edit the middleware file middlewares.py:
(1) At the top of the file, import base64:

import base64

(2) At the end of the file, add a class:

class my_proxy(object):
    def process_request(self, request, spider):
        # route the request through the paid proxy tunnel
        request.meta['proxy'] = 'http-cla.abuyun.com:9030'
        # 'license:secret' pair from the proxy provider, base64-encoded for Basic auth
        proxy_name_pass = b'H622272STYB666BW:F78990HJSS7'
        encoded_pass_name = base64.b64encode(proxy_name_pass)
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded_pass_name.decode()

Explanation (the values come from the HTTP tunnel details of a purchased Abuyun account):
request.meta['proxy']: 'server address:port'
proxy_name_pass: b'license number:secret key'; the b prefix makes it a bytes string for base64
base64.b64encode(): base64-encode the value (verified below)
'Basic ': the space after Basic is mandatory
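
You can check what the resulting header value looks like in a REPL before wiring it into the middleware (the credentials are the placeholders from above, not working ones):

import base64

# build the Proxy-Authorization header value by hand
token = base64.b64encode(b'H622272STYB666BW:F78990HJSS7').decode()
print('Basic ' + token)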

The Abuyun HTTP tunnel purchase page:

image

Edit settings.py:
(3) Uncomment and modify as follows:

DOWNLOADER_MIDDLEWARES = {
   'douban.middlewares.my_proxy': 543,
}

(4) Run main.py:
The screenshot below shows the IP address is successfully masked.

image
Operation 19: Spoofing the User-Agent header

We already set a User-Agent back in Operation 8, but that value was hard-coded.
Next we implement a simple disguise by handing out a random User-Agent for each request:

Again, edit the middleware file middlewares.py:
(1) At the top of the file, import the random module:

import random

(2) At the end of the file, add a new class:

class my_useragent(object):
    def process_request(self, request, spider):
        # candidate browser identities; one is chosen at random per request
        UserAgentList = [
        UserAgentList = [
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
            "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
            "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
            "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
            "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
            "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
            "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
            "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
            "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
            "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
            "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
            "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
            "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
            "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER",
            "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; LBBROWSER)",
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E; LBBROWSER)",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.84 Safari/535.11 LBBROWSER",
            "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)",
            "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; QQBrowser/7.0.3698.400)",
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E)",
            "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; 360SE)",
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E)",
            "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)",
            "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1",
            "Mozilla/5.0 (iPad; U; CPU OS 4_2_1 like Mac OS X; zh-cn) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8C148 Safari/6533.18.5",
            "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:2.0b13pre) Gecko/20110307 Firefox/4.0b13pre",
            "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11",
            "Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.2.10) Gecko/20100922 Ubuntu/10.10 (maverick) Firefox/3.6.10",
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36",
        ]
        agent = random.choice(UserAgentList)
        request.headers['User-Agent'] = agent  # header names use a hyphen: 'User_Agent' would not be recognized

(3) Edit settings.py and add one more entry: 'douban.middlewares.my_useragent': 544

DOWNLOADER_MIDDLEWARES = {
   'douban.middlewares.my_proxy': 543,
   'douban.middlewares.my_useragent': 544,
}

(4) Run main.py:
The User-Agent is set successfully.

image