Deploying a distributed Scrapy crawler -- crawling Zhihu users as an example

Environment overview:
On Ubuntu, MongoDB is used to store the scraped data locally, and redis-server is used for the distributed deployment.
The Scrapy framework is used to crawl Zhihu user information.

  1. Install MongoDB
     sudo apt-get install mongodb
  2. Install Redis
     sudo apt-get install redis-server
  3. Install Scrapy
     sudo apt-get install scrapy
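
The project below also imports pymongo and relies on scrapy_redis, so those Python packages (plus the redis client if you want to inspect the queue) need to be installed as well, e.g. with pip. As a quick sanity check that the local MongoDB and Redis services are actually up, a minimal sketch along these lines can be run first (it assumes the default ports 27017 and 6379; the file name check_services.py is only for illustration):

# check_services.py -- minimal connectivity check for the local services
# (assumes MongoDB on the default port 27017 and Redis on 6379)
import pymongo
import redis

mongo = pymongo.MongoClient(host='localhost', port=27017, serverSelectionTimeoutMS=2000)
print('MongoDB:', mongo.admin.command('ping'))   # raises if the server is unreachable

r = redis.StrictRedis(host='localhost', port=6379)
print('Redis:', r.ping())                        # returns True if redis-server answers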

Create the crawler project:
scrapy startproject zhihu
cd zhihu
Create the spider file:
scrapy genspider zhihu www.zhihu.com

Code walkthrough:
zhihu.py (the spider)

# -*- coding: utf-8 -*-
import json

from scrapy import Request, Spider

from ..items import UserItem


class ZhihuSpider(Spider):
    name = 'zhihu'
    allowed_domains = ['www.zhihu.com']
    start_urls = ['http://www.zhihu.com/']

    start_user = 'excited-vczh'

    # User detail endpoint
    user_url = 'https://www.zhihu.com/api/v4/members/{user}?include={include}'
    user_query = 'allow_message,is_followed,is_following,is_org,is_blocking,employments,answer_count,follower_count,articles_count,gender,badge[?(type=best_answerer)].topics'

    # Followees (users this user follows)
    followees_url = 'https://www.zhihu.com/api/v4/members/{user}/followees?include={include}&offset=0&limit=20'
    followees_query = 'data[*].answer_count,articles_count,gender,follower_count,is_followed,is_following,badge[?(type=best_answerer)].topics'

    # Followers (this user's fans)
    followers_url = 'https://www.zhihu.com/api/v4/members/{user}/followers?include={include}&offset=0&limit=20'
    followers_query = 'data[*].answer_count,articles_count,gender,follower_count,is_followed,is_following,badge[?(type=best_answerer)].topics'

    def start_requests(self):
        yield Request(self.user_url.format(user=self.start_user, include=self.user_query), self.parse_user)
        yield Request(self.followees_url.format(user=self.start_user, include=self.followees_query, offset=0, limit=20),
                      callback=self.parse_followees)
        # The third seed request must use the followers endpoint, not the followees one.
        yield Request(self.followers_url.format(user=self.start_user, include=self.followers_query, offset=0, limit=20),
                      callback=self.parse_followers)

    def parse_user(self, response):
        result = json.loads(response.text)
        item = UserItem()
        for field in item.fields:
            if field in result.keys():
                item[field] = result.get(field)
        yield item
        yield Request(
            self.followees_url.format(user=result.get('url_token'), include=self.followees_query, offset=0, limit=20),
            callback=self.parse_followees)
        # Likewise, the followers list must be requested via followers_url / followers_query.
        yield Request(
            self.followers_url.format(user=result.get('url_token'), include=self.followers_query, offset=0, limit=20),
            callback=self.parse_followers)

    # Parse the followees list
    def parse_followees(self, response):
        results = json.loads(response.text)

        if 'data' in results.keys():
            for result in results.get('data'):
                yield Request(self.user_url.format(user=result.get('url_token'), include=self.user_query),
                              self.parse_user)

        if 'paging' in results.keys() and results.get('paging').get('is_end') == False:
            next_page = results.get('paging').get('next')
            yield Request(next_page, self.parse_followees)

    # Parse the followers list
    def parse_followers(self, response):
        results = json.loads(response.text)

        if 'data' in results.keys():
            for result in results.get('data'):
                yield Request(self.user_url.format(user=result.get('url_token'), include=self.user_query),
                              self.parse_user)

        if 'paging' in results.keys() and results.get('paging').get('is_end') == False:
            next_page = results.get('paging').get('next')
            yield Request(next_page, self.parse_followers)
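
Each user request asks the Zhihu v4 members API for the fields listed in user_query, and parse_user simply copies every returned key that also exists on UserItem. To see which keys the API actually returns, a throwaway sketch like the one below can help (it reuses the headers from settings.py further down; the hard-coded oauth token comes from this tutorial and may no longer be valid):

# inspect_api.py -- rough sketch for inspecting the members API response
import json
import requests

USER_URL = 'https://www.zhihu.com/api/v4/members/{user}?include={include}'
USER_QUERY = 'allow_message,is_followed,is_following,is_org,is_blocking,employments,answer_count,follower_count,articles_count,gender,badge[?(type=best_answerer)].topics'

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.62 Safari/537.36',
    'authorization': 'oauth c3cef7c66a1843f8b3a9e6a1e3160e20',  # token from the tutorial, may be stale
}

resp = requests.get(USER_URL.format(user='excited-vczh', include=USER_QUERY), headers=headers)
result = json.loads(resp.text)
print(sorted(result.keys()))  # compare these keys against the fields defined on UserItem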

pipelines.py (the item pipeline)

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html


import pymongo


class ZhihuPipeline(object):
    def process_item(self, item, spider):
        return item


class MongoPipeline(object):
    def __init__(self):
        host = 'localhost'
        port = 27017
        dbname = 'Zhihu'
        sheetname = 'zhihu_user'
        # Create the MongoDB client connection
        client = pymongo.MongoClient(host=host, port=port)
        # Select the database
        mydb = client[dbname]
        # Collection ("sheet") that stores the scraped users
        self.post = mydb[sheetname]

    def process_item(self, item, spider):
        # Upsert keyed on url_token so each user is stored only once (deduplication)
        self.post.update({'url_token': item['url_token']}, {'$set': dict(item)}, True)
        return item
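
The pipeline above hard-codes the connection details in __init__ and calls pymongo's legacy update(), which has since been removed in pymongo 4. A sketch of the same idea written against the current API, reading its configuration from the crawler settings, might look like this (the MONGO_URI and MONGO_DB setting names are assumptions for this sketch, not part of the original project):

import pymongo


class MongoSettingsPipeline(object):
    """Sketch: settings-driven Mongo pipeline using update_one() with upsert."""

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        # MONGO_URI / MONGO_DB are hypothetical setting names used only in this sketch.
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI', 'mongodb://localhost:27017'),
            mongo_db=crawler.settings.get('MONGO_DB', 'Zhihu'),
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # Upsert keyed on url_token so each user ends up in the collection once.
        self.db['zhihu_user'].update_one(
            {'url_token': item['url_token']},
            {'$set': dict(item)},
            upsert=True,
        )
        return item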

items.py (the item definition for the scraped data)

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

from scrapy import Item, Field


class UserItem(Item):
    id = Field()
    name = Field()
    avatar_url = Field()
    headline = Field()
    description = Field()
    url = Field()
    url_token = Field()
    gender = Field()
    cover_url = Field()
    type = Field()
    badge = Field()  # lowercase so it matches the API key checked in parse_user

    answer_count = Field()
    articles_count = Field()
    commercial_question_count = Field()
    favorite_count = Field()
    follower_count = Field()
    following_columns_count = Field()
    following_count = Field()
    pins_count = Field()
    question_count = Field()
    thank_from_count = Field()
    thank_to_count = Field()
    vote_from_count = Field()
    vote_to_count = Field()
    voteup_count = Field()
    following_favlists_count = Field()
    following_question_count = Field()
    following_topic_count = Field()
    marked_answers_count = Field()
    mutual_followees_count = Field()
    hosted_live_count = Field()
    participated_live_count = Field()

settings.py (the Scrapy configuration)

# -*- coding: utf-8 -*-

# Scrapy settings for zhihu project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'zhihu'

SPIDER_MODULES = ['zhihu.spiders']
NEWSPIDER_MODULE = 'zhihu.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# USER_AGENT = 'zhihu (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
# CONCURRENT_REQUESTS_PER_DOMAIN = 16
# CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
# COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
# TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.62 Safari/537.36',
    'authorization': 'oauth c3cef7c66a1843f8b3a9e6a1e3160e20'
}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
# SPIDER_MIDDLEWARES = {
#    'zhihu.middlewares.ZhihuSpiderMiddleware': 543,
# }

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#    'zhihu.middlewares.MyCustomDownloaderMiddleware': 543,
# }

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
# }

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    # Enable the pipeline that stores items in the local MongoDB
    'zhihu.pipelines.MongoPipeline': 300,
    # If uncommented, scraped items are also pushed to the Master's Redis, which costs
    # performance; normally the Master only keeps request fingerprints and the data is
    # stored locally on each slave.
    # 'scrapy_redis.pipelines.RedisPipeline': 301,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = 'httpcache'
# HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

# Use the scrapy_redis scheduler: the request queue is stored in Redis.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Use the scrapy_redis dupefilter so all spiders share the same duplicates filter through Redis.
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Redis URL of the Master node (replace user/password/hostname/port with real values)
REDIS_URL = 'redis://user:password@hostname:port'
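
With the scrapy_redis scheduler and dupefilter enabled, every machine that runs scrapy crawl zhihu against the same REDIS_URL shares one request queue and one set of request fingerprints, so a user profile is only fetched once across the whole cluster. By default scrapy_redis keeps them under the keys '<spidername>:requests' (a sorted set for the priority queue) and '<spidername>:dupefilter' (a set); a small monitoring sketch run on the Master could look like this (the key names assume the scrapy_redis defaults):

# monitor_queue.py -- watch the shared scrapy_redis queues on the Master
import time

import redis

r = redis.StrictRedis.from_url('redis://localhost:6379')  # substitute your REDIS_URL

while True:
    pending = r.zcard('zhihu:requests')   # pending requests (sorted set for the default priority queue)
    seen = r.scard('zhihu:dupefilter')    # request fingerprints already scheduled
    print('pending requests: %d, fingerprints seen: %d' % (pending, seen))
    time.sleep(5)

If the queue class is switched to the FIFO/LIFO variants, 'zhihu:requests' becomes a Redis list and llen() should be used instead of zcard().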

GitHub: https://github.com/a331363549/Spider_Zhihu
