Introduction
Scrapy makes quick work of simple, high-volume crawls: a project usually needs just three files (settings.py, items.py, xxx_spider.py) and very little code. Storing to JSON, the largest run I've done pulled down over 600 MB of text. Last year, storing into PostgreSQL, the biggest single run fetched over 10 million keywords as (key, [related1, related2]) pairs. Inverting key and related only succeeded by staging the batches in Redis in memory; after swapping the aging mechanical hard drive for an SSD, Redis was no longer needed.
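The inversion step itself isn't shown in this post; below is a minimal sketch of the idea, assuming a local Redis instance and a hypothetical rows() generator standing in for the PostgreSQL read (none of these names come from the original project):

# -*- coding: utf-8 -*-
# Hypothetical sketch: invert (key -> [related...]) into (related -> [key...])
# in batches, staging the intermediate sets in Redis instead of process RAM.
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

def rows():
    # placeholder for reading (key, [related1, related2, ...]) from PostgreSQL
    yield u'keyword', [u'related1', u'related2']

for key, related_list in rows():
    for rw in related_list:
        # one Redis set per related word accumulates the source keywords
        r.sadd(u'inv:' + rw, key)

# afterwards, scan the inv:* keys in batches and bulk-insert into PostgreSQL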
Code
Spider code. It reads the seed keywords from a local muci.txt, and the crawl depth is controlled in settings.py (DEPTH_LIMIT = 1 means only the related-search words of the current keyword are crawled).
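For reference, a sketch of the relevant settings.py entries; DEPTH_LIMIT is a built-in Scrapy setting, while RESULT_JSONFILE is this project's own custom key read by the pipeline further down:

# settings.py (excerpt)
DEPTH_LIMIT = 1                     # only follow one level of related searches
RESULT_JSONFILE = 'mbaidutj.json'   # custom key used by JsonWriterPipeline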
# -*- coding:utf8 -*-
from scrapy.spiders import CrawlSpider
from scrapy import Request
from mbaidu.items import baiduItemtj
import os
# scrapy.conf and settings.overrides are from old (Python 2 era) Scrapy;
# newer versions would use the custom_settings class attribute instead
from scrapy.conf import settings

settings.overrides['RESULT_JSONFILE'] = 'mbaidutj.json'


class MbaiduSpider(CrawlSpider):
    name = 'mbaidu_xg'
    allowed_domains = ['m.baidu.com']

    def start_requests(self):
        # muci.txt sits in the project root, three directories above this file
        mucifile = open(os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), "muci.txt"), 'r')
        for key in mucifile.readlines():
            nextword = key.strip("\n").strip()
            if nextword != "":
                yield Request('https://m.baidu.com/s?word=' + nextword, self.parse)

    def parse(self, response):
        # related searches at the bottom of the page (no description)
        related = response.css('#reword .rw-list a::text').extract()
        if related:
            for rw in related:
                item = baiduItemtj()
                item['keyword'], item['description'] = [rw, '']
                yield item
        rwlink = response.css('#reword .rw-list a::attr(href)').extract()
        if rwlink:
            for link in rwlink:
                yield Request(link, self.parse)
        # entity recommendation cards (these do carry a description)
        tj = response.css('.wa-sigma-celebrity-rela-entity.c-scroll-element-gap a')
        if tj:
            for i in tj:
                item = baiduItemtj()
                # each entry carries two <p> texts: keyword and description
                item['keyword'], item['description'] = i.css('p::text').extract()
                yield item
        tjlink = response.css('.wa-sigma-celebrity-rela-entity.c-scroll-element-gap a::attr(href)').extract()
        if tjlink:
            for link in tjlink:
                yield Request(link, self.parse)
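The spider is then run by name from the project directory:

scrapy crawl mbaidu_xg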
JSON-encoding handling code, in pipelines.py. It is used when the locally stored JSON comes out as escaped garbage instead of readable characters (enable the pipeline in settings.py); this is probably Python 2's fault, and Python 3 may not need it.
import codecs
import json
from scrapy.conf import settings


class JsonWriterPipeline(object):
    def __init__(self):
        self.file = codecs.open(settings.get('RESULT_JSONFILE', 'default.json'), 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # on Python 2, json.dumps escapes non-ASCII to \uXXXX; decode the
        # escapes back to real characters before writing to the UTF-8 file
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line.decode('unicode_escape'))
        # item = {"haha": "hehe"}
        # return {"log": "no need to return the item; returned data would be converted to Unicode again and handed to the built-in output"}
        # return item
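The pipeline still has to be registered in settings.py; a sketch, assuming the project module is mbaidu (as the spider's imports suggest) and the conventional priority of 300:

# settings.py (excerpt)
ITEM_PIPELINES = {
    'mbaidu.pipelines.JsonWriterPipeline': 300,
}

On Python 3, json.dumps(dict(item), ensure_ascii=False) writes real UTF-8 characters directly, so the decode('unicode_escape') step can be dropped.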
items.py
import scrapy


class baiduItemtj(scrapy.Item):
    # right-side recommendations carry a description;
    # bottom related searches don't, so it stays empty
    keyword = scrapy.Field()
    description = scrapy.Field()
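For illustration, a hypothetical item and the JSON line the pipeline would write for it (values made up):

item = baiduItemtj(keyword=u'scrapy', description=u'')
# -> {"keyword": "scrapy", "description": ""}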