This article is just my personal notes.
Scrapy official website
Scrapy official documentation
Scrapy documentation in Chinese
My ScrapyDemo project repository
Python environment setup
- Windows:
  - Python: download the Python installer and run it
  - pip: easy_install pip
- macOS:
  - Python: macOS ships with Python 2.7
  - pip: easy_install pip
- CentOS 7:
  - Python: CentOS 7 ships with Python 2.7
  - pip: easy_install pip
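A quick sanity check that both tools are on the PATH (version numbers will vary):
python --version
pip --version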
Install Scrapy
pip install Scrapy
Create a project
scrapy startproject <project_name>
Create a spider
scrapy genspider <spider_name> <host_name>
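For example, using the project name and the spider file that appear later in these notes (the names are illustrative; substitute your own):
scrapy startproject ScrapyDemo
cd ScrapyDemo
scrapy genspider demo www.gazetaesportiva.com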
Create a requirements.txt file in the project root and add the packages you need, for example:
Scrapy==1.5.0
beautifulsoup4==4.6.0
requests==2.18.4
Set up the project environment
pip install -r requirements.txt
Run a single spider
scrapy crawl <spider_name>
Run multiple spiders (Scrapy itself does not support running multiple spiders from a single command-line invocation; create a new Python file with the following content and run that file instead; adjust as needed):
# -*- coding: utf-8 -*-
import sys
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from ScrapyDemo.spiders.news_estadao import EstadaoSpider
from ScrapyDemo.spiders.news_gazetaesportiva import DemoSpider
from ScrapyDemo.spiders.news_megacurioso import MegacuriosoSpider
# Force UTF-8 as the default encoding (Python 2 only)
if sys.getdefaultencoding() != 'utf-8':
    reload(sys)
    sys.setdefaultencoding('utf-8')
process = CrawlerProcess(get_project_settings())
process.crawl(EstadaoSpider)
process.crawl(DemoSpider)
process.crawl(MegacuriosoSpider)
process.start()
Enable pipelines to process results
- Open settings.py and register your pipeline under the ITEM_PIPELINES setting with a priority value from 0 to 1000; lower numbers run first.
Export a single spider's results to a file
scrapy crawl demo -o /path/to/demo.json
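Scrapy infers the feed format from the file extension (JSON, JSON lines, CSV, and XML are supported out of the box), for example:
scrapy crawl demo -o /path/to/demo.csv
Note that in Scrapy 1.x, -o appends to an existing file rather than overwriting it, which is why the merge script below empties the output file before each run.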
Merging results from multiple spiders:
- The multi-spider script above cannot merge the results of the individual spiders
- Business requirements left no choice but a fallback approach
Idea: use the commands module to run shell commands that execute the spiders sequentially and export each one's results to a file, then read the files back and parse them into objects for merging.
Code (adjust as needed):
#!/usr/bin/env python
# encoding: utf-8
import commands
import json

def test():
    result = []
    try:
        # Clear the previous run's output (-o appends in Scrapy 1.x)
        commands.getoutput("echo '' > /path/to/demo.json")
        # Run the spider and export its results
        commands.getoutput("scrapy crawl demo -o /path/to/demo.json")
        # Read the exported file back and parse it into objects
        result = json.loads(commands.getoutput("cat /path/to/demo.json"))
    except:
        print "Get demo result error."
    return result
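With one such function per spider (the wrapper names below are hypothetical), merging the results is just list concatenation:
all_items = []
for fetch in (crawl_demo, crawl_megacurioso):  # hypothetical per-spider wrappers like test() above
    all_items += fetch()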
Fixing garbled text in spider results:
Add the following code at the top of any Python file with encoding problems:
import sys

if sys.getdefaultencoding() != 'utf-8':
    reload(sys)
    sys.setdefaultencoding('utf-8')
Spider examples (you can also use the GitHub link given at the top of this article):
Item example (items.py):
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy

class ScrapydemoItem(scrapy.Item):
    title = scrapy.Field()
    imageUrl = scrapy.Field()
    des = scrapy.Field()
    source = scrapy.Field()
    actionUrl = scrapy.Field()
    contentType = scrapy.Field()
    itemType = scrapy.Field()
    createTime = scrapy.Field()
    country = scrapy.Field()
    headUrl = scrapy.Field()
Pipeline example (pipelines.py):
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json

from ScrapyDemo.items import ScrapydemoItem

class ScrapydemoPipeline(object):
    DATA_LIST_NEWS = []

    def open_spider(self, spider):
        self.DATA_LIST_NEWS = []  # reset the buffer for each spider run
        print 'Spider start.'

    def process_item(self, item, spider):
        if isinstance(item, ScrapydemoItem):
            self.DATA_LIST_NEWS.append(dict(item))
        return item

    def close_spider(self, spider):
        print json.dumps(self.DATA_LIST_NEWS)
        print 'Spider end.'
Spider example (demo.py):
# -*- coding: utf-8 -*-
import scrapy

from ScrapyDemo.items import ScrapydemoItem

class DemoSpider(scrapy.Spider):
    name = 'news_gazetaesportiva'
    allowed_domains = ['www.gazetaesportiva.com']
    start_urls = ['https://www.gazetaesportiva.com/noticias/']
    headers = {
        'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
        'accept-language': 'zh-CN,zh;q=0.9,en-US;q=0.8,en;q=0.7',
        'cache-control': 'max-age=0',
        'upgrade-insecure-requests': '1',
        'User-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.139 Safari/537.36'
    }

    def parse(self, response):
        print('Start parse.')
        for element in response.xpath('//article'):
            title = element.xpath(".//h3[@class='entry-title no-margin']/a/text()").extract_first()
            imageUrl = [element.xpath(".//img[@class='medias-object wp-post-image']/@src").extract_first()]
            des = element.xpath(".//div[@class='entry-content space']/text()").extract_first()
            source = 'gazeta'
            actionUrl = element.xpath(".//a[@class='blog-image']/@href").extract_first()
            contentType = ''
            itemType = ''
            createTime = element.xpath(".//small[@class='updated']/text()").extract_first()
            country = 'PZ'
            headUrl = ''
            if title is not None and title != "" and actionUrl is not None and actionUrl != "" and imageUrl is not None and imageUrl != "":
                item = ScrapydemoItem()
                item['title'] = title
                item['imageUrl'] = imageUrl
                item['des'] = des
                item['source'] = source
                item['actionUrl'] = actionUrl
                item['contentType'] = contentType
                item['itemType'] = itemType
                item['createTime'] = createTime
                item['country'] = country
                item['headUrl'] = headUrl
                yield item
        print('End parse.')
My notes on the code:
settings.py holds shared configuration and registers the pipelines that aggregate spider results, for example (the higher the number, the lower the priority; values range from 0 to 1000):
ITEM_PIPELINES = {
    'ScrapyDemo.pipelines.ScrapydemoPipeline': 300,
}
With a pipeline configured, running a spider from the command line first calls open_spider, then process_item for each parsed result, and finally close_spider when the spider finishes.
The items file describes the structure of the result objects.
The spider files created under spiders/ configure the pages to scrape and the request headers to spoof. Scraped responses are handed to the parse method, where the data can be extracted with XPath (see the links above for details). My workflow before scraping: locate the target tag with Chrome DevTools, copy the page source, find the tag's position, and then write the matching rule. Since an XPath rule rarely matches correctly on the first attempt, use the debug features to iterate on it before finalizing the rule, as shown in the shell session below.
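Besides the debugger, Scrapy's interactive shell is handy for iterating on XPath rules. A minimal session against the demo page (the selectors are copied from the spider example above; adjust them to your target site):
scrapy shell 'https://www.gazetaesportiva.com/noticias/'
>>> response.xpath('//article')
>>> response.xpath("//h3[@class='entry-title no-margin']/a/text()").extract_first()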
Debugging spiders in PyCharm:
If some packages fail to install after opening PyCharm, you can use a virtual environment as the project interpreter:
(screenshots: creating a virtual environment in PyCharm's settings)
Debug-running Scrapy (see the entry-point sketch below):
(screenshots: setting up a PyCharm debug run configuration for Scrapy)
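Since scrapy crawl is a console command rather than a plain Python script, one common approach is a small entry-point file that PyCharm can debug directly. A minimal sketch (the file name is arbitrary; the spider name comes from the demo above):
# -*- coding: utf-8 -*-
# run_debug.py: place next to scrapy.cfg, set breakpoints in the spider,
# then run this file with a PyCharm debug configuration
from scrapy.cmdline import execute

execute(['scrapy', 'crawl', 'news_gazetaesportiva'])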
Once execution stops at a breakpoint, right-click and choose Evaluate Expression.
This lets you run arbitrary code and inspect its results on the fly.