Once a Scrapy spider is written, it has to be run from the command line, which is inconvenient; being able to operate it from a web page is much handier. That is what a scrapyd deployment is for: you can watch running jobs in the browser, schedule new crawl jobs, and cancel running ones. It is quite powerful.
一武学、安裝
1,安裝scrapyd
pip install scrapyd
2伦意, 安裝 scrapyd-deploy
pip install scrapyd-client
On Windows, the installer only creates c:\python27\Scripts\scrapyd-deploy (a Python script with no extension), so scrapyd-deploy cannot be run directly from the command line.
Workaround:
Create a file named scrapyd-deploy.bat in c:\python27\Scripts with the following content:
@echo off
C:\Python27\python C:\Python27\Scripts\scrapyd-deploy %*
Also make sure C:\Python27\Scripts is on the PATH environment variable.
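To check that the wrapper works, open a new command prompt and run the command below; if the .bat file and the PATH entry are set up correctly, it should print the tool's usage text instead of a "not recognized" error:
scrapyd-deploy -h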
二心例、使用
1荔烧,運(yùn)行scrapyd
首先切換命令行路徑到Scrapy項(xiàng)目的根目錄下恨溜,
要執(zhí)行以下的命令胎许,需要先在命令行里執(zhí)行scrapyd,將scrapyd運(yùn)行起來
MacBook-Pro:~ usera$ scrapyd
/usr/local/bin/scrapyd:5: UserWarning: Module _markerlib was already imported from /Library/Python/2.7/site-packages/distribute-0.6.49-py2.7.egg/_markerlib/__init__.pyc, but /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python is being added to sys.path
from pkg_resources import load_entry_point
2016-09-24 16:00:21+0800 [-] Log opened.
2016-09-24 16:00:21+0800 [-] twistd 15.5.0 (/usr/bin/python 2.7.10) starting up.
2016-09-24 16:00:21+0800 [-] reactor class: twisted.internet.selectreactor.SelectReactor.
2016-09-24 16:00:21+0800 [-] Site starting on 6800
2016-09-24 16:00:21+0800 [-] Starting factory <twisted.web.server.Site instance at 0x102a21518>
2016-09-24 16:00:21+0800 [Launcher] Scrapyd 1.1.0 started: max_proc=16, runner='scrapyd.runner'
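scrapyd is now listening on port 6800. Leave this terminal open; from another terminal you can confirm the server is reachable (the command below simply fetches the HTML of scrapyd's web console, which you can also open in a browser):
curl http://localhost:6800/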
2栏妖,發(fā)布工程到scrapyd
a乱豆,配置scrapy.cfg
在scrapy.cfg中,取消#url = http://localhost:6800/前面的“#”吊趾,具體如下:,
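For reference, a minimal scrapy.cfg might look like this; the deploy-target name TS and the project name MySpider match the examples in this article, while the settings module path is an assumption for illustration:
[settings]
default = MySpider.settings
[deploy:TS]
url = http://localhost:6800/
project = MySpider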
Then switch the command line to the Scrapy project root and run:
scrapyd-deploy <target> -p <project>
Example (TS is the deploy target configured in scrapy.cfg):
scrapyd-deploy TS -p MySpider
- Verify the setup: scrapyd-deploy -l lists the deploy targets configured in scrapy.cfg:
scrapyd-deploy -l
Output:
TS http://localhost:6800/
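To confirm the project itself was published, you can also query scrapyd's listprojects.json service (one of the endpoints listed in the configuration at the end of this article); a successful deploy shows MySpider in the returned project list:
curl http://localhost:6800/listprojects.json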
III. Getting started
1. Start scrapyd first; in the command line run:
MyMacBook-Pro:MySpiderProject user$ scrapyd
2. Schedule a crawl job
curl http://localhost:6800/schedule.json -d project=myproject -d spider=spider2
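On success, scrapyd returns a JSON object containing a jobid, which is what you later pass to cancel.json to stop the job; a typical response looks like this (the jobid value is illustrative):
{"status": "ok", "jobid": "6487ec79947edab326d6db28a2d86511e8247444"}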
- Bug:
scrapyd-deploy shows 0 spiders (scrapyd-client)
Spiders that exist in the Scrapy project do not show up; only 0 spiders are reported.
- Fix:
Comment out the following logging settings in the project's settings.py:
# LOG_LEVEL = "ERROR"
# LOG_STDOUT = True
# LOG_FILE = "/tmp/spider.log"
# LOG_FORMAT = "%(asctime)s [%(name)s] %(levelname)s: %(message)s"
When LOG_STDOUT = True is set, scrapyd-deploy will report 'spiders: 0'. This is because, when 'scrapy list' is executed, its output gets redirected into the log, so each line comes out as something like INFO:stdout:spider-name, and get_spider_list cannot parse it correctly.
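To see the expected format, run scrapy list in the project root; with the logging settings above commented out, it prints one bare spider name per line, which is exactly what get_spider_list parses (the spider names below are placeholders):
scrapy list
spider1
spider2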
3. View crawl jobs
Open http://localhost:6800/ in a browser; the jobs page at http://localhost:6800/jobs lists pending, running, and finished jobs.
(Screenshot: the contents of http://localhost:6800/jobs.)
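The same information is available through the JSON API, which is also how a running job is cancelled; a sketch, assuming the project is named MySpider and <jobid> is the id returned by schedule.json:
curl "http://localhost:6800/listjobs.json?project=MySpider"
curl http://localhost:6800/cancel.json -d project=MySpider -d job=<jobid>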
4坟瓢,運(yùn)行配置
配置文件:C:\Python27\Lib\site-packages\scrapyd-1.1.0-py2.7.egg\scrapyd\default_scrapyd.conf
[scrapyd]
eggs_dir = eggs
logs_dir = logs
items_dir = items
jobs_to_keep = 50
dbs_dir = dbs
max_proc = 0
max_proc_per_cpu = 4
finished_to_keep = 100
poll_interval = 5
http_port = 6800
debug = off
runner = scrapyd.runner
application = scrapyd.app.application
launcher = scrapyd.launcher.Launcher
[services]
schedule.json = scrapyd.webservice.Schedule
cancel.json = scrapyd.webservice.Cancel
addversion.json = scrapyd.webservice.AddVersion
listprojects.json = scrapyd.webservice.ListProjects
listversions.json = scrapyd.webservice.ListVersions
listspiders.json = scrapyd.webservice.ListSpiders
delproject.json = scrapyd.webservice.DeleteProject
delversion.json = scrapyd.webservice.DeleteVersion
listjobs.json = scrapyd.webservice.ListJobs
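Rather than editing default_scrapyd.conf inside the egg, it is safer to put overrides in a scrapyd.conf file that scrapyd also reads (for example ~/.scrapyd.conf, or a scrapyd.conf in the directory where scrapyd is started); a minimal override sketch that only changes the listening port:
[scrapyd]
http_port = 6801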
References
http://www.cnblogs.com/jinhaolin/p/5033733.html
https://scrapyd.readthedocs.io/en/latest/api.html#cancel-json