Scraping 爱上程序网 (www.aichengxu.com)
Background: I found this site while Googling for a problem at work, and noticed it has quite a lot of articles. I've always liked reading technical articles; I want to understand everything, yet understand nothing deeply. That's also why I'd like to switch to a crawling/scraping job, although for now I'm still doing Android development.
Enough preamble.
First, a look at the database results:
Why only this much for now? Because the loop ran 10,000 iterations, and there could well be more pages. The crawled data reaches back to 2013, so the site has clearly been around for quite a while, and a full traversal would probably turn up more. The second crawl collected roughly 240,000 records, and there are likely more.
A few lessons learned:
Previously, when I created a Scrapy project directly inside a folder in PyCharm, module imports kept failing; the modules simply could not be resolved. Compare the two screenshots:
You can see that the lower project is not shown in bold in the project tree, so importing the classes from items inside the spider always failed with "no module named 'xxx'".
The fix:
Re-open the project from the File menu, import it again, and attach it to the current window (Add xxx). The project then turns bold in the tree, and imports resolve normally.
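In other words, the failing import below is a project-structure problem rather than a code problem; it only resolves once the folder containing scrapy.cfg is opened or attached as the project root in PyCharm:
# in spiders/aichengxuspider.py: this import only resolves once the
# project root (the folder containing scrapy.cfg) is open in PyCharm
from aichengxu.items import androidItem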
The fields scraped are:
view count, title, title link (the detail-page URL), description, and time.
I used Scrapy for the crawl; I'm still not very fluent with the parse() method, so I debugged and learned as I went.
Step 1:
Configure the settings file:
# -*- coding: utf-8 -*-
# Scrapy settings for aichengxu project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# http://doc.scrapy.org/en/latest/topics/settings.html
# http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'aichengxu'
SPIDER_MODULES = ['aichengxu.spiders']
NEWSPIDER_MODULE = 'aichengxu.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
# USER_AGENT = 'aichengxu (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
# CONCURRENT_REQUESTS_PER_DOMAIN = 16
# CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
# COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
# TELNETCONSOLE_ENABLED = False
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Language': 'zh-CN,zh;q=0.8',
    'Cache-Control': 'max-age=0',
    'Connection': 'keep-alive',
    'Cookie': 'ras=24656333; cids_AC31=24656333',
    'Host': 'www.aichengxu.com',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.75 Safari/537.36',
}
# MongoDB configuration
MONGO_HOST = "127.0.0.1"  # host IP
MONGO_PORT = 27017  # port
MONGO_DB = "aichengxu2"  # database name
MONGO_COLL = "ai_chengxu"  # collection name
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
# SPIDER_MIDDLEWARES = {
# 'aichengxu.middlewares.AichengxuSpiderMiddleware': 543,
# }
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#     'aichengxu.middlewares.MyCustomDownloaderMiddleware': 543,
# }
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
# EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
# }
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
# Lower numbers run first; give each pipeline its own priority
ITEM_PIPELINES = {
    'aichengxu.pipelines.AichengxuPipeline': 300,
    'aichengxu.pipelines.DuoDuoMongo': 400,
    'aichengxu.pipelines.JsonWritePipline': 500,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = 'httpcache'
# HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
Of course, only the handful of uncommented settings actually matter. Keeping the rest around as comments is a habit; otherwise you can end up puzzling later over why Scrapy isn't running, or why it errors out straight away.
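As a quick sanity check that the settings actually load, something like this works (a minimal sketch; the script name is my own, and it assumes you run it from the project root so scrapy.cfg is found):
# check_settings.py
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
print(settings.get('BOT_NAME'))            # aichengxu
print(settings.get('MONGO_DB'))            # aichengxu2
print(settings.getdict('ITEM_PIPELINES'))  # the three enabled pipelines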
Step 2:
Define the items:
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy
class AichengxuItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    pass


class androidItem(scrapy.Item):
    # view count
    count = scrapy.Field()
    # title
    title = scrapy.Field()
    # title link
    titleLink = scrapy.Field()
    # description
    desc = scrapy.Field()
    # time
    time = scrapy.Field()
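A scrapy.Item behaves much like a dict, so it is easy to poke at in an interactive session (a quick sketch, not part of the project itself; note that assigning a field that was not declared raises KeyError):
from aichengxu.items import androidItem

item = androidItem()
item['title'] = 'hello'
item['count'] = '123'
print(dict(item))     # {'title': 'hello', 'count': '123'}
print(item['title'])  # hello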
Step 3:
Write the spider:
# -*- coding: utf-8 -*-
# @Time : 2017/8/25 21:54
# @Author : 蛇崽
# @Email : 17193337679@163.com
# @File : aichengxuspider.py  爱上程序网 www.aichengxu.com
import scrapy
from aichengxu.items import androidItem
import logging


class aiChengxu(scrapy.Spider):
    name = 'aichengxu'
    allowed_domains = ['www.aichengxu.com']
    # build the android-category list pages up front
    start_urls = ["http://www.aichengxu.com/android/{}/".format(n) for n in range(1, 10000)]

    def parse(self, response):
        node_list = response.xpath("//*[@class='item-box']")
        print('nodelist', node_list)
        for node in node_list:
            android_item = androidItem()
            count = node.xpath("./div[@class='views']/text()").extract()
            title_link = node.xpath("./div[@class='bd']/h3/a/@href").extract()
            title = node.xpath("./div[@class='bd']/h3/a/text()").extract()
            desc = node.xpath("./div[@class='bd']/div[@class='desc']/text()").extract()
            # take the text of the second span (the post time)
            time = node.xpath("./div[@class='bd']/div[@class='item-source']/span[2]/text()").extract()
            print(count, title, title_link, desc, time)
            android_item['title'] = title
            android_item['titleLink'] = title_link
            android_item['desc'] = desc
            android_item['count'] = count
            android_item['time'] = time
            yield android_item
A quick note on the yield keyword as I understand it: it returns this item and then carries on with the next piece of work; in other words, it hands the item back to Scrapy and the spider keeps crawling.
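The same behaviour is easy to see in plain Python: a function containing yield becomes a generator that hands one value back and resumes where it stopped (a toy sketch, independent of Scrapy):
def gen_items():
    for n in range(3):
        yield {'page': n}  # hand one item back, then resume here

for item in gen_items():
    print(item)  # Scrapy consumes the items yielded by parse() the same way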
Step 4:
The pipelines code:
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
import json

import pymongo
# scrapy.conf was the usual idiom at the time; newer Scrapy versions
# drop it in favour of crawler.settings / get_project_settings()
from scrapy.conf import settings


class AichengxuPipeline(object):
    def process_item(self, item, spider):
        return item


class DuoDuoMongo(object):
    def __init__(self):
        self.client = pymongo.MongoClient(host=settings['MONGO_HOST'], port=settings['MONGO_PORT'])
        self.db = self.client[settings['MONGO_DB']]
        self.post = self.db[settings['MONGO_COLL']]

    def process_item(self, item, spider):
        postItem = dict(item)
        self.post.insert_one(postItem)  # insert_one replaces pymongo's deprecated insert()
        return item


# write items to a JSON file
class JsonWritePipline(object):
    def __init__(self):
        self.file = open('爱上程序网2.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        line = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.file.write(line)
        return item

    # Scrapy calls close_spider() on pipelines; the original method name
    # spider_closed belongs to the signals API and was never invoked
    def close_spider(self, spider):
        self.file.close()
Two kinds of storage are implemented here: one writes to MongoDB, the other to a JSON file.
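To double-check what actually landed in MongoDB, something like this is enough (a minimal sketch, assuming a local mongod and the database/collection names from settings; count_documents needs pymongo 3.7+):
import pymongo

client = pymongo.MongoClient('127.0.0.1', 27017)
coll = client['aichengxu2']['ai_chengxu']
print(coll.count_documents({}))  # total records stored
print(coll.find_one())           # peek at one scraped item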
And with that, the crawl is done. After not writing code for a while, I have gotten noticeably rusty.
One open question: I crawled about 240,000 records here without ever being blocked over cookies or needing a proxy. Presumably that is thanks to the request headers configured in settings:
DEFAULT_REQUEST_HEADERS
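If bans ever do become a problem, a common next step is rotating the User-Agent per request with a downloader middleware. A rough sketch (the class and the list below are hypothetical, not part of this project; it would be enabled via DOWNLOADER_MIDDLEWARES in settings):
import random

USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.75 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.75 Safari/537.36',
]


class RandomUserAgentMiddleware(object):
    def process_request(self, request, spider):
        # overwrite the User-Agent header before each request goes out
        request.headers['User-Agent'] = random.choice(USER_AGENTS)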
As for why I wanted to crawl this site at all: I suppose I am strongly technology-driven. Android is honestly not my favourite; if anything I am a little afraid of it, since Android adaptation is nasty, with all the device models and screen sizes to handle.
The site has plenty of other categories as well:
Honestly, I have not crawled them all; interested readers are welcome to try. Finally, I have uploaded the code to my GitHub: