Project environment:
PyCharm
Windows
Python 3.7
Creating a new project
Open a command prompt in the directory where you want the project to live.
Creation command: scrapy startproject <project name>
Then create a spider (a concrete example for this project follows the generic commands):
cd <project name>
scrapy genspider <spider name> "<domain>"
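For this particular project the commands would look roughly like this (the project name, spider name, and domain are taken from the code shown later in the post):
scrapy startproject spider_JDComments
cd spider_JDComments
scrapy genspider JD_spider "list.jd.com"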
Create a new Python file in the project root; running that file runs the whole crawl, so you no longer need to go back to the command line:
from scrapy import cmdline
cmdline.execute('scrapy crawl <spider name>'.split())
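For this project the runner file might look like this (main.py is my own file name; JD_spider is the spider name defined further down):
# main.py - put this next to scrapy.cfg and run it from PyCharm
from scrapy import cmdline

cmdline.execute('scrapy crawl JD_spider'.split())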
The commands above create the directory layout shown below (my project scrapes JD mobile-phone product information; a sketch of the layout follows this list).
Here is what each file is for:
scrapy.cfg: the project's top-level configuration file; it usually does not need to be modified.
spider_JDComments: the project's Python package; the program imports its Python code from here.
items.py: defines the Item classes used by the project. An Item class is essentially a DTO (data transfer object); it usually just declares a number of fields, and it is written by the developer.
pipelines.py: the project's pipeline file, responsible for processing the scraped data; it is also written by the developer.
settings.py: the project's settings file, where project-wide configuration is done.
JD_spider: the spider, stored under the spiders directory; spiders are responsible for crawling the information the project is interested in.
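For reference, the generated layout looks roughly like this (a sketch; main.py is my own name for the runner file, everything else comes from the commands and code in this post):
spider_JDComments/
    scrapy.cfg
    main.py                  (the runner file created above; any name works)
    spider_JDComments/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            JD_spider.py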
Now for the code, with detailed comments (the English comments after # were auto-generated by the Scrapy framework).
items.py: declare the fields you want to scrape. Only the names are defined here; the actual work happens in JD_spider.
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy
class SpiderJdcommentsItem(scrapy.Item):
    # define the fields for your item here like:
    comment = scrapy.Field()  # comment text
    score = scrapy.Field()  # star rating of the comment (the spider and pipeline both refer to this field as 'score')
Next, write the spider / parsing file:
# -*- coding: utf-8 -*-
import scrapy
import re
import json
from ..items import SpiderJdcommentsItem  # note: importing it as spider_JDComments.spider_JDComments.SpiderJdcommentsItem raises an error here, so use the relative import
class JdSpiderSpider(scrapy.Spider):
    name = 'JD_spider'  # spider name
    # allowed_domains = ['list.jd.com']  # restrict crawling to this domain
    start_urls = ['https://list.jd.com/list.html?cat=9987,653,655&page=1&sort=sort_commentcount_desc&trans=1&JL=6_0_0#J_main']  # starting URL
    def parse(self, response):
        # extract the link of every phone on the listing page
        phone_url_list = response.xpath("//*[@id='plist']/ul/li")  # xpath that selects the list of phone entries
        for temp in phone_url_list:
            temp_url = "https:" + temp.xpath("./div/div[4]/a/@href").get()  # link of each phone; this xpath continues from phone_url_list, hence the leading "."
            yield scrapy.Request(url=temp_url, callback=self.get_commenturl)  # hand the phone link to the function that handles the detail page (where the real comment URL is built)
        # Scrapy crawls asynchronously, so it may fetch later listing pages first, and the phones there have few comments.
        # I therefore only crawl the first 4 pages; with 60 phones per listing page, four pages already yield well over ten thousand comments, which is enough.
        for i in range(2, 5):
            next_link = "https://list.jd.com/list.html?cat=9987,653,655&page=%s&sort=sort_commentcount_desc&trans=1&JL=6_0_0#J_main" % i
            yield scrapy.Request(next_link, callback=self.parse)
        # # get the link of the next page
        # next_link = response.xpath("//span[@class='p-num']//a[@class='pn-next']/@href").getall()
        # if next_link:
        #     next_link = next_link[0]
        #     yield scrapy.Request("https://list.jd.com/" + next_link, callback=self.parse)
    def get_commenturl(self, response):
        pattern = re.compile(r'\d+')  # regex that matches the digit string (product id) in the URL; used below to build the comment URL
        number = pattern.findall(response.url)[0]
        for i in range(60):  # fetch 60 comment pages per phone; JD only serves up to 100 pages even though it claims 100,000+ comments
            # in the URL, score=3 means positive reviews, 2 neutral, 1 negative (this request uses score=2)
            comment_url = "https://club.jd.com/comment/productPageComments.action?&productId=%s&score=2&sortType=5&page=%s&pageSize=10&isShadowSku=0&rid=0&fold=1" % (number, i)
            yield scrapy.Request(url=comment_url, callback=self.detail)
    def detail(self, response):
        data = json.loads(response.body.decode(response.encoding))  # response.body is bytes, so decode it before parsing
        # difference between response.body and response.text in Scrapy: https://www.cnblogs.com/themost/p/8471953.html
        # more on the Response attributes: https://blog.csdn.net/l1336037686/article/details/78536694
        if data['comments']:  # some comment pages contain no comments at all, so check first
            for temp in data['comments']:
                if temp['content'] and temp['score']:
                    jdcomment_item = SpiderJdcommentsItem()  # item holding the comment text and star rating
                    jdcomment_item['comment'] = temp['content']
                    jdcomment_item['score'] = temp['score']
                    yield jdcomment_item
Reference for the Response attributes: https://blog.csdn.net/l1336037686/article/details/78536694
Difference between response.body and response.text in Scrapy: https://www.cnblogs.com/themost/p/8471953.html
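As a quick illustration (my own sketch, not from the links above): response.body is the raw bytes of the HTTP body, while response.text is that body decoded to a str with the response encoding, so the json.loads call in detail() could equivalently be written as:
# equivalent to json.loads(response.body.decode(response.encoding))
data = json.loads(response.text)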
To reduce the chance of getting your IP banned, rotate the User-Agent: add the following code at the bottom of middlewares.py (it is generic and needs no changes).
import random  # needed for random.choice below; put this import at the top of middlewares.py

class my_useragent(object):
    def process_request(self, request, spider):
        USER_AGENT_LIST = [
            'MSIE (MSIE 6.0; X11; Linux; i686) Opera 7.23',
            'Opera/9.20 (Macintosh; Intel Mac OS X; U; en)',
            'Opera/9.0 (Macintosh; PPC Mac OS X; U; en)',
            'iTunes/9.0.3 (Macintosh; U; Intel Mac OS X 10_6_2; en-ca)',
            'Mozilla/4.76 [en_jp] (X11; U; SunOS 5.8 sun4u)',
            'iTunes/4.2 (Macintosh; U; PPC Mac OS X 10.2)',
            'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:5.0) Gecko/20100101 Firefox/5.0',
            'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:9.0) Gecko/20100101 Firefox/9.0',
            'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:16.0) Gecko/20120813 Firefox/16.0',
            'Mozilla/4.77 [en] (X11; I; IRIX;64 6.5 IP30)',
            'Mozilla/4.8 [en] (X11; U; SunOS; 5.7 sun4u)'
        ]
        # pick a random User-Agent for each request
        agent = random.choice(USER_AGENT_LIST)
        request.headers['User-Agent'] = agent  # note the header name is 'User-Agent', not 'User_Agent'
settings.py (the lines that need changing have comments appended after them):
# -*- coding: utf-8 -*-
# Scrapy settings for spider_JDComments project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'spider_JDComments'
SPIDER_MODULES = ['spider_JDComments.spiders']
NEWSPIDER_MODULE = 'spider_JDComments.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36'  # default User-Agent
# Obey robots.txt rules
ROBOTSTXT_OBEY = False  # whether to obey robots.txt
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 0.5  # the larger the number, the slower the crawl
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'spider_JDComments.middlewares.SpiderJdcommentsSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {  # enable the random User-Agent middleware defined above
    'spider_JDComments.middlewares.my_useragent': 543,
}
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {  # enable the pipeline we define ourselves below
    'spider_JDComments.pipelines.SpiderJdcommentsPipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
Saving the output: if you do not need to store the data in a database, you do not have to enable the custom pipeline in settings.py (previous step) or modify pipelines.py (next step); you can save the data as follows.
Open a command prompt in the project folder and run: scrapy crawl <spider name> -o test.csv (test.csv is the output file name).
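With the spider name used in this project that becomes, for example (Scrapy picks the CSV format from the .csv extension):
scrapy crawl JD_spider -o test.csv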
Saving the data to MySQL (written in pipelines.py):
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
import pymysql

class SpiderJdcommentsPipeline(object):
    def __init__(self):  # see also: https://blog.csdn.net/loner_fang/article/details/81056191
        # arguments, in order: host, user, password, database name; the last two options keep the encoding correct
        self.conn = pymysql.connect('127.0.0.1', 'root', '123456', 'jd_spider', charset='utf8', use_unicode=True)
        self.cursor = self.conn.cursor()  # create a cursor

    def process_item(self, item, spider):
        insert_sql = """insert into jd(score, comment) VALUES (%s, %s)"""  # insert statement
        self.cursor.execute(insert_sql, (item['score'], item['comment']))  # run the insert
        self.conn.commit()  # without a commit nothing is actually written to the database
        return item

    def close_spider(self, spider):
        # close the cursor and the connection
        self.cursor.close()
        self.conn.close()
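The pipeline assumes the jd_spider database already contains a table named jd with the two columns used in the insert. A minimal one-off helper that creates such a table could look like the sketch below (the column types are my assumption; the connection parameters simply mirror the pipeline above):
# create_table.py - run once before crawling (sketch; column types are an assumption)
import pymysql

conn = pymysql.connect('127.0.0.1', 'root', '123456', 'jd_spider', charset='utf8', use_unicode=True)
with conn.cursor() as cursor:
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS jd (
            id      INT AUTO_INCREMENT PRIMARY KEY,
            score   INT,   -- star rating of the review
            comment TEXT   -- review text
        ) DEFAULT CHARSET = utf8mb4
    """)
conn.commit()
conn.close()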
Now run the project and the data will be written to MySQL.
Finally, some issues I ran into while working with MySQL (using MySQL Workbench):
Viewing the data in a table:
Exporting and truncating a table: after exporting a table to CSV the columns are not tidy and need further processing in Excel; see https://www.cnblogs.com/zsgyx/p/10452734.html
MySQL Workbench only shows 1000 rows per page; there is a next-page button (roughly where the box is in the screenshot) called "fetch next frame of records".
Common MySQL data types: https://blog.csdn.net/qq_42338771/article/details/89880360
After exporting the data to CSV, all columns end up in a single column, as shown below:
Choose the delimiter for splitting into columns:
Split into columns:
When exporting from MySQL, a string that contains line breaks gets split across several rows in the resulting Excel sheet, because \r\n is interpreted during the export. The fix:
Remove the \r\n characters from the relevant table with a SQL statement:
UPDATE jd_spider.jd_negative SET `comment` = REPLACE(REPLACE(`comment`, CHAR(10), ''), CHAR(13), '');
If this statement fails with "Error Code: 1175. You are using safe update mode and you tried to update a table without a WHERE that uses a KEY column. To disable safe mode, toggle the option in Preferences -> SQL Editor and reconnect.",
change the database mode with the following command and the update will go through:
SET SQL_SAFE_UPDATES = 0