1. Sharding Concepts
Sharding is a method for distributing data across multiple machines. MongoDB uses sharding to support deployments with very large data sets and high-throughput operations.
In other words, sharding is the process of splitting data up and spreading it across different machines; the term partitioning is sometimes used for the same idea. By distributing data across machines, you can store more data and handle more load without needing a single large, powerful server.
Database systems with large data sets or high-throughput applications can challenge the capacity of a single server. For example, high query rates can exhaust the server's CPU capacity, and working set sizes larger than the system's RAM stress the I/O capacity of its disk drives.
There are two approaches to handling system growth: vertical scaling and horizontal scaling.
Vertical scaling means increasing the capacity of a single server, such as using a more powerful CPU, adding more RAM, or increasing the amount of storage. Limitations in available technology may keep a single machine from being powerful enough for a given workload, and cloud providers impose hard ceilings based on the hardware configurations they offer. As a result, there is a practical maximum for vertical scaling.
Horizontal scaling means dividing the system's data set and load over multiple servers, adding servers as needed to increase capacity. While a single machine may not be especially fast or capacious, each machine handles only a subset of the overall workload, which can be more efficient than one high-speed, high-capacity server. Expanding capacity only requires adding servers as needed, which can cost less overall than high-end hardware for a single machine. The trade-off is increased complexity in infrastructure and deployment maintenance.
MongoDB supports horizontal scaling through sharding.
2. Components of a Sharded Cluster
A MongoDB sharded cluster consists of the following components:
Shard (storage): each shard contains a subset of the sharded data. Each shard can be deployed as a replica set.
mongos (router): mongos acts as a query router, providing an interface between client applications and the sharded cluster.
config servers (configuration): config servers store metadata and configuration settings for the cluster. Starting in MongoDB 3.4, config servers must be deployed as a replica set (CSRS).
The following diagram describes the interaction of components within a sharded cluster:
MongoDB shards data at the collection level, distributing the collection data across the shards in the cluster.
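Collection-level sharding can be sketched with a toy model: only collections that have been explicitly sharded are spread across shards, while every other collection lives entirely on its database's primary shard. The class, the hash function, and the routing rule below are all illustrative stand-ins, not MongoDB internals; the shard names match the cluster built later in this guide.

```javascript
// Toy model of a sharded cluster: sharding is enabled per collection.
// Unsharded collections stay on the database's primary shard.
class MiniCluster {
  constructor(shards, primaryShard) {
    this.shards = shards;                 // e.g. ["myshardrs01", "myshardrs02"]
    this.primaryShard = primaryShard;     // unsharded collections go here
    this.shardedCollections = new Map();  // collection name -> shard key field
  }
  shardCollection(name, keyField) {
    this.shardedCollections.set(name, keyField);
  }
  // Decide which shard stores a document (hash-modulo stand-in for chunk routing).
  route(collection, doc) {
    if (!this.shardedCollections.has(collection)) return this.primaryShard;
    const keyField = this.shardedCollections.get(collection);
    let h = 0;
    for (const ch of String(doc[keyField])) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
    return this.shards[h % this.shards.length];
  }
}

const cluster = new MiniCluster(["myshardrs01", "myshardrs02"], "myshardrs02");
cluster.shardCollection("comment", "nickname");
console.log(cluster.route("comment", { nickname: "BoBo1" })); // one of the two shards
console.log(cluster.route("author", { name: "x" }));          // "myshardrs02" (the primary shard)
```

The point of the sketch: enabling sharding is a per-collection decision, and the router consults that decision for every operation.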
3. Sharded Cluster Architecture Goal
Two shard replica sets (3+3) + one config server replica set (3) + two router nodes (2), for a total of 11 service nodes.
4. Creating the Shard (Storage) Replica Sets
All configuration files go directly into the corresponding subdirectories under sharded_cluster; the default configuration file name is:
mongod.conf
4.1 The First Replica Set
Prepare directories for data, configuration files, and logs:
#-----------myshardrs01
mkdir -p /etc/mongod/27{0,1,2}18/
mkdir -p /var/log/mongodb/27{0,1,2}18/
mkdir -p /data/mongodb/db/27{0,1,2}18/
Grant ownership of the created directories:
chown -R mongodb:mongodb /data/mongodb
chown -R mongodb:mongodb /var/log/mongodb
Create or edit the configuration file:
vim /etc/mongod/27018/mongod.conf
Edit mongod.conf:
systemLog:
  # Send all MongoDB log output to a file
  destination: file
  # Path of the log file to which mongod or mongos sends all diagnostic logging information
  path: "/var/log/mongodb/27018/mongod.log"
  # When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage:
  # Directory where the mongod instance stores its data; storage.dbPath applies only to mongod
  dbPath: "/data/mongodb/db/27018"
  journal:
    # Enable or disable the durability journal, which keeps data files valid and recoverable
    enabled: true
processManagement:
  # Run the mongos or mongod process as a background daemon
  fork: true
  # Location of the file where mongos or mongod writes its process ID (PID)
  pidFilePath: "/var/run/mongodb/mongod_27018.pid"
  timeZoneInfo: /usr/share/zoneinfo
net:
  # Binding all IPs has a side effect: during replica set initialization the node
  # names are set to the local hostname rather than the IP
  #bindIpAll: true
  # IP addresses the service instance binds to
  bindIp: localhost,127.0.0.1
  # Port the service binds to
  port: 27018
replication:
  # Name of the replica set
  replSetName: myshardrs01
sharding:
  # Shard role
  clusterRole: shardsvr
sharding.clusterRole:
Value | Description |
---|---|
configsvr | Start this instance as a config server. The instance starts on port 27019 by default. |
shardsvr | Start this instance as a shard. The instance starts on port 27018 by default. |
Note:
Setting sharding.clusterRole requires the mongod instance to have replication enabled. To deploy the instance as a replica set member, use the replSetName setting and specify the name of the replica set.
Since I installed via apt, the settings above apply as-is only to apt installs; installs done other ways are similar and can use this as a reference.
Only the 27018 instance is configured above; for 27118 and 27218, just change the corresponding ports and paths.
Start the first replica set: one primary, one secondary, one arbiter
sudo -u mongodb -g mongodb mongod -f /etc/mongod/27018/mongod.conf
sudo -u mongodb -g mongodb mongod -f /etc/mongod/27118/mongod.conf
sudo -u mongodb -g mongodb mongod -f /etc/mongod/27218/mongod.conf
Check that the services are running:
ps -ef | grep mongo | grep -v grep
mongodb 23848 1 0 10:48 ? 00:01:52 mongod -f /etc/mongod/27118/mongod.conf
mongodb 24216 1 0 10:49 ? 00:01:50 mongod -f /etc/mongod/27018/mongod.conf
mongodb 24854 1 0 10:51 ? 00:01:08 mongod -f /etc/mongod/27218/mongod.conf
(1) Initialize the replica set and create the primary node:
Connect to any node with the client command, though preferably the node intended as primary:
mongo --host 127.0.0.1 --port 27018
Run the replica set initialization command:
> rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "localhost:27018",
"ok" : 1
}
myshardrs01:OTHER>
myshardrs01:PRIMARY>
Check the replica set status (excerpt):
myshardrs01:PRIMARY> rs.status()
{
"set" : "myshardrs01",
"date" : ISODate("2021-03-22T02:53:18.523Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 1,
"writeMajorityCount" : 1,
......
......
"members" : [
{
"_id" : 0,
"name" : "localhost:27018",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 219,
"optime" : {
"ts" : Timestamp(1616381592, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2021-03-22T02:53:12Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1616381582, 2),
"electionDate" : ISODate("2021-03-22T02:53:02Z"),
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
}
],
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1616381592, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1616381592, 1)
}
(2) Add a secondary node:
myshardrs01:PRIMARY> rs.add("127.0.0.1:27118")
{
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1616381636, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1616381636, 1)
}
(3) Add an arbiter node:
myshardrs01:PRIMARY> rs.addArb("127.0.0.1:27218")
{
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1616381778, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1616381778, 1)
}
Check the replica set configuration:
myshardrs01:PRIMARY> rs.status()
{
"set" : "myshardrs01",
"date" : ISODate("2021-03-22T02:56:55.982Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 2,
"writeMajorityCount" : 2,
......
......
"members" : [
{
"_id" : 0,
"name" : "localhost:27018",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 436,
"optime" : {
"ts" : Timestamp(1616381812, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2021-03-22T02:56:52Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1616381582, 2),
"electionDate" : ISODate("2021-03-22T02:53:02Z"),
"configVersion" : 3,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "127.0.0.1:27118",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 179,
"optime" : {
"ts" : Timestamp(1616381812, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1616381812, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2021-03-22T02:56:52Z"),
"optimeDurableDate" : ISODate("2021-03-22T02:56:52Z"),
"lastHeartbeat" : ISODate("2021-03-22T02:56:54.701Z"),
"lastHeartbeatRecv" : ISODate("2021-03-22T02:56:54.703Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "localhost:27018",
"syncSourceHost" : "localhost:27018",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 3
},
{
"_id" : 2,
"name" : "127.0.0.1:27218",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 37,
"lastHeartbeat" : ISODate("2021-03-22T02:56:54.701Z"),
"lastHeartbeatRecv" : ISODate("2021-03-22T02:56:54.753Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 3
}
],
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1616381812, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1616381812, 1)
}
4.2 The Second Replica Set
Prepare directories for data and logs:
#-----------myshardrs02
mkdir -p /etc/mongod/27{3,4,5}18/
mkdir -p /var/log/mongodb/27{3,4,5}18/
mkdir -p /data/mongodb/db/27{3,4,5}18/
Grant ownership of the created directories:
chown -R mongodb:mongodb /data/mongodb
chown -R mongodb:mongodb /var/log/mongodb
Create or edit the configuration file:
vim /etc/mongod/27318/mongod.conf
myshardrs02_27318:
systemLog:
  # Send all MongoDB log output to a file
  destination: file
  # Path of the log file to which mongod or mongos sends all diagnostic logging information
  path: "/var/log/mongodb/27318/mongod.log"
  # When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage:
  # Directory where the mongod instance stores its data; storage.dbPath applies only to mongod
  dbPath: "/data/mongodb/db/27318"
  journal:
    # Enable or disable the durability journal, which keeps data files valid and recoverable
    enabled: true
processManagement:
  # Run the mongos or mongod process as a background daemon
  fork: true
  # Location of the file where mongos or mongod writes its process ID (PID)
  pidFilePath: "/var/run/mongodb/mongod_27318.pid"
  timeZoneInfo: /usr/share/zoneinfo
net:
  # Binding all IPs has a side effect: during replica set initialization the node
  # names are set to the local hostname rather than the IP
  #bindIpAll: true
  # IP addresses the service instance binds to
  bindIp: localhost,127.0.0.1
  # Port the service binds to
  port: 27318
replication:
  # Name of the replica set
  replSetName: myshardrs02
sharding:
  # Shard role
  clusterRole: shardsvr
Note:
Only the 27318 instance is configured above; for 27418 and 27518, just change the corresponding ports and paths.
Start the second replica set: one primary, one secondary, one arbiter
sudo -u mongodb -g mongodb mongod -f /etc/mongod/27318/mongod.conf
sudo -u mongodb -g mongodb mongod -f /etc/mongod/27418/mongod.conf
sudo -u mongodb -g mongodb mongod -f /etc/mongod/27518/mongod.conf
Check that the services are running:
ps -ef | grep mongo | grep -v grep
mongodb 22490 1 0 13:38 ? 00:01:21 mongod -f /etc/mongod/27418/mongod.conf
mongodb 23193 1 0 13:42 ? 00:00:38 mongod -f /etc/mongod/27518/mongod.conf
mongodb 23848 1 0 10:48 ? 00:01:56 mongod -f /etc/mongod/27118/mongod.conf
mongodb 24216 1 0 10:49 ? 00:01:53 mongod -f /etc/mongod/27018/mongod.conf
mongodb 24854 1 0 10:51 ? 00:01:09 mongod -f /etc/mongod/27218/mongod.conf
mongodb 24957 1 0 13:48 ? 00:01:22 mongod -f /etc/mongod/27318/mongod.conf
(1) Initialize the replica set and create the primary node:
Connect to any node with the client command, though preferably the node intended as primary:
mongo --host 127.0.0.1 --port 27318
Run the replica set initialization command:
> rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "localhost:27318",
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1616392164, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1616392164, 1)
}
myshardrs02:SECONDARY>
myshardrs02:PRIMARY>
(2) Add a secondary node:
myshardrs02:PRIMARY> rs.add("127.0.0.1:27418")
{
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1616392220, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1616392220, 1)
}
(3) Add an arbiter node:
myshardrs02:PRIMARY> rs.addArb("127.0.0.1:27518")
{
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1616392251, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1616392251, 1)
}
myshardrs02:PRIMARY> rs.status()
{
"set" : "myshardrs02",
"date" : ISODate("2021-03-22T05:52:58.195Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 2,
"writeMajorityCount" : 2,
......
......
"members" : [
{
"_id" : 0,
"name" : "localhost:27318",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 246,
"optime" : {
"ts" : Timestamp(1616392374, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2021-03-22T05:52:54Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1616392164, 2),
"electionDate" : ISODate("2021-03-22T05:49:24Z"),
"configVersion" : 3,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "127.0.0.1:27418",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 157,
"optime" : {
"ts" : Timestamp(1616392374, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1616392374, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2021-03-22T05:52:54Z"),
"optimeDurableDate" : ISODate("2021-03-22T05:52:54Z"),
"lastHeartbeat" : ISODate("2021-03-22T05:52:57.027Z"),
"lastHeartbeatRecv" : ISODate("2021-03-22T05:52:57.031Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "localhost:27318",
"syncSourceHost" : "localhost:27318",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 3
},
{
"_id" : 2,
"name" : "127.0.0.1:27518",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 127,
"lastHeartbeat" : ISODate("2021-03-22T05:52:57.029Z"),
"lastHeartbeatRecv" : ISODate("2021-03-22T05:52:57.068Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 3
}
],
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1616392374, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1616392374, 1)
}
5. Creating the Config Server Replica Set
Step 1: prepare directories for data and logs:
#-----------myconfigrs
mkdir -p /etc/mongod/27{0,1,2}19/
mkdir -p /var/log/mongodb/27{0,1,2}19/
mkdir -p /data/mongodb/db/27{0,1,2}19/
Grant ownership of the created directories:
chown -R mongodb:mongodb /data/mongodb
chown -R mongodb:mongodb /var/log/mongodb
Create or edit the configuration file:
vim /etc/mongod/27019/mongod.conf
myconfigrs_27019:
systemLog:
  # Send all MongoDB log output to a file
  destination: file
  # Path of the log file to which mongod or mongos sends all diagnostic logging information
  path: "/var/log/mongodb/27019/mongod.log"
  # When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage:
  # Directory where the mongod instance stores its data; storage.dbPath applies only to mongod
  dbPath: "/data/mongodb/db/27019"
  journal:
    # Enable or disable the durability journal, which keeps data files valid and recoverable
    enabled: true
processManagement:
  # Run the mongos or mongod process as a background daemon
  fork: true
  # Location of the file where mongos or mongod writes its process ID (PID)
  pidFilePath: "/var/run/mongodb/mongod_27019.pid"
  timeZoneInfo: /usr/share/zoneinfo
net:
  # Binding all IPs has a side effect: during replica set initialization the node
  # names are set to the local hostname rather than the IP
  #bindIpAll: true
  # IP addresses the service instance binds to
  bindIp: localhost,127.0.0.1
  # Port the service binds to
  port: 27019
replication:
  # Name of the replica set
  replSetName: myconfigrs
sharding:
  # Shard role
  clusterRole: configsvr
Start the config server replica set: one primary and two secondaries.
Start the three mongod services in turn:
sudo -u mongodb -g mongodb mongod -f /etc/mongod/27019/mongod.conf
sudo -u mongodb -g mongodb mongod -f /etc/mongod/27119/mongod.conf
sudo -u mongodb -g mongodb mongod -f /etc/mongod/27219/mongod.conf
Check that the services are running:
ps -ef | grep mongo | grep -v grep
(1) Initialize the replica set and create the primary node:
Connect to any node with the client command, though preferably the node intended as primary:
mongo --host 127.0.0.1 --port 27019
Run the replica set initialization command:
> rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "localhost:27019",
"ok" : 1,
"$gleStats" : {
"lastOpTime" : Timestamp(1616394009, 1),
"electionId" : ObjectId("000000000000000000000000")
},
"lastCommittedOpTime" : Timestamp(0, 0)
}
myconfigrs:SECONDARY>
myconfigrs:PRIMARY>
(2) Add the two secondary nodes:
myconfigrs:PRIMARY> rs.add("127.0.0.1:27119")
{
"ok" : 1,
"$gleStats" : {
"lastOpTime" : {
"ts" : Timestamp(1616394052, 1),
"t" : NumberLong(1)
},
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1616394049, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1616394052, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1616394052, 1)
}
myconfigrs:PRIMARY> rs.add("127.0.0.1:27219")
{
"ok" : 1,
"$gleStats" : {
"lastOpTime" : {
"ts" : Timestamp(1616394056, 1),
"t" : NumberLong(1)
},
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1616394052, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1616394056, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1616394056, 1)
}
(3) Check the replica set configuration:
myconfigrs:PRIMARY> rs.status()
{
"set" : "myconfigrs",
"date" : ISODate("2021-03-22T06:22:01.429Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"configsvr" : true,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 2,
"writeMajorityCount" : 2,
......
......
"members" : [
{
"_id" : 0,
"name" : "localhost:27019",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 448,
"optime" : {
"ts" : Timestamp(1616394119, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2021-03-22T06:21:59Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1616394009, 2),
"electionDate" : ISODate("2021-03-22T06:20:09Z"),
"configVersion" : 3,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "127.0.0.1:27119",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 69,
"optime" : {
"ts" : Timestamp(1616394119, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1616394119, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2021-03-22T06:21:59Z"),
"optimeDurableDate" : ISODate("2021-03-22T06:21:59Z"),
"lastHeartbeat" : ISODate("2021-03-22T06:22:00.253Z"),
"lastHeartbeatRecv" : ISODate("2021-03-22T06:22:01.256Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "localhost:27019",
"syncSourceHost" : "localhost:27019",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 3
},
{
"_id" : 2,
"name" : "127.0.0.1:27219",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 65,
"optime" : {
"ts" : Timestamp(1616394119, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1616394119, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2021-03-22T06:21:59Z"),
"optimeDurableDate" : ISODate("2021-03-22T06:21:59Z"),
"lastHeartbeat" : ISODate("2021-03-22T06:22:00.254Z"),
"lastHeartbeatRecv" : ISODate("2021-03-22T06:21:59.850Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "127.0.0.1:27119",
"syncSourceHost" : "127.0.0.1:27119",
"syncSourceId" : 1,
"infoMessage" : "",
"configVersion" : 3
}
],
"ok" : 1,
"$gleStats" : {
"lastOpTime" : {
"ts" : Timestamp(1616394056, 1),
"t" : NumberLong(1)
},
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1616394119, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1616394119, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1616394119, 1)
}
6. Creating and Using the Router Nodes
6.1 Creating and Connecting the First Router Node
Prepare the configuration and log directories:
#-----------mongos01
mkdir -p /etc/mongod/27017/
mkdir -p /var/log/mongodb/27017/
Grant ownership of the created directory:
chown -R mongodb:mongodb /var/log/mongodb
mymongos_27017 node:
Create or edit the configuration file:
vim /etc/mongod/27017/mongos.conf
mongos.conf:
systemLog:
  # Send all MongoDB log output to a file
  destination: file
  # Path of the log file to which mongod or mongos sends all diagnostic logging information
  path: "/var/log/mongodb/27017/mongod.log"
  # When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
processManagement:
  # Run the mongos or mongod process as a background daemon
  fork: true
  # Location of the file where mongos or mongod writes its process ID (PID)
  pidFilePath: "/var/run/mongodb/mongod_27017.pid"
  timeZoneInfo: /usr/share/zoneinfo
net:
  # Binding all IPs has a side effect: during replica set initialization the node
  # names are set to the local hostname rather than the IP
  #bindIpAll: true
  # IP addresses the service instance binds to
  bindIp: localhost,127.0.0.1
  # Port the service binds to
  port: 27017
sharding:
  # Specify the config server replica set
  configDB: myconfigrs/127.0.0.1:27019,127.0.0.1:27119,127.0.0.1:27219
Start mongos:
sudo -u mongodb -g mongodb mongos -f /etc/mongod/27017/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 3703
child process started successfully, parent exiting
Tip: if startup fails, check the log under the log directory to find the reason.
Log in to mongos with the client:
mongo --host 127.0.0.1 --port 27017
At this point data cannot be written; attempting a write produces an error:
mongos> use alonzo
switched to db alonzo
mongos> db.user.insert({name:"zs"})
WriteCommandError({
"ok" : 0,
"errmsg" : "unable to initialize targeter for write op for collection alonzo.user :: caused by :: Database alonzo could not be created :: caused by :: No shards found",
"code" : 70,
"codeName" : "ShardNotFound",
"operationTime" : Timestamp(1616395838, 2),
"$clusterTime" : {
"clusterTime" : Timestamp(1616395838, 2),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
})
Reason:
Through the router we have so far only connected to the config servers; no shard data nodes have been added, so business data cannot be written yet.
For reference, the equivalent properties-style configuration file:
logpath=/mongodb/sharded_cluster/mymongos_27017/log/mongos.log
logappend=true
bind_ip_all=true
port=27017
fork=true
configdb=myconfigrs/127.0.0.1:27019,127.0.0.1:27119,127.0.0.1:27219
6.2 Sharding Configuration on the Router Node
Use commands to add the shards:
(1) Add a shard:
Syntax:
sh.addShard("IP:Port")
Add the first shard replica set:
sh.addShard("myshardrs01/localhost:27018,127.0.0.1:27118,127.0.0.1:27218")
Note:
When adding a shard, the member addresses must match what rs.status() reports on that replica set; otherwise the command will fail.
Check the sharding status:
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("605837199c42252507b7988b")
}
shards:
{ "_id" : "myshardrs01", "host" : "myshardrs01/127.0.0.1:27118,localhost:27018", "state" : 1 }
active mongoses:
"4.2.13" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
Next, add the second shard replica set:
mongos> sh.addShard("myshardrs02/localhost:27318,127.0.0.1:27418,127.0.0.1:27518")
{
"shardAdded" : "myshardrs02",
"ok" : 1,
"operationTime" : Timestamp(1616399229, 5),
"$clusterTime" : {
"clusterTime" : Timestamp(1616399229, 5),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Check the sharding status:
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("605837199c42252507b7988b")
}
shards:
{ "_id" : "myshardrs01", "host" : "myshardrs01/127.0.0.1:27118,localhost:27018", "state" : 1 }
{ "_id" : "myshardrs02", "host" : "myshardrs02/127.0.0.1:27418,localhost:27318", "state" : 1 }
active mongoses:
"4.2.13" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
24 : Success
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
myshardrs01 1000
myshardrs02 24
too many chunks to print, use verbose if you want to force print
Tip: if adding a shard fails, first remove the shard manually, verify that the shard information is correct, and then add it again.
For reference, removing a shard:
use admin
db.runCommand( { removeShard: "myshardrs02" } )
Note: the last remaining shard cannot be removed.
Removal automatically migrates the shard's data, which takes some time.
Once migration finishes, run the remove command again to actually remove the shard.
(2) Enable sharding: sh.enableSharding("<db>") and sh.shardCollection("<db>.<collection>", {"key": 1})
Configure sharding for the articledb database on mongos:
mongos> sh.enableSharding("articledb")
{
"ok" : 1,
"operationTime" : Timestamp(1616400113, 5),
"$clusterTime" : {
"clusterTime" : Timestamp(1616400113, 5),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Check the sharding status:
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("605837199c42252507b7988b")
}
shards:
{ "_id" : "myshardrs01", "host" : "myshardrs01/127.0.0.1:27118,localhost:27018", "state" : 1 }
{ "_id" : "myshardrs02", "host" : "myshardrs02/127.0.0.1:27418,localhost:27318", "state" : 1 }
active mongoses:
"4.2.13" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
512 : Success
databases:
{ "_id" : "articledb", "primary" : "myshardrs02", "partitioned" : true, "version" : { "uuid" : UUID("77fa0d00-be7c-45df-a882-fbc363ac3c03"), "lastMod" : 1 } }
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
myshardrs01 512
myshardrs02 512
too many chunks to print, use verbose if you want to force print
(3) Shard a collection
To shard a collection, you must specify the collection and a shard key with the sh.shardCollection() method.
Syntax:
sh.shardCollection(namespace, key, unique)
Parameters:
Parameter | Type | Description |
---|---|---|
namespace | string | The namespace of the target collection to shard, in the form `<database>.<collection>`. |
key | document | An index specification document to use as the shard key. The shard key determines how MongoDB distributes documents among the shards. Unless the collection is empty, the index must exist before the shardCollection command; if the collection is empty, MongoDB creates the index before sharding the collection, provided no index supporting the shard key already exists. In short: a document consisting of a field and the index traversal direction for that field. |
unique | boolean | When true, the shard key field is constrained by a unique index. Hashed shard keys do not support unique indexes. Defaults to false. |
When sharding a collection, you must choose a shard key. The shard key is an indexed single field or compound field that every document must contain. MongoDB divides the data into chunks by shard key and distributes the chunks evenly across all the shards. To divide data into chunks by shard key, MongoDB uses either hash-based sharding (random, even allocation) or range-based sharding (allocation by value).
Any field can serve as the shard key, e.g. nickname, but it must be a required field.
Sharding strategy 1: hashed
With hash-based sharding, MongoDB computes a hash of a field's value and uses the hash to create chunks.
In a system using hash-based sharding, documents with "close" shard-key values are unlikely to be stored in the same chunk, so the data is dispersed more evenly.
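The dispersion property can be illustrated with a quick simulation. The string hash below is a simple stand-in for MongoDB's real hashed index (which is md5-based), and the shard names match this tutorial's cluster; the point is only that adjacent key values scatter.

```javascript
// Simulate hash-based sharding: adjacent shard-key values ("BoBo1", "BoBo2", ...)
// hash to unrelated numbers, so the documents scatter across the shards.
function hashKey(value) {
  let h = 0;
  for (const ch of String(value)) h = (h * 131 + ch.charCodeAt(0)) >>> 0;
  return h;
}

const shards = { myshardrs01: 0, myshardrs02: 0 };
const names = Object.keys(shards);
for (let i = 1; i <= 1000; i++) {
  const shard = names[hashKey("BoBo" + i) % names.length];
  shards[shard]++;
}
console.log(shards); // roughly a 50/50 split between the two shards
```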
Using nickname as the shard key, shard the data by the hash of its value:
mongos> sh.shardCollection("articledb.comment",{"nickname":"hashed"})
{
"collectionsharded" : "articledb.comment",
"collectionUUID" : UUID("25ebf512-7180-45f6-9fef-ffb1551e3017"),
"ok" : 1,
"operationTime" : Timestamp(1616400856, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1616400856, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Check the sharding status:
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("605837199c42252507b7988b")
}
shards:
{ "_id" : "myshardrs01", "host" : "myshardrs01/127.0.0.1:27118,localhost:27018", "state" : 1 }
{ "_id" : "myshardrs02", "host" : "myshardrs02/127.0.0.1:27418,localhost:27318", "state" : 1 }
active mongoses:
"4.2.13" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
512 : Success
databases:
{ "_id" : "articledb", "primary" : "myshardrs02", "partitioned" : true, "version" : { "uuid" : UUID("77fa0d00-be7c-45df-a882-fbc363ac3c03"), "lastMod" : 1 } }
articledb.comment
shard key: { "nickname" : "hashed" }
unique: false
balancing: true
chunks:
myshardrs01 2
myshardrs02 2
{ "nickname" : { "$minKey" : 1 } } -->> { "nickname" : NumberLong("-4611686018427387902") } on : myshardrs01 Timestamp(1, 0)
{ "nickname" : NumberLong("-4611686018427387902") } -->> { "nickname" : NumberLong(0) } on : myshardrs01 Timestamp(1, 1)
{ "nickname" : NumberLong(0) } -->> { "nickname" : NumberLong("4611686018427387902") } on : myshardrs02 Timestamp(1, 2)
{ "nickname" : NumberLong("4611686018427387902") } -->> { "nickname" : { "$maxKey" : 1 } } on : myshardrs02 Timestamp(1, 3)
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
myshardrs01 512
myshardrs02 512
too many chunks to print, use verbose if you want to force print
Sharding strategy 2: ranged
With range-based sharding, MongoDB divides the data into parts by ranges of the shard key. Imagine a numeric shard key as a line running from negative infinity to positive infinity; every shard-key value marks a point on that line. MongoDB cuts this line into shorter, non-overlapping segments called chunks, each holding the data whose shard-key values fall within a certain range.
In a system using range-based partitioning, documents with "close" shard-key values are likely to reside in the same chunk, and therefore on the same shard.
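A minimal sketch of this chunk layout, routing an age key through invented boundaries (they happen to echo the split points that show up later in the tests, but are chosen here purely for illustration):

```javascript
// Range-based sharding: the key line is cut into non-overlapping chunks,
// and neighbouring key values land in the same chunk (and shard).
const chunks = [
  { min: -Infinity, max: 0,        shard: "myshardrs01" },
  { min: 0,         max: 49,       shard: "myshardrs01" },
  { min: 49,        max: 100,      shard: "myshardrs02" },
  { min: 100,       max: Infinity, shard: "myshardrs02" },
];

// Route a value to the chunk whose [min, max) range contains it.
function routeByRange(age) {
  return chunks.find(c => age >= c.min && age < c.max);
}

console.log(routeByRange(20).shard); // "myshardrs01"
console.log(routeByRange(21).shard); // "myshardrs01" — neighbours stay together
console.log(routeByRange(75).shard); // "myshardrs02"
```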
For example, use the author's age field as the shard key, sharding by the value of age:
mongos> sh.shardCollection("articledb.author",{"age":1})
{
"collectionsharded" : "articledb.author",
"collectionUUID" : UUID("12ed4962-8fec-4cb0-84db-941a95e8e168"),
"ok" : 1,
"operationTime" : Timestamp(1616401052, 13),
"$clusterTime" : {
"clusterTime" : Timestamp(1616401052, 13),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Note:
1) A collection can have only one shard key; specifying more than one is an error.
2) Once a collection is sharded, the shard key and shard-key values cannot be changed: you cannot choose a different shard key for the collection, and you cannot update the values of the shard-key field.
3) Data is distributed according to the index on age.
Check the sharding status:
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("605837199c42252507b7988b")
}
shards:
{ "_id" : "myshardrs01", "host" : "myshardrs01/127.0.0.1:27118,localhost:27018", "state" : 1 }
{ "_id" : "myshardrs02", "host" : "myshardrs02/127.0.0.1:27418,localhost:27318", "state" : 1 }
active mongoses:
"4.2.13" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
512 : Success
databases:
{ "_id" : "articledb", "primary" : "myshardrs02", "partitioned" : true, "version" : { "uuid" : UUID("77fa0d00-be7c-45df-a882-fbc363ac3c03"), "lastMod" : 1 } }
articledb.author
shard key: { "age" : 1 }
unique: false
balancing: true
chunks:
myshardrs02 1
{ "age" : { "$minKey" : 1 } } -->> { "age" : { "$maxKey" : 1 } } on : myshardrs02 Timestamp(1, 0)
articledb.comment
shard key: { "nickname" : "hashed" }
unique: false
balancing: true
chunks:
myshardrs01 2
myshardrs02 2
{ "nickname" : { "$minKey" : 1 } } -->> { "nickname" : NumberLong("-4611686018427387902") } on : myshardrs01 Timestamp(1, 0)
{ "nickname" : NumberLong("-4611686018427387902") } -->> { "nickname" : NumberLong(0) } on : myshardrs01 Timestamp(1, 1)
{ "nickname" : NumberLong(0) } -->> { "nickname" : NumberLong("4611686018427387902") } on : myshardrs02 Timestamp(1, 2)
{ "nickname" : NumberLong("4611686018427387902") } -->> { "nickname" : { "$maxKey" : 1 } } on : myshardrs02 Timestamp(1, 3)
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
myshardrs01 512
myshardrs02 512
too many chunks to print, use verbose if you want to force print
Performance comparison of range-based and hash-based sharding:
Range-based sharding provides more efficient range queries: given a range of shard-key values, the router can easily determine which chunks hold the requested data and forward the request to the corresponding shards.
However, range-based sharding can distribute data unevenly across the shards; at times this downside outweighs the gain in query performance.
For example, if the shard-key field grows monotonically, all writes within a given time window land in one fixed chunk, and ultimately on one shard. In that case a small subset of the shards carries most of the cluster's data, and the system does not scale well.
By contrast, hash-based sharding sacrifices range-query performance to guarantee an even data distribution. The randomness of hash values spreads data randomly across chunks, and therefore across shards. Because of that same randomness, however, it is hard to target a range query at particular shards; to return the required results it usually has to query all of them.
Unless there is a special reason not to, Hash Sharding is generally recommended. Using _id as the shard key is a good choice because it is always present: you can shard on the hash of each document's _id. That spreads reads and writes evenly, and since every document has a distinct shard-key value, chunks can be split at a fine granularity.
It is still not perfect, since a query for multiple documents will hit all the shards; even so, it is a fairly good scheme.
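The routing difference can be sketched as follows; the single chunk boundary at 60 and the two shard names are invented for illustration:

```javascript
// Compare how many shards a range query on the shard key must touch.
const shards = ["myshardrs01", "myshardrs02"];
const ranges = [
  { min: -Infinity, max: 60,       shard: "myshardrs01" },
  { min: 60,        max: Infinity, shard: "myshardrs02" },
];

// Range sharding: forward the query only to shards whose chunks overlap [lo, hi).
function rangeTargets(lo, hi) {
  const hit = new Set();
  for (const r of ranges) if (lo < r.max && hi > r.min) hit.add(r.shard);
  return hit;
}

// Hash sharding: key order is destroyed, so a range query is broadcast everywhere.
function hashTargets() {
  return new Set(shards);
}

console.log(rangeTargets(10, 30).size); // 1 — the router targets a single shard
console.log(hashTargets().size);        // 2 — scatter-gather across all shards
```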
An ideal shard key lets documents distribute evenly across the cluster:
Show detailed cluster information:
mongos> db.printShardingStatus()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("605837199c42252507b7988b")
}
shards:
{ "_id" : "myshardrs01", "host" : "myshardrs01/127.0.0.1:27118,localhost:27018", "state" : 1 }
{ "_id" : "myshardrs02", "host" : "myshardrs02/127.0.0.1:27418,localhost:27318", "state" : 1 }
active mongoses:
"4.2.13" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
512 : Success
databases:
{ "_id" : "articledb", "primary" : "myshardrs02", "partitioned" : true, "version" : { "uuid" : UUID("77fa0d00-be7c-45df-a882-fbc363ac3c03"), "lastMod" : 1 } }
articledb.author
shard key: { "age" : 1 }
unique: false
balancing: true
chunks:
myshardrs02 1
{ "age" : { "$minKey" : 1 } } -->> { "age" : { "$maxKey" : 1 } } on : myshardrs02 Timestamp(1, 0)
articledb.comment
shard key: { "nickname" : "hashed" }
unique: false
balancing: true
chunks:
myshardrs01 2
myshardrs02 2
{ "nickname" : { "$minKey" : 1 } } -->> { "nickname" : NumberLong("-4611686018427387902") } on : myshardrs01 Timestamp(1, 0)
{ "nickname" : NumberLong("-4611686018427387902") } -->> { "nickname" : NumberLong(0) } on : myshardrs01 Timestamp(1, 1)
{ "nickname" : NumberLong(0) } -->> { "nickname" : NumberLong("4611686018427387902") } on : myshardrs02 Timestamp(1, 2)
{ "nickname" : NumberLong("4611686018427387902") } -->> { "nickname" : { "$maxKey" : 1 } } on : myshardrs02 Timestamp(1, 3)
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
myshardrs01 512
myshardrs02 512
too many chunks to print, use verbose if you want to force print
Check whether the balancer is currently running (it starts automatically only when rebalancing is needed; you can leave it alone):
mongos> sh.isBalancerRunning()
false
Check the current Balancer state:
mongos> sh.getBalancerState()
true
6.3 Insert Tests After Sharding
Test 1 (hashed strategy): log in to mongos and insert 1000 documents into comment in a loop:
mongos> use articledb
switched to db articledb
mongos> for(var i=1;i<=1000;i++){db.comment.insert({_id:i+"",nickname:"BoBo"+i})}
WriteResult({ "nInserted" : 1 })
Tip: this is JavaScript syntax, because the mongo shell is a JavaScript shell.
Note: data inserted through the router must include the shard key, or the insert is rejected.
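The router-side check can be sketched as below. This is hypothetical code, not the actual mongos implementation; it only mirrors the behaviour observed in this MongoDB 4.2 setup, where an insert arriving without the shard key cannot be assigned a target chunk.

```javascript
// Hypothetical mongos-style routing check for inserts.
function routeInsert(shardKeyField, doc) {
  if (!(shardKeyField in doc)) {
    throw new Error("document is missing the shard key field: " + shardKeyField);
  }
  return "routed by " + shardKeyField; // real routing would now pick a chunk
}

console.log(routeInsert("nickname", { _id: "1", nickname: "BoBo1" })); // routed by nickname
try {
  routeInsert("nickname", { _id: "2" }); // no shard key
} catch (e) {
  console.log(e.message);
}
```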
Log in to the primary of each shard and count the documents.
First shard replica set:
mongo --host 127.0.0.1 --port 27018
myshardrs01:PRIMARY> use articledb
switched to db articledb
myshardrs01:PRIMARY> db.comment.count()
507
Second shard replica set:
mongo --host 127.0.0.1 --port 27318
myshardrs02:PRIMARY> use articledb
switched to db articledb
myshardrs02:PRIMARY> db.comment.count()
493
As you can see, the 1000 documents are distributed roughly evenly across the 2 shards, according to the hash of the shard key.
This allocation scheme makes horizontal scaling easy: once more storage is needed, simply add another shard, which also improves performance.
Use db.comment.stats() to see the full state of a single collection; running it on mongos shows how the collection's data is sharded.
Use sh.status() to see the sharding information for all collections in the current database.
Test 2 (range strategy): log in to mongos and insert 20000 documents into author in a loop:
mongos> use articledb
switched to db articledb
mongos> for(var i=1;i<=20000;i++){db.author.save({"name":"BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo"+i,"age":NumberInt(i%120)})}
WriteResult({ "nInserted" : 1 })
mongos> db.author.count()
20000
After the insert succeeds, again check the data on each shard replica set.
Sharding result:
mongos> sh.status()
#only part of the output is shown below
articledb.author
shard key: { "age" : 1 }
unique: false
balancing: true
chunks:
myshardrs02 1
{ "age" : { "$minKey" : 1 } } -->> { "age" : { "$maxKey" : 1 } } on : myshardrs02 Timestamp(1, 0)
The data has not actually been split across the shards.
Tip: if sh.status() shows no split, it may be for one of these reasons:
1) The system is busy and is in the middle of splitting.
2) The chunks are not full yet. The default chunk size (chunksize) is 64MB; only once a chunk fills up is data pushed into chunks on other shards. For testing, you can shrink it, here to 1MB, as follows:
Let's try changing the chunk size and look at the resulting split:
mongos> db.author.remove({})
WriteResult({ "nRemoved" : 20000 })
mongos> use config
switched to db config
mongos> db.settings.save( { _id:"chunksize", value: 1 } )
WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : "chunksize" })
mongos> use articledb
switched to db articledb
mongos> for(var i=1;i<=20000;i++){db.author.save({"name":"BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo"+i,"age":NumberInt(i%120)})}
WriteResult({ "nInserted" : 1 })
mongos> sh.status()
articledb.author
shard key: { "age" : 1 }
unique: false
balancing: true
chunks:
myshardrs01 2
myshardrs02 3
{ "age" : { "$minKey" : 1 } } -->> { "age" : 0 } on : myshardrs01 Timestamp(2, 0)
{ "age" : 0 } -->> { "age" : 49 } on : myshardrs01 Timestamp(3, 0)
{ "age" : 49 } -->> { "age" : 100 } on : myshardrs02 Timestamp(2, 3)
{ "age" : 100 } -->> { "age" : 119 } on : myshardrs02 Timestamp(2, 4)
{ "age" : 119 } -->> { "age" : { "$maxKey" : 1 } } on : myshardrs02 Timestamp(3, 1)
Change it back after testing:
db.settings.save( { _id:"chunksize", value: 64 } )
Note: shrink the chunk size first, then shard. For testing, you can drop the collection, re-create its sharding policy, and insert the data again.
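Why the smaller chunksize made the difference can be sketched with a toy autosplit model. The per-document size, byte accounting, and median-split rule are simplifications, not MongoDB's real implementation:

```javascript
// A chunk splits at its median key once its data exceeds the configured maximum.
function insertAll(keys, docSize, maxChunkBytes) {
  // start with one chunk covering the whole key space
  let chunks = [{ min: -Infinity, max: Infinity, docs: [] }];
  for (const key of keys) {
    const c = chunks.find(ch => key >= ch.min && key < ch.max);
    c.docs.push(key);
    if (c.docs.length * docSize > maxChunkBytes) {
      const sorted = [...c.docs].sort((a, b) => a - b);
      const mid = sorted[Math.floor(sorted.length / 2)];
      if (mid > c.min && mid < c.max) {
        // split into [min, mid) and [mid, max)
        const left  = { min: c.min, max: mid,   docs: c.docs.filter(k => k < mid) };
        const right = { min: mid,   max: c.max, docs: c.docs.filter(k => k >= mid) };
        chunks = chunks.filter(ch => ch !== c).concat([left, right]);
      }
    }
  }
  return chunks.length;
}

// 20000 authors with age = i % 120, ~100 bytes each: about 2MB of data in total.
const ages = Array.from({ length: 20000 }, (_, i) => (i + 1) % 120);
console.log(insertAll(ages, 100, 64 * 1024 * 1024)); // 1  — 2MB never fills a 64MB chunk
console.log(insertAll(ages, 100, 1024 * 1024) > 1);  // true — with a 1MB limit the chunk splits
```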
6.4 Adding Another Router Node
#-----------mongos02
mkdir -p /etc/mongod/27117/
mkdir -p /var/log/mongodb/27117/
chown -R mongodb:mongodb /var/log/mongodb
Create or edit the configuration file:
vim /etc/mongod/27117/mongos.conf
mongos.conf:
systemLog:
  # Send all MongoDB log output to a file
  destination: file
  # Path of the log file to which mongod or mongos sends all diagnostic logging information
  path: "/var/log/mongodb/27117/mongod.log"
  # When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
processManagement:
  # Run the mongos or mongod process as a background daemon
  fork: true
  # Location of the file where mongos or mongod writes its process ID (PID)
  pidFilePath: "/var/run/mongodb/mongod_27117.pid"
  timeZoneInfo: /usr/share/zoneinfo
net:
  # Binding all IPs has a side effect: during replica set initialization the node
  # names are set to the local hostname rather than the IP
  #bindIpAll: true
  # IP addresses the service instance binds to
  bindIp: localhost,127.0.0.1
  # Port the service binds to
  port: 27117
sharding:
  # Specify the config server replica set
  configDB: myconfigrs/127.0.0.1:27019,127.0.0.1:27119,127.0.0.1:27219
Start mongos2:
mongos -f /etc/mongod/27117/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 32205
child process started successfully, parent exiting
Log in to 27117 with the mongo client: the second router needs no further configuration, because the sharding configuration is already stored on the config servers.