1. Background & Problem Description
This is a follow-up to the previous article: http://www.reibang.com/p/329b9f92ac4c
In that article, a system crash had left a large number of indices with shards in the Unassigned state. Using the reroute API we were able to allocate the missing primary shards. But when we then tried the same operation for the replica shards, reroute failed!
Take the index alarm-2017.08.12 as an example: the replica of shard 0 is unassigned.
Run the following request:
POST /_cluster/reroute
{
  "commands": [
    {
      "allocate_replica": {
        "index": "alarm-2017.08.12",
        "shard": 0,
        "node": "node4-1"
      }
    }
  ]
}
The request failed!
{
  "error": {
    "root_cause": [
      {
        "type": "remote_transport_exception",
        "reason": "[node3-2][192.168.21.88:9301][cluster:admin/reroute]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "[allocate_replica] allocation of [alarm-2017.08.12][0] on node {node4-1}{u47KtJGgQw60T_xm9hmepw}{UbaCHI4KRveQeTAnJvGFEQ}{192.168.21.89}{192.168.21.89:9301}{rack=r4, ml.enabled=true} is not allowed, reason: [NO(shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [/_cluster/reroute?retry_failed=true] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2017-08-16T00:54:47.088Z], failed_attempts[5], delayed=false, details[failed recovery, failure RecoveryFailedException[[alarm-2017.08.12][0]: Recovery failed from {node8}{Bpd3y--EQsag1u1NTmtZfA}{4T_McpmjSXqLowRoXztssQ}{192.168.21.89}{192.168.21.89:9301}{rack=r4} into {node5}{i4oG4VcaSdKVeNEvStXwAw}{w4nAITEOR9u7liR55qDsVA}{192.168.21.88}{192.168.21.88:9300}{rack=r3}]; nested: RemoteTransportException[[node8][192.168.21.89:9301][internal:index/shard/recovery/start_recovery]]; nested: RecoveryEngineException[Phase[1] phase1 failed]; nested: RecoverFilesRecoveryException[Failed to transfer [0] files with total size of [0b]]; nested: FileSystemException[/opt/elasticsearch/elasticsearch-node8/data/nodes/0/indices/FgLdgYTmTfazlP8i5K0Knw/0/index: Too many open files in system]; ], allocation_status[no_attempt]]])][YES(primary shard for this replica is already active)][YES(explicitly ignoring any disabling of allocation due to manual allocation commands via the reroute API)][YES(target node version [5.5.1] is the same or newer than source node version [5.5.1])][YES(the shard is not being snapshotted)][YES(node passes include/exclude/require filters)][YES(the shard does not exist on the same host)][YES(enough disk for shard on node, free: [6.4tb], shard size: [0b], free after allocating shard: [6.4tb])][YES(below shard recovery limit of outgoing: [0 < 2] incoming: [0 < 2])][YES(total shard limits are disabled: [index: -1, cluster: -1] <= 0)][YES(allocation awareness is not enabled, set cluster setting [cluster.routing.allocation.awareness.attributes] to enable it)]"
  },
  "status": 400
}
Note the key error buried at the innermost end of the nested exception chain:
FileSystemException[/opt/elasticsearch/elasticsearch-node8/data/nodes/0/indices/FgLdgYTmTfazlP8i5K0Knw/0/index: Too many open files in system]
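The decisive cause sits at the bottom of that nested exception chain and is easy to miss in such a long reason string. A small helper to pull it out (a sketch; it relies only on the `SomethingException[...]` fragments visible in the output above, and skips fragments whose detail text itself contains brackets):

```python
import re

def innermost_cause(reason: str) -> str:
    """Return the last bracket-free SomethingException[...] fragment of an
    Elasticsearch allocation 'reason' string, i.e. the innermost cause."""
    causes = re.findall(r"(\w+Exception)\[([^\[\]]*)\]", reason)
    if not causes:
        return ""
    name, detail = causes[-1]
    return f"{name}: {detail}"

# Trimmed from the actual error above.
reason = ("nested: RecoveryEngineException[Phase[1] phase1 failed]; "
          "nested: FileSystemException[/opt/elasticsearch/elasticsearch-node8/data"
          "/nodes/0/indices/FgLdgYTmTfazlP8i5K0Knw/0/index: "
          "Too many open files in system]")
print(innermost_cause(reason))  # prints the FileSystemException fragment
```

Note the message also points at `POST /_cluster/reroute?retry_failed=true` as a retry mechanism, once the underlying cause is fixed.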
2. Problem Analysis
On the surface this looks like the file-descriptor limit being exceeded again. But during the earlier recovery work I had already raised all the relevant limits and upgraded the cluster, adding two more servers, so the topology now looks like this:
Node | Server | HTTP port | Rack | Xms & Xmx |
---|---|---|---|---|
node1-1 | 192.168.21.23 | 9201 | rack1 | 20G |
node1-2 | 192.168.21.23 | 9202 | rack1 | 20G |
node1-3 | 192.168.21.23 | 9203 | rack1 | 20G |
node2-1 | 192.168.21.24 | 9201 | rack2 | 20G |
node2-2 | 192.168.21.24 | 9202 | rack2 | 20G |
node2-3 | 192.168.21.24 | 9203 | rack2 | 20G |
node3-1 | 192.168.21.88 | 9201 | rack3 | 20G |
node3-2 | 192.168.21.88 | 9202 | rack3 | 20G |
node3-3 | 192.168.21.88 | 9203 | rack3 | 20G |
node4-1 | 192.168.21.89 | 9201 | rack4 | 20G |
node4-2 | 192.168.21.89 | 9202 | rack4 | 20G |
node4-3 | 192.168.21.89 | 9203 | rack4 | 20G |
Yet the error log still mentions node8, a node name from the old topology. The reason string is in fact the recorded cause of the last failed allocation attempt (failed_attempts[5], from 2017-08-16), not a live error. This strongly suggests the earlier crash corrupted the replica's on-disk data, so the shard can no longer be rerouted.
Check the limits configured on each node:
GET _nodes/stats/process?filter_path=**.max_file_descriptors
Result:
{
  "nodes": {
    "57A1rYqMRH-igOdlM9VyRg": {
      "process": {
        "max_file_descriptors": 655350
      }
    },
    "if6AS6S-REKMOOVAp__xkg": {
      "process": {
        "max_file_descriptors": 655350
      }
    },
    "Q4iPvXjvQkK6OImAHisHcw": {
      "process": {
        "max_file_descriptors": 655350
      }
    },
    "VTqaCdj6TEGjDN5dlsygVw": {
      "process": {
        "max_file_descriptors": 655350
      }
    },
    "u47KtJGgQw60T_xm9hmepw": {
      "process": {
        "max_file_descriptors": 655350
      }
    },
    "Bpd3y--EQsag1u1NTmtZfA": {
      "process": {
        "max_file_descriptors": 655350
      }
    },
    "i4oG4VcaSdKVeNEvStXwAw": {
      "process": {
        "max_file_descriptors": 655350
      }
    },
    "pYKjqz0hS3aSs8sBuZbfFg": {
      "process": {
        "max_file_descriptors": 655350
      }
    },
    "mSyzxBFFTRmLx4TWaPpJYg": {
      "process": {
        "max_file_descriptors": 655350
      }
    },
    "8_cG1N_cSY-VfQLK-zVuhQ": {
      "process": {
        "max_file_descriptors": 655350
      }
    },
    "JIKzocuZRtec_XkrM1eXDg": {
      "process": {
        "max_file_descriptors": 655350
      }
    },
    "Ol6mvLtURTu5Ie6bX_gSdQ": {
      "process": {
        "max_file_descriptors": 655350
      }
    }
  }
}
Every node's max_file_descriptors is very large, so a genuine "too many open files" condition is unlikely now. That leaves only one possibility: the data of the old replica shard itself is damaged, and reroute cannot bring it back.
The replica cannot be rerouted!
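As a sanity check, the `_nodes/stats` response above can also be scanned programmatically for any node whose limit really is too low. A minimal sketch (the 65536 floor below is an assumed threshold for illustration, not something the cluster enforces):

```python
def low_fd_nodes(stats: dict, floor: int = 65536) -> dict:
    """Map node id -> max_file_descriptors for every node below the given floor."""
    return {node_id: node["process"]["max_file_descriptors"]
            for node_id, node in stats["nodes"].items()
            if node["process"]["max_file_descriptors"] < floor}

# Two entries taken from the response above.
stats = {"nodes": {
    "u47KtJGgQw60T_xm9hmepw": {"process": {"max_file_descriptors": 655350}},
    "Bpd3y--EQsag1u1NTmtZfA": {"process": {"max_file_descriptors": 655350}},
}}
print(low_fd_nodes(stats))  # {} -> every node is far above the floor
```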
After weighing various options, I settled on the following approaches:
Approach 1: snapshot the index to a filesystem repository, then restore it.
I abandoned this after trying it on a single index. An Elasticsearch snapshot repository must be a directory shared by every node, typically via NFS (much like a Windows shared folder), so that all nodes write their backups to the same location. We had no NFS share set up, and building one is fairly involved, so this approach was dropped in favor of anything simpler.
Approach 2: is there a simpler way to rebuild an index? Searching the official Elasticsearch documentation, I found exactly that: the powerful _reindex API.
https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html
The link above documents all the parameters. Reindex can not only rebuild an index locally; it can also use another Elasticsearch cluster as its source, i.e. copy an index from a remote cluster into the target cluster.
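For illustration, a reindex-from-remote request body might look like the sketch below (the remote host is a placeholder; note that the source host must also be whitelisted via the `reindex.remote.whitelist` setting in elasticsearch.yml on the destination cluster):

```python
import json

# Hypothetical source cluster; replace the host and index names with real ones.
remote_reindex = {
    "source": {
        "remote": {"host": "http://other-cluster:9200"},
        "index": "alarm-2017.08.12",
    },
    "dest": {"index": "alarm-2017.08.12"},
}
# POST this body to /_reindex on the *destination* cluster.
print(json.dumps(remote_reindex, indent=2))
```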
Addendum:
Approach 3: there is an even quicker option. Since the number of replicas is a dynamic index setting, we can first set number_of_replicas to 0; once the (broken) replica shards have been deleted, set number_of_replicas back to 1, and healthy replicas are rebuilt automatically from the primaries. While doing this it is also worth force-merging the segments, to save space and reduce the number of open file handles.
3. Solving it with reindex
The chosen target index:
alarm-2017.08.12
POST /_reindex
{
  "source": {
    "index": "alarm-2017.08.12"
  },
  "dest": {
    "index": "alarm-2017.08.12.bak",
    "version_type": "external"
  }
}
Response:
{
  "took": 7143,
  "timed_out": false,
  "total": 1414,
  "updated": 0,
  "created": 1414,
  "deleted": 0,
  "batches": 2,
  "version_conflicts": 0,
  "noops": 0,
  "retries": {
    "bulk": 0,
    "search": 0
  },
  "throttled_millis": 0,
  "requests_per_second": -1,
  "throttled_until_millis": 0,
  "failures": []
}
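A response like this can be checked mechanically: the copy is clean when nothing timed out or failed and every source document was created or updated. A small helper (a sketch built on the response fields shown above):

```python
def reindex_ok(resp: dict) -> bool:
    """True if a _reindex response reports a complete, conflict-free copy."""
    return (not resp["timed_out"]
            and not resp["failures"]
            and resp["version_conflicts"] == 0
            and resp["created"] + resp["updated"] == resp["total"])

# The numbers from the response above.
resp = {"timed_out": False, "total": 1414, "updated": 0, "created": 1414,
        "version_conflicts": 0, "failures": []}
print(reindex_ok(resp))  # True
```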
Checking the index monitoring again shows the rebuild succeeded and the shards are all healthy.
Now we only need to delete the original index.
If the original index name must keep working, add an alias on the new index:
https://www.elastic.co/guide/cn/elasticsearch/guide/current/index-aliases.html
DELETE alarm-2017.08.12
PUT alarm-2017.08.12.bak/_alias/alarm-2017.08.12
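One caveat: between the DELETE and the alias PUT there is a brief window in which the old name resolves to nothing. Since Elasticsearch 5.0 the two steps can be combined into a single atomic `_aliases` call using the `remove_index` action; a sketch of that request body:

```python
import json

# Atomic swap: delete the old index and alias the new one in a single request.
alias_swap = {
    "actions": [
        {"remove_index": {"index": "alarm-2017.08.12"}},
        {"add": {"index": "alarm-2017.08.12.bak", "alias": "alarm-2017.08.12"}},
    ]
}
# POST this body to /_aliases instead of the two separate calls above.
print(json.dumps(alias_swap))
```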
Notice that Elasticsearch has no direct rename operation like renaming a MySQL table, which probably follows from how indices are structured internally. reindex is a very important operation; in some scenarios it can even stand in for snapshot-based backup, by copying the data straight into a fresh cluster.
4. Solving it by changing number_of_replicas
The chosen target index:
applog-prod-2016.12.18
It has an unassigned shard:
GET _cat/shards/applog-prod-2016.12.18*
applog-prod-2016.12.18 4 r STARTED 916460 666.4mb 192.168.21.24 node2-2
applog-prod-2016.12.18 4 p STARTED 916460 666.6mb 192.168.21.23 node1-3
applog-prod-2016.12.18 1 p STARTED 916295 672.8mb 192.168.21.88 node3-3
applog-prod-2016.12.18 1 r STARTED 916295 672.8mb 192.168.21.24 node2-3
applog-prod-2016.12.18 2 r STARTED 916730 670.9mb 192.168.21.89 node4-2
applog-prod-2016.12.18 2 p STARTED 916730 670.9mb 192.168.21.23 node1-3
applog-prod-2016.12.18 3 r STARTED 917570 674.9mb 192.168.21.23 node1-1
applog-prod-2016.12.18 3 p STARTED 917570 674.9mb 192.168.21.24 node2-2
applog-prod-2016.12.18 0 p STARTED 917656 673.5mb 192.168.21.88 node3-2
applog-prod-2016.12.18 0 r UNASSIGNED
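With many indices it helps to scan the `_cat/shards` output for unassigned rows automatically. A quick parser over the plain-text format above (a sketch; UNASSIGNED rows lack the docs/size/ip/node columns, so only the first four fields are inspected):

```python
def unassigned_shards(cat_output: str):
    """Return (index, shard, prirep) for every UNASSIGNED row of _cat/shards output."""
    rows = []
    for line in cat_output.strip().splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[3] == "UNASSIGNED":
            rows.append((fields[0], int(fields[1]), fields[2]))
    return rows

# Two rows from the listing above.
sample = ("applog-prod-2016.12.18 0 p STARTED 917656 673.5mb 192.168.21.88 node3-2\n"
          "applog-prod-2016.12.18 0 r UNASSIGNED\n")
print(unassigned_shards(sample))  # [('applog-prod-2016.12.18', 0, 'r')]
```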
Now change number_of_replicas:
PUT applog-prod-2016.12.18/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}
number_of_replicas is now 0, and only a single copy of each shard remains:
GET applog-prod-2016.12.18/_settings
{
  "applog-prod-2016.12.18": {
    "settings": {
      "index": {
        "refresh_interval": "5s",
        "number_of_shards": "5",
        "provided_name": "applog-prod-2016.12.18",
        "creation_date": "1482019342621",
        "number_of_replicas": "0",
        "uuid": "hmZfjW80Q-SeV_qha_r-EA",
        "version": {
          "created": "5000199"
        }
      }
    }
  }
}
Shards:
GET _cat/shards/applog-prod-2016.12.18*
applog-prod-2016.12.18 4 p STARTED 916460 666.6mb 192.168.21.23 node1-3
applog-prod-2016.12.18 1 p STARTED 916295 672.8mb 192.168.21.88 node3-3
applog-prod-2016.12.18 2 p STARTED 916730 670.9mb 192.168.21.23 node1-3
applog-prod-2016.12.18 3 p STARTED 917570 674.9mb 192.168.21.24 node2-2
applog-prod-2016.12.18 0 p STARTED 917656 673.5mb 192.168.21.88 node3-2
Force-merge the segments:
POST /applog-prod-2016.12.18/_forcemerge?max_num_segments=1
Then set number_of_replicas back:
PUT applog-prod-2016.12.18/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}
Shard status:
GET _cat/shards/applog-prod-2016.12.18*
applog-prod-2016.12.18 4 r INITIALIZING 192.168.21.89 node4-1
applog-prod-2016.12.18 4 p STARTED 916460 666.6mb 192.168.21.23 node1-3
applog-prod-2016.12.18 1 p STARTED 916295 672.8mb 192.168.21.88 node3-3
applog-prod-2016.12.18 1 r INITIALIZING 192.168.21.89 node4-3
applog-prod-2016.12.18 2 r STARTED 916730 670.9mb 192.168.21.89 node4-1
applog-prod-2016.12.18 2 p STARTED 916730 670.9mb 192.168.21.23 node1-3
applog-prod-2016.12.18 3 r STARTED 917570 674.9mb 192.168.21.89 node4-3
applog-prod-2016.12.18 3 p STARTED 917570 674.9mb 192.168.21.24 node2-2
applog-prod-2016.12.18 0 p STARTED 917656 673.5mb 192.168.21.88 node3-2
applog-prod-2016.12.18 0 r INITIALIZING 192.168.21.89 node4-1
The replica shards are being initialized, and each shard is restored to two copies (one primary plus one replica).
5. Summary
When an index has Unassigned shards, the best fix is reroute. If reroute is not possible, rebuild the replica shards by toggling number_of_replicas. If neither of those recovers the index, fall back to reindex.