Setting Up an ELK Cluster
ELK is an acronym for three open-source projects:
Elasticsearch, Logstash, and Kibana.
- Elasticsearch is a search and analytics engine.
- Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a stash such as Elasticsearch.
- Kibana lets users visualize the data in Elasticsearch with charts and graphs.
Cluster Setup
# Docker version:
Docker version 19.03.13, build 4484c46d9d
Prepare the images
# Pull the images
docker pull elasticsearch:7.7.0
docker pull kibana:7.7.0
docker pull logstash:7.7.0
Create the container mount directories and configuration files
Deployment root directory:
/home/elasticsearch/v7.7/
# Switch to the root directory
cd /home/elasticsearch/v7.7/
# Configuration files
mkdir -p node-1/config
mkdir -p node-2/config
mkdir -p node-3/config
# Data storage (relative paths, created under the root directory)
mkdir -p node-1/data
mkdir -p node-2/data
mkdir -p node-3/data
# Log storage
mkdir -p node-1/logs
mkdir -p node-2/logs
mkdir -p node-3/logs
# Plugin management
mkdir -p node-1/plugins
mkdir -p node-2/plugins
mkdir -p node-3/plugins
# Open permissions
chmod 777 /home/elasticsearch/v7.7/node-1/data
chmod 777 /home/elasticsearch/v7.7/node-2/data
chmod 777 /home/elasticsearch/v7.7/node-3/data
chmod 777 /home/elasticsearch/v7.7/node-1/logs
chmod 777 /home/elasticsearch/v7.7/node-2/logs
chmod 777 /home/elasticsearch/v7.7/node-3/logs
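The per-node directory and permission commands above can be scripted in one loop; a minimal sketch, assuming the layout from this guide (BASE defaults to the current directory, so set it to /home/elasticsearch/v7.7 for a real deployment):

```shell
# Create config/data/logs/plugins for each node and open data/logs permissions.
# BASE is the deployment root, e.g. /home/elasticsearch/v7.7; defaults to $PWD.
BASE="${BASE:-$PWD}"
for n in node-1 node-2 node-3; do
  mkdir -p "$BASE/$n/config" "$BASE/$n/data" "$BASE/$n/logs" "$BASE/$n/plugins"
  chmod 777 "$BASE/$n/data" "$BASE/$n/logs"
done
```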
Writing the Elasticsearch configuration files
We deploy three containers on one physical machine to form the Elasticsearch cluster. A private network is created with fixed IP addresses, so make sure each node's IP address and port are configured correctly.
## Node 1 configuration:
# File path: /home/elasticsearch/v7.7/node-1/config/elasticsearch.yml
cluster.name: elk-v7
node.name: node-1
node.master: true
node.data: true
node.max_local_storage_nodes: 3
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/log
bootstrap.memory_lock: true
network.host: 10.10.10.11
http.port: 9200
transport.tcp.port: 9300
discovery.seed_hosts: ["10.10.10.12:9300","10.10.10.13:9300"]
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
## Node 2 configuration:
# File path: /home/elasticsearch/v7.7/node-2/config/elasticsearch.yml
cluster.name: elk-v7
node.name: node-2
node.master: true
node.data: true
node.max_local_storage_nodes: 3
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/log
bootstrap.memory_lock: true
network.host: 10.10.10.12
http.port: 9200
transport.tcp.port: 9300
discovery.seed_hosts: ["10.10.10.11:9300","10.10.10.13:9300"]
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
## Node 3 configuration:
# File path: /home/elasticsearch/v7.7/node-3/config/elasticsearch.yml
cluster.name: elk-v7
node.name: node-3
node.master: true
node.data: true
node.max_local_storage_nodes: 3
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/log
bootstrap.memory_lock: true
network.host: 10.10.10.13
http.port: 9200
transport.tcp.port: 9300
discovery.seed_hosts: ["10.10.10.11:9300","10.10.10.12:9300"]
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
Configuration parameter notes:
cluster.name: the cluster name
node.name: the node name
node.master: true # whether the node is eligible to be elected master
node.data: true # whether the node stores data
node.max_local_storage_nodes: 3 # max number of nodes allowed to share this machine's data path
bootstrap.memory_lock: true # lock the process memory at startup to prevent swapping (default: false)
# Note: do not set these two paths to host paths; they are paths INSIDE the container!
path.data: /usr/share/elasticsearch/data # where data is stored
path.logs: /usr/share/elasticsearch/log # where logs are stored
# Used together with network.publish_host; see the tip below:
network.host: 10.10.10.11 # bind address
# The address other nodes use to reach this node. If unset it is auto-detected; it must be a real IP.
# Set it to the physical machine's address: with Docker, the node's published IP will then be the
# configured IP rather than the Docker gateway IP.
# network.publish_host: 10.10.10.11
http.port: 9200 # HTTP (REST) port
transport.tcp.port: 9300 # port for internal node-to-node communication
# Discovery seed hosts: the addresses of the other master-eligible nodes
discovery.seed_hosts: ["10.10.10.12:9300","10.10.10.13:9300"]
# New in ES 7.x: the candidate master nodes used to bootstrap the cluster on first startup
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
Tip: you can also use the physical machine's IP address for the cluster.
Modify the following settings in each node's configuration file:
network.host: 0.0.0.0
network.publish_host: the physical machine's IP (e.g. 192.168.10.100)
# Node-to-node communication ports: each node needs a different port because they share one IP
http.port: port # must differ per node, e.g. 9200, 9201, 9202
transport.tcp.port: port # must differ per node, e.g. 9300, 9301, 9302
# Each node's ports must match those configured above. These are examples; use your actual values.
discovery.seed_hosts:
["192.168.10.100:9300","192.168.10.100:9301","192.168.10.100:9302"]
# Apply the modified settings to all three nodes' configuration files
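Under this single-IP scheme, node-1's elasticsearch.yml would, for example, end up looking like this (192.168.10.100 is the illustrative physical-machine IP from above; only the network and port lines differ from the private-network version):

```yaml
cluster.name: elk-v7
node.name: node-1
node.master: true
node.data: true
node.max_local_storage_nodes: 3
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/log
bootstrap.memory_lock: true
network.host: 0.0.0.0
network.publish_host: 192.168.10.100
http.port: 9200                # node-2: 9201, node-3: 9202
transport.tcp.port: 9300       # node-2: 9301, node-3: 9302
discovery.seed_hosts: ["192.168.10.100:9300","192.168.10.100:9301","192.168.10.100:9302"]
cluster.initial_master_nodes: ["node-1","node-2","node-3"]
```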
Open the ports (recommended)
You can disable the firewall instead, but note that if firewalld is stopped, creating the private Docker network may fail.
firewall-cmd --zone=public --add-port=9200/tcp --permanent
firewall-cmd --zone=public --add-port=9201/tcp --permanent
firewall-cmd --zone=public --add-port=9202/tcp --permanent
firewall-cmd --zone=public --add-port=9300/tcp --permanent
firewall-cmd --zone=public --add-port=9301/tcp --permanent
firewall-cmd --zone=public --add-port=9302/tcp --permanent
# Kibana port
firewall-cmd --zone=public --add-port=5601/tcp --permanent
# Reload the firewall rules so the ports take effect
firewall-cmd --reload
# List the currently open ports
firewall-cmd --zone=public --list-ports
Create the private network
# Build the private network:
docker network create \
--driver=bridge \
--subnet=10.10.0.0/16 \
--ip-range=10.10.10.0/24 \
--gateway=10.10.10.254 \
es-net
Start the containers
Run these commands from the root directory, otherwise you will get path errors.
cd /home/elasticsearch/v7.7/
- Start node 1
docker run -d --name es-node-1 \
--network=es-net \
--ip=10.10.10.11 \
-e ES_JAVA_OPTS="-Xms256m -Xmx256m" \
-p 9200:9200 \
-v $PWD/node-1/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v $PWD/node-1/plugins:/usr/share/elasticsearch/plugins \
-v $PWD/node-1/data:/usr/share/elasticsearch/data \
-v $PWD/node-1/logs:/usr/share/elasticsearch/logs \
--privileged=true elasticsearch:7.7.0
- Start node 2
docker run -d --name es-node-2 \
--network=es-net \
--ip=10.10.10.12 \
-e ES_JAVA_OPTS="-Xms256m -Xmx256m" \
-p 9201:9200 \
-v $PWD/node-2/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v $PWD/node-2/plugins:/usr/share/elasticsearch/plugins \
-v $PWD/node-2/data:/usr/share/elasticsearch/data \
-v $PWD/node-2/logs:/usr/share/elasticsearch/logs \
--privileged=true elasticsearch:7.7.0
- Start node 3
docker run -d --name es-node-3 \
--network=es-net \
--ip=10.10.10.13 \
-e ES_JAVA_OPTS="-Xms256m -Xmx256m" \
-p 9202:9200 \
-v $PWD/node-3/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v $PWD/node-3/plugins:/usr/share/elasticsearch/plugins \
-v $PWD/node-3/data:/usr/share/elasticsearch/data \
-v $PWD/node-3/logs:/usr/share/elasticsearch/logs \
--privileged=true elasticsearch:7.7.0
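Once all three containers are up, the cluster can be verified over the HTTP port with curl http://10.10.10.11:9200/_cluster/health?pretty; a healthy three-node cluster reports "status" : "green". A scripted check might look like this (the response variable below holds a trimmed sample, not live output):

```shell
# Sample of what curl -s http://10.10.10.11:9200/_cluster/health returns.
response='{"cluster_name":"elk-v7","status":"green","number_of_nodes":3,"number_of_data_nodes":3}'

# For a real check, replace the line above with:
# response=$(curl -s http://10.10.10.11:9200/_cluster/health)
if echo "$response" | grep -q '"status":"green"'; then
  echo "cluster OK"
else
  echo "cluster NOT healthy"
fi
```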
Update 2020-12-14: deploying on a newer machine produced two bootstrap check errors:
...
Startup failed with:
ERROR: [2] bootstrap checks failed
[1]: memory locking requested for elasticsearch process but memory is not locked
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/log/elk-v7.log
{"type": "server", "timestamp": "2020-12-14T09:05:19,181Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elk-v7", "node.name": "node-1", "message": "stopping ..." }
{"type": "server", "timestamp": "2020-12-14T09:05:19,199Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elk-v7", "node.name": "node-1", "message": "stopped" }
{"type": "server", "timestamp": "2020-12-14T09:05:19,199Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elk-v7", "node.name": "node-1", "message": "closing ..." }
{"type": "server", "timestamp": "2020-12-14T09:05:19,216Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elk-v7", "node.name": "node-1", "message": "closed" }
...
Fix: memory locking requested for elasticsearch process but memory is not locked
Suggestions found online boil down to two methods:
- Method 1
# Applies to Linux distributions not managed by systemd; on CentOS 6 and
# earlier, this method alone is enough
# Temporary fix, useful for testing
ulimit -l unlimited
# Permanent fix: edit /etc/security/limits.conf as root
vim /etc/security/limits.conf
# Add the following lines
* soft memlock unlimited
* hard memlock unlimited
# Notes:
# The * means all users; replace it with a specific username if needed
# Caveat: configuration files under /etc/security/limits.d override the file
# just edited, so remove any conflicting ones
# Modify /etc/sysctl.conf
echo "vm.swappiness=0" >> /etc/sysctl.conf
# Log in again or reboot the server for the change to take effect
# However, this did not solve my problem, so on to the next method.
- Method 2
Since we deploy with Docker, the method above may not apply. Add the following configuration instead.
# System-wide approach:
sudo vim /etc/systemd/system.conf
# Add:
DefaultLimitNOFILE=65536
DefaultLimitNPROC=32000
DefaultLimitMEMLOCK=infinity
# Save and reboot.
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/log/elk-v7.log
{"type": "server", "timestamp": "2020-12-14T09:15:31,072Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elk-v7", "node.name": "node-1", "message": "stopping ..." }
{"type": "server", "timestamp": "2020-12-14T09:15:31,093Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elk-v7", "node.name": "node-1", "message": "stopped" }
{"type": "server", "timestamp": "2020-12-14T09:15:31,094Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elk-v7", "node.name": "node-1", "message": "closing ..." }
{"type": "server", "timestamp": "2020-12-14T09:15:31,116Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elk-v7", "node.name": "node-1", "message": "closed" }
{"type": "server", "timestamp": "2020-12-14T09:15:31,123Z", "level": "INFO", "component": "o.e.x.m.p.NativeController", "cluster.name": "elk-v7", "node.name": "node-1", "message": "Native controller process has stopped - no new native processes can be started" }
Fix: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Append the following line to /etc/sysctl.conf:
vm.max_map_count=262144
# Apply it with sysctl -p (or reboot), then restart the containers
Kibana Deployment
Create the Kibana mount directory
mkdir -p /home/kibana/config
Create the configuration file
vim /home/kibana/config/kibana.yml
配置信息
#Kibana的映射端口
server.port: 5601
#網(wǎng)關(guān)地址
server.host: "0.0.0.0"
#Kibana實例對外展示的名稱
server.name: "kibana"
#Elasticsearch的集群地址,也就是說所有的集群IP
elasticsearch.hosts: ["http://10.10.10.11:9200","http://10.10.10.12:9201","http://10.10.10.13:9202"]
#設(shè)置頁面語言污筷,中文使用zh-CN工闺,英文使用en
i18n.locale: "zh-CN"
xpack.monitoring.ui.container.elasticsearch.enabled: true
Start the Kibana container
docker run -d --name kibana \
--network=es-net --ip=10.10.10.14 -p 5601:5601 \
-v /home/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml \
--privileged=true kibana:7.7.0
Open Kibana in the browser
http://IP:5601
Deploy Logstash
Typically you run one Logstash instance per server, so scale out as needed.
Create the mount directory
mkdir -p /home/logstash/
Configuration files
# Start a temporary container
docker run -d --name logstash logstash:7.7.0
# Copy Logstash's default configuration out of it
docker cp logstash:/usr/share/logstash/config /home/logstash/
# Remove the temporary container so the name can be reused later
docker rm -f logstash
# Files under config:
ls /home/logstash/config
jvm.options log4j2.properties logstash-sample.conf logstash.yml pipelines.yml startup.options
Modify the configuration
# Edit logstash.yml
http.host: "0.0.0.0"
# Multiple Elasticsearch addresses can be listed here
xpack.monitoring.elasticsearch.hosts: [ "http://10.10.10.11:9200" ]
Create the pipeline directory and the pipeline configuration file /home/logstash/pipeline/logstash.conf
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
beats {
port => 5044
}
}
output {
elasticsearch {
hosts => ["http://10.10.10.11:9200"]
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
#user => "elastic"
#password => "changeme"
}
}
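The index option above uses Logstash's sprintf format: %{[@metadata][beat]} and %{[@metadata][version]} come from the event metadata that Beats attaches, and %{+YYYY.MM.dd} is the event's date. For illustration, the resulting index name for a Filebeat 7.7.0 event can be emulated like this (filebeat and 7.7.0 are assumed sample values):

```shell
# Emulate the index pattern "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
beat="filebeat"      # would come from [@metadata][beat]
version="7.7.0"      # would come from [@metadata][version]
index="${beat}-${version}-$(date +%Y.%m.%d)"
echo "$index"        # e.g. filebeat-7.7.0-2020.12.14
```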
Start the container
docker run -d --name logstash \
--network=es-net --ip=10.10.10.15 \
-v /home/logstash/config/:/usr/share/logstash/config/ \
-v /home/logstash/pipeline:/usr/share/logstash/pipeline \
-p 5044:5044 \
-p 9600:9600 \
--privileged=true logstash:7.7.0
On Kibana's monitoring page you can see that Logstash has joined the cluster.
Installing the IK Analyzer
Download the IK analyzer (the version must match Elasticsearch, here 7.7.0)
Place the downloaded file in the host mapping directory /home/elasticsearch/v7.7/node-1/plugins
# Switch to node-1's plugins directory and unpack the archive into an ik folder
unzip elasticsearch-analysis-ik-7.7.0.zip -d ik
# Restart the container
docker restart es-node-1
Repeat on the other nodes, or simply copy the ik folder to the other nodes' plugins directories and restart their containers.
Verify it works
Elasticsearch's default analyzer:
GET _analyze
{
"text": "共和國國歌"
}
# Result
{
"tokens" : [
{
"token" : "共",
"start_offset" : 0,
"end_offset" : 1,
"type" : "<IDEOGRAPHIC>",
"position" : 0
},
{
"token" : "和",
"start_offset" : 1,
"end_offset" : 2,
"type" : "<IDEOGRAPHIC>",
"position" : 1
},
{
"token" : "國",
"start_offset" : 2,
"end_offset" : 3,
"type" : "<IDEOGRAPHIC>",
"position" : 2
},
{
"token" : "國",
"start_offset" : 3,
"end_offset" : 4,
"type" : "<IDEOGRAPHIC>",
"position" : 3
},
{
"token" : "歌",
"start_offset" : 4,
"end_offset" : 5,
"type" : "<IDEOGRAPHIC>",
"position" : 4
}
]
}
IK analyzer with ik_smart segmentation:
GET _analyze
{
"analyzer":"ik_smart",
"text":"中華人民共和國中央人民政府萬歲"
}
# Result
{
"tokens" : [
{
"token" : "中華人民共和國",
"start_offset" : 0,
"end_offset" : 7,
"type" : "CN_WORD",
"position" : 0
},
{
"token" : "中央人民政府",
"start_offset" : 7,
"end_offset" : 13,
"type" : "CN_WORD",
"position" : 1
},
{
"token" : "萬歲",
"start_offset" : 13,
"end_offset" : 15,
"type" : "CN_WORD",
"position" : 2
}
]
}
IK analyzer with ik_max_word segmentation:
GET _analyze
{
"analyzer":"ik_max_word",
"text":"中華人民共和國中央人民政府萬歲"
}
# Result
{
"tokens" : [
{
"token" : "中華人民共和國",
"start_offset" : 0,
"end_offset" : 7,
"type" : "CN_WORD",
"position" : 0
},
{
"token" : "中華人民",
"start_offset" : 0,
"end_offset" : 4,
"type" : "CN_WORD",
"position" : 1
},
{
"token" : "中華",
"start_offset" : 0,
"end_offset" : 2,
"type" : "CN_WORD",
"position" : 2
},
{
"token" : "華人",
"start_offset" : 1,
"end_offset" : 3,
"type" : "CN_WORD",
"position" : 3
},
{
"token" : "人民共和國",
"start_offset" : 2,
"end_offset" : 7,
"type" : "CN_WORD",
"position" : 4
},
{
"token" : "人民",
"start_offset" : 2,
"end_offset" : 4,
"type" : "CN_WORD",
"position" : 5
},
{
"token" : "共和國",
"start_offset" : 4,
"end_offset" : 7,
"type" : "CN_WORD",
"position" : 6
},
{
"token" : "共和",
"start_offset" : 4,
"end_offset" : 6,
"type" : "CN_WORD",
"position" : 7
},
{
"token" : "國中",
"start_offset" : 6,
"end_offset" : 8,
"type" : "CN_WORD",
"position" : 8
},
{
"token" : "中央人民政府",
"start_offset" : 7,
"end_offset" : 13,
"type" : "CN_WORD",
"position" : 9
},
{
"token" : "中央",
"start_offset" : 7,
"end_offset" : 9,
"type" : "CN_WORD",
"position" : 10
},
{
"token" : "人民政府",
"start_offset" : 9,
"end_offset" : 13,
"type" : "CN_WORD",
"position" : 11
},
{
"token" : "人民",
"start_offset" : 9,
"end_offset" : 11,
"type" : "CN_WORD",
"position" : 12
},
{
"token" : "民政",
"start_offset" : 10,
"end_offset" : 12,
"type" : "CN_WORD",
"position" : 13
},
{
"token" : "政府",
"start_offset" : 11,
"end_offset" : 13,
"type" : "CN_WORD",
"position" : 14
},
{
"token" : "萬歲",
"start_offset" : 13,
"end_offset" : 15,
"type" : "CN_WORD",
"position" : 15
},
{
"token" : "萬",
"start_offset" : 13,
"end_offset" : 14,
"type" : "TYPE_CNUM",
"position" : 16
},
{
"token" : "歲",
"start_offset" : 14,
"end_offset" : 15,
"type" : "COUNT",
"position" : 17
}
]
}