ELK Architecture Design
ELK Deployment Guide (Kafka-buffered version)
1. ELK Environment Overview
I. What is ELK
ELK is a complete log collection and visualization solution from Elastic. The name is an acronym of its three products: Elasticsearch, Logstash, and Kibana.
Elasticsearch (ES) is a real-time distributed search and analytics engine that supports full-text search, structured search, and analytics. It is built on top of the Apache Lucene full-text search library and written in Java.
Logstash is a data collection engine with real-time pipeline capabilities. It ingests data (for example, by reading text files), parses it, and ships it to ES.
Kibana provides an analytics and visualization web platform for Elasticsearch. It can search and interact with data stored in Elasticsearch indices and render tables and charts across arbitrary dimensions.
II. What ELK is for
ELK is an open-source solution for log analysis. Log analysis covers not only system errors and exceptions, but also business events and any other text-based data. Many solutions can be built on top of log analysis, for example:
1. Troubleshooting. Quickly locate problems, or even catch them before they escalate. Log analysis is clearly the foundation of troubleshooting.
2. Monitoring and alerting. Logs, monitoring, and alerting complement each other. Log-based monitoring and alerting gives the operations team an automated workforce and saves substantial manual effort.
3. Data analysis. Useful raw material for data analysts.
III. ELK stack architecture
Data source → Filebeat
Filebeat collects the raw data.
Filebeat → Kafka
Filebeat forwards events to Kafka for buffering, routing them to different topics according to its configuration.
Kafka → Logstash
Logstash opens multiple pipelines that each read data from Kafka, parse it with filters, and route it to different Elasticsearch indices. Logstash is covered in detail in a later section; a minimal pipelines.yml sketch follows below.
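As a preview, running several pipelines side by side is configured in pipelines.yml; a minimal sketch (the pipeline ids are illustrative, the paths match the pipeline files created in section 5):
# /etc/logstash/pipelines.yml -- illustrative pipeline ids
- pipeline.id: esb
  path.config: "/etc/logstash/conf.d/esb.conf"
- pipeline.id: network
  path.config: "/etc/logstash/conf.d/network.conf"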
Logstash → Elasticsearch
When Logstash writes data into Elasticsearch, indices are created automatically. To make these newly created indices follow a consistent format, I use an ES index template that specifies the mapping for new indices. This is covered in detail in a later section; an illustrative sketch follows.
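For illustration only, creating such a legacy index template in ES 7.x could look like this (the template name, settings, and field mappings here are assumptions, not the actual template used in this deployment):
curl -u elastic:elk123456 -X PUT "http://10.10.0.147:9200/_template/elg-log" \
  -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["elg-log-*"],
  "settings": { "number_of_shards": 2, "number_of_replicas": 1 },
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date" },
      "message":    { "type": "text" }
    }
  }
}'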
Elasticsearch → Kibana
Kibana aggregates and analyzes the collected data to monitor the applications in real time.
2. Elasticsearch Installation and Cluster Deployment
I. Environment
OS: CentOS 7.6 64-bit
Elasticsearch version: 7.7.1
10.10.0.147 es-master1 port: 9200
10.10.0.220 es-master2 port: 9200
10.10.0.221 es-master3 port: 9200
10.10.0.224 es-data1 port: 9200
10.10.0.186 es-data2 port: 9200
10.10.0.188 es-data3 port: 9200
II. Installing Elasticsearch
Add the Elasticsearch yum repository on all six servers and install elasticsearch:
vim /etc/yum.repos.d/CentOS6_7_Base_tset.repo
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://mirrors.tset.com/repository/elk
gpgcheck=0
enabled=1
autorefresh=1
type=rpm-md
yum clean all && yum makecache
yum install elasticsearch -y
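A quick sanity check that the expected package version landed:
rpm -q elasticsearch   # should report elasticsearch-7.7.1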
III. Preparation on all six servers
1. Back up the original config file
cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.original
2. Configure the heap. Elasticsearch defaults to a 1 GB heap after installation. Edit jvm.options and set the heap to half the host's memory, capped at 32 GB.
vim /etc/elasticsearch/jvm.options
-Xms2g
-Xmx2g
That is, the maximum heap to allocate is the smaller of 32 GB and half of the ES host's memory.
3. Disable swapping
In the Memory section of elasticsearch.yml, set:
vim /etc/elasticsearch/elasticsearch.yml
bootstrap.memory_lock: true
The core reason: swapping memory out to disk is fatal for server performance.
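A common complementary step (an addition to this runbook, not from it) is to disable swap at the OS level entirely:
swapoff -a                            # turn swap off immediately
sed -i '/ swap / s/^/#/' /etc/fstab   # comment out swap entries so it stays off after reboot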
4. Raise the file descriptor limit
vim /etc/profile
ulimit -n 65535
Apply the change:
source /etc/profile
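Note that a limit set in /etc/profile only applies to login shells. For the systemd-managed service, the elasticsearch unit shipped with the rpm already sets LimitNOFILE; once the node is running you can verify what the process actually got:
systemctl show elasticsearch | grep LimitNOFILE                      # limit configured in the unit
cat /proc/$(pgrep -f org.elasticsearch)/limits | grep 'open files'   # limit of the live process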
5. Raise the maximum map count (mmap)
Elasticsearch uses a mix of NioFs (non-blocking file system) and MMapFs (memory-mapped file system) for its various files. Make sure the configured maximum map count is high enough that there is plenty of virtual address space available for mmapped files. Set vm.max_map_count permanently in /etc/sysctl.conf:
vim /etc/sysctl.conf
vm.max_map_count=262144
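Apply the setting without rebooting, and verify it:
sysctl -p                 # reload /etc/sysctl.conf
sysctl vm.max_map_count   # should print vm.max_map_count = 262144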
6. Configure autostart
vim /usr/lib/systemd/system/elasticsearch.service
Add the following two lines at the end of the [Service] section:
#tset custom settings about memory
LimitMEMLOCK=infinity
systemctl daemon-reload && systemctl enable elasticsearch && systemctl start elasticsearch
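Once a node is up, you can confirm that the memory lock took effect (after passwords are set in section V below, add -u elastic:<password>):
curl 'http://10.10.0.147:9200/_nodes?filter_path=**.mlockall&pretty'   # every node should report "mlockall": true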
7. Fix ownership of the path.data directories
master nodes:
chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
data nodes:
chown -R elasticsearch:elasticsearch /es_data/elasticsearch
IV. Building the cluster
- Configuration file on 10.10.0.147
cluster.name: my-elk
node.name: es-master1
node.attr.rack: elk
node.master: true
node.data: false
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
transport.tcp.compress: true
network.host: 10.10.0.147
http.port: 9200
discovery.seed_hosts: ["10.10.0.147", "10.10.0.221", "10.10.0.220", "10.10.0.186", "10.10.0.224","10.10.0.188"]
cluster.initial_master_nodes: ["es-master3","es-master2","es-master1"]
#gateway.recover_after_nodes: 2
#gateway.expected_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
indices.memory.index_buffer_size: 40%
thread_pool.write.size: 5
thread_pool.write.queue_size: 1000
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack:
  security:
    authc:
      realms:
        ldap:
          ldap1:
            order: 0
            url: "ldap://ldap.tset.com:389"
            bind_dn: "cn=ldap_user, dc=chinatset, dc=com"
            user_search:
              base_dn: "dc=chinatset,dc=com"
              filter: "(cn={0})"
            group_search:
              base_dn: "dc=chinatset,dc=com"
            files:
              role_mapping: "/etc/elasticsearch/role_mapping.yml"
            unmapped_groups_as_roles: false
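The realm above points at /etc/elasticsearch/role_mapping.yml, which this document does not show; a minimal sketch, assuming a hypothetical LDAP admin group, might be:
# /etc/elasticsearch/role_mapping.yml -- the group DN below is a placeholder
superuser:
  - "cn=elk_admins,dc=chinatset,dc=com"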
Restart elasticsearch:
systemctl restart elasticsearch
Configuration file on 10.10.0.220:
cat /etc/elasticsearch/elasticsearch.yml
cluster.name: my-elk
node.name: es-master2
node.attr.rack: elk
node.master: true
node.data: false
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
transport.tcp.compress: true
network.host: 10.10.0.220
http.port: 9200
discovery.seed_hosts: ["10.10.0.147", "10.10.0.221", "10.10.0.220", "10.10.0.186", "10.10.0.224","10.10.0.188"]
cluster.initial_master_nodes: ["es-master3","es-master2","es-master1"]
#gateway.recover_after_nodes: 2
#gateway.expected_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
indices.memory.index_buffer_size: 40%
thread_pool.write.size: 5
thread_pool.write.queue_size: 1000
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack:
  security:
    authc:
      realms:
        ldap:
          ldap1:
            order: 0
            url: "ldap://ldap.tset.com:389"
            bind_dn: "cn=ldap_user, dc=chinatset, dc=com"
            user_search:
              base_dn: "dc=chinatset,dc=com"
              filter: "(cn={0})"
            group_search:
              base_dn: "dc=chinatset,dc=com"
            files:
              role_mapping: "/etc/elasticsearch/role_mapping.yml"
            unmapped_groups_as_roles: false
Restart elasticsearch:
systemctl restart elasticsearch
Configuration file on 10.10.0.221:
cluster.name: my-elk
node.name: es-master3
node.attr.rack: elk
node.master: true
node.data: false
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
transport.tcp.compress: true
network.host: 10.10.0.221
http.port: 9200
discovery.seed_hosts: ["10.10.0.147", "10.10.0.221", "10.10.0.220", "10.10.0.186", "10.10.0.224","10.10.0.188"]
cluster.initial_master_nodes: ["es-master3","es-master2","es-master1"]
#gateway.recover_after_nodes: 2
#gateway.expected_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
indices.memory.index_buffer_size: 40%
thread_pool.write.size: 5
thread_pool.write.queue_size: 1000
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack:
  security:
    authc:
      realms:
        ldap:
          ldap1:
            order: 0
            url: "ldap://ldap.tset.com:389"
            bind_dn: "cn=ldap_user, dc=chinatset, dc=com"
            user_search:
              base_dn: "dc=chinatset,dc=com"
              filter: "(cn={0})"
            group_search:
              base_dn: "dc=chinatset,dc=com"
            files:
              role_mapping: "/etc/elasticsearch/role_mapping.yml"
            unmapped_groups_as_roles: false
Data node configuration file
Every data node uses the configuration below; only node.name and network.host need to change per node.
Taking 10.10.0.224 as an example:
cluster.name: my-elk
node.name: es-data1
node.attr.rack: elk
node.master: false # not a master node
node.data: true # data node
path.data: /es_data/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
transport.tcp.compress: true
network.host: 10.10.0.224
http.port: 9200
discovery.seed_hosts: ["10.10.0.147", "10.10.0.221", "10.10.0.220", "10.10.0.186", "10.10.0.224","10.10.0.188"]
cluster.initial_master_nodes: ["es-master3","es-master2","es-master1"]
gateway.recover_after_nodes: 2
gateway.expected_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
indices.memory.index_buffer_size: 40%
indices.breaker.total.limit: 80%
indices.fielddata.cache.size: 10%
indices.breaker.fielddata.limit: 60%
indices.breaker.request.limit: 60%
#xpack.security.enabled: true
#xpack.security.transport.ssl.enabled: true
#xpack.security.transport.ssl.verification_mode: certificate
#xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
#xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack:
  security:
    authc:
      realms:
        ldap:
          ldap1:
            order: 0
            url: "ldap://ldap.tset.com:389"
            bind_dn: "cn=ldap_user, dc=chinatset, dc=com"
            user_search:
              base_dn: "dc=chinatset,dc=com"
              filter: "(cn={0})"
            group_search:
              base_dn: "dc=chinatset,dc=com"
            files:
              role_mapping: "/etc/elasticsearch/role_mapping.yml"
            unmapped_groups_as_roles: false
After the configuration changes, restart ES:
systemctl restart elasticsearch
The cluster state can then be viewed in a browser through a plugin (elasticsearch-head is used later in this document):
10.10.0.147:9200
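The same check can be done from the command line (once security passwords are set in section V, add -u elastic:<password>):
curl 'http://10.10.0.147:9200/_cluster/health?pretty'   # expect "status": "green" and 6 nodes
curl 'http://10.10.0.147:9200/_cat/nodes?v'             # lists master and data nodes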
V. Configure TLS encryption and authentication
1. Steps
1.1 Generate a CA certificate
cd /usr/share/elasticsearch # executable path for a yum-based install
bin/elasticsearch-certutil ca (CA certificate: elastic-stack-ca.p12)
1.2 Generate the node certificate
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 (node certificate: elastic-certificates.p12)
chmod 644 elastic-certificates.p12
1.3 Copy the certificate to /etc/elasticsearch/
bin/elasticsearch-certutil cert -out /etc/elasticsearch/elastic-certificates.p12 --pass
2. Edit the config file to enable certificate-based security
Since the configuration files shown above already contain these settings, no further change is needed here.
Edit /etc/elasticsearch/elasticsearch.yml:
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate # certificate verification level
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
3. Copy elastic-certificates.p12 to the other nodes and adjust their config files accordingly
scp elastic-certificates.p12 root@es-master2:/etc/elasticsearch
scp elastic-certificates.p12 root@es-master3:/etc/elasticsearch
scp elastic-certificates.p12 es-data1:/etc/elasticsearch
scp elastic-certificates.p12 es-data2:/etc/elasticsearch
scp elastic-certificates.p12 es-data3:/etc/elasticsearch
4. After distributing the certificate, restart all nodes (rolling restarts as appropriate)
systemctl restart elasticsearch
5. Set passwords
Start all nodes. Once they are all up, go to the elasticsearch directory on the first node and run the following to set the passwords:
cd /usr/share/elasticsearch
bin/elasticsearch-setup-passwords interactive
# output
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y # enter y
# type each password and then repeat it; the account name is shown in brackets
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
6. Notes
1) All passwords were set to elk123456
Verify the cluster's account and password:
Open one of the cluster nodes (e.g. 10.10.0.147:9200) in a browser; being prompted for a username and password proves the setup succeeded.
2) elasticsearch-head access to the secured ES cluster
When elasticsearch-head now connects to the security-enabled ES cluster, nothing can be viewed, and the browser console shows the error: 401 unauthorized.
The fix is to add the following setting to elasticsearch.yml:
http.cors.allow-headers: Authorization,content-type
Apply this on all ES nodes and restart them; the cluster can then be accessed again by appending credentials to the URL, as sketched below.
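A commonly used URL form for head against a secured cluster (the head host and port here are assumptions):
http://<head-host>:9100/?auth_user=elastic&auth_password=elk123456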
3) When Logstash pushes filtered data into ES, it must authenticate as well. Add the ES cluster user and password to the output, in this format:
output {
  if [fields][log_source] == 'messages' {
    elasticsearch {
      hosts => ["http://192.168.x.x:9200", "http://192.168.x.x:9200", "http://192.168.x.x:9200"]
      index => "messages-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "elk123456"
    }
  }
  if [fields][log_source] == "secure" {
    elasticsearch {
      hosts => ["http://192.168.x.x:9200", "http://192.168.x.x:9200", "http://192.168.x.x:9200"]
      index => "secure-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "elk123456"
    }
  }
}
4) Kibana access to the secured Elasticsearch cluster
Add the following to kibana.yml:
elasticsearch.username: "kibana" # note: use the built-in kibana account for the Kibana-to-ES connection, not the elastic superuser
elasticsearch.password: "elk123456"
Then restart Kibana; from then on, logging in requires the account and password above.
3. Kafka Cluster Deployment for ELK
I. Environment
OS: CentOS 7.6 64-bit
JDK version: 1.8.0
ZooKeeper version: v3.6.3
Kafka version: v2.8.1
Primary server: 10.10.0.181 port: 9092
Secondary server: 10.10.0.182 port: 9092
Secondary server: 10.10.0.183 port: 9092
1. Install the Java environment
yum -y install java-1.8.0-openjdk-devel
# verify
java -version
openjdk version "1.8.0_332"
OpenJDK Runtime Environment (build 1.8.0_332-b09)
OpenJDK 64-Bit Server VM (build 25.332-b09, mixed mode)
2. Install ZooKeeper
# create the install directory and unpack the zookeeper tarball
cd /data/nfs_share/zk-kafka
mkdir /usr/local/zookeeper
tar -xzvf apache-zookeeper-3.6.3-bin.tar.gz -C /usr/local/zookeeper
Create data and log directories:
cd /usr/local/zookeeper
mkdir data
mkdir logs
# rename the unpacked directory, back up the sample config, and edit it
mv apache-zookeeper-3.6.3-bin zookeeper-3.6.3
cd zookeeper-3.6.3/conf/
cp zoo_sample.cfg zoo.cfg
# set the data and log directories in zoo.cfg:
vim zoo.cfg
# add/modify
dataDir=/usr/local/zookeeper/data
dataLogDir=/usr/local/zookeeper/logs
Because we are building a cluster, each ZooKeeper node needs an id. Create a myid file in the data directory containing only that node's id:
cd /usr/local/zookeeper/data
# the id can be derived from the last octet of the server's IP
vim myid
1
# set 1, 2, and 3 on the three nodes respectively, matching the server.N entries below
After all three machines are configured, also register every cluster node in conf/zoo.cfg; a consolidated sketch follows after the server list.
cd /usr/local/zookeeper/zookeeper-3.6.3/conf
vim zoo.cfg
# add these entries on every zookeeper node
server.1=10.10.0.181:2888:3888
server.2=10.10.0.182:2888:3888
server.3=10.10.0.183:2888:3888
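Putting the pieces together, the resulting zoo.cfg looks roughly like this (tickTime, initLimit, syncLimit, and clientPort are the zoo_sample.cfg defaults):
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/usr/local/zookeeper/data
dataLogDir=/usr/local/zookeeper/logs
server.1=10.10.0.181:2888:3888
server.2=10.10.0.182:2888:3888
server.3=10.10.0.183:2888:3888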
For convenience, configure ZooKeeper environment variables; otherwise you have to start ZooKeeper from its bin directory every time.
vim /etc/profile
# append the following at the end of the file (adjust the first line to your install path; use pwd to check):
# zk env
export ZOOKEEPER_HOME=/usr/local/zookeeper/zookeeper-3.6.3
export PATH=$ZOOKEEPER_HOME/bin:$PATH
export PATH
source /etc/profile
Configure all three machines this way, then start ZooKeeper:
zkServer.sh start
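Once all three nodes are started, verify that the ensemble elected a leader:
zkServer.sh status   # one node should report Mode: leader, the other two Mode: follower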
3. Install Kafka
Upload and unpack the tarball:
tar -xzvf kafka_2.13-2.8.1.tgz -C /usr/local/
mv /usr/local/kafka_2.13-2.8.1 /usr/local/kafka2.8.1
Configure environment variables (quote the heredoc delimiter so that $PATH and $KAFKA_HOME are written literally rather than expanded while creating the file):
cat <<'EOF' > /etc/profile.d/kafka.sh
export KAFKA_HOME=/usr/local/kafka2.8.1
export PATH=$PATH:$KAFKA_HOME/bin
EOF
source /etc/profile.d/kafka.sh
Modify the stop script:
vim bin/kafka-server-stop.sh
#kill -s $SIGNAL $PIDS
# change to
kill -9 $PIDS
For monitoring, edit bin/kafka-server-start.sh and add JMX_PORT to expose more metrics.
Make sure the chosen port is not already in use:
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
export JMX_PORT="9198"
fi
Edit the config file; the three nodes need different broker ids (e.g. 1, 2, 3) and different listeners (each node's own IP):
[root@zk-node1 kafka2.8.1]# grep -Ev "^#" /usr/local/kafka2.8.1/config/server.properties|grep -v "^$"
# content after editing
broker.id=1
listeners=PLAINTEXT://10.10.0.181:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/usr/local/kafka2.8.1/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=10.10.0.181:2181,10.10.0.182:2181,10.10.0.183:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
The configuration above is only a baseline.
By default Kafka creates topics with 1 partition and 1 replica, which leaves a single point of failure.
Fix:
Edit the Kafka config file:
vim /usr/local/kafka2.8.1/config/server.properties
# change the following settings
# num.partitions: default partition count, used when a topic is created without an explicit partition count
# default.replication.factor: default replica count
num.partitions=3
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=3
default.replication.factor=3
Create a start script (saved as /usr/local/kafka2.8.1/kafka_start_my.sh, which the systemd unit below calls; remember to chmod +x it):
#!/bin/bash
nohup /usr/local/kafka2.8.1/bin/kafka-server-start.sh /usr/local/kafka2.8.1/config/server.properties >> /usr/local/kafka2.8.1/nohup.out 2>&1 &
Create a restart script:
#!/bin/bash
/usr/local/kafka2.8.1/bin/kafka-server-stop.sh
nohup /usr/local/kafka2.8.1/bin/kafka-server-start.sh /usr/local/kafka2.8.1/config/server.properties >> /usr/local/kafka2.8.1/nohup.out 2>&1 &
Write the systemd service file:
[root@zk-node1 kafka2.8.1]# cat /usr/lib/systemd/system/kafka.service
[Unit]
Description=kafka
After=network.target remote-fs.target nss-lookup.target zookeeper.service
[Service]
Type=forking
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/java/jdk1.8.0_161/bin"
ExecStart=/usr/local/kafka2.8.1/kafka_start_my.sh -daemon /usr/local/kafka2.8.1/config/server.properties
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/usr/local/kafka2.8.1/bin/kafka-server-stop.sh
#PrivateTmp=true
[Install]
WantedBy=multi-user.target
Start Kafka and enable it at boot:
systemctl restart kafka.service
systemctl status kafka.service
systemctl enable kafka.service
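To confirm that the new defaults apply, you can create and inspect a throwaway topic (the topic name is arbitrary):
kafka-topics.sh --create --topic replication-test --bootstrap-server 10.10.0.181:9092
kafka-topics.sh --describe --topic replication-test --bootstrap-server 10.10.0.181:9092
# expect PartitionCount: 3, ReplicationFactor: 3
kafka-topics.sh --delete --topic replication-test --bootstrap-server 10.10.0.181:9092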
Install the web UI (kowl) on 10.10.0.181
Upload the image archive and load it into docker:
docker load -i kowl.tar
Create the web UI container on any server (4.162):
docker run -d -p 19092:8080 -e KAFKA_BROKERS="10.10.0.181:9092,10.10.0.182:9092,10.10.0.183:9092" rsmnarts/kowl:latest
# KAFKA_BROKERS: comma-separated broker address:port list
# to view, open
10.10.0.181:19092
Log in to the UI to check the Kafka cluster state.
4. Installing Kibana
I. Environment
OS: CentOS 7.6 64-bit
Kibana version: 7.7.1
Server: 10.10.0.16 port: 5601
II. Installing Kibana
Add the Kibana yum repository:
vim /etc/yum.repos.d/CentOS6_7_Base_tset.repo
[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://mirrors.tset.com/repository/Kibana-CentOS/
gpgcheck=0
enabled=1
yum clean all && yum makecache
yum install kibana -y
Back up the original config file:
cp /etc/kibana/kibana.yml /etc/kibana/kibana.yml.original
Edit the config file:
[root@elk-kibana ~]# grep -Ev "#|^$" /etc/kibana/kibana.yml
server.port: 5601
server.host: "10.10.0.16"
server.name: "10.10.0.16"
elasticsearch.hosts: ["http://10.10.0.230:9200", "http://10.10.0.220:9200", "http://10.10.0.147:9200", "http://10.10.0.188:9200", "http://10.10.0.186:9200"]
kibana.index: ".kibana"
elasticsearch.username: "kibana"
elasticsearch.password: "elk123456"
logging.dest: /var/log/kibana/kibana.log
logging.verbose: true
i18n.locale: "zh-CN"
Enable Kibana at boot and start it:
systemctl enable kibana
systemctl start kibana
Access Kibana:
10.10.0.16:5601
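Kibana's health can also be checked from the shell through its status API (using the elastic account as an example):
curl -u elastic:elk123456 'http://10.10.0.16:5601/api/status'   # overall state should be "green"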
5. Installing Logstash
I. Environment
OS: CentOS 7.6 64-bit
Logstash version: 7.7.1
JDK version: 1.8.0_202
IP: 10.10.0.184
II. Configure the JDK
wget https://mirrors.tset.com/package/jdk/jdk-8u202-linux-x64_2019.tar.gz
mkdir /usr/local/java
tar -zxvf jdk-8u202-linux-x64_2019.tar.gz -C /usr/local/java/
vim /etc/profile
#Custom settings
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
export JAVA_HOME=/usr/local/java/jdk1.8.0_202
export PATH=$PATH:${JAVA_HOME}/bin:${JAVA_HOME}/jre/bin
export CLASSPATH=.:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar:${JAVA_HOME}/jre/lib
source /etc/profile
III. Install Logstash
# add the logstash yum repository
vim /etc/yum.repos.d/CentOS6_7_Base_tset.repo
[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://mirrors.tset.com/repository/logstash-centos/
gpgcheck=0
enabled=1
Install:
yum clean all && yum makecache && yum install logstash -y
Create a pipeline file and configure its input and output:
cd /etc/logstash/conf.d/
vim esb.conf
input {
  kafka {
    bootstrap_servers => "10.10.0.181:9092,10.10.0.182:9092,10.10.0.183:9092"
    auto_offset_reset => "latest"
    consumer_threads => 5
    topics_pattern => ".*"
    decorate_events => true
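    # note: when topics_pattern is set, the kafka input subscribes by pattern and the topics list below is ignored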
    topics => "esb_log"
    codec => json
    # charset => "UTF-8"
  }
}
output {
  if "esb_log" in [tags] {
    elasticsearch {
      hosts => "http://10.10.0.147:9200"
      index => "elg-log-%{[host][ip][0]}-%{+YYYY.MM}"
      user => "elastic"
      password => "elk123456"
    }
  }
}
Start Logstash, pointing -f at the pipeline file created above:
cd /root/ && nohup /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/esb.conf --config.reload.automatic &
Check the log:
tail -f nohup.out
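To confirm events are flowing end to end, check that the target index shows up in ES:
curl -u elastic:elk123456 'http://10.10.0.147:9200/_cat/indices/elg-log-*?v'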
As another example, a pipeline file for collecting network logs:
vim /etc/logstash/conf.d/network.conf
input {
  kafka {
    bootstrap_servers => "10.10.0.181:9092,10.10.0.182:9092,10.10.0.183:9092"
    auto_offset_reset => "latest"
    consumer_threads => 5
    topics_pattern => ".*"
    decorate_events => true
    topics => "network_logs"
    codec => json
  }
}
output {
  if "network_logs" in [tags] {
    elasticsearch {
      hosts => "http://10.10.0.147:9200"
      manage_template => false
      index => "network-log-%{+YYYY.MM}"
      user => "elastic"
      password => "elk123456"
    }
  }
}
6. Filebeat Deployment
I. Environment
OS: CentOS 7.6 64-bit
10.10.0.128 monitors esb_log
II. Installing Filebeat
# download and install the rpm package
wget https://mirrors.tset.com/package/ELK/filebeat-7.7.0-x86_64.rpm
rpm -ivh filebeat-7.7.0-x86_64.rpm
cp -a /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.original
III. Edit the config file
vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /data/esblog/*.log
  tags: ["esb_log"]
  fields:
    filebeat_tag: esb_log
  fields_under_root: true
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 2
setup.kibana:
  host: "10.10.0.16:5601"
output.kafka:
  hosts: ["10.10.0.181:9092", "10.10.0.182:9092", "10.10.0.183:9092"]
  compression: gzip
  max_message_bytes: 100000000
  topic: esb_log
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
Restart the service:
systemctl restart filebeat
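Filebeat can validate its configuration and connectivity to the Kafka output before or after the restart:
filebeat test config   # syntax-check /etc/filebeat/filebeat.yml
filebeat test output   # try connecting to the configured Kafka brokers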
7. Starting and Stopping the Services
ES:
systemctl start elasticsearch    # start
systemctl stop elasticsearch     # stop
Zookeeper:
zkServer.sh start    # start
zkServer.sh stop     # stop
Kafka:
systemctl start kafka.service    # start
systemctl stop kafka.service     # stop
Kibana:
systemctl start kibana
systemctl stop kibana
Logstash:
cd /root/ && nohup /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/esb.conf --config.reload.automatic &
Filebeat:
systemctl restart filebeat