Architecture Overview
- Filebeat is deployed on every server whose logs need collecting; it pushes the collected log data to the Kafka cluster.
- Logstash reads data from the corresponding Kafka topics via its input plugin, optionally applies custom filtering and parsing with filter plugins, and then pushes the data into the Elasticsearch cluster via its output plugin.
- Finally, users use the web interface provided by Kibana to aggregate, analyze, search, and visualize the indexed data.
This article aims at a secure, reliable production deployment: the ELK stack is hardened with X-Pack security, and Kafka is hardened with SASL authentication!
Preparation
Hostname | IP Address | Roles | OS Version |
---|---|---|---|
es83 | 192.168.100.83 | filebeat, es, logstash, kafka, kibana | CentOS 7.6 |
es86 | 192.168.100.86 | es, logstash, kafka | CentOS 7.6 |
es87 | 192.168.100.87 | es, logstash, kafka | CentOS 7.6 |
The ELK stack version used throughout this article is 7.2.0, and the Kafka version is 2.12-2.3.0.
Environment Configuration
The main steps: disable the SELinux security mechanism, stop the firewalld firewall, turn off swap, configure file-descriptor and memory limits, set JVM parameters, create a non-root user, and prepare the disk storage directories; setting up passwordless SSH between the servers is also recommended.
auto_elk_env.sh
#!/bin/bash
echo "##### Update /etc/hosts #####"
cat >> /etc/hosts <<EOF
192.168.100.83 es83
192.168.100.86 es86
192.168.100.87 es87
EOF
echo "##### Stop firewalld #####"
systemctl stop firewalld
systemctl disable firewalld
echo "##### Close selinux #####"
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
echo "##### Close swap #####"
swapoff -a
# Note: after modifying this file you must log in again for it to take effect; verify with ulimit -a.
echo "##### Modify /etc/security/limits.conf #####"
cat > /etc/security/limits.conf <<EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 65536
* hard nproc 65536
* soft memlock unlimited
* hard memlock unlimited
EOF
echo "##### Modify /etc/sysctl.conf #####"
cat >> /etc/sysctl.conf <<EOF
vm.max_map_count=562144
EOF
sysctl -p
echo "##### Create user (any password will do) #####"
useradd elkuser
echo 123456 | passwd --stdin elkuser
echo "##### Set up passwordless SSH #####"
ssh-keygen # press Enter at every prompt
ssh-copy-id 192.168.100.83
ssh-copy-id 192.168.100.86
ssh-copy-id 192.168.100.87
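Several of the settings above only take effect after logging in again, so it is worth checking them afterwards. A minimal sanity-check sketch; the fixed sample values below stand in for the live `sysctl -n vm.max_map_count` and `ulimit -n` output on a real node:

```shell
#!/bin/bash
# Checks a current value against the minimum Elasticsearch needs.
# check_min NAME CURRENT REQUIRED -> prints OK/FAIL
check_min() {
  local name="$1" current="$2" required="$3"
  if [ "$current" -ge "$required" ]; then
    echo "OK   $name=$current (>= $required)"
  else
    echo "FAIL $name=$current (< $required)"
  fi
}

# Illustrative fixed values; on a real node use the commands in the comments.
check_min vm.max_map_count 562144 262144   # sysctl -n vm.max_map_count
check_min nofile           65536  65535    # ulimit -n
```

Elasticsearch refuses to start if `vm.max_map_count` is below 262144 or the open-file limit is below 65535, so failing either check here means revisiting `auto_elk_env.sh`.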
Elasticsearch Cluster Deployment
Elasticsearch is a distributed, RESTful search and analytics engine. It implements inverted indices for full-text retrieval and indexes every field, which makes searches extremely fast; it is scalable and resilient, able to handle huge numbers of events per second; and it works with all data types, such as structured data, unstructured data, and geolocations.
In production, the author allocated 30 GB of memory to Elasticsearch (keep the heap below 32 GB so the JVM can still use compressed object pointers) and six 446 GB SSD disks, and switched to the G1 garbage collection policy; size the hardware according to your own situation!
Note: the author downloaded all packages to the servers in advance. The three es nodes in this article all act as both master and data nodes by default; when X-Pack encryption is used, master nodes must also be data nodes, otherwise the security configuration cannot be written into es storage!
In this article the whole es cluster is set up directly from node 83, so read the commands below carefully!
# Download: wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.0-linux-x86_64.tar.gz
echo "##### Extract Elasticsearch #####"
[root@es83 ~]# cd /home/elkuser/
[root@es83 elkuser]# tar -xvf elasticsearch-7.2.0-linux-x86_64.tar.gz
echo "##### Edit the jvm options file #####"
[root@es83 elkuser]# cd ./elasticsearch-7.2.0/
[root@es83 elasticsearch-7.2.0]# sed -i -e 's/1g/30g/g' -e '36,38s/^-/#&/g' ./config/jvm.options
[root@es83 elasticsearch-7.2.0]# sed -i -e 'N;38 a -XX:+UseG1GC \n-XX:MaxGCPauseMillis=50' ./config/jvm.options
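Those two sed commands raise the heap from 1g to 30g, comment out the CMS collector flags on lines 36-38, and append the G1 settings. To see the effect without touching the real file, here is the same transformation applied to a small sample fragment (line numbers adjusted to the sample; in the real jvm.options the CMS flags sit on lines 36-38):

```shell
# Sample fragment standing in for config/jvm.options (real line numbers differ).
cat > jvm.sample <<'EOF'
-Xms1g
-Xmx1g
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
EOF

# Raise the heap and comment out the CMS flags (lines 3-5 in this sample)...
sed -i -e 's/1g/30g/g' -e '3,5s/^-/#&/g' jvm.sample
# ...then append the G1 settings used above.
printf -- '-XX:+UseG1GC\n-XX:MaxGCPauseMillis=50\n' >> jvm.sample
cat jvm.sample
```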
echo "##### Generate the CA certificate #####"
[root@es83 elasticsearch-7.2.0]# ./bin/elasticsearch-certutil ca
......
Please enter the desired output file [elastic-stack-ca.p12]: (press Enter)
Enter password for elastic-stack-ca.p12 : (press Enter)
echo "##### Use the CA to generate a certificate for each es node #####"
[root@es83 elasticsearch-7.2.0]# ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ip 192.168.100.83
......
Enter password for CA (elastic-stack-ca.p12) : (press Enter)
Please enter the desired output file [elastic-certificates.p12]: es83.p12
Enter password for es83.p12 : (press Enter)
[root@es83 elasticsearch-7.2.0]# ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ip 192.168.100.86
......
Enter password for CA (elastic-stack-ca.p12) : (press Enter)
Please enter the desired output file [elastic-certificates.p12]: es86.p12
Enter password for es86.p12 : (press Enter)
[root@es83 elasticsearch-7.2.0]# ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ip 192.168.100.87
......
Enter password for CA (elastic-stack-ca.p12) : (press Enter)
Please enter the desired output file [elastic-certificates.p12]: es87.p12
Enter password for es87.p12 : (press Enter)
echo "##### Use the CA to generate the certificate needed later by logstash #####"
[root@es83 elasticsearch-7.2.0]# openssl pkcs12 -in elastic-stack-ca.p12 -clcerts -nokeys > root.cer
[root@es83 elasticsearch-7.2.0]# openssl x509 -in root.cer -out root.pem
echo "##### Use the CA to generate the certificate needed later by kibana #####"
[root@es83 elasticsearch-7.2.0]# ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 -name "CN=something,OU=Consulting Team,DC=mydomain,DC=com"
......
Enter password for CA (elastic-stack-ca.p12) : (press Enter)
Please enter the desired output file [CN=something,OU=Consulting Team,DC=mydomain,DC=com.p12]: client.p12
Enter password for client.p12 : (press Enter)
echo "##### Copy the generated certificates into the config directory #####"
[root@es83 elasticsearch-7.2.0]# cp *.p12 ./config/
echo "##### Write the es configuration file #####"
[root@es83 elasticsearch-7.2.0]# cat > ./config/elasticsearch.yml <<EOF
cluster.name: chilu_elk
node.name: es83
node.master: true
node.data: true
path.data: /logdata/data1,/logdata/data2,/logdata/data3,/logdata/data4,/logdata/data5,/logdata/data6
bootstrap.memory_lock: true
bootstrap.system_call_filter: false
network.host: 192.168.100.83
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.100.83:9300","192.168.100.86:9300","192.168.100.87:9300"]
cluster.initial_master_nodes: ["192.168.100.83:9300","192.168.100.86:9300","192.168.100.87:9300"]
node.max_local_storage_nodes: 256
indices.fielddata.cache.size: 50%
http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: es83.p12
xpack.security.transport.ssl.truststore.path: elastic-stack-ca.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: es83.p12
xpack.security.http.ssl.truststore.path: elastic-stack-ca.p12
xpack.security.http.ssl.client_authentication: optional
EOF
echo "##### scp the directory to the other nodes and adjust their configs #####"
[root@es83 elasticsearch-7.2.0]# cd ../
[root@es83 elkuser]# scp -r ./elasticsearch-7.2.0 192.168.100.86:/home/elkuser/
[root@es83 elkuser]# scp -r ./elasticsearch-7.2.0 192.168.100.87:/home/elkuser/
[root@es83 elkuser]# ssh 192.168.100.86 "sed -i -e 's/es83/es86/g' -e '8s/192.168.100.83/192.168.100.86/' /home/elkuser/elasticsearch-7.2.0/config/elasticsearch.yml"
[root@es83 elkuser]# ssh 192.168.100.87 "sed -i -e 's/es83/es87/g' -e '8s/192.168.100.83/192.168.100.87/' /home/elkuser/elasticsearch-7.2.0/config/elasticsearch.yml"
echo "##### Change ownership of the directories #####"
[root@es83 elkuser]# chown -R elkuser:elkuser /logdata ./elasticsearch-7.2.0
[root@es83 elkuser]# ssh 192.168.100.86 "chown -R elkuser:elkuser /logdata /home/elkuser/elasticsearch-7.2.0"
[root@es83 elkuser]# ssh 192.168.100.87 "chown -R elkuser:elkuser /logdata /home/elkuser/elasticsearch-7.2.0"
echo "##### Switch to the non-root user and start the elasticsearch service in the background #####"
[root@es83 elkuser]# cd ./elasticsearch-7.2.0/
[root@es83 elasticsearch-7.2.0]# su elkuser
[elkuser@es83 elasticsearch-7.2.0]$ ./bin/elasticsearch -d
[elkuser@es83 elasticsearch-7.2.0]$ ssh elkuser@192.168.100.86 "/home/elkuser/elasticsearch-7.2.0/bin/elasticsearch -d"
[elkuser@es83 elasticsearch-7.2.0]$ ssh elkuser@192.168.100.87 "/home/elkuser/elasticsearch-7.2.0/bin/elasticsearch -d"
echo "##### Auto-generate the built-in user passwords (save them somewhere safe) #####"
[elkuser@es83 elasticsearch-7.2.0]$ echo y | ./bin/elasticsearch-setup-passwords auto | tee elk_pwd.log
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Changed password for user apm_system
PASSWORD apm_system = HojN4w88Nwgl51Oe7o12
Changed password for user kibana
PASSWORD kibana = JPYDvJYn2CDmls5gIlNG
Changed password for user logstash_system
PASSWORD logstash_system = kXxmVCX34PGpUluBXABX
Changed password for user beats_system
PASSWORD beats_system = rY90aBHjAdidQPwgX87u
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = 0VxaGROqo255y60P1kBV
Changed password for user elastic
PASSWORD elastic = NvOBRGpUE3DoaSbYaUp3
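Since the generated passwords were tee'd into elk_pwd.log, a specific user's password can be pulled back out later with a one-line awk. A sketch using a sample file standing in for elk_pwd.log:

```shell
# Sample of the relevant lines saved in elk_pwd.log.
cat > elk_pwd.sample <<'EOF'
PASSWORD kibana = JPYDvJYn2CDmls5gIlNG
PASSWORD elastic = NvOBRGpUE3DoaSbYaUp3
EOF

# Lines look like: PASSWORD <user> = <password>, so field 4 is the password.
es_pw=$(awk '$1=="PASSWORD" && $2=="elastic" {print $4}' elk_pwd.sample)
echo "$es_pw"
```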
echo "##### Test the secured es: the cluster status should be green #####"
[elkuser@es83 elasticsearch-7.2.0]$ curl --tlsv1 -XGET "https://192.168.100.83:9200/_cluster/health?pretty" --user elastic:NvOBRGpUE3DoaSbYaUp3 -k
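For scripted health checks, the status field can be extracted from that response without any extra tooling. A sketch against a canned response body (the live values come from the curl above):

```shell
# Canned body in the shape returned by /_cluster/health?pretty (values illustrative).
resp='{"cluster_name":"chilu_elk","status":"green","number_of_nodes":3}'

# Pull out the value of the "status" field with sed.
status=$(echo "$resp" | sed -n 's/.*"status"[ ]*:[ ]*"\([a-z]*\)".*/\1/p')
echo "cluster status: $status"
```

Anything other than `green` here (i.e. `yellow` or `red`) means unassigned shards and is worth investigating before moving on.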
Kafka Cluster Deployment
Kafka, originally developed at LinkedIn, is a distributed, partitioned, replicated, multi-subscriber messaging system coordinated by ZooKeeper. It offers high throughput, low latency, scalability, durability, reliability, fault tolerance, and high concurrency: it can handle hundreds of thousands of messages with latencies of only a few milliseconds, clusters support hot scaling, messages are persisted to local disk to prevent data loss, and thousands of clients can read and write concurrently.
In this architecture Kafka acts as a buffering message queue: it receives logs in real time and feeds them to logstash, decoupling the pipeline and absorbing traffic spikes, which prevents the data loss that occurs when logstash cannot keep up. The author uses the ZooKeeper bundled with Kafka, also deployed as a cluster, so there is no need to build a separate ZooKeeper cluster.
Note: Kafka stores its cluster configuration and state in the ZooKeeper process, so ZooKeeper must be configured and started before Kafka!
The author allocated 4 GB of memory to the ZooKeeper service, and 31 GB of memory plus five SSD disks to the Kafka service; size the hardware according to your own situation!
# Download: wget https://archive.apache.org/dist/kafka/2.3.0/kafka_2.12-2.3.0.tgz
echo "##### Extract Kafka #####"
[root@es83 ~]# cd /opt/
[root@es83 opt]# tar -xvf ./kafka_2.12-2.3.0.tgz
echo "##### Write the zookeeper configuration file #####"
[root@es83 opt]# cd ./kafka_2.12-2.3.0/
[root@es83 kafka_2.12-2.3.0]# cat > ./config/zookeeper.properties <<EOF
dataDir=/opt/zookeeper
clientPort=2181
maxClientCnxns=0
tickTime=2000
initLimit=10
syncLimit=5
server.1=192.168.100.83:2888:3888
server.2=192.168.100.86:2888:3888
server.3=192.168.100.87:2888:3888
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
4lw.commands.whitelist=
EOF
echo "##### Create the zookeeper data directory and its myid file #####"
[root@es83 kafka_2.12-2.3.0]# mkdir /opt/zookeeper
[root@es83 kafka_2.12-2.3.0]# echo 1 > /opt/zookeeper/myid
echo "##### Write the kafka configuration file #####"
[root@es83 kafka_2.12-2.3.0]# cat > ./config/server.properties <<EOF
broker.id=83
listeners=SASL_PLAINTEXT://192.168.100.83:9092
advertised.listeners=SASL_PLAINTEXT://192.168.100.83:9092
num.network.threads=5
num.io.threads=8
socket.send.buffer.bytes=1024000
socket.receive.buffer.bytes=1024000
socket.request.max.bytes=1048576000
log.dirs=/logdata/kfkdata1,/logdata/kfkdata2,/logdata/kfkdata3,/logdata/kfkdata4,/logdata/kfkdata5
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=72
log.segment.delete.delay.ms=1000
log.cleaner.enable=true
log.cleanup.policy=delete
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.100.83:2181,192.168.100.86:2181,192.168.100.87:2181
zookeeper.connection.timeout.ms=60000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
super.users=User:admin;User:kafka
EOF
echo "##### Create the SASL JAAS files for zk and kafka #####"
[root@es83 kafka_2.12-2.3.0]# cat > ./config/zk_server_jaas.conf <<EOF
Server {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="chilu@rljie"
user_kafka="chilu@rljie"
user_producer="chilu@rljie";
};
EOF
[root@es83 kafka_2.12-2.3.0]# cat > ./config/kafka_server_jaas.conf <<EOF
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="chilu@rljie"
user_admin="chilu@rljie"
user_producer="chilu@rljie"
user_consumer="chilu@rljie";
};
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="kafka"
password="chilu@rljie";
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="kafka"
password="chilu@rljie";
};
EOF
echo "##### Edit the zk and kafka start scripts (add the SASL environment setting) #####"
[root@es83 kafka_2.12-2.3.0]# sed -i -e 's/512M/4G/g' -e 's#Xms4G#Xms4G -Djava.security.auth.login.config=/opt/kafka_2.12-2.3.0/config/zk_server_jaas.conf#' ./bin/zookeeper-server-start.sh
[root@es83 kafka_2.12-2.3.0]# sed -i -e 's/1G/31G/g' -e 's#Xms31G#Xms31G -Djava.security.auth.login.config=/opt/kafka_2.12-2.3.0/config/kafka_server_jaas.conf#' ./bin/kafka-server-start.sh
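The two sed expressions in each command chain deliberately: the first rewrites the default heap flags, producing the `Xms4G` (or `Xms31G`) token that the second expression then extends with the JAAS setting. A sketch of the zookeeper case on a sample line matching the default in bin/zookeeper-server-start.sh:

```shell
# Sample of the KAFKA_HEAP_OPTS line in bin/zookeeper-server-start.sh.
cat > start.sample <<'EOF'
export KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"
EOF

# First raise the heap, then hook the JAAS config onto the freshly written Xms4G.
sed -i -e 's/512M/4G/g' \
    -e 's#Xms4G#Xms4G -Djava.security.auth.login.config=/opt/kafka_2.12-2.3.0/config/zk_server_jaas.conf#' start.sample
cat start.sample
```

The kafka-server-start.sh edit works the same way, only with 1G → 31G and kafka_server_jaas.conf.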
echo "##### Copy the directories to the other two nodes and adjust them #####"
[root@es83 kafka_2.12-2.3.0]# cd ../
[root@es83 opt]# scp -r ./zookeeper ./kafka_2.12-2.3.0 192.168.100.86:/opt/
[root@es83 opt]# scp -r ./zookeeper ./kafka_2.12-2.3.0 192.168.100.87:/opt/
[root@es83 opt]# ssh 192.168.100.86 "echo 2 > /opt/zookeeper/myid ; sed -i '1,3s/83/86/' /opt/kafka_2.12-2.3.0/config/server.properties"
[root@es83 opt]# ssh 192.168.100.87 "echo 3 > /opt/zookeeper/myid ; sed -i '1,3s/83/87/' /opt/kafka_2.12-2.3.0/config/server.properties"
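The `1,3s/83/86/` edit only touches the first three lines of server.properties, which is exactly where the node-specific values (broker.id and the two listener addresses) live. Demonstrated on a sample of those three lines:

```shell
# First three lines of config/server.properties as written above.
cat > server.sample <<'EOF'
broker.id=83
listeners=SASL_PLAINTEXT://192.168.100.83:9092
advertised.listeners=SASL_PLAINTEXT://192.168.100.83:9092
EOF

# Replace the first occurrence of 83 on each of lines 1-3, as done for node 86.
sed -i '1,3s/83/86/' server.sample
cat server.sample
```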
echo "##### Start the zookeeper service in the background #####"
[root@es83 opt]# cd ./kafka_2.12-2.3.0/
[root@es83 kafka_2.12-2.3.0]# ./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties
[root@es83 kafka_2.12-2.3.0]# ssh 192.168.100.86 "/opt/kafka_2.12-2.3.0/bin/zookeeper-server-start.sh -daemon /opt/kafka_2.12-2.3.0/config/zookeeper.properties"
[root@es83 kafka_2.12-2.3.0]# ssh 192.168.100.87 "/opt/kafka_2.12-2.3.0/bin/zookeeper-server-start.sh -daemon /opt/kafka_2.12-2.3.0/config/zookeeper.properties"
echo "##### Start the kafka service in the background #####"
[root@es83 kafka_2.12-2.3.0]# ./bin/kafka-server-start.sh -daemon ./config/server.properties
[root@es83 kafka_2.12-2.3.0]# ssh 192.168.100.86 "/opt/kafka_2.12-2.3.0/bin/kafka-server-start.sh -daemon /opt/kafka_2.12-2.3.0/config/server.properties"
[root@es83 kafka_2.12-2.3.0]# ssh 192.168.100.87 "/opt/kafka_2.12-2.3.0/bin/kafka-server-start.sh -daemon /opt/kafka_2.12-2.3.0/config/server.properties"
Once the zk and kafka services are both up, check that the relevant ports are listening normally:
[root@es83 kafka_2.12-2.3.0]# netstat -antlp | grep -E "2888|3888|2181|9092"
With the cluster fully healthy, ACL access control can be configured on one of the kafka nodes: grant the producer and consumer users access to their topic and group, optionally restricting access to specified hosts.
Note: mykafka below is a hostname mapped to an IP via /etc/hosts, e.g. 192.168.100.83 mykafka. Writing localhost instead may fail with NoAuth, and writing a bare IP address may fail with CONNECT errors!
echo "##### Write the ACL configuration script #####"
[root@es83 kafka_2.12-2.3.0]# cat > ./kfkacls.sh <<EOF
#!/bin/bash
/opt/kafka_2.12-2.3.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=mykafka:2181 --add --allow-principal User:producer --allow-host 0.0.0.0 --operation Read --operation Write --topic elk
/opt/kafka_2.12-2.3.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=mykafka:2181 --add --allow-principal User:producer --topic elk --producer --group chilu
/opt/kafka_2.12-2.3.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=mykafka:2181 --add --allow-principal User:consumer --allow-host 0.0.0.0 --operation Read --operation Write --topic elk
/opt/kafka_2.12-2.3.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=mykafka:2181 --add --allow-principal User:consumer --topic elk --consumer --group chilu
EOF
echo "##### Run the script #####"
[root@es83 kafka_2.12-2.3.0]# bash ./kfkacls.sh
echo "##### List the ACLs #####"
[root@es83 kafka_2.12-2.3.0]# ./bin/kafka-acls.sh --authorizer-properties zookeeper.connect=mykafka:2181 --list
# Note: the following is an interactive session
echo "##### Add ZooKeeper ACLs #####"
[root@es83 kafka_2.12-2.3.0]# ./bin/zookeeper-shell.sh mykafka:2181
Welcome to ZooKeeper!
JLine support is disabled
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
Commands can now be typed directly into this console;
for example, ls / lists the ZK tree.
Check the default ACL:
getAcl /
By default it is world-readable.
Add the ACLs (allowing only the kafka hosts' IPs):
setAcl / ip:192.168.100.83:cdrwa,ip:192.168.100.86:cdrwa,ip:192.168.100.87:cdrwa
setAcl /kafka-acl ip:192.168.100.83:cdrwa,ip:192.168.100.86:cdrwa,ip:192.168.100.87:cdrwa
Check that they took effect:
getAcl /
Output:
'ip,'192.168.100.83
: cdrwa
'ip,'192.168.100.86
: cdrwa
'ip,'192.168.100.87
: cdrwa
Exit:
quit
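Typing those setAcl lines by hand is error-prone; the IP ACL string can also be generated from the host list. A small sketch that prints the commands rather than running them (paste the output into the zookeeper-shell session):

```shell
# Build the ip:<host>:cdrwa,... ACL string from the cluster host list.
hosts="192.168.100.83 192.168.100.86 192.168.100.87"
acl=$(for h in $hosts; do printf 'ip:%s:cdrwa,' "$h"; done)
acl=${acl%,}   # strip the trailing comma

# Print the commands to run inside zookeeper-shell.
echo "setAcl / $acl"
echo "setAcl /kafka-acl $acl"
```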
Logstash Service Deployment
Logstash is a free and open server-side data processing pipeline built on a pluggable framework with over 200 plugins. It supports a wide range of inputs and outputs, parses and transforms data in real time, and is scalable, resilient, and flexible. It is, however, resource-hungry, consuming considerable CPU and memory at runtime, and without a message queue buffering in front of it there is a risk of data loss, so weigh these trade-offs against your own situation!
In production the author also allocated 30 GB of memory to Logstash; size the hardware according to your own situation!
# Download: wget https://artifacts.elastic.co/downloads/logstash/logstash-7.2.0.tar.gz
echo "##### Extract Logstash #####"
[root@es83 ~]# cd /home/elkuser/
[root@es83 elkuser]# tar -xvf ./logstash-7.2.0.tar.gz
echo "##### Raise the startup heap size #####"
[root@es83 elkuser]# cd ./logstash-7.2.0/
[root@es83 logstash-7.2.0]# sed -i -e 's/1g/30g/g' ./config/jvm.options
echo "##### Copy the required certificate into the logstash directory #####"
[root@es83 logstash-7.2.0]# cd ./config/
[root@es83 config]# cp /home/elkuser/elasticsearch-7.2.0/root.pem ./
echo "##### Write the logstash configuration file #####"
[root@es83 config]# cat > ./logstash.yml <<EOF
http.host: "192.168.100.83"
node.name: "logstash83"
xpack.monitoring.elasticsearch.hosts: [ "https://192.168.100.83:9200" ]
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "NvOBRGpUE3DoaSbYaUp3"
xpack.monitoring.elasticsearch.ssl.certificate_authority: config/root.pem
xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
xpack.monitoring.collection.interval: 30s
xpack.monitoring.collection.pipeline.details.enabled: true
EOF
# Note: the username and password must match those configured in kafka!
echo "##### Write the kafka client JAAS file #####"
[root@es83 config]# cat > ./kafka-client-jaas.conf <<EOF
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="consumer"
password="chilu@rljie";
};
EOF
echo "##### Example input and output configuration #####"
[root@es83 config]# cat > ./test.cfg <<EOF
input {
kafka {
bootstrap_servers => "192.168.100.83:9092,192.168.100.86:9092,192.168.100.87:9092"
client_id => "chilu83"
auto_offset_reset => "latest"
topics => "elk"
group_id => "chilu"
security_protocol => "SASL_PLAINTEXT"
sasl_mechanism => "PLAIN"
jaas_path => "/home/elkuser/logstash-7.2.0/config/kafka-client-jaas.conf"
}
}
filter {
}
output {
elasticsearch {
hosts => ["192.168.100.83:9200","192.168.100.86:9200","192.168.100.87:9200"]
user => "elastic"
password => "NvOBRGpUE3DoaSbYaUp3"
ssl => true
cacert => "/home/elkuser/logstash-7.2.0/config/root.pem"
index => "chilu_elk%{+YYYY.MM.dd}"
}
}
EOF
echo "##### Start the logstash service #####"
[root@es83 config]# ../bin/logstash -r -f ./test.cfg
Kibana Service Deployment
Kibana is an open-source analytics and visualization platform that provides efficient search, visual summaries, and multi-dimensional analysis of the log data shipped by Logstash and Elasticsearch, interacting directly with the data in Elasticsearch indices. Its browser-based interface makes it quick to build dynamic dashboards and monitor the state and changes of Elasticsearch data in real time.
In production the author allocated 8 GB of memory to Kibana; size the hardware according to your own situation!
# Download: wget https://artifacts.elastic.co/downloads/kibana/kibana-7.2.0-linux-x86_64.tar.gz
echo "##### Extract Kibana #####"
[root@es83 ~]# cd /home/elkuser/
[root@es83 elkuser]# tar -xvf kibana-7.2.0-linux-x86_64.tar.gz
echo "##### Raise the startup memory #####"
[root@es83 elkuser]# cd ./kibana-7.2.0-linux-x86_64/
[root@es83 kibana-7.2.0-linux-x86_64]# sed -i 's/warnings/warnings --max_old_space_size=8192/' ./bin/kibana
echo "##### Copy the required certificate into the kibana directory #####"
[root@es83 kibana-7.2.0-linux-x86_64]# cd ./config/
[root@es83 config]# cp /home/elkuser/elasticsearch-7.2.0/client.p12 ./
echo "##### Derive the other required certificates from client.p12 #####"
[root@es83 config]# openssl pkcs12 -in client.p12 -nocerts -nodes > client.key
Enter Import Password: (press Enter)
MAC verified OK
[root@es83 config]# openssl pkcs12 -in client.p12 -clcerts -nokeys > client.cer
Enter Import Password: (press Enter)
MAC verified OK
[root@es83 config]# openssl pkcs12 -in client.p12 -cacerts -nokeys -chain > client-ca.cer
Enter Import Password: (press Enter)
MAC verified OK
echo "##### Serve the kibana web UI over https #####"
[root@es83 config]# cd ../
[root@es83 kibana-7.2.0-linux-x86_64]# openssl req -newkey rsa:2048 -nodes -keyout server.key -x509 -days 3650 -out server.crt -subj "/C=CN/ST=guangzhou/L=rljie/O=chilu/OU=linux/"
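This certificate is generated entirely offline, and the subject baked into it can be verified afterwards with openssl x509. The same recipe, run in a scratch directory:

```shell
# Self-signed certificate, identical recipe to the command above.
openssl req -newkey rsa:2048 -nodes -keyout server.key -x509 -days 3650 \
  -out server.crt -subj "/C=CN/ST=guangzhou/L=rljie/O=chilu/OU=linux/" 2>/dev/null

# Inspect the subject that was baked into the certificate.
openssl x509 -in server.crt -noout -subject
```

Because it is self-signed, browsers will warn on first visit; for production-facing access, a certificate from an internal or public CA is preferable.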
echo "##### Write the kibana configuration file #####"
[root@es83 kibana-7.2.0-linux-x86_64]# cat > ./config/kibana.yml <<EOF
server.name: kibana
server.host: "192.168.100.83"
elasticsearch.hosts: [ "https://192.168.100.83:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.username: "elastic"
elasticsearch.password: "NvOBRGpUE3DoaSbYaUp3"
xpack.security.enabled: true
elasticsearch.ssl.certificateAuthorities: config/client-ca.cer
elasticsearch.ssl.verificationMode: certificate
xpack.security.encryptionKey: "4297f44b13955235245b2497399d7a93"
xpack.reporting.encryptionKey: "4297f44b13955235245b2497399d7a93"
server.ssl.enabled: true
server.ssl.certificate: server.crt
server.ssl.key: server.key
EOF
echo "##### Start the kibana service in the background with nohup (use whatever backgrounding method you prefer) #####"
[root@es83 kibana-7.2.0-linux-x86_64]# nohup ./bin/kibana --allow-root &
Once all of the above is done, open the Kibana address https://192.168.100.83 in a browser and log in with the elastic user's password!
curl example
curl --tlsv1 -XGET 'https://192.168.100.83:9200/_cluster/health?pretty' --cacert '/home/elkuser/elasticsearch-7.2.0/root.pem' --user elastic:NvOBRGpUE3DoaSbYaUp3
Filebeat Service Deployment
Filebeat is a lightweight shipper for forwarding and centralizing log data. Written in Go, it is stable, simple to configure, and uses very few resources. Installed as an agent on your servers, it monitors the log files or locations you specify, collects log events, and forwards them to the configured output; the work is done mainly by its prospector and harvester components.
# Download: wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.2.0-linux-x86_64.tar.gz
echo "##### Extract Filebeat #####"
[root@es83 ~]# cd /home/elkuser/
[root@es83 elkuser]# tar -xvf filebeat-7.2.0-linux-x86_64.tar.gz
echo "##### Write the filebeat configuration file #####"
[root@es83 elkuser]# cd ./filebeat-7.2.0-linux-x86_64/
[root@es83 filebeat-7.2.0-linux-x86_64]# cat > ./filebeat.yml <<\EOF
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/access.log
close_timeout: 1h
clean_inactive: 3h
ignore_older: 2h
filebeat.config.modules:
path: ${path.config}/modules.d/*.yml
reload.enabled: true
setup.template.settings:
index.number_of_shards: 3
setup.kibana:
output.kafka:
hosts: ["192.168.100.83:9092","192.168.100.86:9092","192.168.100.87:9092"]
topic: elk
required_acks: 1
username: "producer"
password: "chilu@rljie"
EOF
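The three timeouts in the log input are not independent: Filebeat requires clean_inactive to be greater than ignore_older plus scan_frequency (10s by default), which the 3h/2h values above satisfy. A quick arithmetic sanity check of that constraint:

```shell
# Values from filebeat.yml above, converted to seconds.
clean_inactive=$((3 * 3600))   # clean_inactive: 3h
ignore_older=$((2 * 3600))     # ignore_older: 2h
scan_frequency=10              # filebeat default scan_frequency

# Filebeat rejects configs where clean_inactive <= ignore_older + scan_frequency.
if [ "$clean_inactive" -gt "$((ignore_older + scan_frequency))" ]; then
  echo "timing OK"
else
  echo "timing INVALID: raise clean_inactive"
fi
```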
echo "##### Start the filebeat service in the background with nohup #####"
[root@es83 filebeat-7.2.0-linux-x86_64]# nohup ./filebeat -e -c filebeat.yml &