I have worked with ELK 1.4, 2.0, 2.4, 5.0 and 5.2 on and off. The earlier versions never left much of an impression; only with 5.2 did things really start to click, which shows how slowly real understanding builds. This document tries to be fairly detailed, even though my write-ups keep getting terser, for better or worse. On to the content (note that the configuration here is the pre-optimization version: fine for normal use, but it needs tuning under heavy load).
Notes:
This is a major version upgrade with many changes. The main deployment changes are:
1. Filebeat ships directly to Kafka and drops unnecessary fields such as the beat.* metadata.
2. The Elasticsearch cluster layout is reworked: 3 master-eligible nodes and 6 data nodes.
3. The Logstash filter adds urldecode so that url, referrer and agent display Chinese correctly.
4. The Logstash filter adds geoip to locate the client IP by region and city.
5. Logstash mutate rewrites escaped strings and removes unnecessary fields such as the kafka.* metadata.
6. The elasticsearch-head plugin now needs a separate node.js deployment; it can no longer be bundled into Elasticsearch as before.
7. The Nginx log gains the request parameters and the request method.
I. Architecture
Candidate architectures:
filebeat--elasticsearch--kibana
filebeat--logstash--kafka--logstash--elasticsearch--kibana
filebeat--kafka--logstash--elasticsearch--kibana
Since Filebeat 5.2.2 supports many outputs (logstash, elasticsearch, kafka, redis, syslog, file, and so on), the following layout was chosen to make good use of resources while handling high concurrency:
filebeat(18)--kafka(3)--logstash(3)--elasticsearch(6)--kibana(3)--nginx load balancing
In total: 3 physical machines, 12 VMs, all running CentOS 6.8. The breakdown is as follows:
Server 1 (192.168.188.186)
kafka1           32G    700G    4 CPU
logstash         8G     100G    4 CPU
elasticsearch1   40G    1.4T    8 CPU
elasticsearch2   40G    1.4T    8 CPU
Server 2 (192.168.188.187)
kafka2           32G    700G    4 CPU
logstash         8G     100G    4 CPU
elasticsearch3   40G    1.4T    8 CPU
elasticsearch4   40G    1.4T    8 CPU
Server 3 (192.168.188.188)
kafka3           32G    700G    4 CPU
logstash         8G     100G    4 CPU
elasticsearch5   40G    1.4T    8 CPU
elasticsearch6   40G    1.4T    8 CPU
Disk partitioning
Logstash      100G:  SWAP 8G, /boot 200M, rest /
Kafka         700G:  SWAP 8G, /boot 200M, / 30G, rest /data
Elasticsearch 1.4T:  SWAP 8G, /boot 200M, / 30G, rest /data
IP allocation
Elasticsearch1-6    192.168.188.191-196
kibana1-3           192.168.188.191/193/195
kafka1-3            192.168.188.237-239
logstash1-3         192.168.188.197/198/240
二篙螟,環(huán)境準(zhǔn)備
yum?-y?remove?java-1.6.0-openjdk
yum?-y?remove?java-1.7.0-openjdk
yum?-y?remove?perl-*
yum?-y?remove?sssd-*
yum?-yinstalljava-1.8.0-openjdk
java?-version
yum?update
reboot
Set up /etc/hosts (Kafka needs the host names):
cat /etc/hosts
192.168.188.191   ES191 (master and data)
192.168.188.192   ES192 (data)
192.168.188.193   ES193 (master and data)
192.168.188.194   ES194 (data)
192.168.188.195   ES195 (master and data)
192.168.188.196   ES196 (data)
192.168.188.237   kafka237
192.168.188.238   kafka238
192.168.188.239   kafka239
192.168.188.197   logstash197
192.168.188.198   logstash198
192.168.188.240   logstash240
三菌湃,部署elasticsearch集群
mkdir /data/esnginx
mkdir /data/eslog
rpm -ivh /srv/elasticsearch-5.2.2.rpm
chkconfig --add elasticsearch
chkconfig postfix off
rpm -ivh /srv/kibana-5.2.2-x86_64.rpm
chown ?elasticsearch:elasticsearch /data/eslog -R
chown ?elasticsearch:elasticsearch /data/esnginx -R
Configuration file (3 master + 6 data)
[root@ES191 elasticsearch]# cat elasticsearch.yml|grep -Ev '^#|^$'
cluster.name: nginxlog
node.name: ES191
node.master: true
node.data: true
node.attr.rack: r1
path.data: /data/esnginx
path.logs: /data/eslog
bootstrap.memory_lock: true
network.host: 192.168.188.191
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.188.191","192.168.188.192","192.168.188.193","192.168.188.194","192.168.188.195","192.168.188.196"]
discovery.zen.minimum_master_nodes: 2
gateway.recover_after_nodes: 5
gateway.recover_after_time: 5m
gateway.expected_nodes: 6
cluster.routing.allocation.same_shard.host: true
script.engine.groovy.inline.search: on
script.engine.groovy.inline.aggs: on
indices.recovery.max_bytes_per_sec: 30mb
http.cors.enabled: true
http.cors.allow-origin: "*"
bootstrap.system_call_filter: false   # needed on old kernels (seccomp requires 3.5+); not needed on CentOS 7's 3.10 kernel
Pay particular attention to the following.
/etc/security/limits.conf
elasticsearch  soft  memlock  unlimited
elasticsearch  hard  memlock  unlimited
elasticsearch  soft  nofile   65536
elasticsearch  hard  nofile   131072
elasticsearch  soft  nproc    2048
elasticsearch  hard  nproc    4096
/etc/elasticsearch/jvm.options
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms20g
-Xmx20g
Start the cluster
service elasticsearch start
Health check
http://192.168.188.191:9200/_cluster/health?pretty=true
{
  "cluster_name" : "nginxlog",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 6,
  "number_of_data_nodes" : 6,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
elasticsearch-head plugin
http://192.168.188.215:9100/
Connect it to any node, for example 192.168.188.191:9200 above.
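If the head plugin is unavailable, the _cat APIs give a quick command-line view of the same information (a minimal check; any of the six nodes can be queried):
curl -s 'http://192.168.188.191:9200/_cat/nodes?v'
curl -s 'http://192.168.188.191:9200/_cat/indices?v'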
Shard settings
The official recommendation is to set them when the index is created:
curl -XPUT 'http://192.168.188.193:9200/_all/_settings?preserve_existing=true' -d '{
"index.number_of_replicas" : "1",
"index.number_of_shards" : "6"
}'
This did not take effect; it later turned out that shard settings can be specified when the template is created, so for now the defaults of 5 shards and 1 replica are in use.
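A sketch of the template-based approach (the template name nginx_shards and the values are only illustrative; shard counts apply only to indices created after the template exists):
curl -XPUT 'http://192.168.188.193:9200/_template/nginx_shards' -d '{
  "template": "filebeat-*",
  "settings": {
    "index.number_of_shards": 6,
    "index.number_of_replicas": 1
  }
}'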
Other errors (for reference only; the optimization section below addresses them)
bootstrap.system_call_filter: false   # for the "system call filters failed to install" check,
see https://www.elastic.co/guide/en/elasticsearch/reference/current/system-call-filter-check.html
[WARN ][o.e.b.JNANatives ] unable to install syscall filter:
java.lang.UnsupportedOperationException: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
IV. Deploy the Kafka cluster
Building the Kafka cluster
1. ZooKeeper cluster
wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
tar zxvf zookeeper-3.4.10.tar.gz -C /usr/local/
ln -s /usr/local/zookeeper-3.4.10/ /usr/local/zookeeper
mkdir -p /data/zookeeper/data/
vim /usr/local/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=5
syncLimit=2
dataDir=/data/zookeeper/data
clientPort=2181
server.1=192.168.188.237:2888:3888
server.2=192.168.188.238:2888:3888
server.3=192.168.188.239:2888:3888
vim /data/zookeeper/data/myid
1
/usr/local/zookeeper/bin/zkServer.sh start
2驰唬,kafka集群
wget http://mirrors.hust.edu.cn/apache/kafka/0.10.0.1/kafka_2.11-0.10.0.1.tgz
tar zxvf kafka_2.11-0.10.0.1.tgz -C /usr/local/
ln -s /usr/local/kafka_2.11-0.10.0.1 /usr/local/kafka
A diff of server.properties and zookeeper.properties against the shipped defaults shows only small changes; they can largely be used as-is.
vim /usr/local/kafka/config/server.properties
broker.id=237
port=9092
host.name=192.168.188.237
num.network.threads=4
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafkalog
num.partitions=3
num.recovery.threads.per.data.dir=1
log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181
zookeeper.connection.timeout.ms=6000
producer.type=async
broker.list=192.168.188.237:9092,192.168.188.238:9092,192.168.188.239:9092
mkdir /data/kafkalog
Adjust the heap size
vim /usr/local/kafka/bin/kafka-server-start.sh
export KAFKA_HEAP_OPTS="-Xmx16G -Xms16G"
Start Kafka
/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties
Create the front-end topics, one per Nginx group
/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx1-168 --replication-factor 1 --partitions 3 --zookeeper 192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181
/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx2-178 --replication-factor 1 --partitions 3 --zookeeper ?192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181
/usr/local/kafka/bin/kafka-topics.sh --create --topic ngx3-188 --replication-factor 1 --partitions 3 --zookeeper ?192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181
Check the topics
/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper ?192.168.188.237:2181,192.168.188.238:2181,192.168.188.239:2181
ngx1-168
ngx2-178
ngx3-188
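A quick end-to-end smoke test of the cluster can be done with the console producer and consumer that ship with Kafka (assuming the topics above exist; run the two commands in separate terminals, type a line into the producer and it should appear in the consumer):
/usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.188.237:9092,192.168.188.238:9092,192.168.188.239:9092 --topic ngx1-168
/usr/local/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.188.237:2181 --topic ngx1-168 --from-beginning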
3. Start on boot
cat /etc/rc.local
/usr/local/zookeeper/bin/zkServer.sh start
/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &
V. Deploy and configure Logstash
Installation
rpm -ivh logstash-5.2.2.rpm
mkdir /usr/share/logstash/config
#1. Copy the configuration into the Logstash home
cp -r /etc/logstash/* /usr/share/logstash/config/
#2. Configure the paths
vim /usr/share/logstash/config/logstash.yml
Before:
path.config: /etc/logstash/conf.d
After:
path.config: /usr/share/logstash/config/conf.d
#3. Edit startup.options
Before:
LS_SETTINGS_DIR=/etc/logstash
After:
LS_SETTINGS_DIR=/usr/share/logstash/config
After editing startup.options, run /usr/share/logstash/bin/system-install for the change to take effect.
Configuration
On the consumer side, each of the three Logstash instances handles only part of the topics:
in-kafka-ngx1-out-es.conf
in-kafka-ngx2-out-es.conf
in-kafka-ngx3-out-es.conf
[root@logstash197 conf.d]# cat in-kafka-ngx1-out-es.conf
input {
  kafka {
    bootstrap_servers => "192.168.188.237:9092,192.168.188.238:9092,192.168.188.239:9092"
    group_id => "ngx1"
    topics => ["ngx1-168"]
    codec => "json"
    consumer_threads => 3
    decorate_events => true
  }
}
filter {
  mutate {
    gsub => ["message", "\\x", "%"]
    remove_field => ["kafka"]
  }
  json {
    source => "message"
    remove_field => ["message"]
  }
  geoip {
    source => "clientRealIp"
  }
  urldecode {
    all_fields => true
  }
}
output {
  elasticsearch {
    hosts => ["192.168.188.191:9200","192.168.188.192:9200","192.168.188.193:9200","192.168.188.194:9200","192.168.188.195:9200","192.168.188.196:9200"]
    index => "filebeat-%{type}-%{+YYYY.MM.dd}"
    manage_template => true
    template_overwrite => true
    template_name => "nginx_template"
    template => "/usr/share/logstash/templates/nginx_template"
    flush_size => 50000
    idle_flush_time => 10
  }
}
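Before starting the pipeline, the syntax can be validated with Logstash's built-in config test (a minimal sketch):
/usr/share/logstash/bin/logstash -f /usr/share/logstash/config/conf.d/in-kafka-ngx1-out-es.conf --config.test_and_exit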
Nginx index template
[root@logstash197 logstash]# cat /usr/share/logstash/templates/nginx_template
{
  "template": "filebeat-*",
  "settings": {
    "index.refresh_interval": "10s"
  },
  "mappings": {
    "_default_": {
      "_all": {"enabled": true, "omit_norms": true},
      "dynamic_templates": [
        {
          "string_fields": {
            "match_pattern": "regex",
            "match": "(agent)|(status)|(url)|(clientRealIp)|(referrer)|(upstreamhost)|(http_host)|(request)|(request_method)|(upstreamstatus)",
            "match_mapping_type": "string",
            "mapping": {
              "type": "string", "index": "analyzed", "omit_norms": true,
              "fields": {
                "raw": {"type": "string", "index": "not_analyzed", "ignore_above": 512}
              }
            }
          }
        }
      ],
      "properties": {
        "@version": {"type": "string", "index": "not_analyzed"},
        "geoip": {
          "type": "object",
          "dynamic": true,
          "properties": {
            "location": {"type": "geo_point"}
          }
        }
      }
    }
  }
}
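Once Logstash has started and installed the template, it can be verified directly against the cluster (any node works):
curl -XGET 'http://192.168.188.191:9200/_template/nginx_template?pretty'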
Start
/usr/share/logstash/bin/logstash -f /usr/share/logstash/config/conf.d/in-kafka-ngx1-out-es.conf ?&
Logstash is set to start on boot by default.
Reference:
/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-kafka-5.1.5/DEVELOPER.md
Error handling (these settings come from the pre-5.x kafka input plugin; the 5.x plugin expects bootstrap_servers, topics and auto_offset_reset instead)
[2017-05-08T12:24:30,388][ERROR][logstash.inputs.kafka ? ?] Unknown setting 'zk_connect' for kafka
[2017-05-08T12:24:30,390][ERROR][logstash.inputs.kafka ? ?] Unknown setting 'topic_id' for kafka
[2017-05-08T12:24:30,390][ERROR][logstash.inputs.kafka ? ?] Unknown setting 'reset_beginning' for kafka
[2017-05-08T12:24:30,395][ERROR][logstash.agent ? ? ? ? ? ] Cannot load an invalid configuration {:reason=>"Something is wrong with your configuration."}
Verify via the logs
[root@logstash197 conf.d]# cat /var/log/logstash/logstash-plain.log
[2017-05-09T10:43:20,832][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.188.191:9200/, http://192.168.188.192:9200/, http://192.168.188.193:9200/, http://192.168.188.194:9200/, http://192.168.188.195:9200/, http://192.168.188.196:9200/]}}
[2017-05-09T10:43:20,838][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.191:9200/, :path=>"/"}
[2017-05-09T10:43:20,919][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,920][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.192:9200/, :path=>"/"}
[2017-05-09T10:43:20,922][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,924][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.193:9200/, :path=>"/"}
[2017-05-09T10:43:20,927][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,927][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.194:9200/, :path=>"/"}
[2017-05-09T10:43:20,929][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,930][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.195:9200/, :path=>"/"}
[2017-05-09T10:43:20,932][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,933][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.188.196:9200/, :path=>"/"}
[2017-05-09T10:43:20,935][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-05-09T10:43:20,936][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>"/usr/share/logstash/templates/nginx_template"}
[2017-05-09T10:43:20,970][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"filebeat-*", "settings"=>{"index.refresh_interval"=>"10s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "omit_norms"=>true}, "dynamic_templates"=>[{"string_fields"=>{"match_pattern"=>"regex", "match"=>"(agent)|(status)|(url)|(clientRealIp)|(referrer)|(upstreamhost)|(http_host)|(request)|(request_method)", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true, "fields"=>{"raw"=>{"type"=>"string", "index"=>"not_analyzed", "ignore_above"=>512}}}}}]}}}}
[2017-05-09T10:43:20,974][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/nginx_template
[2017-05-09T10:43:21,009][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#, #, #, #, #, #]}
[2017-05-09T10:43:21,010][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.0.4-java/vendor/GeoLite2-City.mmdb"}
[2017-05-09T10:43:21,022][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-05-09T10:43:21,037][INFO ][logstash.pipeline        ] Pipeline main started
[2017-05-09T10:43:21,086][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
六辖佣,部署配置filebeat
安裝
rpm -ivh filebeat-5.2.2-x86_64.rpm
nginx日志格式需要為json的
log_format access '{ "@timestamp": "$time_iso8601", '
    '"clientRealIp": "$clientRealIp", '
    '"size": $body_bytes_sent, '
    '"request": "$request", '
    '"method": "$request_method", '
    '"responsetime": $request_time, '
    '"upstreamhost": "$upstream_addr", '
    '"http_host": "$host", '
    '"url": "$uri", '
    '"referrer": "$http_referer", '
    '"agent": "$http_user_agent", '
    '"status": "$status"} ';
Configure Filebeat
vim /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /data/wwwlogs/*.log
  document_type: ngx1-168
  tail_files: true
  json.keys_under_root: true
  json.add_error_key: true
output.kafka:
  enabled: true
  hosts: ["192.168.188.237:9092","192.168.188.238:9092","192.168.188.239:9092"]
  topic: '%{[type]}'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
  worker: 3
processors:
- drop_fields:
    fields: ["input_type", "beat.hostname", "beat.name", "beat.version", "offset", "source"]
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  rotateeverybytes: 10485760   # = 10MB
  keepfiles: 7
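Before starting, the configuration can be checked for syntax errors (a hedged sketch; -configtest is the flag the 5.x Beats provide):
filebeat -configtest -e -c /etc/filebeat/filebeat.yml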
Detailed Filebeat configuration is covered in the official docs:
https://www.elastic.co/guide/en/beats/filebeat/5.2/index.html
Using Kafka as the log output:
https://www.elastic.co/guide/en/beats/filebeat/5.2/kafka-output.html
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]
  # message topic selection + partitioning
  topic: '%{[type]}'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
Start
chkconfig filebeat on
/etc/init.d/filebeat start
Error handling
[root@localhost ~]# tail -f /var/log/filebeat/filebeat
2017-05-09T15:21:39+08:00 ERR Error decoding JSON: invalid character 'x' in string escape code
$uri lets Nginx alter or rewrite the URL internally, but for log output $request_uri can be used instead; unless there is a special business need it is a drop-in replacement, and it avoids the \x escape sequences that trigger this JSON error.
Reference:
http://www.mamicode.com/info-detail-1368765.html
VII. Verification
1. Watch the topic with a Kafka console consumer
/usr/local/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic ngx1-168
2. Check the indices and shard allocation in elasticsearch-head
八霞篡,部署配置kibana
1,配置啟動(dòng)
cat /etc/kibana/kibana.yml
server.port: 5601
server.host: "192.168.188.191"
elasticsearch.url: "http://192.168.188.191:9200"
chkconfig --add kibana
/etc/init.d/kibana start
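Whether Kibana is up can be checked from the command line (a minimal check; /api/status is Kibana's own status endpoint):
curl -s 'http://192.168.188.191:5601/api/status'
netstat -ntlp | grep 5601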
2世蔗,字段格式
{
  "_index": "filebeat-ngx1-168-2017.05.10",
  "_type": "ngx1-168",
  "_id": "AVvvtIJVy6ssC9hG9dKY",
  "_score": null,
  "_source": {
    "request": "GET /qiche/奧迪A3/ HTTP/1.1",
    "agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36",
    "geoip": {
      "city_name": "Jinhua",
      "timezone": "Asia/Shanghai",
      "ip": "122.226.77.150",
      "latitude": 29.1068,
      "country_code2": "CN",
      "country_name": "China",
      "continent_code": "AS",
      "country_code3": "CN",
      "region_name": "Zhejiang",
      "location": [
        119.6442,
        29.1068
      ],
      "longitude": 119.6442,
      "region_code": "33"
    },
    "method": "GET",
    "type": "ngx1-168",
    "http_host": "www.niubi.com",
    "url": "/qiche/奧迪A3/",
    "referrer": "http://www.niubi.com/qiche/奧迪S6/",
    "upstreamhost": "172.17.4.205:80",
    "@timestamp": "2017-05-10T08:14:00.000Z",
    "size": 10027,
    "beat": {},
    "@version": "1",
    "responsetime": 0.217,
    "clientRealIp": "122.226.77.150",
    "status": "200"
  },
  "fields": {
    "@timestamp": [
      1494404040000
    ]
  },
  "sort": [
    1494404040000
  ]
}
3. Visualizations and dashboards
1) Add AutoNavi (Gaode) map tiles
Edit the Kibana config file kibana.yml and append at the end:
tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'
Adjust the ES template: geo_point fields are not covered by dynamic mapping, so they have to be declared explicitly.
To map geoip.location as geo_point, add an entry under properties in the template, as shown below:
"properties": {
"@version": { "type": "string", "index": "not_analyzed" },
"geoip" ?: {
"type": "object",
"dynamic": true,
"properties": {
"location": { "type": "geo_point" }
}
}
}
4. Install the X-Pack plugin
References:
https://www.elastic.co/guide/en/x-pack/5.2/installing-xpack.html#xpack-installing-offline
https://www.elastic.co/guide/en/x-pack/5.2/setting-up-authentication.html#built-in-users
Note: be sure to change the default passwords.
http://192.168.188.215:5601/app/kibana#/dev_tools/console?load_from=https://www.elastic.co/guide/en/x-pack/5.2/snippets/setting-up-authentication/1.json
http://192.168.188.215:5601/app/kibana#/dev_tools/console?load_from=https://www.elastic.co/guide/en/x-pack/5.2/snippets/setting-up-authentication/2.json
http://192.168.188.215:5601/app/kibana#/dev_tools/console?load_from=https://www.elastic.co/guide/en/x-pack/5.2/snippets/setting-up-authentication/3.json
or
curl -XPUT 'localhost:9200/_xpack/security/user/elastic/_password?pretty' -H 'Content-Type: application/json' -d'
{
"password": "elasticpassword"
}
'
curl -XPUT 'localhost:9200/_xpack/security/user/kibana/_password?pretty' -H 'Content-Type: application/json' -d'
{
"password": "kibanapassword"
}
'
curl -XPUT 'localhost:9200/_xpack/security/user/logstash_system/_password?pretty' -H 'Content-Type: application/json' -d'
{
"password": "logstashpassword"
}
'
Below is the official X-Pack documentation on installing, upgrading and uninstalling. It later turned out that the registered (free) X-Pack license only provides monitoring, so it was not installed.
Installing X-Pack on Offline Machines
The plugin install scripts require direct Internet access to download and install X-Pack. If your server doesn't have Internet access, you can manually download and install X-Pack.
To install X-Pack on a machine that doesn't have Internet access:
Manually download the X-Pack zip file: https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-5.2.2.zip (sha1)
Transfer the zip file to a temporary directory on the offline machine. (Do NOT put the file in the Elasticsearch plugins directory.)
Run bin/elasticsearch-plugin install from the Elasticsearch install directory and specify the location of the X-Pack zip file. For example:
bin/elasticsearch-plugin install file:///path/to/file/x-pack-5.2.2.zip
Note
You must specify an absolute path to the zip file after the file:// protocol.
Run bin/kibana-plugin install from the Kibana install directory and specify the location of the X-Pack zip file. (The plugins for Elasticsearch, Kibana, and Logstash are included in the same zip file.) For example:
bin/kibana-plugin install file:///path/to/file/x-pack-5.2.2.zip
Run bin/logstash-plugin install from the Logstash install directory and specify the location of the X-Pack zip file. (The plugins for Elasticsearch, Kibana, and Logstash are included in the same zip file.) For example:
bin/logstash-plugin install file:///path/to/file/x-pack-5.2.2.zip
Enabling and Disabling X-Pack Features
By default, all X-Pack features are enabled. You can explicitly enable or disable X-Pack features in elasticsearch.yml and kibana.yml:
Setting / Description
xpack.security.enabled
Set to false to disable X-Pack security. Configure in both elasticsearch.yml and kibana.yml.
xpack.monitoring.enabled
Set to false to disable X-Pack monitoring. Configure in both elasticsearch.yml and kibana.yml.
xpack.graph.enabled
Set to false to disable X-Pack graph. Configure in both elasticsearch.yml and kibana.yml.
xpack.watcher.enabled
Set to false to disable Watcher. Configure in elasticsearch.yml only.
xpack.reporting.enabled
Set to false to disable X-Pack reporting. Configure in kibana.yml only.
九冗美、Nginx負(fù)載均衡
1魔种,配置負(fù)載
[root@~# cat /usr/local/nginx/conf/nginx.conf
server
{
    listen      5601;
    server_name 192.168.188.215;
    index index.html index.htm index.shtml;
    location / {
        allow 192.168.188.0/24;
        deny all;
        proxy_pass http://kibanangx_niubi_com;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        auth_basic "Please input Username and Password";
        auth_basic_user_file /usr/local/nginx/conf/.pass_file_elk;
    }
    access_log /data/wwwlogs/access_kibanangx.niubi.com.log access;
}
upstream kibanangx_niubi_com {
    ip_hash;
    server 192.168.188.191:5601;
    server 192.168.188.193:5601;
    server 192.168.188.195:5601;
}
2. Access
http://192.168.188.215:5601/app/kibana#
-------------------------------------------------------------------------------------------------
The perfect dividing line
-------------------------------------------------------------------------------------------------
Optimization notes
ELKB 5.2 cluster optimization plan
一墩衙,優(yōu)化效果
優(yōu)化前
收集日志請(qǐng)求達(dá)到1萬(wàn)/s,延時(shí)10s內(nèi)甲抖,默認(rèn)設(shè)置數(shù)據(jù)10s刷新漆改。
優(yōu)化后
收集日志請(qǐng)求達(dá)到3萬(wàn)/s,延時(shí)10s內(nèi)准谚,默認(rèn)設(shè)置數(shù)據(jù)10s刷新挫剑。(預(yù)估可以滿足最大請(qǐng)求5萬(wàn)/s)
缺點(diǎn):CPU處理能力不足,在dashboard大時(shí)間聚合運(yùn)算是生成儀表視圖會(huì)有超時(shí)現(xiàn)象發(fā)生;另外elasticsarch結(jié)構(gòu)和搜索語(yǔ)法等還有進(jìn)一步優(yōu)化空間柱衔。
二樊破,優(yōu)化步驟
1,內(nèi)存和CPU重新規(guī)劃
1)唆铐,es16CPU ?48G內(nèi)存
2)哲戚,kafka8CPU ? 16G內(nèi)存
3),logstash ? ? ? ? ? ?16CPU ?12G內(nèi)存
2艾岂,kafka優(yōu)化
kafka manager 監(jiān)控觀察消費(fèi)情況
kafka heap size需要修改
logstash涉及kafka的一個(gè)參數(shù)修改
1)顺少,修改jvm內(nèi)存數(shù)
vi /usr/local/kafka/bin/kafka-server-start.sh
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx8G -Xms8G"
export JMX_PORT="8999"
fi
2) Broker parameter configuration
These optimizations are all parameter values in server.properties.
Network and I/O thread tuning:
# max threads the broker uses for handling network requests (default 3; can be set to the number of CPU cores)
num.network.threads=4
# threads the broker uses for disk I/O (default 8; roughly 2x the number of CPU cores)
num.io.threads=8
3) Install Kafka monitoring (kafka-manager)
/data/scripts/kafka-manager-1.3.3.4/bin/kafka-manager
http://192.168.188.215:8099/clusters/ngxlog/consumers
3. Logstash tuning
Logstash needs changes in the following configuration files.
1) Adjust the JVM parameters
vi /usr/share/logstash/config/jvm.options
-Xms2g
-Xmx6g
2)袱蚓,修改logstash.yml
vi /usr/share/logstash/config/logstash.yml
path.data: /var/lib/logstash
pipeline.workers: 16          # number of CPU cores
pipeline.output.workers: 4    # equivalent to the workers setting of the elasticsearch output
pipeline.batch.size: 5000     # set according to QPS, load, etc.
pipeline.batch.delay: 5
path.config: /usr/share/logstash/config/conf.d
path.logs: /var/log/logstash
3)几蜻,修改對(duì)應(yīng)的logstash.conf文件
input文件
vi /usr/share/logstash/config/in-kafka-ngx12-out-es.conf
input {
  kafka {
    bootstrap_servers => "192.168.188.237:9092,192.168.188.238:9092,192.168.188.239:9092"
    group_id => "ngx1"
    topics => ["ngx1-168"]
    codec => "json"
    consumer_threads => 3
    auto_offset_reset => "latest"   # add this line
    # decorate_events => true       # remove this line
  }
}
Filter section
filter {
  mutate {
    gsub => ["message", "\\x", "%"]   # unescape; the URL escaping differs from request etc., this keeps Chinese readable
    # remove_field => ["kafka"]       # remove this line: with decorate_events off, the kafka.{} field is no longer added, so there is nothing to remove
  }
}
Output section
Before:
flush_size => 50000
idle_flush_time => 10
After:
Flush once 80,000 events have accumulated, or every 4 seconds
flush_size => 80000
idle_flush_time => 4
Logstash output after startup (pipeline.max_inflight is 80,000):
[2017-05-16T10:07:02,552][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>16, "pipeline.batch.size"=>5000, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>80000}
[2017-05-16T10:07:02,553][WARN ][logstash.pipeline        ] CAUTION: Recommended inflight events max exceeded! Logstash will run with up to 80000 events in memory in your current configuration. If your message sizes are large this may cause instability with the default heap size. Please consider setting a non-standard heap size, changing the batch size (currently 5000), or changing the number of pipeline workers (currently 16)
4响蓉,elasticsearch優(yōu)化
1),修改jvm參加
vi /etc/elasticsearch/jvm.options
調(diào)整為24g哨毁,最大為虛擬機(jī)內(nèi)存的50%
-Xms24g
-Xmx24g
2)枫甲,修改GC方法(待定,后續(xù)觀察扼褪,該參數(shù)不確定時(shí)不建議修改)
elasticsearch默認(rèn)使用的GC是CMS GC
如果你的內(nèi)存大小超過(guò)6G想幻,CMS是不給力的,容易出現(xiàn)stop-the-world
建議使用G1 GC
Comment out:
JAVA_OPTS="$JAVA_OPTS -XX:+UseParNewGC"
JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC"
JAVA_OPTS="$JAVA_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JAVA_OPTS="$JAVA_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
Replace with:
JAVA_OPTS="$JAVA_OPTS -XX:+UseG1GC"
JAVA_OPTS="$JAVA_OPTS -XX:MaxGCPauseMillis=200"
3)话浇,安裝elasticsearch集群監(jiān)控工具Cerebro
https://github.com/lmenezes/cerebro
Cerebro 時(shí)一個(gè)第三方的 elasticsearch 集群管理軟件脏毯,可以方便地查看集群狀態(tài):
https://github.com/lmenezes/cerebro/releases/download/v0.6.5/cerebro-0.6.5.tgz
安裝后訪問(wèn)地址
http://192.168.188.215:9000/
4) Elasticsearch search parameter tuning (the hard part)
It turned out there was little to do here: the defaults are already good, and bulk size, refresh interval and so on were already configured above.
5) Elasticsearch cluster role optimization
es191, es193 and es195 act only as master + ingest nodes.
es192, es194 and es196 act only as data nodes (each pair of VMs above shares one RAID5 array; performance suffers if all of them are data nodes).
Two more data nodes were added, which improves aggregation performance considerably.
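A sketch of what this looks like in elasticsearch.yml (same format as the cluster configuration earlier; each node needs a restart after the change):
# on es191 / es193 / es195 (master + ingest only)
node.master: true
node.data: false
node.ingest: true
# on es192 / es194 / es196 (data only)
node.master: false
node.data: true
node.ingest: false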
5嗅定,filebeat優(yōu)化
1)自娩,使用json格式輸入,這樣logstash就不需要dcode減輕后端壓力
json.keys_under_root: true
json.add_error_key: true
2)渠退,drop不必要的字段如下
vim /etc/filebeat/filebeat.yml
processors:
- drop_fields:
fields: ["input_type", "beat.hostname", "beat.name", "beat.version", "offset", "source"]
3)忙迁,計(jì)劃任務(wù)刪索引
index默認(rèn)保留5天
cat /data/scripts/delindex.sh
#!/bin/bash
OLDDATE=`date -d -5days +%Y.%m.%d`
echo $OLDDATE
curl -XDELETE http://192.168.188.193:9200/filebeat-ngx1-168-$OLDDATE
curl -XDELETE http://192.168.188.193:9200/filebeat-ngx2-178-$OLDDATE
curl -XDELETE http://192.168.188.193:9200/filebeat-ngx3-188-$OLDDATE
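A matching cron entry might look like this (the schedule below, 01:30 every day, is only an example):
30 1 * * * /bin/bash /data/scripts/delindex.sh >/dev/null 2>&1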