This setup targets environments with many applications spread across many servers, where logs need to be collected centrally so we can tell whether an application is showing signs of risk or other problems.
The underlying principles are easy to find with a quick web search, so I won't explain them here.
Application flow diagram:
Network topology of the deployment:
I. Deployment:
192.168.158.129  zoo01
192.168.158.130  zoo02
192.168.158.132  zoo03
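These hostname-to-IP mappings are assumed to be added to /etc/hosts on every node (DNS would work just as well), for example:
cat >> /etc/hosts <<'EOF'
192.168.158.129 zoo01
192.168.158.130 zoo02
192.168.158.132 zoo03
EOF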
1. ZooKeeper deployment
Download kafka_2.10-0.10.0.0.tgz from the official Kafka site; the ZooKeeper scripts and sample config bundled with Kafka are used to run the ZooKeeper cluster.
mkdir /usr/local/kafkaCluster
cd /usr/local/kafkaCluster
tar zxvf kafka_2.10-0.10.0.0.tgz
mv kafka_2.10-0.10.0.0 kafka01
cd kafka01/config
vim zookeeper.properties
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/kafkaCluster/kafka01/dataDir
clientPort=2181
server.1=zoo01:2888:3888
server.2=zoo02:2888:3888
server.3=zoo03:2888:3888
cd ..
mkdir dataDir zolog
echo '1' > dataDir/myid   # Note: on each node the id must match its server.N entry in the config (1 on zoo01, 2 on zoo02, 3 on zoo03)
Deploy ZooKeeper this way on all three nodes, changing only the myid value.
Start:
cd zolog
nohup /usr/local/kafkaCluster/kafka01/bin/zookeeper-server-start.sh /usr/local/kafkaCluster/kafka01/config/zookeeper.properties &
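A quick sanity check once all three nodes are up (a sketch, assuming nc/netcat is available): ZooKeeper's four-letter commands show whether each node is serving and which one is the leader.
echo ruok | nc zoo01 2181   # should print "imok"
echo stat | nc zoo01 2181   # the "Mode:" line shows leader or follower
# repeat against zoo02 and zoo03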
2. Kafka configuration
vim config/server.properties
broker.id=1   # Note: must be different on each of the three brokers
port=9092
host.name=192.168.158.129
zookeeper.connect=zoo01:2181,zoo02:2181,zoo03:2181
log.dirs=/usr/local/kafkaCluster/kafka01/logs
log.retention.hours=168
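For reference, the other two brokers use the same server.properties; only broker.id and host.name differ (a sketch):
# on zoo02 (192.168.158.130)
broker.id=2
host.name=192.168.158.130
# on zoo03 (192.168.158.132)
broker.id=3
host.name=192.168.158.132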
Start:
mkdir kalog
cd kalog
nohup /usr/local/kafkaCluster/kafka01/bin/kafka-server-start.sh /usr/local/kafkaCluster/kafka01/config/server.properties &
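Once all three brokers are running, you can create the topic that the Logstash pipeline below uses and confirm it is replicated across the cluster (a sketch; the topic name systemlog matches the Logstash configs later in this post):
/usr/local/kafkaCluster/kafka01/bin/kafka-topics.sh --create --zookeeper zoo01:2181,zoo02:2181,zoo03:2181 --replication-factor 3 --partitions 3 --topic systemlog
/usr/local/kafkaCluster/kafka01/bin/kafka-topics.sh --describe --zookeeper zoo01:2181 --topic systemlog   # every partition should list 3 replicas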
3. Elasticsearch deployment
su - elk
# Create the directory
mkdir -p /usr/local/elkCluster
## Extract: z = gunzip, x = extract, v = verbose, f = archive file; -C = target directory
tar zxvf elasticsearch-2.3.5.tar.gz -C /usr/local/elkCluster
cd /usr/local/elkCluster
# Rename the directory
mv elasticsearch-2.3.5 elasticsearch
# Change into elasticsearch/config
cd elasticsearch/config
vim elasticsearch.yml
##cluster name
cluster.name: elk-cluster
##node name
node.name: "elk-node1"
##whether this node is eligible to be elected master
node.master: true
##whether this node stores index data
node.data: true
##default number of shards per index (the default is 5)
index.number_of_shards: 5
##default number of replicas per index (the default is 1)
index.number_of_replicas: 1
##path for index data
path.data: /usr/local/elkCluster/elasticsearch/data
##path for temporary files
path.work: /usr/local/elkCluster/elasticsearch/worker
##path for log files
path.logs: /usr/local/elkCluster/elasticsearch/logs
##path for plugins
path.plugins: /usr/local/elkCluster/elasticsearch/plugins
##bind address, IPv4 or IPv6
network.host: 192.168.158.128
##bootstrap.mlockall: true  -- enable in production: it locks the process memory so swapping cannot hurt performance
##TCP port for inter-node communication
transport.tcp.port: 9300
##whether to compress inter-node TCP traffic
transport.tcp.compress: true
##HTTP port for client traffic
http.port: 9200
##unicast hosts used for node discovery
discovery.zen.ping.unicast.hosts: ["192.168.158.128","192.168.158.131"]
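The second Elasticsearch node (192.168.158.131, listed in discovery.zen.ping.unicast.hosts above) uses the same file; only the node name and bind address change. A sketch, assuming the node is named elk-node2:
node.name: "elk-node2"
network.host: 192.168.158.131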
Start:
cd bin/
./elasticsearch -d
./plugin install mobz/elasticsearch-head
./plugin install lmenezes/elasticsearch-kopf
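To confirm that both nodes joined the cluster, query the cluster health API (a sketch):
curl 'http://192.168.158.128:9200/_cluster/health?pretty'
# expect "number_of_nodes": 2 and a green or yellow "status"
The head plugin UI is then reachable at http://192.168.158.128:9200/_plugin/head/.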
4. Kibana installation
su - elk
tar zxvf kibana-4.5.1-linux-x64.tar.gz
mv kibana-4.5.1-linux-x64 /app/kibana
vim /app/kibana/config/kibana.yml
######################################################
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.158.128:9200"
pid.file: /app/kibana/logs/kibana.pid
#######################################################
Start:
mkdir -p /app/kibana/logs   # the log and pid files configured above are written here
/app/kibana/bin/kibana -c /app/kibana/config/kibana.yml -l /app/kibana/logs/kibana.log > /dev/null 2>&1 &
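A quick check that Kibana is up (run on the Kibana host, assuming curl is available):
curl -I http://localhost:5601/status   # expect an HTTP 200 response
Then open http://<kibana-host>:5601 in a browser and add "systemlog-*" as the index pattern, matching the index name written by the Logstash output below.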
5俗批、logstatsh部署server
## Extract: z = gunzip, x = extract, v = verbose, f = archive file; -C = target directory
tar zxvf logstash-2.3.4.tar.gz -C /usr/local
## Change directory
cd /usr/local
## Rename to logstash
mv logstash-2.3.4 logstash
## Change directory
cd logstash
## Create a directory for the config file
mkdir etc
vim etc/logstash.conf
input {
  kafka {
    zk_connect => "192.168.158.130:2181,192.168.158.129:2181,192.168.158.132:2181"
    topic_id => "systemlog"
    codec => plain
    reset_beginning => false
    consumer_threads => 5
    decorate_events => true
  }
}
output {
  elasticsearch {
    hosts => ["192.168.158.128:9200","192.168.158.131:9200"]
    index => "systemlog-%{+YYYY-MM-dd}"
  }
}
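Before starting, the file can be syntax-checked with Logstash's --configtest flag (available in Logstash 2.x); a sketch:
/usr/local/logstash/bin/logstash agent -f /usr/local/logstash/etc/logstash.conf --configtest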
# Start
nohup /usr/local/logstash/bin/logstash agent -f /usr/local/logstash/etc/logstash.conf &
6. Logstash deployment (client side)
The installation is the same as on the server side; only the config file differs:
input {
  file {
    type => "syslog"
    path => "/var/log/messages"
    discover_interval => 15
    stat_interval => 1
  }
}
output {
  kafka {
    bootstrap_servers => "192.168.158.129:9092,192.168.158.130:9092,192.168.158.132:9092"
    topic_id => "systemlog"   # must match the topic_id consumed by the server-side kafka input
    compression_type => "snappy"
  }
}
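The client is started the same way as the server side (reading /var/log/messages normally requires root). Once both agents are running, new daily indices should show up in Elasticsearch; a sketch, assuming the client uses the same directory layout as the server:
nohup /usr/local/logstash/bin/logstash agent -f /usr/local/logstash/etc/logstash.conf &
curl 'http://192.168.158.128:9200/_cat/indices?v' | grep systemlog   # one systemlog-* index per day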
The editor on this blogging platform is honestly a bit frustrating, but I do like how clean it is!
I'll keep publishing ELK-related articles here and keep pushing the stack forward.