Visualizing logs with filebeat + kafka + graylog + es + mongodb: a walkthrough

Graylog is an open-source tool for log aggregation, analysis, auditing, display, and alerting. It is similar to the ELK stack but simpler. This post covers how to deploy and use Graylog, and walks briefly through how it processes logs.

This is a fairly long post. It uses three machines, which together host a Kafka cluster (2.3), an ES cluster (7.11.2), a MongoDB replica set (4.2), and a Graylog cluster (4.0.2). The logs collected are Kubernetes logs, shipped into Kafka by Filebeat (7.11.2) running as a DaemonSet. Starting from deployment, we will go step by step through how Graylog is set up and how to use it.

Graylog overview

image

Components

As the architecture diagram shows, a Graylog deployment consists of three parts:

  • MongoDB stores the configuration shown in the Graylog console, the cluster state, and other metadata
  • ES stores the log data and serves searches
  • Graylog itself acts as the relay and processing layer

MongoDB and ES need little explanation; their roles are clear. The Graylog-side concepts deserve more detail:

  • Inputs — sources of log data; logs can be pulled via Graylog's Sidecar, or pushed in by Beats, syslog, and the like
  • Extractors — field extraction and conversion: JSON parsing, key-value parsing, timestamp parsing, regex parsing
  • Streams — log classification; rules route matching messages into a chosen stream
  • Indices — persistent storage; configure the index name, retention policy, shard and replica counts, flush interval, etc.
  • Outputs — forwarding; sends a processed Stream on to another Graylog cluster
  • Pipelines — filtering; rules for data cleanup, adding or dropping fields, conditional filtering, custom functions
  • Sidecar — a lightweight log collector manager
  • Lookup Tables — enrichment, e.g. WHOIS lookups and threat intelligence keyed on the source IP
  • Geolocation — maps source IPs to geographic locations for visualization

Workflow

image

Graylog collects logs through an Input, for example from Kafka or Redis, or directly from Filebeat. The Input is then given Extractors, which extract and convert fields from the messages; several Extractors can be attached, and they run in order. Next, each message is matched against Stream rules and saved into the matching Stream, and a Stream can point at a specific index set, so the messages end up in the corresponding ES indices. Once this is in place, you can pick a Stream by name in the console to view its logs.

Installing MongoDB

Following the official documentation, version 4.2.x is installed.

Time synchronization

Install ntpdate:

yum install ntpdate -y
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Add it to cron:

# crontab -e
5 * * * * ntpdate -u ntp.ntsc.ac.cn

Configure the yum repository and install

vim /etc/yum.repos.d/mongodb-org.repo
[mongodb-org-4.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.2.asc

Then install:

yum makecache
yum -y install mongodb-org

Then start it:

systemctl daemon-reload
systemctl enable mongod.service
systemctl start mongod.service
systemctl --type=service --state=active | grep mongod

Edit the config file to enable the replica set

# vim /etc/mongod.conf
# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.


#security:

#operationProfiling:

replication:
  replSetName: graylog-rs  # replica set name

#sharding:

## Enterprise-Only Options

#auditLog:

#snmp:

Initialize the replica set

> use admin;
switched to db admin
> rs.initiate( {
...      _id : "graylog-rs",
...      members: [
...          { _id: 0, host: "10.0.105.74:27017"},
...          { _id: 1, host: "10.0.105.76:27017"},
...          { _id: 2, host: "10.0.105.96:27017"}
...      ]
...  })
{
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1615885669, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1615885669, 1)
}

Check the replica set status

If all went well, the members will hold two roles, one Primary and the rest Secondary, which you can check with:

rs.status()

This returns a lot of information; the relevant part looks like:

        "members" : [
                {
                        "_id" : 0,
                        "name" : "10.0.105.74:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 623,
                        "optime" : {
                                "ts" : Timestamp(1615885829, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2021-03-16T09:10:29Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "electionTime" : Timestamp(1615885679, 1),
                        "electionDate" : ISODate("2021-03-16T09:07:59Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 1,
                        "name" : "10.0.105.76:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 162,
                        "optime" : {
                                "ts" : Timestamp(1615885829, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1615885829, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2021-03-16T09:10:29Z"),
                        "optimeDurableDate" : ISODate("2021-03-16T09:10:29Z"),
                        "lastHeartbeat" : ISODate("2021-03-16T09:10:31.690Z"),
                        "lastHeartbeatRecv" : ISODate("2021-03-16T09:10:30.288Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "10.0.105.74:27017",
                        "syncSourceHost" : "10.0.105.74:27017",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "10.0.105.96:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 162,
                        "optime" : {
                                "ts" : Timestamp(1615885829, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1615885829, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2021-03-16T09:10:29Z"),
                        "optimeDurableDate" : ISODate("2021-03-16T09:10:29Z"),
                        "lastHeartbeat" : ISODate("2021-03-16T09:10:31.690Z"),
                        "lastHeartbeatRecv" : ISODate("2021-03-16T09:10:30.286Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "10.0.105.74:27017",
                        "syncSourceHost" : "10.0.105.74:27017",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ]
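A healthy three-member replica set should report exactly one PRIMARY and two SECONDARY members. Below is a small shell sketch of how a script might count the roles; the sample string stands in for real `rs.status()` output (in practice it could be piped in from `mongo --quiet --eval 'rs.status()'`):

```shell
# Count member roles in captured rs.status() output.
sample='"stateStr" : "PRIMARY"
"stateStr" : "SECONDARY"
"stateStr" : "SECONDARY"'
primaries=$(printf '%s\n' "$sample" | grep -c '"PRIMARY"')
secondaries=$(printf '%s\n' "$sample" | grep -c '"SECONDARY"')
echo "$primaries primary, $secondaries secondary"
```

For this 3-node set, the expected result is 1 primary and 2 secondaries.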

Create users

Run this on any one member:

use admin
db.createUser({user: "admin", pwd: "Root_1234", roles: ["root"]})
db.auth("admin","Root_1234")

Without exiting, create the user Graylog will connect as (the password must match the one used later in mongodb_uri in server.conf):

use graylog
db.createUser({
  user: "graylog",
  pwd: "Graylog_1234",
  roles: [
    { role: "dbOwner", db: "graylog" },
    { role: "readWrite", db: "graylog" }
  ]
})

Generate a keyFile

openssl rand -base64 756 > /var/lib/mongo/access.key

Fix ownership and permissions:

chown -R mongod:mongod /var/lib/mongo/access.key
chmod 600 /var/lib/mongo/access.key

After generating the key, copy it to the other two machines and set the same ownership and permissions there:

scp -r /var/lib/mongo/access.key 10.0.105.76:/var/lib/mongo/

Once copied, update the config file:

# vim /etc/mongod.conf
#add the following settings
security:
  keyFile: /var/lib/mongo/access.key
  authorization: enabled

Apply this on all three machines, then restart the service:

systemctl restart mongod

Then log in and verify two things:

  • authentication succeeds
  • the replica set status is healthy

If both check out, the yum-installed MongoDB 4.2 replica set is ready. Next, the ES cluster.

Deploying the ES cluster

The ES version is the latest at the time of writing: 7.11.x.

System tuning

  1. Kernel parameter tuning
# vim /etc/sysctl.conf
fs.file-max=655360
vm.max_map_count=655360
vm.swappiness=0
  2. Adjust limits
# vim /etc/security/limits.conf
* soft nproc 655350
* hard nproc  655350
* soft nofile 655350
* hard nofile 655350
* hard memlock unlimited
* soft memlock unlimited
  3. Add a regular user
    ES must be started as a non-root user
groupadd es
useradd -g es es
echo 123456 | passwd es --stdin
  4. Install the JDK
yum install -y java-1.8.0-openjdk-devel.x86_64

Set the environment variables:

# vim /etc/profile
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.282.b08-1.el7_9.x86_64/
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin

Upload the tarball

ES download URL: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.11.2-linux-x86_64.tar.gz

Unpack:

tar zxvf elasticsearch-7.11.2-linux-x86_64.tar.gz -C /usr/local/

Change ownership:

chown -R es:es /usr/local/elasticsearch-7.11.2

Configure ES

Cluster settings:

# vim /usr/local/elasticsearch-7.11.2/config/elasticsearch.yml
cluster.name: graylog-cluster
node.name: node03
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["10.0.105.74","10.0.105.76","10.0.105.96"]
cluster.initial_master_nodes: ["10.0.105.74","10.0.105.76"]
http.cors.enabled: true
http.cors.allow-origin: "*"

Set the JVM heap size

-Xms16g  # set to half of the host's physical memory
-Xmx16g
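Rather than hard-coding 16g, the "half of host memory" rule can be computed. A sketch assuming Linux (it reads /proc/meminfo); the 31g cap is my addition, a common guideline to keep compressed object pointers enabled, not something from the original text:

```shell
# Compute half of physical RAM in whole GiB for the -Xms/-Xmx flags.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
half_g=$(( total_kb / 2 / 1024 / 1024 ))
if [ "$half_g" -gt 31 ]; then half_g=31; fi   # stay below the compressed-oops threshold
if [ "$half_g" -lt 1 ]; then half_g=1; fi     # floor for very small machines
heap_flags="-Xms${half_g}g -Xmx${half_g}g"
echo "$heap_flags"
```

The two lines it prints can then be pasted into jvm.options.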

Manage the service with systemd

# vim /usr/lib/systemd/system/elasticsearch.service
[Unit]
Description=elasticsearch server daemon
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=es
Group=es
LimitMEMLOCK=infinity
LimitNOFILE=655350
LimitNPROC=655350
ExecStart=/usr/local/elasticsearch-7.11.2/bin/elasticsearch
Restart=always

[Install]
WantedBy=multi-user.target

Start it and enable it at boot:

systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch

A quick check:

# curl -XGET http://127.0.0.1:9200/_cluster/health?pretty
{
  "cluster_name" : "graylog-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 1,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
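In a health-check script it is handy to pull out just the status field from that response. A minimal sed-based sketch, shown here on a canned response; in a live check the JSON would come from curl -s http://127.0.0.1:9200/_cluster/health:

```shell
# Extract the "status" value ("green"/"yellow"/"red") from cluster health JSON.
health='{"cluster_name":"graylog-cluster","status":"green","timed_out":false}'
status=$(printf '%s' "$health" | sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([a-z]*\)".*/\1/p')
echo "$status"
```

A monitoring script could then alert whenever the value is not green.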

That completes the ES installation.

Deploying the Kafka cluster

These machines are reused and already have Java installed, so that step is not repeated here.

Download the packages

kafka: https://www.dogfei.cn/pkgs/kafka_2.12-2.3.0.tgz
zookeeper: https://www.dogfei.cn/pkgs/apache-zookeeper-3.6.0-bin.tar.gz

Unpack:

tar zxvf kafka_2.12-2.3.0.tgz -C /usr/local/
tar zxvf apache-zookeeper-3.6.0-bin.tar.gz -C /usr/local/

Edit the config files

kafka

# grep -v -E "^#|^$" /usr/local/kafka_2.12-2.3.0/config/server.properties
broker.id=1
listeners=PLAINTEXT://10.0.105.74:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/data/kafka/data
num.partitions=8
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
message.max.bytes=20971520
log.retention.hours=1
log.retention.bytes=1073741824
log.segment.bytes=536870912
log.retention.check.interval.ms=300000
zookeeper.connect=10.0.105.74:2181,10.0.105.76:2181,10.0.105.96:2181
zookeeper.connection.timeout.ms=1000000
zookeeper.sync.time.ms=2000
group.initial.rebalance.delay.ms=0
log.cleaner.enable=true
delete.topic.enable=true

zookeeper

# grep -v -E "^#|^$" /usr/local/apache-zookeeper-3.6.0-bin/conf/zoo.cfg
tickTime=10000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data
clientPort=2181
admin.serverPort=8888
server.1=10.0.105.74:22888:33888
server.2=10.0.105.76:22888:33888
server.3=10.0.105.96:22888:33888

And don't forget to create the corresponding data directories.
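One directory detail worth calling out: each server.N line in zoo.cfg must be matched by a myid file inside dataDir on that node, or ZooKeeper will not start. A sketch (the real path in this deployment is /data/zookeeper/data; a /tmp default is used here so the example runs anywhere, and MYID must be 1, 2, or 3 to match server.1/2/3):

```shell
# Create the ZooKeeper data dir and write this node's myid.
ZK_DATA=${ZK_DATA:-/tmp/zk-demo/data}
MYID=${MYID:-1}              # set per host: 1, 2, or 3
mkdir -p "$ZK_DATA"
echo "$MYID" > "$ZK_DATA/myid"
cat "$ZK_DATA/myid"
```

Run it once on each host with the appropriate MYID before starting the services.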

Add systemd units

kafka

# cat /usr/lib/systemd/system/kafka.service
[Unit]
Description=Kafka
After=zookeeper.service

[Service]
Type=simple
Environment=LOG_DIR=/data/kafka/logs
WorkingDirectory=/usr/local/kafka_2.12-2.3.0
ExecStart=/usr/local/kafka_2.12-2.3.0/bin/kafka-server-start.sh /usr/local/kafka_2.12-2.3.0/config/server.properties
ExecStop=/usr/local/kafka_2.12-2.3.0/bin/kafka-server-stop.sh
Restart=always

[Install]
WantedBy=multi-user.target

zookeeper

# cat /usr/lib/systemd/system/zookeeper.service
[Unit]
Description=zookeeper.service
After=network.target

[Service]
Type=forking
Environment=ZOO_LOG_DIR=/data/zookeeper/logs
ExecStart=/usr/local/apache-zookeeper-3.6.0-bin/bin/zkServer.sh start
ExecStop=/usr/local/apache-zookeeper-3.6.0-bin/bin/zkServer.sh stop
Restart=always

[Install]
WantedBy=multi-user.target

Start the services

systemctl daemon-reload
systemctl start zookeeper
systemctl start kafka
systemctl enable zookeeper
systemctl enable kafka

Deploying Filebeat

Since the logs being collected come from Kubernetes, Filebeat is deployed as a DaemonSet; reference manifests follow.
DaemonSet reference:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: filebeat
  name: filebeat
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
      name: filebeat
    spec:
      affinity: {}
      containers:
      - args:
        - -e
        - -E
        - http.enabled=true
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: docker.elastic.co/beats/filebeat:7.11.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - |
              #!/usr/bin/env bash -e
              curl --fail 127.0.0.1:5066
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: filebeat
        resources:
          limits:
            cpu: "1"
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          privileged: false
          runAsUser: 0
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /usr/share/filebeat/filebeat.yml
          name: filebeat-config
          readOnly: true
          subPath: filebeat.yml
        - mountPath: /usr/share/filebeat/data
          name: data
        - mountPath: /opt/docker/containers/
          name: varlibdockercontainers
          readOnly: true
        - mountPath: /var/log
          name: varlog
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: filebeat
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      tolerations:
      - operator: Exists
      volumes:
      - configMap:
          defaultMode: 384
          name: filebeat-daemonset-config
        name: filebeat-config
      - hostPath:
          path: /opt/docker/containers
          type: ""
        name: varlibdockercontainers
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
        name: data
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate

ConfigMap reference:

apiVersion: v1
data:
  filebeat.yml: |
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log

      # merge multi-line log entries into one event
      multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
      multiline.negate: true
      multiline.match: after
      multiline.timeout: 30
      fields:
        # custom field used downstream to identify k8s logs
        service: k8s-log

      # uncomment to stop collecting host.* fields
      #publisher_pipeline.disable_host: true
      processors:
        - add_kubernetes_metadata:
            # add Kubernetes metadata fields
            default_indexers.enabled: true
            default_matchers.enabled: true
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
        - drop_fields:
            # redundant fields to drop
            fields: ["host", "tags", "ecs", "log", "prospector", "agent", "input", "beat", "offset"]
            ignore_missing: true
    output.kafka:
      hosts: ["10.0.105.74:9092","10.0.105.76:9092","10.0.105.96:9092"]
      topic: "dev-k8s-log"
      compression: gzip
      max_message_bytes: 1000000
kind: ConfigMap
metadata:
  labels:
    app: filebeat
  name: filebeat-daemonset-config
  namespace: default
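The multiline settings in the ConfigMap treat any line starting with a YYYY-MM-DD date as the beginning of a new event and append all other lines (stack traces, wrapped output) to the previous one. A quick check of that regex against two sample lines (the Java stack frame is just illustrative data):

```shell
# Count how many of the sample lines would start a new event.
pattern='^[0-9]{4}-[0-9]{2}-[0-9]{2}'
starts=$(printf '%s\n' \
  '2021-03-16 09:10:29 ERROR boom' \
  '    at com.example.Foo.run(Foo.java:42)' |
  grep -cE "$pattern")
echo "$starts"
```

Only the dated line matches, so the stack frame gets folded into the preceding event, which is the intended behavior.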

Then apply these manifests; once the pods are running, Filebeat is in place.

Deploying the Graylog cluster

Install the repository rpm

rpm -Uvh https://packages.graylog2.org/repo/packages/graylog-4.0-repository_latest.rpm

Install

yum install graylog-server -y

Start it and enable it at boot

systemctl enable graylog-server
systemctl start graylog-server

Generate secrets

Generate two secrets, used respectively as root_password_sha2 and password_secret in the config file:

# echo -n "Enter Password: " && head -1 </dev/stdin | tr -d '\n' | sha256sum | cut -d" " -f1
# pwgen -N 1 -s 40  # if pwgen is missing, install it (on Ubuntu: apt install pwgen) and copy the value over
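If pwgen is unavailable, both values can also be produced with coreutils and openssl alone. A sketch: the admin password below is only an example, and the secret format (40 alphanumeric characters) mirrors the pwgen invocation above:

```shell
# Derive root_password_sha2 and password_secret without pwgen.
PASSWORD=admin   # example only; use your real admin password
root_password_sha2=$(printf '%s' "$PASSWORD" | sha256sum | cut -d' ' -f1)
password_secret=$(openssl rand -base64 48 | tr -d '\n+/=' | cut -c1-40)
echo "$root_password_sha2"
```

The two printed values drop straight into server.conf.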

Edit the config file

# vim /etc/graylog/server/server.conf
is_master = false  # whether this node is the master; exactly one node in the cluster sets this to true
node_id_file = /etc/graylog/server/node-id
password_secret = iMh21uM57Pt2nMHDicInjPvnE8o894AIs7rJj9SW  # the secret generated above
root_password_sha2 = 8d969eef6ecad3c29a3a629280e686cf0c3f5d5a86aff3ca12020c923adc6c92  # the hash generated above
plugin_dir = /usr/share/graylog-server/plugin
http_bind_address = 0.0.0.0:9000
http_publish_uri = http://10.0.105.96:9000/
web_enable = true
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 2
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 5000
output_flush_interval = 120
output_fault_count_threshold = 8
output_fault_penalty_seconds = 120
processbuffer_processors = 20
outputbuffer_processors = 40
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://graylog:Graylog_1234@10.0.105.74:27017,10.0.105.76:27017,10.0.105.96:27017/graylog?replicaSet=graylog-rs
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load = grok-patterns.json
proxied_requests_thread_pool_size = 32
elasticsearch_hosts = http://10.0.105.74:9200,http://10.0.105.76:9200,http://10.0.105.96:9200
elasticsearch_discovery_enabled = true

Note the MongoDB and ES connection settings. Everything here is clustered, so the cluster-style connection strings are used:

mongodb_uri = mongodb://graylog:Graylog_1234@10.0.105.74:27017,10.0.105.76:27017,10.0.105.96:27017/graylog?replicaSet=graylog-rs
elasticsearch_hosts = http://10.0.105.74:9200,http://10.0.105.76:9200,http://10.0.105.96:9200
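With rotation_strategy = count, the rotation settings above bound retention at roughly elasticsearch_max_docs_per_index × elasticsearch_max_number_of_indices documents per index set before the delete strategy kicks in. A quick sanity check of the arithmetic:

```shell
# 20M docs per index, 20 indices kept -> 400M docs retained at most.
max_docs=$(( 20000000 * 20 ))
echo "$max_docs"
```

Scale these two numbers to match your log volume and desired retention window.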

That completes the deployment work. What remains is configuration in the Graylog console, but first Graylog needs to be exposed, for example behind an nginx proxy. A reference nginx config:

user nginx;
worker_processes 2;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 65535;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    include /etc/nginx/conf.d/*.conf;

    upstream graylog_servers {
        server 10.0.105.74:9000;
        server 10.0.105.76:9000;
        server 10.0.105.96:9000;
    }

    server {
        listen       80 default_server;
        server_name  graylog.example.com;  # replace with your own domain
        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Graylog-Server-URL http://$server_name/;
            proxy_pass http://graylog_servers;
        }
    }
}

After that, restart nginx and open the site in a browser. The username is admin; the password is the one you hashed with sha256 earlier.

Ingesting logs into Graylog

Configure the input source

System --> Inputs

image

Raw/Plaintext Kafka ---> Launch new input

image

Set the Kafka and ZooKeeper addresses, set the topic name, and save.

image

All components should show the running state.

image

Create an index set

System --> Indices

image
image

Set the index details: index name, replica count, shard count, retention policy, and rotation strategy.


image

Create Streams

image
image

Add rules


image
image

Save, and that's it; the logs now show up on the home page.

image

Summary

This completes a full deployment walkthrough: first how Graylog is deployed, then a brief look at how to use it. Later posts will explore its other features, such as extracting fields from logs. Stay tuned.
