Monitoring Kubernetes Logs with Elasticsearch

We will install and configure Filebeat to collect log data from the Kubernetes cluster and ship it toward Elasticsearch (in this setup, Filebeat publishes to Kafka and Logstash forwards the events to Elasticsearch). Filebeat is a lightweight log-shipping agent, and it can also be configured with dedicated modules to parse and visualize the log formats of common applications (databases, Nginx, and so on).

Like Metricbeat, Filebeat needs a configuration file that defines the connection to Elasticsearch, the connection to Kibana, and how logs are collected and parsed.
The following resource objects are what we use here for log collection.

1: Filebeat configuration

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-logging
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        path: ${path.config}/inputs.d/*.yml
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false
    #output.logstash:
    #  hosts: ["10.168.101.77:8080","10.168.101.78:8080","10.168.101.79:8080"]

    output.kafka:
      hosts: ["10.168.101.77:9092","10.168.101.78:9092","10.168.101.79:9092"]
      enabled: true
      topic: "dev"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-logging
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-logging
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: elastic/filebeat:7.13.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: 10.168.101.77
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-logging
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-logging
  labels:
    k8s-app: filebeat
---
kubectl apply -f filebeat.yaml 
configmap/filebeat-config unchanged
configmap/filebeat-inputs unchanged
daemonset.apps/filebeat configured
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/filebeat unchanged
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/filebeat unchanged
serviceaccount/filebeat unchanged
kubectl get pods -n kube-logging
NAME             READY   STATUS    RESTARTS   AGE
filebeat-58b8s   1/1     Running   0          91s
filebeat-5sj2c   1/1     Running   0          115s
filebeat-k7qdl   1/1     Running   0          104s
filebeat-xpgrs   1/1     Running   0          81s
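
With the DaemonSet running, each Filebeat pod publishes one JSON document per log line to the Kafka `dev` topic. Below is a simplified, hypothetical sketch of such an event (real documents carry many more fields) and the decode step that `codec => json` performs on the Logstash side:

```python
import json

# Hypothetical, simplified Filebeat event as published to the Kafka
# "dev" topic; real events carry many more fields (host, agent, log, ...).
raw = json.dumps({
    "@timestamp": "2021-06-01T08:00:00.000Z",
    "message": "GET /healthz 200",
    "kubernetes": {                      # added by add_kubernetes_metadata
        "namespace": "es-backend-dev",
        "container": {"name": "api"},
        "pod": {"name": "api-6d4cf56db6-abcde"},
    },
})

# Logstash's `codec => json` does the equivalent of this decode step.
event = json.loads(raw)
print(event["kubernetes"]["namespace"])  # es-backend-dev
```

The nested `kubernetes.*` fields are what the Logstash filter below keys on when choosing a target index.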

2: Logstash configuration

I am running a single-node Logstash here rather than deploying it on Kubernetes. For reference, an equivalent Kubernetes deployment would look like the following (the namespace values in the filter are redacted as "x"):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: kube-logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: elastic/logstash:7.13.0
        volumeMounts:
        - name: config
          mountPath: /opt/logstash/config/containers.conf
          subPath: containers.conf
        env:
        - name: "XPACK_MONITORING_ELASTICSEARCH_URL"
          value: "http://elasticsearch:9200"
        command:
        - "/bin/sh"
        - "-c"
        - "/opt/logstash/bin/logstash -f /opt/logstash/config/containers.conf"
      volumes:
      - name: config
        configMap:
          name: logstash-k8s-config
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: logstash
  name: logstash
  namespace: kube-logging
spec:
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 32003
  selector:
    app: logstash
  type: NodePort

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-k8s-config
  namespace: kube-logging
data:
  containers.conf: |
    input {
      beats {
        port => 8080  # port that Filebeat connects to
      }
    }
    filter {
      mutate {
        remove_field => ["agent","@version","container","[kubernetes][labels]","[kubernetes][node]","[kubernetes][pod][ip]","[kubernetes][pod][uid]","tag","log","stream","_score","_type"]
        }
      if [kubernetes][namespace] == "x" {
        mutate {
          add_field => {
            "[@metadata][target_index]" => "%{[kubernetes][container][name]}-%{+YYYY.MM.dd}"
            #"[@metadata][target_index]" => 'test'
          }
        }
      }
      else if [kubernetes][namespace] == "x" {
        mutate {
          add_field => {
            "[@metadata][target_index]" => "dev-%{[kubernetes][container][name]}-%{+YYYY.MM.dd}"
          }
        }
      }
      else if [kubernetes][namespace] == "x" {
        mutate {
          add_field => {
            "[@metadata][target_index]" => "dev-%{[kubernetes][container][name]}-%{+YYYY.MM.dd}"
          }
        }
      } else if [kubernetes][namespace] == "x" {
        mutate {
          add_field => {
            "[@metadata][target_index]" => "dev-%{[kubernetes][container][name]}-%{+YYYY.MM.dd}"
          }
        }
      }
      else {
        drop {}
      }
    }
    output {
     #   stdout {
     #           codec => rubydebug
     #   }
      elasticsearch {
        hosts => "elasticsearch:9200"
        index => "%{[@metadata][target_index]}"
      }
    }
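
The `mutate`/`remove_field` step above strips noisy top-level fields and selected nested ones (such as `[kubernetes][labels]`) before routing. A minimal Python sketch of that pruning logic (an illustration of the behavior, not Logstash code; field names taken from the config):

```python
# Prune fields from an event dict, supporting both top-level names
# ("agent") and Logstash-style nested paths ("[kubernetes][labels]").
def remove_fields(event, paths):
    for path in paths:
        parts = path.strip("[]").split("][")
        node = event
        for key in parts[:-1]:
            node = node.get(key, {})
        node.pop(parts[-1], None)
    return event

event = {
    "agent": {"type": "filebeat"},
    "@version": "1",
    "message": "hello",
    "kubernetes": {"labels": {"app": "api"}, "namespace": "es-backend-dev"},
}
remove_fields(event, ["agent", "@version", "[kubernetes][labels]"])
```

After pruning, only `message` and the remaining `kubernetes` metadata survive to the output stage.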

Since my Logstash does not run on Kubernetes, here is the actual Logstash configuration file I use:

cat container.conf
input {
  kafka {
    bootstrap_servers => "10.168.101.77:9092,10.168.101.78:9092,10.168.101.79:9092"
    topics => ["dev"]
    consumer_threads => 1
    codec => json
  }
}
filter {
  mutate {
    remove_field => ["agent","@version","container","[kubernetes][labels]","[kubernetes][node]","[kubernetes][pod][ip]","[kubernetes][pod][uid]","tag","log","stream","_score","_type"]
    }
  if [kubernetes][namespace] == "es-backend-dev" {
    mutate {
      add_field => {
        "[@metadata][target_index]" => "dev-%{[kubernetes][container][name]}-%{+YYYY.MM.dd}"
      }
    }
  }
  else if [kubernetes][namespace] == "es-frontend-dev" {
    mutate {
      add_field => {
        "[@metadata][target_index]" => "dev-%{[kubernetes][container][name]}-%{+YYYY.MM.dd}"
      }
    }
  }
  else if [kubernetes][namespace] == "es-influxdb-dev" {
    mutate {
      add_field => {
        "[@metadata][target_index]" => "dev-db%{[kubernetes][container][name]}-%{+YYYY.MM.dd}"
      }
    }
  }
  else if [kubernetes][namespace] == "es-kafka-dev" {
    mutate {
      add_field => {
        "[@metadata][target_index]" => "dev-db%{[kubernetes][container][name]}-%{+YYYY.MM.dd}"
      }
    }
  }
  else if [kubernetes][namespace] == "es-mongodb-dev" {
    mutate {
      add_field => {
        "[@metadata][target_index]" => "dev-db%{[kubernetes][container][name]}-%{+YYYY.MM.dd}"
      }
    }
  }
  else if [kubernetes][namespace] == "es-mqtt-dev" {
    mutate {
      add_field => {
        "[@metadata][target_index]" => "dev-db%{[kubernetes][container][name]}-%{+YYYY.MM.dd}"
      }
    }
  }
  else if [kubernetes][namespace] == "es-mysql-dev" {
    mutate {
      add_field => {
        "[@metadata][target_index]" => "dev-db%{[kubernetes][container][name]}-%{+YYYY.MM.dd}"
      }
    }
  }
  else if [kubernetes][namespace] == "es-redis-dev" {
    mutate {
      add_field => {
        "[@metadata][target_index]" => "dev-db%{[kubernetes][container][name]}-%{+YYYY.MM.dd}"
      }
    }
  }
  else if [kubernetes][namespace] == "websocket-dev" {
    mutate {
      add_field => {
        "[@metadata][target_index]" => "dev-db%{[kubernetes][container][name]}-%{+YYYY.MM.dd}"
      }
    }
  }
  else {
    drop {}
  }
}
output {
  elasticsearch {
    hosts => ["10.168.101.77:9200","10.168.101.78:9200","10.168.101.79:9200"]
    index => "%{[@metadata][target_index]}"
  }
}
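
The if/else chain above is really a routing table: application namespaces map to a `dev-` index, datastore namespaces to a `dev-db` index (note the config keeps `db` glued directly to the container name), and everything else is dropped. A Python sketch of that decision (namespace grouping taken from the config; the function itself is illustrative):

```python
from datetime import date

# Namespaces routed to "dev-<container>-<date>" indices.
APP_NS = {"es-backend-dev", "es-frontend-dev"}
# Namespaces routed to "dev-db<container>-<date>" indices.
DB_NS = {"es-influxdb-dev", "es-kafka-dev", "es-mongodb-dev",
         "es-mqtt-dev", "es-mysql-dev", "es-redis-dev", "websocket-dev"}

def target_index(namespace, container, day=None):
    """Mirror the filter's index choice; None means the event is dropped."""
    day = day or date.today()
    stamp = day.strftime("%Y.%m.%d")   # same shape as %{+YYYY.MM.dd}
    if namespace in APP_NS:
        return f"dev-{container}-{stamp}"
    if namespace in DB_NS:
        return f"dev-db{container}-{stamp}"
    return None

print(target_index("es-backend-dev", "api", date(2021, 6, 1)))
# dev-api-2021.06.01
```

The elasticsearch output then writes each event to `%{[@metadata][target_index]}`; because the routing key lives under `@metadata`, it never appears in the stored document.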

3: Viewing the data in Kibana

Once the indices start filling, create a matching index pattern in Kibana (for example dev-*) to browse and visualize the collected logs.
