Prometheus Monitoring
1. Prometheus Overview
Prometheus is itself a monitoring system, split into a server side and an agent side. The server pulls data from the monitored hosts, while the agent side runs a node_exporter whose main job is to collect and expose node-level metrics. To obtain Pod-level metrics, or metrics from applications such as MySQL, the corresponding exporters must also be deployed. The collected data can be queried with PromQL, but because Prometheus is a third-party solution, native Kubernetes cannot interpret Prometheus's custom metrics on its own; the k8s-prometheus-adapter is needed to translate these metric query interfaces into standard Kubernetes custom metrics.
Prometheus is an open-source service monitoring system and time-series database that provides a general data model together with convenient interfaces for data collection, storage, and querying. Its core component, the Prometheus server, periodically pulls data from statically configured targets or from targets discovered automatically via service discovery; when newly pulled data exceeds the configured in-memory buffer, it is persisted to the storage device. The Prometheus component architecture is shown below. As the diagram illustrates, each monitored host exposes its monitoring data through a dedicated exporter and waits for the Prometheus server to scrape it periodically. If alerting rules are defined, the scraped data is evaluated against them; when an alert condition is met, an alert is generated and sent to Alertmanager, which aggregates and routes it. When monitored targets need to push data actively, the Pushgateway component can receive and temporarily store that data until the Prometheus server scrapes it.
Any target must first be registered with the monitoring system before its time-series data can be collected, stored, alerted on, and displayed. Targets can be specified statically through configuration, or managed dynamically by Prometheus via service discovery. The main components are:
Monitoring agents, e.g. node_exporter: collects host metrics across many dimensions, such as load average, CPU, memory, disk, and network.
kubelet (cAdvisor): collects container metrics, which are also the core metrics of Kubernetes; per-container metrics include CPU usage and limits, filesystem read/write limits, memory usage and limits, and network packet send, receive, and drop rates.
API Server: collects API Server performance metrics, including work-queue performance, request rates, and latency.
etcd: collects metrics from the etcd storage cluster.
kube-state-metrics: derives many Kubernetes-level metrics, mainly counters and metadata for resource types, including object totals per type, resource quotas, container status, and Pod label series.
Prometheus can use the Kubernetes API Server directly as a service-discovery system, dynamically discovering and monitoring every monitorable object in the cluster. Note in particular that Pod resources must carry the following annotations before Prometheus can automatically discover them and scrape their built-in metrics:
1) prometheus.io/scrape: whether the target's metrics should be scraped; a boolean, true or false.
2) prometheus.io/path: the URL path to scrape metrics from, usually /metrics.
3) prometheus.io/port: the socket port to scrape metrics from, e.g. 8080.
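For instance, a Service exposing application metrics could carry all three annotations like this (the name, selector, and port below are illustrative, not from this deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-app                    # illustrative name
  annotations:
    prometheus.io/scrape: "true"    # opt this target in to scraping
    prometheus.io/path: "/metrics"  # metrics endpoint path
    prometheus.io/port: "8080"      # port Prometheus should scrape
spec:
  selector:
    app: demo-app
  ports:
  - port: 8080
```

Annotation values are always strings in Kubernetes, so even the boolean and the port must be quoted.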
Additionally, if Prometheus is only expected to generate custom metrics for a backend, deploying just the Prometheus server is enough; it does not even need data persistence. But to build a fully featured monitoring system, administrators also need to deploy node_exporter on every host, other specialized exporters as required, and Alertmanager.
2. Prometheus Monitoring Architecture
3. Prometheus Monitoring Configuration
3.1 First, deploy Prometheus in Kubernetes with the following YAML manifests
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: prometheus-dev-app
namespace: prometheus-exporter-dev-app
labels:
app: prometheus-dev-app
spec:
selector:
matchLabels:
app: prometheus-dev-app
template:
metadata:
labels:
app: prometheus-dev-app
spec:
serviceAccountName: prometheus-dev-app-sa
volumes:
- name: prometheus-dev-pvc
persistentVolumeClaim:
claimName: prometheus-dev-pvc
- name: prometheus-dev-cm
configMap:
name: prometheus-dev-cm
initContainers:
- name: fix-permissions
image: busybox
command: ["chown", "-R", "nobody:nobody", "/prometheus"]
volumeMounts:
- name: prometheus-dev-pvc
mountPath: /prometheus
containers:
- name: prometheus-dev
image: prom/prometheus:v2.24.1
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--storage.tsdb.retention.time=120h"
- "--web.enable-admin-api"
- "--web.enable-lifecycle"
ports:
- name: http
containerPort: 9090
volumeMounts:
- name: prometheus-dev-cm
mountPath: "/etc/prometheus"
- name: prometheus-dev-pvc
mountPath: "/prometheus"
resources:
requests:
cpu: 100m
memory: 512Mi
limits:
cpu: 100m
memory: 2048Mi
PV & PVC
apiVersion: v1
kind: PersistentVolume
metadata:
name: prometheus-dev-pv
labels:
app: prometheus-dev-pv
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 10Gi
storageClassName: local-storage
local:
path: /data/k8s/prometheus
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values: ["es-k8s-app-dev-nd03"]
persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: prometheus-dev-pvc
namespace: prometheus-exporter-dev-app
spec:
selector:
matchLabels:
app: prometheus-dev-pv
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: local-storage
ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: prometheus-dev-app-cr
rules:
- apiGroups:
- ""
resources:
- configmaps
- secrets
- nodes
- pods
- services
- resourcequotas
- replicationcontrollers
- limitranges
- persistentvolumeclaims
- persistentvolumes
- namespaces
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- configmaps
- nodes/metrics
verbs:
- get
- nonResourceURLs:
- /metrics
verbs:
- get
- apiGroups:
- apps
resources:
- statefulsets
- daemonsets
- deployments
- replicasets
verbs:
- list
- watch
- apiGroups:
- batch
resources:
- cronjobs
- jobs
verbs:
- list
- watch
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- list
- watch
- apiGroups:
- authentication.k8s.io
resources:
- tokenreviews
verbs:
- create
- apiGroups:
- authorization.k8s.io
resources:
- subjectaccessreviews
verbs:
- create
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- list
- watch
- apiGroups:
- certificates.k8s.io
resources:
- certificatesigningrequests
verbs:
- list
- watch
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
- volumeattachments
verbs:
- list
- watch
- apiGroups:
- admissionregistration.k8s.io
resources:
- mutatingwebhookconfigurations
- validatingwebhookconfigurations
verbs:
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- networkpolicies
- ingresses
verbs:
- list
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: prometheus-dev-app-crb
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus-dev-app-cr
subjects:
- kind: ServiceAccount
name: prometheus-dev-app-sa
namespace: prometheus-exporter-dev-app
Service
apiVersion: v1
kind: Service
metadata:
name: prometheus-dev-app
namespace: prometheus-exporter-dev-app
labels:
app: prometheus-dev-app
spec:
type: NodePort
ports:
- name: prometheus-dev
port: 9090
targetPort: http
nodePort: 30097
selector:
app: prometheus-dev-app
ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-dev-cm
namespace: prometheus-exporter-dev-app
data:
prometheus.yml: |
global:
scrape_interval: 15s
scrape_timeout: 15s
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
- job_name: 'nodes'
kubernetes_sd_configs:
- role: node
# static_configs:
# - targets: ['10.20.20.100:10250','10.20.20.101:10250','10.20.20.102:10250']
relabel_configs:
- action: replace
source_labels: ['__address__']
regex: '(.*):10250'
replacement: '${1}:9100'
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- job_name: 'kubelet'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- job_name: 'cadvisor'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: replace
source_labels: ['__metrics_path__']
regex: '(.*)'
replacement: '${1}/cadvisor'
target_label: __metrics_path__
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- job_name: 'apiserver'
kubernetes_sd_configs:
- role: endpoints
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: keep
source_labels: ['__address__']
regex: (.*):6443
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- job_name: 'pod'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
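In the 'nodes' job above, the node role discovers each kubelet at port 10250, and the first relabel rule rewrites that address to the node_exporter port 9100. Since Prometheus fully anchors `regex` and only rewrites on a match, the rule is equivalent to this substitution (a sketch for illustration only, not Prometheus's internal implementation):

```python
import re

def relabel_address(address: str) -> str:
    """Mimic the relabel rule: regex '(.*):10250' -> replacement '${1}:9100'."""
    m = re.fullmatch(r"(.*):10250", address)  # Prometheus anchors the regex
    if m:  # action: replace only rewrites __address__ when the regex matches
        return f"{m.group(1)}:9100"
    return address  # non-matching targets keep their original __address__

print(relabel_address("10.20.20.100:10250"))  # -> 10.20.20.100:9100
print(relabel_address("10.20.20.100:8080"))   # unchanged
```

The subsequent labelmap rule then copies every node label onto the target, so node labels become queryable dimensions in PromQL.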
3.2 Deploy kube-state-metrics and node-exporter in Kubernetes
node-exporter
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: dev-app-node-exporter
namespace: prometheus-exporter-dev-app
labels:
app: dev-app-node-exporter
spec:
selector:
matchLabels:
app: dev-app-node-exporter
template:
metadata:
labels:
app: dev-app-node-exporter
spec:
hostPID: true
hostIPC: true
hostNetwork: true
nodeSelector:
kubernetes.io/os: linux
containers:
- name: node-exporter
image: prom/node-exporter:v1.1.1
args:
- --web.listen-address=$(HOSTIP):9100
- --path.procfs=/host/proc
- --path.sysfs=/host/sys
- --path.rootfs=/host/root
- --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
- --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
ports:
- containerPort: 9100
env:
- name: HOSTIP
valueFrom:
fieldRef:
fieldPath: status.hostIP
resources:
requests:
cpu: 150m
memory: 180Mi
limits:
cpu: 150m
memory: 180Mi
securityContext:
runAsNonRoot: true
runAsUser: 65534
volumeMounts:
- name: proc
mountPath: /host/proc
- name: sys
mountPath: /host/sys
- name: root
mountPath: /host/root
mountPropagation: HostToContainer
readOnly: true
tolerations:
- operator: "Exists"
volumes:
- name: proc
hostPath:
path: /proc
- name: dev
hostPath:
path: /dev
- name: sys
hostPath:
path: /sys
- name: root
hostPath:
path: /
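Once the DaemonSet is running, each node serves its metrics at port 9100 in the Prometheus text exposition format. A minimal parser sketch showing what that output looks like (the sample payload is illustrative, and real exposition lines may also carry timestamps, which this simplification ignores):

```python
def parse_metrics(text: str) -> dict:
    """Parse 'name{labels} value' lines of the Prometheus text format (simplified)."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comments and blanks
            continue
        name_part, value = line.rsplit(" ", 1)  # metric name (with labels), then value
        samples[name_part] = float(value)
    return samples

sample = """\
# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.52
node_cpu_seconds_total{cpu="0",mode="idle"} 12345.6
"""
metrics = parse_metrics(sample)
print(metrics["node_load1"])  # 0.52
```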
kube-state-metrics
Reference: https://github.com/kubernetes/kube-state-metrics
Deploy:
git clone https://github.com/kubernetes/kube-state-metrics.git
kubectl apply -f kube-state-metrics/examples/standard
Adjust the corresponding YAML manifests as needed.
3.2践图、搭建Prometheus監(jiān)控
我這里通過docker運(yùn)行
docker run -d -p 9090:9090 --name=prometheus -v /opt/prometheus/new-prometheus.yml:/etc/prometheus/prometheus.yml -v /opt/prometheus/rules/:/etc/prometheus/rules -v /opt/prometheus/prom_job_conf/:/etc/prometheus/prom_job_conf/ prom/prometheus
Prometheus configuration directory structure
.
├── new-prometheus.yml
├── new-prometheus.yml.bak
├── prom_job_conf
│ ├── kubernetes
│ │ ├── demo-cn.json
│ │ ├── dev.json
│ │ ├── pre.json
│ │ └── qa.json
│ ├── mongodb
│ │ ├── demo_cn_mongodb.json
│ │ ├── dev_mongodb.json
│ │ ├── pre_mongodb.json
│ │ └── qa_mongodb.json
│ ├── mysql
│ │ ├── demo_cn_mysql.json
│ │ ├── dev_mysql.json
│ │ ├── pre_mysql.json
│ │ └── qa_mysql.json
│ └── node
│ ├── demo_cn_db.json
│ ├── demo_cn.json
│ ├── dev_db.json
│ ├── dev.json
│ ├── pre_db.json
│ ├── pre.json
│ ├── qa_db.json
│ └── qa.json
└── rules
├── kubernetes.yml
├── mongo.yml
├── mysql.yml
└── node.yml
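Each JSON file under prom_job_conf follows the standard file_sd target-group format: a list of groups, each with a targets array and optional labels attached to every target in the group. A node/dev.json might look like this (the addresses and label below are illustrative):

```json
[
  {
    "targets": ["10.168.101.11:9100", "10.168.101.12:9100"],
    "labels": {
      "env": "dev"
    }
  }
]
```

Because the jobs set refresh_interval: 5m, edits to these files are picked up automatically without restarting Prometheus.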
Prometheus configuration file
global:
scrape_interval: 60s
scrape_timeout: 60s
evaluation_interval: 60s
alerting:
alertmanagers:
- static_configs:
- targets: ["10.168.101.80:9093"]
rule_files:
- "/etc/prometheus/rules/kubernetes.yml"
- "/etc/prometheus/rules/mysql.yml"
- "/etc/prometheus/rules/mongo.yml"
scrape_configs:
- job_name: 'kubernetes'
scrape_interval: 1m
honor_labels: true
metrics_path: '/federate'
params:
'match[]':
- '{job=~"apiserver"}'
- '{job=~"cadvisor"}'
- '{job=~"kubelet"}'
# - '{job=~"nodes"}'
- '{job=~"pod"}'
# - '{job=~"prometheus"}'
# - '{job=~"dev-mysql"}'
# - '{job=~"dev-mongodb"}'
# - '{job=~"qa-mysql"}'
# - '{job=~"qa-mongodb"}'
file_sd_configs:
- files:
- /etc/prometheus/prom_job_conf/kubernetes/*.json
refresh_interval: 5m
- job_name: 'mysql'
scrape_interval: 30s
file_sd_configs:
- files:
- /etc/prometheus/prom_job_conf/mysql/*.json
refresh_interval: 5m
- job_name: 'node'
scrape_interval: 30s
file_sd_configs:
- files:
- /etc/prometheus/prom_job_conf/node/*.json
refresh_interval: 5m
- job_name: 'mongodb'
scrape_interval: 30s
file_sd_configs:
- files:
- /etc/prometheus/prom_job_conf/mongodb/*.json
refresh_interval: 5m
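The 'kubernetes' job above federates from the in-cluster Prometheus via its /federate endpoint: each match[] selector becomes a repeated URL query parameter, and honor_labels keeps the original job labels intact. Building that scrape URL by hand shows what the federating server actually requests (the host is an assumption for illustration; the NodePort 30097 comes from the Service defined earlier):

```python
from urllib.parse import urlencode

matches = ['{job=~"apiserver"}', '{job=~"cadvisor"}', '{job=~"kubelet"}', '{job=~"pod"}']
# Prometheus accepts the same match[] parameter repeated once per selector.
query = urlencode([("match[]", m) for m in matches])
url = f"http://10.168.101.80:30097/federate?{query}"  # assumed host; NodePort from the Service
print(url)
```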