Preface:
The project currently runs on Rancher-managed Kubernetes, and Rancher's built-in app catalog can deploy an EFK cluster with one click. Production, however, has security requirements, so this EFK cluster needs to be reworked to add username/password authentication.
1. EFK basic setup
The EFK stack here comes from Rancher's built-in app catalog, with the image addresses customized (mirrored through Harbor).
All images are pulled from the official Elastic registry and are version 7.7.1:
Image download address: https://www.docker.elastic.co/
Since the log data is not that important, data persistence was not enabled, which also gives slightly better performance; the downside is that the Elasticsearch data is wiped whenever the workload is redeployed. Rancher's own distributed storage, Longhorn, is now officially released and simple to configure, so if you have the resources, consider putting the data on distributed storage.
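If you do go that route, here is a minimal sketch of what enabling persistence could look like on the StatefulSet below; the storageClassName "longhorn" and the 30Gi size are assumptions, and the official Elasticsearch image keeps its data under /usr/share/elasticsearch/data:
# Added under the StatefulSet spec:
volumeClaimTemplates:
- metadata:
    name: elasticsearch-master
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: longhorn          # assumed Longhorn StorageClass name
    resources:
      requests:
        storage: 30Gi                   # size is a placeholder
# Added to the elasticsearch container:
volumeMounts:
- name: elasticsearch-master
  mountPath: /usr/share/elasticsearch/data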
2. Configuration changes
2.1 Elasticsearch StatefulSet configuration changes:
Changed parameters:
env: ES_JAVA_OPTS has nothing to do with authentication; the default heap is too small and OOMs easily. ELASTIC_USERNAME and ELASTIC_PASSWORD are there for the Elasticsearch cluster health check (the readiness probe).
- name: ES_JAVA_OPTS
value: -Xmx4g -Xms4g
- name: xpack.security.enabled
value: "true"
- name: ELASTIC_USERNAME
value: elastic
- name: ELASTIC_PASSWORD
value: elasticpassword
resources: also unrelated to enabling user authentication; the default resources are too small and OOM easily
resources:
limits:
cpu: "4"
memory: 8Gi
requests:
cpu: 100m
memory: 8Gi
The complete YAML from Rancher is attached:
apiVersion: apps/v1
kind: StatefulSet
metadata:
annotations:
esMajorVersion: "7"
field.cattle.io/publicEndpoints: '[{"addresses":["10.1.99.51"],"port":80,"protocol":"HTTP","serviceName":"efk:elasticsearch-master-headless","ingressName":"efk:elastic-ingress","hostname":"elastic-prod.hlet.com","allNodes":true}]'
creationTimestamp: "2020-06-03T08:34:13Z"
generation: 4
labels:
app: elasticsearch-master
chart: elasticsearch-7.3.0
heritage: Tiller
io.cattle.field/appId: efk
release: efk
name: elasticsearch-master
namespace: efk
resourceVersion: "22963322"
selfLink: /apis/apps/v1/namespaces/efk/statefulsets/elasticsearch-master
uid: 03f40362-4e89-4bd1-b8d3-285a36cbce35
spec:
podManagementPolicy: Parallel
replicas: 5
revisionHistoryLimit: 10
selector:
matchLabels:
app: elasticsearch-master
serviceName: elasticsearch-master-headless
template:
metadata:
creationTimestamp: null
labels:
app: elasticsearch-master
chart: elasticsearch-7.3.0
heritage: Tiller
release: efk
name: elasticsearch-master
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- elasticsearch-master
topologyKey: kubernetes.io/hostname
containers:
- env:
- name: node.name
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: cluster.initial_master_nodes
value: elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2,elasticsearch-master-3,elasticsearch-master-4,
- name: discovery.seed_hosts
value: elasticsearch-master-headless
- name: cluster.name
value: elasticsearch
- name: network.host
value: 0.0.0.0
- name: ES_JAVA_OPTS
value: -Xmx4g -Xms4g
- name: node.data
value: "true"
- name: node.ingest
value: "true"
- name: node.master
value: "true"
- name: xpack.security.enabled
value: "true"
- name: ELASTIC_USERNAME
value: elastic
- name: ELASTIC_PASSWORD
value: elasticpassword
image: 10.1.99.42/ranchercharts/elasticsearch-elasticsearch:7.7.1
imagePullPolicy: IfNotPresent
name: elasticsearch
ports:
- containerPort: 9200
name: http
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
readinessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
# If the node is starting up wait for the cluster to be ready (request params: 'wait_for_status=green&timeout=1s' )
# Once it has started only check that the node itself is responding
START_FILE=/tmp/.es_start_file
http () {
local path="${1}"
if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
else
BASIC_AUTH=''
fi
curl -XGET -s -k --fail ${BASIC_AUTH} http://127.0.0.1:9200${path}
}
if [ -f "${START_FILE}" ]; then
echo 'Elasticsearch is already running, lets check the node is healthy'
http "/"
else
echo 'Waiting for elasticsearch cluster to become cluster to be ready (request params: "wait_for_status=green&timeout=1s" )'
if http "/_cluster/health?wait_for_status=green&timeout=1s" ; then
touch ${START_FILE}
exit 0
else
echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
exit 1
fi
fi
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
resources:
limits:
cpu: "4"
memory: 8Gi
requests:
cpu: 100m
memory: 8Gi
securityContext:
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsUser: 1000
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
initContainers:
- command:
- sysctl
- -w
- vm.max_map_count=262144
image: 10.1.99.42/ranchercharts/elasticsearch-elasticsearch:7.7.1
imagePullPolicy: IfNotPresent
name: configure-sysctl
resources: {}
securityContext:
privileged: true
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1000
terminationGracePeriodSeconds: 120
updateStrategy:
type: RollingUpdate
status:
collisionCount: 0
currentReplicas: 5
currentRevision: elasticsearch-master-85f58497dd
observedGeneration: 4
readyReplicas: 5
replicas: 5
updateRevision: elasticsearch-master-85f58497dd
updatedReplicas: 5
After saving the configuration, the Elasticsearch cluster redeploys automatically.
Note: if the cluster never finishes initializing, delete all the elastic pods in one go so that the nodes re-initialize from scratch.
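One way to do that in a single command, using the app=elasticsearch-master label from the StatefulSet above:
# Delete every Elasticsearch pod at once; the StatefulSet recreates them
kubectl delete pod -n efk -l app=elasticsearch-master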
Once the redeploy is complete, initialize the passwords of the built-in Elastic users:
Log in to any elastic node and run:
elasticsearch-setup-passwords interactive
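If you would rather run it from a workstation, here is a minimal sketch via kubectl; the pod name comes from the StatefulSet above and the binary path assumes the default layout of the official image:
# Run the interactive password setup inside one of the Elasticsearch pods
kubectl exec -it elasticsearch-master-0 -n efk -- bin/elasticsearch-setup-passwords interactive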
At this point the Elasticsearch cluster initialization is complete.
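To double-check that authentication is now enforced, a quick sketch using curl from inside one of the pods (use whatever password you just set for the elastic user):
# Without credentials this should return a 401 security error
kubectl exec elasticsearch-master-0 -n efk -- curl -s http://localhost:9200/
# With credentials the cluster health should come back
kubectl exec elasticsearch-master-0 -n efk -- curl -s -u elastic:elasticpassword "http://localhost:9200/_cluster/health?pretty"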
2.2 Kibana configuration changes
Because the deployment comes from the app catalog, two Services are generated automatically: efk-kibana and kibana-http.
In practice, when wiring the Service into the Ingress, Kibana could not be reached: http://0.0.0.0:5601 worked locally on the Kibana pod, but going through http://efk-kibana:5601 did not. Adding a headless Service, efk-kibana-headless, and pointing the Kibana Ingress at it solved the problem. (Some time later the original Service recovered on its own...)
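For reference, a minimal sketch of the added efk-kibana-headless Service; the selector is assumed to match the chart's Deployment labels (app: kibana, release: efk) and the port mirrors the existing efk-kibana Service:
apiVersion: v1
kind: Service
metadata:
  name: efk-kibana-headless
  namespace: efk
spec:
  clusterIP: None        # headless: DNS resolves straight to the pod IPs
  ports:
  - name: http
    port: 5601
    protocol: TCP
    targetPort: 5601
  selector:
    app: kibana
    release: efk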
[root@hlet-prod-k8s-rancher ~]# kubectl get svc -n efk
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
efk-kibana ClusterIP 10.43.127.11 <none> 5601/TCP 17d
efk-kibana-headless ClusterIP None <none> 5601/TCP 130m
elasticsearch-apm ClusterIP 10.43.238.31 <none> 8200/TCP 52d
elasticsearch-heartbeat ClusterIP 10.43.172.214 <none> 9200/TCP 2d
elasticsearch-master ClusterIP 10.43.21.168 <none> 9200/TCP,9300/TCP 17d
elasticsearch-master-headless ClusterIP None <none> 9200/TCP,9300/TCP 17d
kibana-http ClusterIP 10.43.71.157 <none> 80/TCP 174m
Ingress configuration:
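A hedged sketch of what the Kibana Ingress roughly looks like: the host comes from the publicEndpoints annotation above, and the backend points at the headless Service described earlier; the real object (API version, TLS, paths) may differ:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: efk
spec:
  rules:
  - host: kibana-prod.hlet.com
    http:
      paths:
      - backend:
          serviceName: efk-kibana-headless
          servicePort: 5601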
The Service configs that ship with the chart are not pasted here.
Two parts of the Kibana YAML were changed:
ENV: the two pairs of credentials are, respectively, the username/password for connecting to the Elasticsearch cluster and the pair used by the readiness-probe script
- name: xpack.security.enabled
value: "true"
- name: ELASTICSEARCH_USERNAME
value: kibana
- name: ELASTIC_USERNAME
value: kibana
- name: ELASTICSEARCH_PASSWORD
value: elasticpassword
- name: ELASTIC_PASSWORD
value: elasticpassword
Readiness probe: only the last line was changed; with authentication enabled, the default path keeps returning 404 until you are logged in
readinessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
http () {
local path="${1}"
set -- -XGET -s --fail
if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
set -- "$@" -u "${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
fi
curl -k "$@" "http://localhost:5601${path}"
}
http "/login"
The complete Deployment YAML is attached:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "23"
field.cattle.io/publicEndpoints: '[{"addresses":["10.1.99.51"],"port":80,"protocol":"HTTP","serviceName":"efk:kibana-http","ingressName":"efk:kibana-ingress","hostname":"kibana-prod.hlet.com","allNodes":true}]'
creationTimestamp: "2020-05-26T00:53:53Z"
generation: 49
labels:
app: kibana
io.cattle.field/appId: efk
release: efk
name: efk-kibana
namespace: efk
resourceVersion: "23026049"
selfLink: /apis/apps/v1/namespaces/efk/deployments/efk-kibana
uid: 85017148-3738-46f9-8e29-65d072549a92
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: kibana
release: efk
strategy:
type: Recreate
template:
metadata:
annotations:
cattle.io/timestamp: "2020-06-09T00:17:32Z"
field.cattle.io/ports: '[[{"containerPort":80,"dnsName":"efk-kibana","kind":"ClusterIP","name":"http","protocol":"TCP"}],[{"containerPort":5601,"dnsName":"efk-kibana","kind":"ClusterIP","name":"5601tcp2","protocol":"TCP"}]]'
field.cattle.io/publicEndpoints: '[{"addresses":["10.1.99.51"],"allNodes":true,"hostname":"kibana-prod.hlet.com","ingressId":"efk:kibana-ingress","port":80,"protocol":"HTTP","serviceId":"efk:kibana-http"}]'
creationTimestamp: null
labels:
app: kibana
release: efk
spec:
containers:
- args:
- nginx
- -g
- daemon off;
- -c
- /nginx/nginx.conf
image: rancher/nginx:1.15.8-alpine
imagePullPolicy: IfNotPresent
name: kibana-proxy
ports:
- containerPort: 80
name: http
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /nginx/
name: kibana-nginx
- env:
- name: ELASTICSEARCH_HOSTS
value: http://elasticsearch-master:9200
- name: I18N_LOCALE
value: zh-CN
- name: LOGGING_QUIET
value: "true"
- name: SERVER_HOST
value: 0.0.0.0
- name: xpack.security.enabled
value: "true"
- name: ELASTICSEARCH_USERNAME
value: kibana
- name: ELASTIC_USERNAME
value: kibana
- name: ELASTICSEARCH_PASSWORD
value: elasticpassword
- name: ELASTIC_PASSWORD
value: elasticpassword
image: 10.1.99.42/ranchercharts/kibana-kibana:7.7.1
imagePullPolicy: IfNotPresent
name: kibana
ports:
- containerPort: 5601
name: 5601tcp2
protocol: TCP
readinessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
http () {
local path="${1}"
set -- -XGET -s --fail
if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
set -- "$@" -u "${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
fi
curl -k "$@" "http://localhost:5601${path}"
}
http "/login"
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
resources:
limits:
cpu: "1"
memory: 1Gi
requests:
cpu: 100m
memory: 500Mi
securityContext:
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsUser: 1000
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1000
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
items:
- key: nginx.conf
mode: 438
path: nginx.conf
name: efk-kibana-nginx
name: kibana-nginx
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2020-06-12T07:46:09Z"
lastUpdateTime: "2020-06-12T07:46:09Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2020-06-12T07:29:26Z"
lastUpdateTime: "2020-06-12T07:46:09Z"
message: ReplicaSet "efk-kibana-9884bd66b" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 49
readyReplicas: 1
replicas: 1
updatedReplicas: 1
At this point you can try logging in to Kibana; the login screen:
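Before opening a browser, a quick reachability check is possible; the IP and hostname below come from the publicEndpoints annotation above, and a 200 means the login page is being served (just a sketch of the idea):
# Ask the ingress controller for the Kibana login page
curl -s -o /dev/null -w "%{http_code}\n" -H "Host: kibana-prod.hlet.com" http://10.1.99.51/login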
2.3 APM configuration changes
Since our Elastic stack also uses APM, the APM-related settings need to be updated as well.
Original deployment steps:
APM is not included in the app catalog; the deployment YAML is as follows:
Deployment order:
kubectl create configmap elasticsearch-apm --from-file=apm-server.docker.yml -n efk
kubectl apply -f elasticsearch-apm-server.yaml
apm-server.docker.yml:
apm-server:
host: "0.0.0.0:8200"
kibana.enabled: true
kibana.host: "efk-kibana:5601"
kibana.protocol: "http"
logging.level: warning
output.elasticsearch:
hosts: ["elasticsearch-master-headless:9200"]
elasticsearch-apm-server.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
labels:
app: elasticsearch-apm
name: elasticsearch-apm
namespace: efk
spec:
replicas: 1
revisionHistoryLimit: 2
selector:
matchLabels:
app: elasticsearch-apm
template:
metadata:
labels:
app: elasticsearch-apm
spec:
containers:
- image: 10.1.99.42/docker.elastic.co/apm/apm-server:7.7.1
imagePullPolicy: IfNotPresent
name: elasticsearch-apm
ports:
- containerPort: 8200
protocol: TCP
resources:
limits:
cpu: "1"
requests:
cpu: 25m
memory: 512Mi
volumeMounts:
- mountPath: /usr/share/apm-server/apm-server.yml
name: config
subPath: apm-server.docker.yml
volumes:
- configMap:
defaultMode: 420
name: elasticsearch-apm
name: config
---
apiVersion: v1
kind: Service
metadata:
labels:
app: elasticsearch-apm
name: elasticsearch-apm
namespace: efk
spec:
ports:
- name: elasticsearch-apm
port: 8200
protocol: TCP
selector:
app: elasticsearch-apm
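Once these are applied, a quick way to confirm APM Server is reachable from inside the cluster; the Service name and port come from the manifest above, and apm-server answers GET / with its build and version info:
# Run from any pod in the efk namespace (or use the full cluster DNS name elasticsearch-apm.efk.svc)
curl -s http://elasticsearch-apm:8200/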
Modify the configuration file to support user authentication
Edit the elasticsearch-apm ConfigMap
apm-server.docker.yml:
apm-server:
host: "0.0.0.0:8200"
kibana.enabled: true
kibana.host: "efk-kibana-headless:5601"
kibana.username: "elastic"
kibana.password: "elasticpassword"
kibana.protocol: "http"
logging.level: warning
#logging.level: info
output.elasticsearch:
hosts: ["elasticsearch-master-headless:9200"]
username: "elastic"
password: "elasticpassword"
After the changes, simply redeploy.
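A sketch of one way to push the change from the command line instead of the Rancher UI; it assumes kubectl 1.18+ for --dry-run=client and 1.15+ for rollout restart:
# Regenerate the ConfigMap from the edited file and apply it in place
kubectl create configmap elasticsearch-apm --from-file=apm-server.docker.yml -n efk --dry-run=client -o yaml | kubectl apply -f -
# Restart the Deployment so the pod reloads the new credentials
kubectl rollout restart deployment/elasticsearch-apm -n efk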
2.4 Filebeat configuration changes
Filebeat ships with the app catalog, so just edit the corresponding ConfigMap
Edit the efk-filebeat-config ConfigMap
filebeat.yml:
filebeat.inputs:
- type: docker
containers.ids:
- '*'
processors:
- add_kubernetes_metadata:
in_cluster: true
output.elasticsearch:
hosts: '${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}'
username: "elastic"
password: "elasticpassword"
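Filebeat only reads this file at startup, so restart its DaemonSet after the edit; the DaemonSet name efk-filebeat below is an assumption based on the chart's release name, so check the real name first:
# Confirm the DaemonSet name, then restart it so the pods pick up the new output credentials
kubectl get daemonset -n efk
kubectl rollout restart daemonset/efk-filebeat -n efk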