I. Dashboard Installation
1. Create the YAML definition file
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/kuberneters/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          #- --apiserver-host=http://172.16.214.210:6443
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - nodePort: 30030
    port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
## Notes
1. Change the image to one your nodes can pull. Here an Aliyun registry image is used; it was downloaded in advance and loaded with docker load:
docker load -i kubernetes-dashboard-amd64.tar.gz
2. type: NodePort exposes the Dashboard outside the cluster; the port is 30030.
With NodePort configured, the Dashboard is reachable externally at https://NodeIp:NodePort, in this case on port 30030.
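Once the manifest above is saved (the file is assumed to be named kubernetes-dashboard.yaml, the same name used in the restart step later on), a minimal sketch of deploying it and confirming the NodePort looks like this:
# Create the Dashboard resources and check that the Service exposes port 30030
kubectl create -f kubernetes-dashboard.yaml
kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard
kubectl -n kube-system get svc kubernetes-dashboard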
2. Access test
Open the Dashboard login page in a browser,
e.g. https://172.16.214.210:30030/
The first login fails with a permissions error, because the default service account has almost no RBAC rights.
3. Add an administrator account
cat >> kubernetes-dashboard.yaml << EOF
---
# ------------------- dashboard-admin ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
EOF
# Creates a super-administrator account (bound to cluster-admin) for logging in to the Dashboard
4. Restart the corresponding resources and check their status
4.1 Restart
kubectl replace --force -f kubernetes-dashboard.yaml
4.2 Check the status
[root@master ~]# kubectl get deployment kubernetes-dashboard -n kube-system
[root@master ~]# kubectl get pods -n kube-system -o wide
[root@master ~]# kubectl get services -n kube-system
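If the pod is still being re-created, standard kubectl can wait for the new Deployment to become ready; a small sketch using only the names defined in the manifest above:
kubectl -n kube-system rollout status deployment/kubernetes-dashboard
kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard -o wide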
5. View the token
kubectl describe secrets -n kube-system dashboard-admin
The token is:
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tdmdmNXoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjUxYzUwOWQtZmVmMy0xMWVhLTkyOTEtMDA1MDU2MmE5MzUyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.d4UwRFsQ7Wg_hSLrRaAOFFkjqKwEDQXjlYM2sNMBgjeu7MCLuE_H5xX9BkZnkJtSoLwBrOqsTOVTQk3t3LIlUFHxCOY3CQ8--QoQXPQTcIlzgM_r4gxfQkAX5hYOE2cEFKUOUuaNUqQHfS3zJE0q3CJhIhjGx8LER6-sMhX0qO7ucU1kVdjwAWf9D4aP9XCrt0tfJ-FaFAkbvora2kcKC4GsGw3wEWkbY08ef3VrGVKwmrjrxgaxxH0P8uee4SUXeqt1xZvbsPEBQ1T4vjc8Y8plsZcNOkuxTdvDZQXCn76iTECSKz37a6GIALQV84NfxtTNu-71f2fKkPfBCntxQw
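To extract just the token value non-interactively, a small shell sketch works, assuming the generated secret name still follows the default dashboard-admin-token-xxxxx pattern:
# Look up the dashboard-admin token secret and decode the token field
SECRET=$(kubectl -n kube-system get secret | awk '/^dashboard-admin-token/{print $1}')
kubectl -n kube-system get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d; echo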
6. Access
https://172.16.214.210:30030/
II. Heapster + InfluxDB + Grafana Installation
1. Prepare the required images, upload them to every node, and tag them:
docker load < heapster-amd64.tgz
docker tag f57c75cd7b0a k8s.gcr.io/heapster-amd64:v1.5.3
docker load < heapster-grafana-amd64.tgz
docker tag 8cb3de219af7 k8s.gcr.io/heapster-grafana-amd64:v4.4.3
docker load < heapster-influxdb-amd64.tgz
docker tag 577260d221db k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
2. Prepare the corresponding YAML files
cat > grafana.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: k8s.gcr.io/heapster-grafana-amd64:v4.4.3
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  type: NodePort
  ports:
  - nodePort: 30108
    port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
EOF
cat > influxdb.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - nodePort: 31001
    port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
EOF
# Changes needed in influxdb.yaml:
(1) image: k8s.gcr.io/heapster-influxdb-amd64:v1.3.3 (replace with your own image)
## Note: download links for these images were provided earlier; if you use those, nothing needs to be changed.
(2) Here NodePort exposes the monitoring-influxdb service on host port 31001, so the InfluxDB endpoint is http://[host-ip]:31001. Write this address down; it is used directly when creating heapster and when configuring the Grafana data source.
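Once influxdb.yaml has been applied, the NodePort can be sanity-checked with InfluxDB's /ping endpoint (replace [host-ip] with a real node IP; an HTTP 204 No Content response means the database is up):
curl -i http://[host-ip]:31001/ping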
cat > heapster.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: k8s.gcr.io/heapster-amd64:v1.5.3
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        # - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
        - --sink=influxdb:http://172.16.214.210:31001
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
EOF
# Notes
--source tells heapster where to read cluster information from. See: [https://github.com/kubernetes/heapster/blob/master/docs/source-configuration.md](https://github.com/kubernetes/heapster/blob/master/docs/source-configuration.md)
--sink tells heapster which storage backend to write to; InfluxDB is used here. For other sinks, see: [https://github.com/kubernetes/heapster/blob/master/docs/sink-owners.md](https://github.com/kubernetes/heapster/blob/master/docs/sink-owners.md)
cat > heapster-rbac.yaml << EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
EOF
# Notes
Purpose of heapster-rbac.yaml:
Without heapster-rbac.yaml you will run into permission problems. heapster authenticates to the API server with a service-account token, and heapster.yaml sets serviceAccountName: heapster, but that service account has no permissions by default. The fix is simply to bind it to a role that does have them, which is exactly what heapster-rbac.yaml does: it binds the heapster ServiceAccount to the built-in system:heapster ClusterRole.
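Once heapster-rbac.yaml has been created, a quick way to confirm the binding works is to impersonate the service account with kubectl auth can-i (standard kubectl; system:heapster grants read access to objects such as nodes and pods):
kubectl auth can-i get nodes --as=system:serviceaccount:kube-system:heapster
kubectl auth can-i list pods --as=system:serviceaccount:kube-system:heapster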
# Change the image line in each YAML file to your own image (the files are then created in the sketch after this list):
#grafana.yaml
- name: grafana
  image: k8s.gcr.io/heapster-grafana-amd64:v4.4.3
#heapster.yaml
- name: heapster
  image: k8s.gcr.io/heapster-amd64:v1.5.3
#influxdb.yaml
- name: influxdb
  image: k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
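With the images tagged and the YAML files adjusted, everything can be created in one go. A minimal sketch, assuming all four files sit in the current directory; once heapster has been scraping for a minute or two, kubectl top should start returning node metrics:
kubectl create -f heapster-rbac.yaml -f influxdb.yaml -f grafana.yaml -f heapster.yaml
kubectl -n kube-system get pods -o wide | grep -E 'heapster|influxdb|grafana'
kubectl top node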
3. Heapster configuration
Although Heapster ships with a pre-configured Grafana data source and dashboard, for convenient access the monitoring-grafana service is exposed via NodePort on host port 30108, so Grafana is reachable at http://192.168.245.16:30108. Open it in a browser and point the data source at the InfluxDB address noted above.
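If you prefer not to click through the UI, the data source can also be set through Grafana's HTTP API. This is only a sketch: it assumes the anonymous Admin access enabled in grafana.yaml is still in effect, that heapster writes to its default k8s database, and that the IPs match your environment:
curl -s -X POST http://192.168.245.16:30108/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{"name":"influxdb-k8s","type":"influxdb","access":"proxy","url":"http://172.16.214.210:31001","database":"k8s","isDefault":true}'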
4. Access test
View the cluster overview through the Dashboard:
https://172.16.214.210:30030/
5. View cluster details (CPU, memory, filesystem, network) through Grafana
6. Troubleshooting
After installing the k8s dashboard there were still no graphs; the logs of heapster-565b564d5d-m8tlh showed the following error:
E0308 08:33:05.124339 1 kubelet.go:231] error while getting containers from Kubelet: failed to get all container stats from Kubelet URL "http://192.168.19.137:10255/stats/container/": Post http://192.168.19.137:10255/stats/container/: dial tcp 192.168.19.137:10255: getsockopt: connection refused
Solution
Add --read-only-port=10255 to the kubelet options in /etc/sysconfig/kubelet, i.e. KUBELET_EXTRA_ARGS="--fail-swap-on=false --read-only-port=10255" (do this on every node),
then restart the kubelet service and recreate the affected resources:
systemctl restart kubelet.service
kubectl replace --force -f ./
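After the restart, a quick check that the kubelet's read-only port is actually listening (run on each node; the IP below reuses the one from the error message above):
ss -lnt | grep 10255
curl -s http://192.168.19.137:10255/pods >/dev/null && echo "read-only port reachable"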