Environment:
k8s-master -- 192.168.1.199 -- 2 CPUs, 2 GB RAM, CentOS 7.x
k8s-node1 -- 192.168.1.220 -- 2 CPUs, 4 GB RAM, CentOS 7.x
k8s-node2 -- 192.168.1.221 -- 2 CPUs, 4 GB RAM, CentOS 7.x
I. System Initialization
Use Ansible to run the setup on all machines at once:
cat /etc/ansible/hosts
[k8s]
192.168.1.199 name=k8s-master
192.168.1.220 name=k8s-node1
192.168.1.221 name=k8s-node2
playbook.yml
---
- hosts: k8s
  gather_facts: no
  tasks:
  - name: Disable the firewall
    systemd: name=firewalld state=stopped enabled=no
  - name: Disable SELinux
    shell: sed -i 's/enforcing/disabled/' /etc/selinux/config
  - name: Disable swap
    shell: sed -ri 's/.*swap.*/#&/' /etc/fstab
  - name: Install ntpdate
    yum: name=ntpdate state=installed
  - name: Sync the time
    shell: ntpdate time.windows.com
  - name: Copy the sysctl tuning script
    copy: src=edit_sysctl.sh dest=/root/
  - name: Run the script
    shell: sh /root/edit_sysctl.sh
  - name: Set the hostname
    tags: hostname
    shell: hostnamectl set-hostname {{ name }}
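The `sed` one-liners in the playbook can be sanity-checked locally before touching real hosts. The snippet below (a standalone check, not part of the playbook) feeds a sample fstab line through the same `sed -r 's/.*swap.*/#&/'` expression used above and confirms that the swap entry comes out commented:

```shell
# Run the playbook's swap-disabling sed against a sample fstab line;
# '&' in the replacement is the whole matched line, so '#' is prepended.
line='/dev/mapper/centos-swap swap swap defaults 0 0'
out=$(printf '%s\n' "$line" | sed -r 's/.*swap.*/#&/')
echo "$out"
# -> #/dev/mapper/centos-swap swap swap defaults 0 0
```

Running `ansible-playbook --syntax-check playbook.yml` first catches YAML indentation mistakes the same way.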
edit_sysctl.sh
#!/bin/bash
# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
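One hedged addition, not in the original script: on some CentOS 7 kernels the net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so `sysctl --system` can fail with "No such file or directory" without it. Loading the module explicitly (and persisting it across reboots) avoids that:

```shell
# Load br_netfilter now, and register it so it loads on every boot;
# the net.bridge.bridge-nf-call-* sysctls only exist while it is loaded.
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
```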
Update the hosts file on every host:
vim /etc/hosts
192.168.1.199 k8s-master
192.168.1.220 k8s-node1
192.168.1.221 k8s-node2
Once the initial setup is done, it is best to reboot each system so that disabling swap takes effect.
II. Install Docker/kubeadm/kubelet [all nodes]
Docker
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce
$ systemctl enable docker && systemctl start docker
Configure a registry mirror
$ cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
Restart Docker
$ systemctl restart docker
$ docker info
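A syntax error in daemon.json stops dockerd from starting at all, so it is worth validating the JSON before the restart. This check is my own addition; it recreates the mirror config in a temp file and runs it through `python3 -m json.tool` (against the real file you would simply run `python3 -m json.tool /etc/docker/daemon.json`):

```shell
# Syntax-check the daemon.json content before installing/restarting;
# json.tool exits non-zero on any JSON parse error.
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool "$tmp" > /dev/null && echo "daemon.json OK"
# -> daemon.json OK
```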
Add the Aliyun YUM repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubeadm, kubelet, and kubectl
$ yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0
$ systemctl enable kubelet
III. Deploy the Kubernetes Master
1. Run the initialization command on the master (192.168.1.199):
kubeadm init \
--apiserver-advertise-address=192.168.1.199 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.19.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=all
Alternatively, put the same options into a config file (vi kubeadm.conf) and run: kubeadm init --config kubeadm.conf --ignore-preflight-errors=all
Parameter notes:
- --apiserver-advertise-address: the address the cluster advertises (the master's IP)
- --image-repository: the default registry k8s.gcr.io is unreachable from mainland China, so the Aliyun mirror repository is specified instead
- --kubernetes-version: the K8s version; must match the packages installed above
- --service-cidr: the cluster-internal virtual network (Service IPs, the unified entry point to Pods)
- --pod-network-cidr: the Pod network; must match the CIDR in the CNI component's YAML deployed below
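For the --config route mentioned above, a kubeadm.conf expressing the same flags might look like the sketch below. This is my reconstruction from the kubeadm.k8s.io/v1beta2 config API used by kubeadm v1.19, not a file from the original write-up, so double-check it against `kubeadm config print init-defaults` before use:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.199
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
```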
2. Copy the kubeconfig that kubectl uses to authenticate to the cluster into the default path:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
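For a root shell, kubeadm's own init output suggests an equivalent shortcut to the copy above: point kubectl at the admin kubeconfig via an environment variable (note this lasts only for the current shell session):

```shell
# Alternative to copying admin.conf into ~/.kube/config (root only):
export KUBECONFIG=/etc/kubernetes/admin.conf
```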
3. Check the node status
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 2m v1.19.0
IV. Join Worker Nodes to the Cluster
On each new node, run the kubeadm join command printed at the end of the kubeadm init output:
kubeadm join 192.168.1.199:6443 --token brjfr2.xtp3ytjalcrw7div \
--discovery-token-ca-cert-hash sha256:fbaf11091afe3792277492d7253fa7dfb81293bdc6ecaac7df0f6213ec4d43c5
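The bootstrap token in the join command expires (by default after 24 hours). If a node is added later and the original token is gone, a fresh, ready-to-run join command can be printed on the master:

```shell
# Create a new bootstrap token and print the matching kubeadm join command.
kubeadm token create --print-join-command
```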
V. Deploy the CNI Network
The nodes are still NotReady; in this situation, look at the kubelet logs first to find the problem.
Since no network component has been deployed yet, it is bound to be a network issue:
k8s-master kubelet[6737]: W0603 15:52:00.865681 6737 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
The Calico component
Calico is a pure Layer 3 data-center networking solution that supports a wide range of platforms, including Kubernetes and OpenStack.
On each compute node, Calico uses the Linux kernel to implement an efficient virtual router (vRouter) that handles packet forwarding, and each vRouter advertises the routes of the workloads running on it to the rest of the Calico network over BGP.
Calico also implements Kubernetes network policy, providing ACL functionality.
Deploy Calico
Download the Calico manifest:
wget https://docs.projectcalico.org/manifests/calico.yaml
Edit the IP pool in calico.yaml:
Change the default 192.168.0.0/16 pool to 10.244.0.0/16 (the --pod-network-cidr specified during kubeadm init at the start).
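The manual edit can also be scripted. The sketch below assumes the manifest carries the pool in the commented-out CALICO_IPV4POOL_CIDR env var, as current calico.yaml manifests do; the exact leading whitespace in your copy may differ, so verify the match before running it against the real file. It is demonstrated here on a two-line sample:

```shell
# Uncomment CALICO_IPV4POOL_CIDR and point it at the kubeadm pod CIDR.
cat > /tmp/calico-snippet.yaml << 'EOF'
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"
EOF
sed -i -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
       -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' /tmp/calico-snippet.yaml
cat /tmp/calico-snippet.yaml
```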
Apply the manifest:
kubectl apply -f calico.yaml
After a short wait, check the node and pod status; everything is healthy:
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 62m v1.19.0
k8s-node1 Ready <none> 24m v1.19.0
k8s-node2 Ready <none> 6m52s v1.19.0
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-97769f7c7-7kwqm 1/1 Running 0 8m38s
calico-node-bqv2g 1/1 Running 0 8m38s
calico-node-jgxln 1/1 Running 0 8m38s
calico-node-t2qh4 1/1 Running 0 6m55s
coredns-6d56c8448f-9mgzw 1/1 Running 0 62m
coredns-6d56c8448f-s6z7b 1/1 Running 0 62m
etcd-k8s-master 1/1 Running 0 62m
kube-apiserver-k8s-master 1/1 Running 0 62m
kube-controller-manager-k8s-master 1/1 Running 0 62m
kube-proxy-hqvkk 1/1 Running 0 6m55s
kube-proxy-nc6l8 1/1 Running 0 24m
kube-proxy-psfcn 1/1 Running 0 62m
kube-scheduler-k8s-master 1/1 Running 0 62m
Note:
If kubeadm init fails, it prints error messages; even after fixing the cause, running kubeadm init again will not succeed, because the first run left the environment in a broken state. Clean up the current environment first, so the initialization starts from a pristine state.
1. Reset the current initialization:
kubeadm reset
2. If the Calico pods never become ready, pull the images manually on each node to see whether the pull succeeds:
grep image calico.yaml   # then pull on each node to see how fast it is
docker pull calico/xxx
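The grep-then-pull step above can be done in one pass. A sketch, assuming the downloaded calico.yaml is in the current directory and its image lines have the usual `image: <ref>` shape:

```shell
# List every image referenced by the manifest, de-duplicated, and pull
# each one so slow or failing pulls show up per node.
grep 'image:' calico.yaml | awk '{print $2}' | sort -u | while read -r img; do
    docker pull "$img"
done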
VI. Test the Kubernetes Cluster
- Verify that Pods run
- Verify Pod-to-Pod network communication
- Verify DNS resolution
Create a pod in the Kubernetes cluster and verify that it runs normally:
$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
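To confirm the Service actually answers from outside the cluster, the assigned NodePort can be looked up and fetched with curl. A sketch, assuming the Service is named nginx as created above and using worker node 192.168.1.220 from the environment table:

```shell
# Read the NodePort chosen for the nginx Service, then request the
# default nginx page through one of the nodes.
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I "http://192.168.1.220:${NODE_PORT}"
```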
VII. Deploy the Dashboard UI
[root@k8s-master ~]# cat kubernertes-dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.3
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
Apply the manifest:
kubectl apply -f kubernertes-dashboard.yaml
Check the pod status:
[root@k8s-master ~]# kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-7b59f7d4df-v79bx 1/1 Running 0 2m52s
kubernetes-dashboard-5dbf55bd9d-hblj7 1/1 Running 0 2m52s
Visit: https://<any node IP>:30001
Create a service account and bind it to the built-in cluster-admin role:
# Create the user
$ kubectl create serviceaccount dashboard-admin -n kube-system
# Grant the user cluster-admin
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Retrieve the user's token
$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Log in to the Dashboard with the token from the output.
A problem hit during this deployment:
After half a day of troubleshooting it felt like a permissions issue, but cleaning the environment and redeploying from scratch did not help.
In the end I opened a second terminal to watch the live logs, ran apply once more to reload the component... and that, of all things, fixed it...