Preface
This series records and shares the detailed process of building a K8S cluster from binaries. Since the procedure is fairly long, it is split into roughly four posts.
Each K8S Node needs to run kubelet and kube-proxy. This post covers installing these two components on the Node machines, plus the CNI plugin required for networking.
The commands in this post must be executed on both of the prepared Node machines.
Install Docker
You can follow the official documentation: https://docs.docker.com/engine/install/
# Run the removal step first when uninstalling an old version or reinstalling Docker
yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine -y
# Install Docker
yum install -y yum-utils
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io -y
# Check the Docker version
docker version
Start Docker
systemctl enable docker
systemctl start docker
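Later in this post, kubelet-config.yml sets cgroupDriver: cgroupfs, and Docker's cgroup driver must match it or kubelet will refuse to start. A minimal check, assuming a default Docker install (which uses cgroupfs on CentOS 7):
# Should print "cgroupfs"; if it prints "systemd", align /etc/docker/daemon.json or kubelet-config.yml accordingly
docker info --format '{{.CgroupDriver}}'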
Install kubelet
cd /root/kubernetes/resources
tar -zxvf ./kubernetes-node-linux-amd64.tar.gz
mkdir /etc/kubernetes/{ssl,bin} -p
cp kubernetes/node/bin/kubelet ./kubernetes/node/bin/kube-proxy /etc/kubernetes/bin
cd /etc/kubernetes
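Optionally, confirm that the copied binaries run and report the expected version:
# Both should report the same Kubernetes version (v1.18.3 in this series)
/etc/kubernetes/bin/kubelet --version
/etc/kubernetes/bin/kube-proxy --version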
Prepare the kubelet configuration file
vim kubelet
Run the command above and, on k8s-node01, write the following content:
KUBELET_ARGS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes \
--enable-server=true \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--hostname-override=k8s-node01 \
--network-plugin=cni \
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--config=/etc/kubernetes/kubelet-config.yml \
--cert-dir=/etc/kubernetes/ssl"
On k8s-node02, write the following content:
KUBELET_ARGS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes \
--enable-server=true \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--hostname-override=k8s-node02 \
--network-plugin=cni \
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--config=/etc/kubernetes/kubelet-config.yml \
--cert-dir=/etc/kubernetes/ssl"
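Both argument sets log to /var/log/kubernetes via --log-dir, but nothing above creates that directory, so create it first (assuming it does not exist yet):
# Create the log directory referenced by --log-dir
mkdir -p /var/log/kubernetes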
Prepare the bootstrap.kubeconfig file
vim /etc/kubernetes/bootstrap.kubeconfig
Run the command above and write the following content:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
    server: https://192.168.115.131:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: d5c5d767b64db39db132b433e9c45fbc
Note: the token value must be replaced with the token used in the token.csv generated on the Master.
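For reference, assuming the Master post placed the file at /etc/kubernetes/token.csv (the path is an assumption; adjust it to wherever token.csv was created), the token is the first comma-separated field:
# On the Master: print the token column of token.csv (path is an assumption)
awk -F',' '{print $1}' /etc/kubernetes/token.csv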
Prepare the kubelet-config.yml file
vim kubelet-config.yml
Run the command above and write the following content:
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
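failSwapOn: false lets kubelet start even with swap enabled. If you prefer the usual recommendation of running without swap, a small sketch to disable it instead:
# Turn swap off now and keep it off across reboots
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab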
Prepare the kubelet.kubeconfig file
vim kubelet.kubeconfig
Run the command above and write the following content:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
    server: https://192.168.115.131:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth
  user:
    client-certificate: /etc/kubernetes/ssl/kubelet-client-current.pem
    client-key: /etc/kubernetes/ssl/kubelet-client-current.pem
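Note that kubelet-client-current.pem does not exist yet; kubelet writes it into --cert-dir automatically once the bootstrap CSR is approved below. You can verify afterwards with:
# After CSR approval, the client certificate should appear under the cert dir
ls -l /etc/kubernetes/ssl/kubelet-client-*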
Prepare the kubelet service unit file
vim /usr/lib/systemd/system/kubelet.service
Run the command above and write the following content:
[Unit]
Description=Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/etc/kubernetes/bin/kubelet $KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start kubelet:
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl status kubelet
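If kubelet does not reach the active (running) state, its logs are the quickest way to spot a bad path or flag, for example:
# Follow the kubelet logs to diagnose startup failures
journalctl -u kubelet -f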
Issue certificates to the Nodes. On the Master, run:
kubectl get csr
# Output:
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-a-BmW9xMglOXlUdwBjD2QQphXLdu4iwtamEIIbhJKcY 10m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
node-csr-zDDrVyKH7ug8fTUcDjdvDgh-f9rVCyoHuLMGaWbykAQ 10m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
Take the certificate NAMEs from the output and approve them:
kubectl certificate approve node-csr-a-BmW9xMglOXlUdwBjD2QQphXLdu4iwtamEIIbhJKcY
kubectl certificate approve node-csr-zDDrVyKH7ug8fTUcDjdvDgh-f9rVCyoHuLMGaWbykAQ
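With only two Nodes, approving by name is fine; as a convenience, all pending CSRs can also be approved in one pass (a sketch, use with care on shared clusters):
# Approve every CSR that is still pending
kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve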
Check the CSRs again; their CONDITION will now be updated:
kubectl get csr
# Output:
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-a-BmW9xMglOXlUdwBjD2QQphXLdu4iwtamEIIbhJKcY 10m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued
node-csr-zDDrVyKH7ug8fTUcDjdvDgh-f9rVCyoHuLMGaWbykAQ 10m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued
Next, listing the Nodes should return Node information (they remain NotReady until the CNI plugin and Flannel network are deployed later in this post):
kubectl get node
# Output:
NAME STATUS ROLES AGE VERSION
k8s-node01 NotReady <none> 50s v1.18.3
k8s-node02 NotReady <none> 56s v1.18.3
Install kube-proxy
Prepare the kube-proxy configuration file
vim kube-proxy
Run the command above and write the following content:
KUBE_PROXY_ARGS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes \
--config=/etc/kubernetes/kube-proxy-config.yml"
Prepare the kube-proxy-config.yml file
vim /etc/kubernetes/kube-proxy-config.yml
Run the command above and, on k8s-node01, write the following content:
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
hostnameOverride: k8s-node01
clusterCIDR: 10.0.0.0/24
mode: ipvs
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true
On k8s-node02, write the following content:
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
hostnameOverride: k8s-node02
clusterCIDR: 10.0.0.0/24
mode: ipvs
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true
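mode: ipvs requires the IPVS kernel modules on each Node; if they cannot be loaded, kube-proxy falls back to iptables mode. A sketch for loading them (on kernels older than 4.19 replace nf_conntrack with nf_conntrack_ipv4):
# Load the kernel modules needed for IPVS mode
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack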
Prepare the kube-proxy.kubeconfig file
vim /etc/kubernetes/kube-proxy.kubeconfig
Run the command above and write the following content:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
    server: https://192.168.115.131:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate: /etc/kubernetes/ssl/kube-proxy.pem
    client-key: /etc/kubernetes/ssl/kube-proxy-key.pem
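This kubeconfig points at kube-proxy.pem and kube-proxy-key.pem, which were generated on the Master in an earlier post of this series; they, together with ca.pem, must already be present under /etc/kubernetes/ssl on each Node. If they are not, a sketch for copying them (the source path on the Master is an assumption):
# On each Node: copy the CA and kube-proxy client certs from the Master (source path assumed)
scp root@192.168.115.131:/etc/kubernetes/ssl/{ca.pem,kube-proxy.pem,kube-proxy-key.pem} /etc/kubernetes/ssl/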
Prepare the kube-proxy service unit file
vim /usr/lib/systemd/system/kube-proxy.service
Run the command above and write the following content:
[Unit]
Description=Kube-Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
Requires=network.target
[Service]
EnvironmentFile=/etc/kubernetes/kube-proxy
ExecStart=/etc/kubernetes/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start kube-proxy:
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
systemctl status kube-proxy
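To confirm kube-proxy is really running in IPVS mode, you can inspect the virtual server table with ipvsadm (installing it here is optional; kube-proxy itself does not need it):
# Install the ipvsadm tool and list the IPVS rules kube-proxy has programmed
yum install -y ipvsadm
ipvsadm -Ln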
Deploy the CNI network plugin
cd /root/kubernetes/resources
mkdir -p /opt/cni/bin /etc/cni/net.d
tar -zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
Deploy the Flannel cluster network
This step needs to be executed on the Master machine:
cd /root/kubernetes/resources
kubectl apply -f kube-flannel.yml
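Once the manifest is applied, the Flannel DaemonSet pods should start on every node (with the commonly used kube-flannel.yml they land in the kube-system namespace; newer manifests use a dedicated kube-flannel namespace), and the Nodes should then switch from NotReady to Ready:
# Watch the flannel pods come up, then confirm the Nodes become Ready
kubectl get pods -n kube-system -o wide | grep flannel
kubectl get node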
Create the role binding that allows the API server to access the kubelet API (required for commands such as kubectl logs and kubectl exec):
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
Test the K8S cluster
Deploy an nginx Deployment:
kubectl create deployment nginx --image=nginx
# After waiting a few seconds, get the Deployment
kubectl get deployment
You can see that nginx has started successfully:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 7m7s
Note: if startup fails, it is likely that the image pull failed due to network issues. You can check with kubectl describe pod <pod-name>.
Use a Service to expose the Pods running inside the K8S cluster:
kubectl expose deployment nginx --port=80 --type=NodePort
# Get the Service
kubectl get svc
You can see that the Service forwards nginx's port 80 to NodePort 31839:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10h
nginx NodePort 10.0.0.101 <none> 80:31839/TCP 10s
Now, accessing that port on a Node machine reaches nginx successfully:
[root@k8s-node01]# curl 192.168.115.132:31839
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
With that, the fourth part, deploying the Nodes, is complete.
Closing remarks
While building this K8S cluster from binaries, I referred to many fellow bloggers' posts. Because I used the latest versions of K8S and etcd, I ran into quite a few problems, but that is fine; good things take time.
Whenever I hit a problem, I almost always found the root cause and the fix by checking the status and logs of the K8S components.
Most problems were configuration related: either file paths were misconfigured, or the configuration was not compatible with the newer versions.