1 System preparation
1.1 Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
1.2 Disable SELinux
[root@localhost ~]# setenforce 0    # disable temporarily
[root@localhost ~]# getenforce
Permissive
[root@localhost ~]# vim /etc/sysconfig/selinux    # disable permanently
Change SELINUX=enforcing to SELINUX=disabled.
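If you prefer not to edit the file by hand, the same change can be made with sed; a minimal sketch:
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/sysconfig/selinux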
1.3 Disable swap
Kubernetes 1.8 and later require swap to be disabled; with the default configuration kubelet will not start otherwise. Either pass the kubelet startup flag --fail-swap-on=false to relax this check, or disable swap on the system:
swapoff -a
Also edit /etc/fstab and comment out the swap mount so swap stays off after a reboot, then confirm with free -m that swap is disabled.
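A sketch of those two steps, assuming the swap entry in /etc/fstab contains the word "swap" (the CentOS default):
sed -i '/ swap / s/^/#/' /etc/fstab
free -m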
1.4 Install Docker
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum makecache fast
sudo yum -y install docker-ce
systemctl enable docker.service
systemctl restart docker
2 Deploying Kubernetes with kubeadm
2.1 Install kubeadm and kubelet
Install kubeadm and kubelet on every node:
# Configure the yum repository
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast
List the installable versions:
yum list kubelet --showduplicates | sort -r
yum list kubeadm --showduplicates | sort -r
Install kubeadm, kubectl, and kubelet:
This deployment uses version 1.12.2.
# Install the specified version
$ yum install -y kubelet-1.12.2-0 kubeadm-1.12.2-0 kubectl-1.12.2-0 ipvsadm
As noted in section 1.3, Kubernetes 1.8 and later require swap to be disabled; otherwise kubelet will not start with the default configuration.
swapoff -a
Also set vm.swappiness=0 (this is added to the sysctl configuration below).
Edit /etc/sysconfig/kubelet and add the following so kubelet will not refuse to start if swap is still enabled:
KUBELET_EXTRA_ARGS=--fail-swap-on=false
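The same setting can be written non-interactively; a minimal sketch, assuming /etc/sysconfig/kubelet contains nothing else you want to keep (the heredoc overwrites the file):
cat <<EOF > /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--fail-swap-on=false
EOF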
# Start kubelet
$ systemctl daemon-reload
$ systemctl enable kubelet && systemctl restart kubelet
After installation, list the images kubeadm needs:
kubeadm config images list
This lists the required images. Because k8s.gcr.io may be unreachable, pull the mirrored copies from my Docker Hub repository instead:
docker pull hyxxy/kube-apiserver:v1.12.2
docker pull hyxxy/kube-controller-manager:v1.12.2
docker pull hyxxy/kube-scheduler:v1.12.2
docker pull hyxxy/kube-proxy:v1.12.2
docker pull hyxxy/pause:3.1
docker pull hyxxy/etcd:3.2.24
docker pull hyxxy/coredns:1.2.2
docker pull hyxxy/flannel:v0.10.0-amd64
docker pull hyxxy/defaultbackend:1.4
docker pull hyxxy/kubernetes-dashboard-amd64:v1.10.0
docker tag hyxxy/kube-apiserver:v1.12.2 k8s.gcr.io/kube-apiserver:v1.12.2
docker tag hyxxy/kube-controller-manager:v1.12.2 k8s.gcr.io/kube-controller-manager:v1.12.2
docker tag hyxxy/kube-scheduler:v1.12.2 k8s.gcr.io/kube-scheduler:v1.12.2
docker tag hyxxy/kube-proxy:v1.12.2 k8s.gcr.io/kube-proxy:v1.12.2
docker tag hyxxy/pause:3.1 k8s.gcr.io/pause:3.1
docker tag hyxxy/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag hyxxy/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
docker tag hyxxy/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag hyxxy/defaultbackend:1.4 k8s.gcr.io/defaultbackend:1.4
docker tag hyxxy/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
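The pull-and-retag sequence for the k8s.gcr.io images can also be written as a loop; a sketch covering the images that map one-to-one (flannel is the exception, since it is retagged for quay.io above):
for img in kube-apiserver:v1.12.2 kube-controller-manager:v1.12.2 kube-scheduler:v1.12.2 \
           kube-proxy:v1.12.2 pause:3.1 etcd:3.2.24 coredns:1.2.2 defaultbackend:1.4 \
           kubernetes-dashboard-amd64:v1.10.0; do
    docker pull hyxxy/$img
    docker tag hyxxy/$img k8s.gcr.io/$img
done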
Configuration
# Configure forwarding-related kernel parameters; without them later steps may fail
$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
# Apply the configuration
$ sysctl --system
# If setting net.bridge.bridge-nf-call-iptables fails, load the br_netfilter module and re-apply
$ modprobe br_netfilter
$ sysctl -p /etc/sysctl.d/k8s.conf
# Load the ipvs-related kernel modules
# They must be reloaded after every reboot (add them to /etc/rc.local to load automatically at boot; see the sketch after this block)
$ modprobe ip_vs
$ modprobe ip_vs_rr
$ modprobe ip_vs_wrr
$ modprobe ip_vs_sh
$ modprobe nf_conntrack_ipv4
# Verify the modules are loaded
$ lsmod | grep ip_vs
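One way to make the module loading persist across reboots is to append the modprobe commands to /etc/rc.local; a sketch (on CentOS 7 the rc.local script must also be executable to run at boot):
cat <<EOF >> /etc/rc.local
modprobe br_netfilter
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
EOF
chmod +x /etc/rc.d/rc.local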
2.2 Initialize the master node
Run the following command directly:
kubeadm init \
--kubernetes-version=v1.12.2 \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=10.1.44.147 \
--ignore-preflight-errors=Swap
After initialization succeeds, the output shows the command for joining nodes to the cluster; save it:
kubeadm join 10.1.44.147:6443 --token 8yuzkk.syj7fwf0lrc1kw65 --discovery-token-ca-cert-hash sha256:39122274dbb31b89dffb55be2f58e94abf07197d67b9bb734b7c11838fdd7cd7
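If the join command is lost, a fresh one can be generated on the master at any time (the default token expires after 24 hours, so this also renews it):
kubeadm token create --print-join-command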
# Adjust the NodePort range limit
vim /etc/kubernetes/manifests/kube-apiserver.yaml
Add the following NodePort setting between --service-cluster-ip-range and --insecure-port:
...
- --service-cluster-ip-range=10.96.0.0/12
- --service-node-port-range=0-32767
- --insecure-port=0
....
# Restart kubelet
systemctl restart kubelet
# If anything went wrong during initialization, reset with:
kubeadm reset
rm -rf /var/lib/cni/ $HOME/.kube/config
# Full reset: reset the Kubernetes services and the network, and delete the network configuration and links (only needed in some cases; the two commands above are usually enough)
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker
The following commands configure kubectl access to the cluster for a regular user (run them after initialization succeeds):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
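Alternatively, as the root user you can point kubectl directly at the admin kubeconfig and verify access:
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes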
Install the Pod network
Create kube-flannel.yml, then run:
kubectl apply -f kube-flannel.yml
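kube-flannel.yml is the upstream flannel manifest. If the node can reach GitHub, one way to obtain it is to download it from the flannel repository (the path below matches the v0.10.0 tag used here and may have moved in newer releases) and then apply it with the command above:
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml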
Allow the master node to run workloads (check and remove the NoSchedule taint on the master):
kubectl describe node node1 | grep Taint
kubectl taint nodes node1 node-role.kubernetes.io/master-
Install the dashboard
Create an empty directory named certs, then run the following command:
kubectl create secret generic kubernetes-dashboard-certs --from-file=certs -n kube-system
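If you want the secret to contain a real (self-signed) certificate instead of being empty, the certs directory can be populated before running the command above; a sketch with illustrative file names:
mkdir -p certs
openssl req -nodes -newkey rsa:2048 -subj "/CN=kubernetes-dashboard" -keyout certs/dashboard.key -out certs/dashboard.csr
openssl x509 -req -sha256 -days 365 -in certs/dashboard.csr -signkey certs/dashboard.key -out certs/dashboard.crt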
Go into the certs directory and create the following two files:
kubernetes-dashboard.yaml
kubernetes-rbac.yaml
Install and start:
# Apply every manifest in the current directory (run inside the certs directory)
kubectl apply -f .
Enable proxy access:
nohup kubectl proxy --address='0.0.0.0' --accept-hosts='^*$' &
If the proxy port is already in use, check it with netstat -nap | grep 8001.
Dashboard access URL:
http://10.1.44.147:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
# Get the login token
kubectl -n kube-system get secret | grep kubernetes-dashboard-token
# the command above prints the secret name (shown here as xxxxxx); describe that secret to read the token
kubectl describe -n kube-system secret/xxxxxx
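The two steps can be combined into one command; a sketch:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-token | awk '{print $1}')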
2.3 Join worker nodes to the cluster (run on each node)
After completing section 2.1 on the node and section 2.2 on the master,
run the join command on the node (the command saved from the successful master initialization):
kubeadm join 10.1.44.147:6443 --token 8yuzkk.syj7fwf0lrc1kw65 --discovery-token-ca-cert-hash sha256:39122274dbb31b89dffb55be2f58e94abf07197d67b9bb734b7c11838fdd7cd7
After joining succeeds, check the node status from the master:
kubectl get nodes
To remove the node node2 from the cluster, run the following commands.
On the master node:
kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2
On node2:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
Resetting a worker node works the same way as resetting the master (remember to start the services again after the reset).
Flannel pod issues
If the flannel pod fails because its CNI configuration is missing, the files can be recreated by hand (the subnet values below should normally match the --pod-network-cidr passed to kubeadm init):
mkdir -p /etc/cni/net.d/
cat <<EOF> /etc/cni/net.d/10-flannel.conf
{"name":"cbr0","type":"flannel","delegate": {"isDefaultGateway": true}}
EOF
mkdir /usr/share/oci-umount/oci-umount.d -p
mkdir /run/flannel/
cat <<EOF> /run/flannel/subnet.env
FLANNEL_NETWORK=172.100.0.0/16
FLANNEL_SUBNET=172.100.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
See also:
https://blog.csdn.net/qq_34857250/article/details/82562514
Other useful commands
View kubelet logs:
journalctl -f -u kubelet
List nodes:
kubectl get nodes
List pods:
kubectl get pods -n kube-system
Delete a pod:
kubectl delete pod tiller-deploy-6f6fd74b68-hmwzp -n kube-system