For clarity, every command to be executed is marked in bold italics.
Reference articles:
https://www.kubernetes.org.cn/4948.html
https://www.kubernetes.org.cn/5025.html
Environment preparation
Servers
Virtual IP:192.168.3.88
k8s-master,192.168.3.80
k8s-node1,192.168.3.81
k8s-node2,192.168.3.82
k8s-node3,192.168.3.83
k8s-storage1,192.168.3.86
docker-registry,192.168.3.89
Base environment
Based on a minimal install of CentOS-7-x86_64-Minimal-1810.
The work required includes the following:
- update the system
- disable SELinux
- disable the swap partition
- adjust the time zone and synchronize the clock
- upgrade the kernel
After the OS is installed, run the following commands to configure the base environment:
yum update -y
yum install wget net-tools yum-utils vim -y
Switch the yum repos to the Aliyun mirror
-- back up first
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
-- download
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
-- disable the c7-media repo
yum-config-manager --disable c7-media
-- or edit /etc/yum.repos.d/CentOS-Media.repo with vim and set enabled to 0
Clock synchronization
rm -rf /etc/localtime
vim /etc/sysconfig/clock
-- add ZONE="Asia/Shanghai" to the file
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
reboot
Use
date -R
to confirm the offset is +0800; if it is not, repeat the steps above.
-- install the ntp service
yum install ntp -y
-- set the time zone to Asia/Shanghai and enable synchronization
timedatectl set-timezone Asia/Shanghai
timedatectl set-ntp yes
-- check the time to confirm it is in sync
timedatectl
Alternatively, the following commands also synchronize the clock:
yum install -y ntpdate
ntpdate -u ntp.api.bz
Disable SELinux
vim /etc/sysconfig/selinux
Change the SELINUX value to SELINUX=disabled
Disable SELinux/firewalld
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
Passwordless SSH from the master node to the other nodes
First, run the following on every machine:
ssh <the machine's own IP>
exit
On the master node, run the following commands:
ssh-keygen -t rsa
ssh-copy-id 192.168.3.81
ssh-copy-id 192.168.3.82
ssh-copy-id 192.168.3.83
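The three ssh-copy-id calls can also be driven by a loop. A minimal sketch; the node list is this article's, and the real call is left commented out so the loop can be dry-run safely:

```shell
# Worker node IPs from this article; substitute your own.
NODES="192.168.3.81 192.168.3.82 192.168.3.83"
for node in $NODES; do
  echo "copying key to $node"   # dry-run marker
  # ssh-copy-id "$node"        # uncomment on the real master
done
```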
Note: every machine must also be able to ssh to itself without a password.
Disable the swap partition
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
Route bridged packets through iptables; set the core-file path
echo """
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
""" > /etc/sysctl.conf
sysctl -p
Check the kernel version
lsb_release -a
If the command does not exist, install it:
yum install -y redhat-lsb
All machines in the cluster must have distinct MAC addresses, product UUIDs, and hostnames
cat /sys/class/dmi/id/product_uuid
ip link
Fixing "cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory"
Install the required packages first:
yum install -y epel-release
yum install -y conntrack ipvsadm ipset jq sysstat curl iptables
Then apply the following configuration.
Load the modules:
modprobe br_netfilter
modprobe ip_vs
Add the configuration entries (these are sysctl settings, so they go in a sysctl configuration file):
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
EOF
sysctl --system
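modprobe only loads br_netfilter and ip_vs for the current boot. To have them loaded again after a reboot, systemd's modules-load.d mechanism can be used. A sketch, written to a scratch directory here so it is safe to try; on the real hosts the target is /etc/modules-load.d:

```shell
conf_dir=$(mktemp -d)   # use /etc/modules-load.d on the real hosts
cat > "$conf_dir/k8s.conf" <<'EOF'
br_netfilter
ip_vs
EOF
cat "$conf_dir/k8s.conf"
```

kube-proxy in ipvs mode may additionally want ip_vs_rr, ip_vs_wrr and ip_vs_sh; add them to the same file if needed.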
Done.
Upgrade the kernel
Import the GPG key:
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
Install the elrepo yum repository:
rpm -Uvh https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
List the available kernel versions:
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
Upgrade to the latest kernel:
yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y
Or download the 4.20 mainline stable build from https://elrepo.org/linux/kernel/el7/x86_64/RPMS/:
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-ml-4.20.13-1.el7.elrepo.x86_64.rpm
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.20.13-1.el7.elrepo.x86_64.rpm
yum install -y kernel-ml-4.20.13-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.20.13-1.el7.elrepo.x86_64.rpm
Check that the default kernel version is greater than 4.14; otherwise adjust the default boot entry.
grub2-editenv list
See how many kernels are present:
cat /boot/grub2/grub.cfg |grep menuentry
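The grep above lists only the entry titles. grub2-set-default also accepts a numeric index, and the index of each entry can be printed alongside it. A sketch, run here against a sample grub.cfg; on the real host point grub_cfg at /boot/grub2/grub.cfg:

```shell
grub_cfg=$(mktemp)   # use /boot/grub2/grub.cfg on the real host
cat > "$grub_cfg" <<'EOF'
menuentry 'CentOS Linux (4.20.13-1.el7.elrepo.x86_64) 7 (Core)' --class centos {
menuentry 'CentOS Linux (3.10.0-957.el7.x86_64) 7 (Core)' --class centos {
EOF
# Print "index : title" for each menu entry; the index is what
# grub2-set-default N expects.
awk -F\' '/^menuentry /{print i++ " : " $2}' "$grub_cfg"
```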
Set the default boot kernel:
grub2-set-default "CentOS Linux (4.20.13-1.el7.elrepo.x86_64) 7 (Core)"
Or change the boot order directly:
grub2-set-default 0
Reboot to switch to the new kernel:
reboot
Check the kernel:
uname -r
Back up the virtual machine
Snapshot the VM state at this point; if the environment gets broken later, it can be restored directly from this snapshot.
Install Docker
- Docker must be installed on every host
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum install -y docker-ce
Edit the systemd unit file for Docker:
sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service
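The sed call above hardcodes line 13 of docker.service, which silently breaks if a docker-ce update reshapes the unit file. A pattern-anchored variant inserts after the ExecStart line instead; demonstrated here on a scratch copy (the real file is /usr/lib/systemd/system/docker.service):

```shell
unit=$(mktemp)   # stands in for /usr/lib/systemd/system/docker.service
cat > "$unit" <<'EOF'
[Service]
ExecStart=/usr/bin/dockerd -H fd://
ExecReload=/bin/kill -s HUP $MAINPID
EOF
# Insert the iptables rule right after the ExecStart line, wherever it is.
sed -i '/^ExecStart=/a ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT' "$unit"
cat "$unit"
```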
Start Docker:
systemctl daemon-reload
systemctl enable docker
systemctl start docker
Set up a private docker-registry
Configure a Docker registry mirror (only on the registry host, 192.168.3.89 in this article):
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{"registry-mirrors": ["https://7471d7b2.mirror.aliyuncs.com"]}
EOF
systemctl daemon-reload
systemctl restart docker
Install the registry:
docker pull registry:latest
Download link: https://pan.baidu.com/s/1ZdmgnrYGVobc22FX__vYwg (extraction code: 69gq). Place the file on the registry machine, then load and start the image there (assuming the file is under /var/lib/docker):
docker load -i /var/lib/docker/k8s-repo-1.13.0
Run docker images to check the image, then start the container:
docker run --restart=always -d -p 80:5000 --name repo harbor.io:1180/system/k8s-repo:v1.13.0
Or:
docker run --restart=always -d -p 80:5000 --privileged=true --log-driver=none --name registry -v /home/registrydata:/tmp/registry harbor.io:1180/system/k8s-repo:v1.13.0
Open http://192.168.3.89/v2/_catalog in a browser to verify.
- Configure the private registry on all non-registry hosts
mkdir -p /etc/docker
echo -e '{\n"insecure-registries":["k8s.gcr.io", "gcr.io", "quay.io"]\n}' > /etc/docker/daemon.json
systemctl restart docker
Set this to the IP of the machine running the registry:
REGISTRY_HOST="192.168.3.89"
Set the hosts entries:
yes | cp /etc/hosts /etc/hosts_bak
cat /etc/hosts_bak|grep -vE '(gcr.io|harbor.io|quay.io)' > /etc/hosts
echo """ $REGISTRY_HOST gcr.io harbor.io k8s.gcr.io quay.io """ >> /etc/hosts
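The effect of the hosts rewrite can be previewed on a scratch copy of /etc/hosts (the sample contents and scratch path are illustrative; the append is written as a plain echo here):

```shell
REGISTRY_HOST="192.168.3.89"
hosts=$(mktemp)   # stands in for /etc/hosts
printf '127.0.0.1 localhost\n1.2.3.4 gcr.io stale-entry\n' > "$hosts"
yes | cp "$hosts" "${hosts}_bak"
# Drop any old entries for the mirrored registries, then add the new one.
cat "${hosts}_bak" | grep -vE '(gcr.io|harbor.io|quay.io)' > "$hosts"
echo "$REGISTRY_HOST gcr.io harbor.io k8s.gcr.io quay.io" >> "$hosts"
cat "$hosts"
```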
Install and configure Kubernetes (master & worker)
First download https://pan.baidu.com/s/1t3EWAt4AET7JaIVIbz-zHQ (extraction code: djnf) and place it on every master and worker host; I put it under /home.
yum install -y socat keepalived ipvsadm
cd /home/
scp k8s-v1.13.0-rpms.tgz 192.168.3.81:/home
scp k8s-v1.13.0-rpms.tgz 192.168.3.82:/home
scp k8s-v1.13.0-rpms.tgz 192.168.3.83:/home
Then run the following on each machine in turn:
cd /home
tar -xzvf k8s-v1.13.0-rpms.tgz
cd k8s-v1.13.0
rpm -Uvh * --force
systemctl enable kubelet
kubeadm version -o short
- Deploy the HA masters
First check the NIC device name with ifconfig -a; here it is enp0s3.
On 192.168.3.80 run:
cd ~/
echo """
CP0_IP=192.168.3.80
CP1_IP=192.168.3.81
CP2_IP=192.168.3.82
VIP=192.168.3.88
NET_IF=enp0s3
CIDR=10.244.0.0/16
""" > ./cluster-info
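Before launching the installer it is worth sanity-checking cluster-info, since the script falls back to interactive prompts for any missing value. A minimal sketch that re-creates the file as above and verifies every variable is set:

```shell
# Re-create cluster-info with this article's values, then verify it.
cat > ./cluster-info <<'EOF'
CP0_IP=192.168.3.80
CP1_IP=192.168.3.81
CP2_IP=192.168.3.82
VIP=192.168.3.88
NET_IF=enp0s3
CIDR=10.244.0.0/16
EOF
source ./cluster-info
for v in CP0_IP CP1_IP CP2_IP VIP NET_IF CIDR; do
  [ -n "$(eval echo \$$v)" ] || { echo "missing $v"; exit 1; }
done
echo "cluster-info OK: VIP=$VIP on $NET_IF"
```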
bash -c "$(curl -fsSL https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/kubeha-gen.sh)"
This step may take 2 to 10 minutes; before the script starts deploying, you get one chance to review and confirm the installation parameters.
When it finishes, note the join command from the output:
join command:
kubeadm join 192.168.3.88:6443 --token 29ch4f.0zshhxkh0ii4q9ej --discovery-token-ca-cert-hash sha256:a3cb2e754064b1ea4871a12f2a31dd2a776cc32a6dde57e5009cabca520cb56f
- Install helm
To install helm, first download the offline package: https://pan.baidu.com/s/1B7WHuomXOmZKhHai4tV5MA (extraction code: kgzi)
cd /home/
tar -xzvf helm-v2.12.0-linux-amd64.tar
cd linux-amd64
cp helm /usr/local/bin
helm init --service-account=kubernetes-dashboard-admin --skip-refresh --upgrade
helm version
- Join the worker nodes
On each node that should join the cluster, run:
kubeadm join 192.168.3.88:6443 --token 29ch4f.0zshhxkh0ii4q9ej --discovery-token-ca-cert-hash sha256:a3cb2e754064b1ea4871a12f2a31dd2a776cc32a6dde57e5009cabca520cb56f
Mount extended storage
- The content of kubeha-gen.sh is as follows (it is recommended to download the script locally and edit details such as the email address first):
#!/bin/bash
function check_parm()
{
if [ "${2}" == "" ]; then
echo -n "${1}"
return 1
else
return 0
fi
}
if [ -f ./cluster-info ]; then
source ./cluster-info
fi
check_parm "Enter the IP address of master-01: " ${CP0_IP}
if [ $? -eq 1 ]; then
read CP0_IP
fi
check_parm "Enter the IP address of master-02: " ${CP1_IP}
if [ $? -eq 1 ]; then
read CP1_IP
fi
check_parm "Enter the IP address of master-03: " ${CP2_IP}
if [ $? -eq 1 ]; then
read CP2_IP
fi
check_parm "Enter the VIP: " ${VIP}
if [ $? -eq 1 ]; then
read VIP
fi
check_parm "Enter the Net Interface: " ${NET_IF}
if [ $? -eq 1 ]; then
read NET_IF
fi
check_parm "Enter the cluster CIDR: " ${CIDR}
if [ $? -eq 1 ]; then
read CIDR
fi
echo """
cluster-info:
master-01: ${CP0_IP}
master-02: ${CP1_IP}
master-03: ${CP2_IP}
VIP: ${VIP}
Net Interface: ${NET_IF}
CIDR: ${CIDR}
"""
echo -n 'Please print "yes" to continue or "no" to cancel: '
read AGREE
while [ "${AGREE}" != "yes" ]; do
if [ "${AGREE}" == "no" ]; then
exit 0;
else
echo -n 'Please print "yes" to continue or "no" to cancel: '
read AGREE
fi
done
mkdir -p ~/ikube/tls
IPS=(${CP0_IP} ${CP1_IP} ${CP2_IP})
PRIORITY=(100 50 30)
STATE=("MASTER" "BACKUP" "BACKUP")
HEALTH_CHECK=""
for index in 0 1 2; do
HEALTH_CHECK=${HEALTH_CHECK}"""
real_server ${IPS[$index]} 6443 {
weight 1
SSL_GET {
url {
path /healthz
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
"""
done
for index in 0 1 2; do
ip=${IPS[${index}]}
echo """
global_defs {
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state ${STATE[${index}]}
    interface ${NET_IF}
    virtual_router_id 80
    priority ${PRIORITY[${index}]}
    virtual_ipaddress {
        ${VIP}
    }
}
virtual_server ${VIP} 6443 {
delay_loop 6
lb_algo loadbalance
lb_kind DR
nat_mask 255.255.255.0
persistence_timeout 0
protocol TCP
${HEALTH_CHECK}
}
""" > ~/ikube/keepalived-${index}.conf
scp ~/ikube/keepalived-${index}.conf ${ip}:/etc/keepalived/keepalived.conf
ssh ${ip} "
systemctl stop keepalived
systemctl enable keepalived
systemctl start keepalived
kubeadm reset -f
rm -rf /etc/kubernetes/pki/"
done
echo """
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
controlPlaneEndpoint: "${VIP}:6443"
apiServer:
certSANs:
- ${CP0_IP}
- ${CP1_IP}
- ${CP2_IP}
- ${VIP}
networking:
# This CIDR is a Calico default. Substitute or remove for your CNI provider.
podSubnet: ${CIDR}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
""" > /etc/kubernetes/kubeadm-config.yaml
kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
mkdir -p $HOME/.kube
cp -f /etc/kubernetes/admin.conf ${HOME}/.kube/config
kubectl apply -f https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/calico/rbac.yaml
curl -fsSL https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/calico/calico.yaml | sed "s!8.8.8.8!${CP0_IP}!g" | sed "s!10.244.0.0/16!${CIDR}!g" | kubectl apply -f -
JOIN_CMD=$(kubeadm token create --print-join-command)
for index in 1 2; do
ip=${IPS[${index}]}
ssh $ip "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"
scp /etc/kubernetes/pki/ca.crt $ip:/etc/kubernetes/pki/ca.crt
scp /etc/kubernetes/pki/ca.key $ip:/etc/kubernetes/pki/ca.key
scp /etc/kubernetes/pki/sa.key $ip:/etc/kubernetes/pki/sa.key
scp /etc/kubernetes/pki/sa.pub $ip:/etc/kubernetes/pki/sa.pub
scp /etc/kubernetes/pki/front-proxy-ca.crt $ip:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.key $ip:/etc/kubernetes/pki/front-proxy-ca.key
scp /etc/kubernetes/pki/etcd/ca.crt $ip:/etc/kubernetes/pki/etcd/ca.crt
scp /etc/kubernetes/pki/etcd/ca.key $ip:/etc/kubernetes/pki/etcd/ca.key
scp /etc/kubernetes/admin.conf $ip:/etc/kubernetes/admin.conf
scp /etc/kubernetes/admin.conf $ip:~/.kube/config
ssh ${ip} "${JOIN_CMD} --experimental-control-plane"
done
echo "Cluster create finished."
echo """
[req]
distinguished_name = req_distinguished_name
prompt = yes
[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_value = CN
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_value = Dalian
localityName = Locality Name (eg, city)
localityName_value = Haidian
organizationName = Organization Name (eg, company)
organizationName_value = Channelsoft
organizationalUnitName = Organizational Unit Name (eg, section)
organizationalUnitName_value = R & D Department
commonName = Common Name (eg, your name or your server's hostname)
commonName_value = *.multi.io
emailAddress = Email Address
emailAddress_value = lentil1016@gmail.com
""" > ~/ikube/tls/openssl.cnf
openssl req -newkey rsa:4096 -nodes -config ~/ikube/tls/openssl.cnf -days 3650 -x509 -out ~/ikube/tls/tls.crt -keyout ~/ikube/tls/tls.key
kubectl create -n kube-system secret tls ssl --cert ~/ikube/tls/tls.crt --key ~/ikube/tls/tls.key
kubectl apply -f https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/plugin/traefik.yaml
kubectl apply -f https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/plugin/metrics.yaml
kubectl apply -f https://raw.githubusercontent.com/Lentil1016/kubeadm-ha/1.13.0/plugin/kubernetes-dashboard.yaml
echo "Plugin install finished."
echo "Waiting for all pods into 'Running' status. You can press 'Ctrl + c' to terminate this waiting any time you like."
POD_UNREADY=`kubectl get pods -n kube-system 2>&1|awk '{print $3}'|grep -vE 'Running|STATUS'`
NODE_UNREADY=`kubectl get nodes 2>&1|awk '{print $2}'|grep 'NotReady'`
while [ "${POD_UNREADY}" != "" -o "${NODE_UNREADY}" != "" ]; do
sleep 1
POD_UNREADY=`kubectl get pods -n kube-system 2>&1|awk '{print $3}'|grep -vE 'Running|STATUS'`
NODE_UNREADY=`kubectl get nodes 2>&1|awk '{print $2}'|grep 'NotReady'`
done
echo
kubectl get cs
kubectl get nodes
kubectl get pods -n kube-system
echo """
join command:
`kubeadm token create --print-join-command`"""