Deploying Kubernetes v1.9.0 Offline with kubeadm
References:
http://blog.51cto.com/bestlope/2151855?source=dra
http://www.reibang.com/p/a4847af544de
https://segmentfault.com/a/1190000011764684
1. Deployment Background
I recently needed to study and analyze Service Catalog, so I had to set up a corresponding k8s cluster; the version chosen is v1.9.0.
2. Environment
OS | IP | Role | CPU | Memory | Hostname |
---|---|---|---|---|---|
CentOS 7.4.1708 | 172.16.91.155 | master | 4 | 2G | master |
CentOS 7.4.1708 | 172.16.91.156 | worker | 2 | 1G | slave1 |
CentOS 7.4.1708 | 172.16.91.157 | worker | 2 | 1G | slave2 |
3. Component Deployment Overview
- Component deployment summary

Component | Version | Deployed via | Managed by |
---|---|---|---|
kubeadm | v1.9.0 | rpm | |
kubelet | v1.9.9 | rpm | systemd |
kubectl | v1.9.0 | rpm | |
kube-apiserver | v1.9.0 | kubeadm | pod |
kube-scheduler-master | v1.9.0 | kubeadm | pod |
kube-controller-manager-master | v1.9.0 | kubeadm | pod |
- Overall deployment workflow (so you know what to expect)

The main installation steps are:
- Install Docker
- Import/download the k8s images
- Install kubeadm, kubelet, and kubectl
- Initialize the cluster (master node)
- Deploy the k8s network (using the Calico solution)
- Add nodes (scale out)
4. Environment Preparation (all nodes)
4.1 Install base dependency packages (all nodes)
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl
4.2 Host name mappings (all nodes)

Add the following entries to /etc/hosts on every node:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.91.215 master
172.16.91.216 slave1
172.16.91.217 slave2
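A minimal way to apply these mappings, assuming the entries above and a root shell on each node:

# Append the cluster host mappings to /etc/hosts (run once per node)
cat >> /etc/hosts <<'EOF'
172.16.91.215 master
172.16.91.216 slave1
172.16.91.217 slave2
EOF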
4.3 Passwordless SSH login (on the master node)
- ssh-keygen
- ssh-copy-id root@slave1
- ssh-copy-id root@slave2
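To verify the keys were copied correctly, the following should print the remote hostnames without asking for a password:

# Passwordless login check from the master node
ssh root@slave1 hostname
ssh root@slave2 hostname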
4.4 Disable the firewall (all nodes)
- systemctl stop firewalld
- systemctl disable firewalld
4.5 Disable swap (all nodes)
- swapoff -a
- sed -i 's/.*swap.*/#&/' /etc/fstab
This prevents kubeadm init from aborting with a preflight error complaining that swap is enabled.
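A quick way to confirm swap is completely off:

# The Swap line should show 0 total / 0 used, and /proc/swaps should list no devices
free -m | grep -i swap
cat /proc/swaps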
4.6 Kernel settings (all nodes)
4.6.1 Configure the netfilter bridge module
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
ls /proc/sys/net/bridge
Purpose: this prevents kubeadm from reporting routing warnings.
4.6.2 Enable IPv4 forwarding (all nodes)
# run the following command
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
# apply the setting
sysctl -p
If IPv4 forwarding is not enabled, kubeadm will fail a preflight check about IPv4 forwarding when you try to join worker nodes to the cluster.
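To verify the setting took effect:

# Should print: net.ipv4.ip_forward = 1
sysctl net.ipv4.ip_forward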
4.6.3 Raise system resource limits
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
Or, as a single chained command:
echo "* soft nofile 65536" >> /etc/security/limits.conf &&
echo "* hard nofile 65536" >> /etc/security/limits.conf &&
echo "* soft nproc 65536" >> /etc/security/limits.conf &&
echo "* hard nproc 65536" >> /etc/security/limits.conf &&
echo "* soft memlock unlimited" >> /etc/security/limits.conf &&
echo "* hard memlock unlimited" >> /etc/security/limits.conf
4.6.4 Disable SELinux (all nodes)
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
4.7 Configure NTP (all nodes)
systemctl enable ntpdate.service
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service
4.7.1 If the system time does not match the actual time, fix it as follows
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
ntpdate us.pool.ntp.org
date
5. Download the Deployment Package
- Download the deployment package (on the physical machine)

Link: https://pan.baidu.com/s/1fwBxEzOdtD5WpFlo_kMmCw  Password: zfup

I downloaded it to the /root directory.
- Copy the package to the other (worker) nodes (from the master node):

scp k8s.tar.gz slave1:/root/
scp k8s.tar.gz slave2:/root/
- Extract the package (all nodes):

tar -zxvf k8s.tar.gz
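Judging from the paths used in section 6 below, the archive should unpack into /root/k8s with at least an image/ and an rpm/ subdirectory (and, presumably, the calico.yaml used later). A quick sanity check:

# List the unpacked contents
ls -l /root/k8s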
6. Deployment
6.1 Install Docker
For the Docker installation itself, refer to other articles. The version currently in use is:
[root@slave2 ~]# docker version
Client:
Version: 17.03.2-ce
API version: 1.27
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 02:21:36 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.2-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 02:21:36 2017
OS/Arch: linux/amd64
Experimental: false
6.1.1 Configure a registry mirror (all nodes)

If you do not have one, you can register on Alibaba Cloud to obtain your own registry mirror address:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://xxxxx.mirror.aliyuncs.com"]
}
EOF
6.1.2 Start the Docker service (all nodes)
sudo systemctl daemon-reload
sudo systemctl restart docker
6.2 Import the images (all nodes)
cd /root/k8s/image
find . -name "*.tar" -exec docker image load -i {} \;
find . -name "*.tar.gz" -exec docker image load -i {} \;
6.3 Install kubeadm, kubectl, and kubelet from RPM packages (all nodes)
cd /root/k8s/rpm
rpm -ivh socat-1.7.3.2-2.el7.x86_64.rpm
rpm -ivh kubernetes-cni-0.6.0-0.x86_64.rpm kubelet-1.9.9-9.x86_64.rpm kubectl-1.9.0-0.x86_64.rpm
rpm -ivh kubeadm-1.9.0-0.x86_64.rpm
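A quick check that the tools are installed and on the expected versions:

# Versions should match the table in section 3
kubeadm version
kubectl version --client
rpm -qa | grep -E 'kubeadm|kubelet|kubectl|kubernetes-cni'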
6.3.1 Update the kubelet configuration file (all nodes)

- Check the value of Docker's Cgroup Driver:

docker info | grep "Cgroup Driver"

- Update the kubelet configuration file so its cgroup driver matches Docker's (all nodes):

sed -i 's#systemd#cgroupfs#g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
- Restart the kubelet service (all nodes); a quick verification follows the commands:

systemctl daemon-reload
systemctl enable kubelet
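To confirm the drop-in file now matches Docker's cgroup driver, both of the following should report cgroupfs:

# Compare the kubelet flag with Docker's setting
grep -- '--cgroup-driver' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
docker info 2>/dev/null | grep 'Cgroup Driver'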
- Enable kubectl command completion (master node only, optional):

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
6.4 Initialize the cluster (master node)
kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.224.0.0/16 --token-ttl=0 --ignore-preflight-errors=all
If initialization succeeds, the output ends with a success message and a kubeadm join command; save that join command, as it is needed in section 6.6 to add the worker nodes.
6.4.1 Initialization keeps reporting: [kubelet-check] It seems like the kubelet isn't running or healthy.

If initialization always fails with this error:

- Method 1: refer to the following article:
  https://segmentfault.com/a/1190000011707194
- Method 2: check whether the kubelet process on the master node started correctly (run on the master node):
- journalctl -u kubelet -n100
- rm -rf /etc/kubernetes/pki
- systemctl restart kubelet
- kubeadm reset
- kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.224.0.0/16 --token-ttl=0 --ignore-preflight-errors=all
6.4.2 Recovering from a failed initialization (two methods) (master node)

- Method 1 (recommended; simple and straightforward):

kubeadm reset

- Method 2:
rm -rf /etc/kubernetes/*.conf
rm -rf /etc/kubernetes/manifests/*.yaml
docker ps -a |awk '{print $1}' |xargs docker rm -f
systemctl stop kubelet
6.4.3 Configure the kubectl credentials (master node)

Set up the kubectl configuration file:

- For a non-root user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- For the root user:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
After this step, kubectl will no longer report errors like:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
6.4.4 Quick sanity checks

- Check the master node status:

kubectl get node

- Check the pods in the kube-system namespace:

kubectl get pod -n kube-system -o wide

- Check the component status:

kubectl get componentstatus

- Check the kubelet status:

systemctl status kubelet
6.5 Deploy the k8s network: install the Calico plugin so that pods can communicate with each other

- Modify the CALICO_IPV4POOL_CIDR value in calico.yaml so that it matches the --pod-network-cidr (10.224.0.0/16) used during kubeadm init; see the sketch below.
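A minimal sketch of that edit, assuming your calico.yaml uses the common default pool of 192.168.0.0/16 (check the file first and adjust the pattern if yours differs):

# Replace Calico's default pod CIDR with the one passed to kubeadm init
sed -i 's#192.168.0.0/16#10.224.0.0/16#g' calico.yaml
# Confirm the change
grep -A1 'CALICO_IPV4POOL_CIDR' calico.yaml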
- Because the etcd image provided in the offline package is tagged differently from the one defined in calico.yaml, re-tag it (master node):

docker tag 1406502a6459 quay.io/coreos/etcd:v3.1.10

- Deploy the Calico service:

kubectl create -f calico.yaml

- Check the pod status:

kubectl get pod -n kube-system

- Check the node status:

kubectl get node
6.6 Scale out the cluster (add the worker nodes slave1 and slave2 to it)

- Log in to slave1 and slave2 respectively and run the join command printed by kubeadm init (note: replace the address, token, and hash with your own values), for example:

kubeadm join 172.16.91.135:6443 --token yj2qxf.s4fjen6wgcmo506w --discovery-token-ca-cert-hash sha256:6d7d90a6ce931a63c96dfe9327691e0e6caa3f69082a9dc374c3643d0d685eb9

If you have forgotten the token, you can regenerate the join command with the following command (run on the master node):
kubeadm token create --print-join-command
- Check the pod status again:
kubectl get pods --all-namespaces -owide
- Check the node status:
kubectl get node
7. DNS Service Test

- Prepare a test manifest, pod-for-dns.yaml (note: the busybox version matters; some versions fail this test):

apiVersion: v1
kind: Pod
metadata:
  name: dns-test
  namespace: default
spec:
  containers:
  - image: busybox:1.28.4
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: dns-test
  restartPolicy: Always
- Create the pod:
kubectl create -f pod-for-dns.yaml
- Test the DNS service:

[root@master ~]# kubectl exec dns-test -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 webapp.default.svc.cluster.local
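As an optional extra check, the kube-dns service name that appears in the output above should also resolve from the same pod:

# The cluster DNS service should resolve its own name
kubectl exec dns-test -- nslookup kube-dns.kube-system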