Kubernetes can now be deployed with a one-click installer; see the Rancher one-click install guide at https://github.com/qxl1231/2019-k8s-centos/blob/master/rancher-k8s-install.md. This article walks through setting up k8s on CentOS 7 by hand.
I recommend skimming the whole article once before following the steps one by one.
Prerequisites
- A Windows machine running VMware, with 16 GB of RAM or more
- Three CentOS 7 virtual machines on the same subnet (on a typical home router that means 192.168.0.x); remember to disable the firewall
Configure hosts
# Run on node1
hostnamectl set-hostname node1
# Run on node2
hostnamectl set-hostname node2
# Run on master
hostnamectl set-hostname master
# Add the following entries to /etc/hosts on every machine
vim /etc/hosts
192.168.0.158 master
192.168.0.159 node1
192.168.0.160 node2
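A quick check I'd add here (not part of the original article): make sure the names resolve from every machine.
# Each hostname should resolve to the IP configured in /etc/hosts
ping -c 2 master
ping -c 2 node1
ping -c 2 node2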
Install docker-ce
Docker must be installed and configured on both the Master and the Node machines (this part is common to every VM; you can set up one VM first and then clone it).
# Remove any old Docker packages
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
# Install dependencies
sudo yum update -y && sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
# Add the official Docker yum repository
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
# Install Docker
sudo yum install docker-ce docker-ce-cli containerd.io
# Check the Docker version
docker --version
# Start Docker now and enable it on boot
systemctl enable --now docker
Or install it with the one-line script:
curl -fsSL "https://get.docker.com/" | sh
systemctl enable --now docker
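Optionally, verify that Docker is up before continuing (my own check, not from the original article):
systemctl status docker --no-pager   # should report active (running)
docker run --rm hello-world          # pulls and runs a tiny test container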
Change Docker's cgroup driver to systemd so it matches the driver used by k8s
# Set the Docker cgroup driver: native.cgroupdriver=systemd
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
systemctl restart docker # restart Docker so the new configuration takes effect
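To confirm the new cgroup driver is in effect, check docker info (an optional verification I added):
docker info | grep -i "cgroup driver"   # should print: Cgroup Driver: systemd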
Install kubelet, kubeadm and kubectl
kubelet, kubeadm and kubectl must be installed on both the master and the node machines. (Again, you can install one VM first and then clone it.)
Installing Kubernetes requires the kubelet, kubeadm and related packages, but the yum source given by the official k8s site is http://packages.cloud.google.com, which is not reachable from mainland China. Use the Alibaba Cloud yum mirror instead.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Set SELinux to permissive
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Install kubelet, kubeadm and kubectl
yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0 --disableexcludes=kubernetes
systemctl enable --now kubelet # start kubelet now and enable it on boot
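At this point you can confirm what was installed (optional check I added). kubelet will keep restarting in a loop until kubeadm init or kubeadm join has been run; that is expected.
kubeadm version -o short   # should print v1.19.0
kubelet --version          # should print Kubernetes v1.19.0
kubectl version --client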
# CentOS 7 users also need to make bridged traffic visible to iptables:
yum install -y bridge-utils.x86_64
modprobe br_netfilter # load the br_netfilter module; use lsmod to check which modules are loaded
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # reload all sysctl configuration files
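You can verify the bridge settings took effect (optional, my own addition):
lsmod | grep br_netfilter                   # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables   # should print: net.bridge.bridge-nf-call-iptables = 1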
systemctl disable --now firewalld # stop and disable the firewall
# k8s requires swap to be disabled (qxl)
swapoff -a && sysctl -w vm.swappiness=0 # turn off swap
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab # comment out the swap entry so it is not mounted at boot
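A quick way to confirm swap is really off (optional check I added):
free -h           # the Swap line should show 0B total
cat /proc/swaps   # should list no active swap devices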
If you are using virtual machines, clone them after finishing all of the steps above. The lab environment is 1 Master and 2 Nodes.
Preparation for creating the cluster (this is where the Master and Node steps start to differ)
# On the Master:
kubeadm config images pull # pull the images the cluster needs; this requires access to k8s.gcr.io, which is blocked in mainland China
# --- If you cannot reach k8s.gcr.io, try the following workaround ---
kubeadm config images list # list the required images
# (the versions below are only an example; use whatever the list command actually prints)
# Pull the required images from a domestic mirror first
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.0
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.0
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.0
docker pull registry.aliyuncs.com/google_containers/etcd:3.4.9-1
docker pull registry.aliyuncs.com/google_containers/coredns:1.7.0
docker pull registry.aliyuncs.com/google_containers/pause:3.2
# Retag the images with the names kubeadm expects
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0 k8s.gcr.io/kube-proxy:v1.19.0
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.0 k8s.gcr.io/kube-apiserver:v1.19.0
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.0 k8s.gcr.io/kube-controller-manager:v1.19.0
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.0 k8s.gcr.io/kube-scheduler:v1.19.0
docker tag registry.aliyuncs.com/google_containers/etcd:3.4.9-1 k8s.gcr.io/etcd:3.4.9-1
docker tag registry.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
docker tag registry.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
# With the required images downloaded in advance, kubeadm init will not try to pull them again and fail because the Google registry is unreachable
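If you prefer, the pulls and retags above can be done in one small loop. This is just a convenience sketch of mine; the versions assume kubeadm 1.19.0, so substitute whatever kubeadm config images list prints for you.
for img in kube-proxy:v1.19.0 kube-apiserver:v1.19.0 kube-controller-manager:v1.19.0 \
           kube-scheduler:v1.19.0 etcd:3.4.9-1 coredns:1.7.0 pause:3.2; do
    docker pull registry.aliyuncs.com/google_containers/$img                  # pull from the domestic mirror
    docker tag registry.aliyuncs.com/google_containers/$img k8s.gcr.io/$img   # retag to the name kubeadm expects
done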
# --- The same workaround, for nodes that cannot reach k8s.gcr.io ---
# On the Nodes:
# Pull the required images from a domestic mirror first
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0
docker pull registry.aliyuncs.com/google_containers/pause:3.2
# Retag the images with the names kubeadm expects
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0 k8s.gcr.io/kube-proxy:v1.19.0
docker tag registry.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
Create the cluster with kubeadm
# During my first init, /etc/kubernetes/admin.conf already existed as an empty file (I had created it by hand),
# which made kubeadm crash with: panic: runtime error: invalid memory address or nil pointer dereference
ls /etc/kubernetes/admin.conf && mv /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.bak # move it aside as a backup
# Initialize the Master (it needs at least 2 CPU cores). This is the step where all kinds of errors show up; using the Alibaba Cloud image repository avoids many of the network-related ones. 192.168.0.158 is the IP address of the master.
kubeadm init --kubernetes-version=v1.19.0 --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.0.158 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
# --apiserver-advertise-address specifies the interface used to communicate with the other nodes
# --pod-network-cidr specifies the pod network subnet; this CIDR is required when using the flannel network
- While initializing, kubeadm checks that the environment is consistent; fix any problems it reports based on the actual error messages.
- kubeadm fetches https://dl.k8s.io/release/stable-1.txt to determine the latest k8s version; reaching that URL requires a way around the firewall. If it cannot be reached, the kubeadm client version is used as the version to install (check it with kubeadm version). You can also pin the version explicitly with --kubernetes-version, as in the example after this list.
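For example (my own illustration, assuming the 1.19.0 packages installed earlier):
kubeadm version -o short                                  # version of the local kubeadm client
kubeadm config images list --kubernetes-version v1.19.0   # images needed for an explicitly pinned version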
Check the result
Initialization output:
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.503375 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: w2i0mh.5fxxz8vk5k8db0wq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
# The join command below is unique to every cluster; save the one printed for yours -qxl
kubeadm join 192.168.200.25:6443 --token our9a0.zl490imi6t81tn5u \
--discovery-token-ca-cert-hash sha256:b93f710eb9b389a69f0cd0d6dcf7c82e389a68f009eb6b2028f69d54b099de16
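Worth knowing (standard kubeadm behavior, not mentioned in the original article): the bootstrap token printed by kubeadm init is only valid for 24 hours by default. If it expires before a node joins, generate a fresh join command on the master:
kubeadm token create --print-join-command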
Grant a regular user access to the cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
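You can check that kubectl can now reach the API server (optional verification I added):
kubectl cluster-info   # should print the control plane and CoreDNS endpoints
kubectl get nodes      # the master appears here, usually NotReady until a pod network is applied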
Apply the flannel network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
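It can take a minute or two for the flannel and coredns pods to start; you can watch them until everything is Running (optional, my own addition):
kubectl get pods -n kube-system -w   # press Ctrl-C to stop watching once all pods are Running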
Join the nodes to the cluster
# node1:
kubeadm join 192.168.0.158:6443 --token w2i0mh.5fxxz8vk5k8db0wq \
--discovery-token-ca-cert-hash sha256:65e82e987f50908f3640df7e05c7a91f390a02726c9142808faa739d4dc24252
# node2:
kubeadm join 192.168.0.158:6443 --token w2i0mh.5fxxz8vk5k8db0wq \
--discovery-token-ca-cert-hash sha256:65e82e987f50908f3640df7e05c7a91f390a02726c9142808faa739d4dc24252
Check the result
# master:
kubectl get pods --all-namespaces
# --- output ---
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d56c8448f-p2r27   0/1     Pending   0          26m
kube-system   coredns-6d56c8448f-q25cq   0/1     Pending   0          26m
kube-system   kube-proxy-qn6db           1/1     Running   0          26m
# --- output ---
kubectl get nodes
(screenshot: output of kubectl get nodes)
Adapted from https://zhuanlan.zhihu.com/p/62814079, with some modifications.