First thing this morning at the office, I saw in the daily news that Kubernetes 1.15.0 had been released, with quite a few new features. For the full details, see the Kubernetes blog: https://kubernetes.io/blog/2019/06/19/kubernetes-1-15-release-announcement/
Highlights of the release:
- kubeadm certificate management became more robust in 1.15: kubeadm now seamlessly rotates all certificates (during an upgrade) before they expire. See the kubeadm documentation for how to manage certificates
- The kubeadm config file API moved from v1beta1 to v1beta2 in 1.15
- Support for Go modules in Kubernetes core
- Continued preparation for cloud-provider extraction and code organization; cloud-provider code has been moved to kubernetes/legacy-cloud-providers to make later removal and external consumption easier
- kubectl get and describe now work with extensions (custom resources)
- Nodes now support third-party monitoring plugins
- A new alpha scheduling framework for managing scheduler plugins has been released
- The ExecutionHook API, which triggers hook commands in containers for various use cases, is now in alpha
- Continued deprecation of the extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs; they will be removed entirely in 1.16 (see the manifest sketch after this list)
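In practice that last item means any manifests still using the old API groups (for example a Deployment under extensions/v1beta1) should be moved to apps/v1 before 1.16. A minimal before/after sketch; the name and image here are hypothetical:

```yaml
# Before (deprecated, removed in 1.16):
#   apiVersion: extensions/v1beta1
#   kind: Deployment
# After:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo                # hypothetical name for illustration
spec:
  selector:                 # apps/v1 makes spec.selector mandatory
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: nginx:1.17   # hypothetical image
```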
With the release out, I couldn't wait to upgrade my own Kubernetes environment.
Check the cluster
Check which versions the cluster can be upgraded to and whether the current cluster is upgradable:
kubeadm upgrade plan
Note that kubeadm, kubelet, and kubectl need to be upgraded first.
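A quick way to confirm what you are currently running before planning the upgrade:

```bash
# Check the installed versions of the three tools before planning
kubeadm version -o short
kubectl version --short
kubelet --version
```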
Upgrade kubelet, kubeadm, and kubectl
yum clean all    # if yum cannot find the 1.15.0 packages, clear the local yum cache first
yum install -y kubelet kubeadm kubectl
Run the same install on every other node as well.
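If the repo carries several minor versions, it may be safer to pin the exact version instead of installing whatever is newest; a sketch, assuming the usual package naming of the Kubernetes yum repo:

```bash
# Pin the target version explicitly (the exact release suffix, e.g. -0,
# depends on your configured repository)
yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
```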
Pull the matching images
- kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: "v1.15.0"
...
imageRepository: registry.aliyuncs.com/google_containers
The kubeadm configuration specifies the target version and the image repository to pull from (here an Aliyun mirror of the upstream images).
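Before pulling anything, you can ask kubeadm which images this config resolves to; `kubeadm config images list` accepts the same --config flag:

```bash
# List (without pulling) the images kubeadm will use for v1.15.0
kubeadm config images list --config=kubeadm-config.yaml
```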
- Pull the images
$ kubeadm config images pull --config=kubeadm-config.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.15.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.3.10
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.3.1
All images were pulled successfully.
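A quick spot-check that the images are now in the local cache (assuming Docker is the container runtime):

```bash
# Verify the v1.15.0 images are cached locally
docker images | grep registry.aliyuncs.com/google_containers
```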
Upgrade the cluster components
kubeadm upgrade apply v1.15.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.15.0"
[upgrade/versions] Cluster version: v1.14.2
[upgrade/versions] kubeadm version: v1.15.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.15.0"...
Static pod: kube-apiserver-k8s-11 hash: 2e138075197b77cbc857ed6c45d3e0a3
Static pod: kube-controller-manager-k8s-11 hash: d4e699449cae3b28f9f657d0eabfef0e
Static pod: kube-scheduler-k8s-11 hash: a29556bf1d34f898bf5d0ce3c15a5948
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests653407144"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-21-09-39-05/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-11 hash: 2e138075197b77cbc857ed6c45d3e0a3
Static pod: kube-apiserver-k8s-11 hash: a0b1f68dcbfbbb58b72942275ea6e8c8
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-21-09-39-05/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-11 hash: d4e699449cae3b28f9f657d0eabfef0e
Static pod: kube-controller-manager-k8s-11 hash: e421c8900f2987ad26251124112ccba8
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-21-09-39-05/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-11 hash: a29556bf1d34f898bf5d0ce3c15a5948
Static pod: kube-scheduler-k8s-11 hash: b778c0dffa2d3c4049df6a82b96ea2c4
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.15.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
The output confirms the cluster was successfully upgraded to v1.15.0.
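Beyond the log message, you can confirm the control-plane static pods are actually running the new images; kubeadm labels its static pods with tier=control-plane, so a jsonpath query works as a sanity check:

```bash
# Print each control-plane pod together with the image it is running
kubectl -n kube-system get pods -l tier=control-plane \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
```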
- Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
kubelet must be restarted on every node after its packages are upgraded.
- Upgrade the other master nodes, if any (in 1.15 the unified kubeadm upgrade node command replaces the older kubeadm upgrade node experimental-control-plane and detects control-plane nodes automatically)
kubeadm upgrade node
- Upgrade the worker nodes
kubeadm upgrade node
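The official upgrade guide also recommends draining each node before upgrading its kubelet, so workloads are evicted gracefully; a sketch using the worker names from this cluster:

```bash
# Run from a machine with an admin kubeconfig: evict workloads first
kubectl drain k8s-12 --ignore-daemonsets
# ...then upgrade the packages, run `kubeadm upgrade node`, and restart kubelet on k8s-12...
kubectl uncordon k8s-12
```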
Verify the upgrade
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-11 Ready master 27d v1.15.0
k8s-12 Ready <none> 27d v1.15.0
k8s-13 Ready <none> 27d v1.15.0
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
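Two more optional checks: component health, and that everything in kube-system settled back into Running:

```bash
# Core component health (componentstatuses still exists in 1.15)
kubectl get componentstatuses
# All system pods should be back to Running
kubectl -n kube-system get pods -o wide
```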
Reference: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/