Helm is one of the essential tools for Kubernetes: it handles package management for applications deployed on the cluster. Since the cluster installed earlier runs Kubernetes 1.18, I hit a few problems while deploying Helm, and this post records them.
First, check the Kubernetes version:
[root@k8s-master k8s]# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-08T17:38:50Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-08T17:30:47Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
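As a side note, kubectl offers a compact form of the same check (the --short flag is available in this version; it prints just the two version strings):
kubectl version --short
# prints something like: Client Version: v1.18.1 / Server Version: v1.18.1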
Install Helm
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.12.1-linux-amd64.tar.gz
tar -zxvf helm-v2.12.1-linux-amd64.tar.gz
cd linux-amd64/
# copy helm to /usr/local/bin
cp helm /usr/local/bin
Verify Helm
helm version
Client: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
Error: cannot connect to Tiller
There is an error: the client cannot connect to Tiller. This is expected at this point, because Tiller has not been installed yet.
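Until Tiller is up, the check can be limited to the client side. Helm v2's version command supports a -c/--client flag that skips the Tiller lookup entirely:
helm version -c
# Client: &version.Version{SemVer:"v2.12.1", ...}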
Install Tiller
According to other documents and blog posts, installing Tiller is supposed to be easy: just run helm init and you are done. But running the following command produced an error:
$ helm init --service-account tiller
$HELM_HOME has been configured at /Users/xxxx/.helm.
Error: error installing: the server could not find the requested resource
After a round of Googling I found the cause: Helm 2.x has trouble with Kubernetes 1.16 and later, because the manifest generated by helm init still declares its Deployment under the extensions/v1beta1 API, which stopped serving Deployments in 1.16. A discussion I found suggested that the following command works. (Reference: Helm init fails on Kubernetes 1.16.0 #6374)
helm init --service-account tiller --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | sed 's@ replicas: 1@ replicas: 1\n selector: {"matchLabels": {"app": "helm", "name": "tiller"}}@' | kubectl apply -f -
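As a sanity check, you can confirm which API group serves Deployments on this cluster, which is exactly why the sed rewrite of apiVersion is needed (generic kubectl; the output columns may vary slightly by version):
kubectl api-resources | grep -w deployments
# deployments   deploy   apps   true   Deployment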
The sed pipeline above did report that tiller-deploy was installed, but checking the deployments showed:
[root@k8s-master k8s]# kubectl get deployments --all-namespaces
NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   calico-kube-controllers   1/1     1            1           3d4h
kube-system   coredns                   2/2     2            2           3d4h
kube-system   tiller-deploy             0/1     0            0           17m
So the Deployment was created, but it never became ready. Why? Checking the pods next: no tiller-deploy-xxx pod had been created at all.
[root@k8s-master k8s]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       mypod                                      1/1     Running   0          143m
default       node-exporter-daemonset-s4j7s              1/1     Running   1          15h
default       node-exporter-daemonset-swhbz              1/1     Running   1          15h
kube-system   calico-kube-controllers-57546b46d6-nt89k   1/1     Running   6          3d4h
kube-system   calico-node-24mxg                          1/1     Running   4          41h
kube-system   calico-node-ldgcf                          1/1     Running   5          2d14h
kube-system   calico-node-lxwl7                          1/1     Running   6          3d4h
kube-system   coredns-7ff77c879f-5xxgt                   1/1     Running   6          3d4h
kube-system   coredns-7ff77c879f-m6g58                   1/1     Running   6          3d4h
kube-system   etcd-k8s-master                            1/1     Running   6          3d4h
kube-system   kube-apiserver-k8s-master                  1/1     Running   11         3d4h
kube-system   kube-controller-manager-k8s-master         1/1     Running   7          3d4h
kube-system   kube-proxy-lv6p4                           1/1     Running   5          2d14h
kube-system   kube-proxy-t4vtw                           1/1     Running   6          3d4h
kube-system   kube-proxy-xlzvk                           1/1     Running   4          41h
kube-system   kube-scheduler-k8s-master                  1/1     Running   7          3d4h
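When a Deployment sits at 0/1 with no pods in sight, a standard next step is to describe the Deployment and its ReplicaSet and read their events (generic kubectl diagnostics; the label selector below is the one from the Tiller manifest):
kubectl describe deployment tiller-deploy -n kube-system
kubectl describe replicaset -n kube-system -l app=helm,name=tiller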
I tried all sorts of searches in between without a clue. Stepping back, I dumped the YAML produced by the sed pipeline above into a file:
helm init --service-account tiller --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | sed 's@ replicas: 1@ replicas: 1\n selector: {"matchLabels": {"app": "helm", "name": "tiller"}}@' > tiller.yaml
Looking at this YAML file, I noticed it uses image: gcr.io/kubernetes-helm/tiller:v2.12.1 and suspected that was the problem: maybe the image could not be pulled. So I switched it to the Aliyun mirror shown below and saved the result as newTiller.ali.yaml. The result was exactly the same, which makes sense in hindsight: no pod had been created yet, so the image-pull step had never been reached.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  selector: {"matchLabels": {"app": "helm", "name": "tiller"}}
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.12.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
      serviceAccountName: tiller
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}
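For reference, the image swap itself is just one more sed over the dumped manifest (a small sketch; the mirror path is the one that appears in the manifest above):
sed 's@gcr.io/kubernetes-helm@registry.cn-hangzhou.aliyuncs.com/google_containers@' tiller.yaml > newTiller.ali.yaml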
Think about the Kubernetes architecture for a moment: the Deployment object exists, but no pod has been created. Turning a Deployment into pods (via its ReplicaSet) is the job of the kube-controller-manager, so its logs might show a clue. Let's look:
[root@k8s-master k8s]# kubectl logs kube-controller-manager-k8s-master --namespace=kube-system
I0718 17:21:59.039703 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"tiller-deploy-7566c65bf6", UID:"8f94a829-cc7d-4f87-80fa-329c0e3fde58", APIVersion:"apps/v1", ResourceVersion:"230088", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "tiller-deploy-7566c65bf6-" is forbidden: error looking up service account kube-system/tiller: serviceaccount "tiller" not found
E0718 17:22:40.031087 1 replica_set.go:535] sync "kube-system/tiller-deploy-7566c65bf6" failed with pods "tiller-deploy-7566c65bf6-" is forbidden: error looking up service account kube-system/tiller: serviceaccount "tiller" not found
I0718 17:22:40.031520 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"tiller-deploy-7566c65bf6", UID:"8f94a829-cc7d-4f87-80fa-329c0e3fde58", APIVersion:"apps/v1", ResourceVersion:"230088", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "tiller-deploy-7566c65bf6-" is forbidden: error looking up service account kube-system/tiller: serviceaccount "tiller" not found
E0718 17:24:01.982900 1 replica_set.go:535] sync "kube-system/tiller-deploy-7566c65bf6" failed with pods "tiller-deploy-7566c65bf6-" is forbidden: error looking up service account kube-system/tiller: serviceaccount "tiller" not found
I0718 17:24:01.983226 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"tiller-deploy-7566c65bf6", UID:"8f94a829-cc7d-4f87-80fa-329c0e3fde58", APIVersion:"apps/v1", ResourceVersion:"230088", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "tiller-deploy-7566c65bf6-" is forbidden: error looking up service account kube-system/tiller: serviceaccount "tiller" not found
E0718 17:26:45.835530 1 replica_set.go:535] sync "kube-system/tiller-deploy-7566c65bf6" failed with pods "tiller-deploy-7566c65bf6-" is forbidden: error looking up service account kube-system/tiller: serviceaccount "tiller" not found
The output is long; only the last few lines are shown. They reveal why the tiller-deploy-7566c65bf6- pods fail to be created: serviceaccount "tiller" not found. Listing accounts with kubectl get serviceaccount --all-namespaces confirms it: the system has plenty of service accounts, but indeed no tiller.
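A more targeted check queries that one account directly; on this cluster it comes back NotFound (standard kubectl behavior):
kubectl get serviceaccount tiller -n kube-system
# Error from server (NotFound): serviceaccounts "tiller" not found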
Googling "tiller service account yaml" turned up a manifest that creates the service account, Example: Service account with cluster-admin role. I saved the example locally as rbac-config.yaml and applied it:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
[root@k8s-master k8s]# kubectl apply -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
The service account is now created; querying again with kubectl get serviceaccount --all-namespaces shows it is there (only part of the output is shown):
[root@k8s-master k8s]# kubectl get serviceAccount --all-namespaces
NAMESPACE     NAME                     SECRETS   AGE
default       default                  1         3d4h
kube-system   statefulset-controller   1         3d4h
kube-system   tiller                   1         57s
kube-system   token-cleaner            1         3d4h
kube-system   ttl-controller           1         3d4h
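Since the ClusterRoleBinding grants cluster-admin, the binding can also be verified by impersonating the account (a quick sanity check; cluster-admin should answer yes to anything):
kubectl auth can-i create pods --as=system:serviceaccount:kube-system:tiller
# yes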
At this point, checking again, there was surprisingly still no pod, and the log showed the same serviceaccount-not-found complaint as before. So I deleted the Deployment and applied it again:
[root@k8s-master k8s]# kubectl delete -f newTiller.ali.yaml
[root@k8s-master k8s]# kubectl apply -f newTiller.ali.yaml
deployment.apps/tiller-deploy created
service/tiller-deploy created
# check the pods again
[root@k8s-master k8s]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       mypod                                      1/1     Running   0          162m
default       node-exporter-daemonset-s4j7s              1/1     Running   1          15h
default       node-exporter-daemonset-swhbz              1/1     Running   1          15h
kube-system   calico-kube-controllers-57546b46d6-nt89k   1/1     Running   6          3d4h
kube-system   calico-node-24mxg                          1/1     Running   4          41h
kube-system   calico-node-ldgcf                          1/1     Running   5          2d15h
kube-system   calico-node-lxwl7                          1/1     Running   6          3d4h
kube-system   coredns-7ff77c879f-5xxgt                   1/1     Running   6          3d4h
kube-system   coredns-7ff77c879f-m6g58                   1/1     Running   6          3d4h
kube-system   etcd-k8s-master                            1/1     Running   6          3d4h
kube-system   kube-apiserver-k8s-master                  1/1     Running   11         3d4h
kube-system   kube-controller-manager-k8s-master         1/1     Running   7          3d4h
kube-system   kube-proxy-lv6p4                           1/1     Running   5          2d15h
kube-system   kube-proxy-t4vtw                           1/1     Running   6          3d4h
kube-system   kube-proxy-xlzvk                           1/1     Running   4          41h
kube-system   kube-scheduler-k8s-master                  1/1     Running   7          3d4h
kube-system   tiller-deploy-7566c65bf6-6l9dx             1/1     Running   0          23s
The pod finally started, and it reached the ready state.
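In hindsight, deleting and re-applying worked because it forced a fresh ReplicaSet to be created after the service account existed. On clusters at 1.15 or later, a lighter alternative (not tried here, but standard kubectl) would be:
kubectl rollout restart deployment/tiller-deploy -n kube-system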
[root@k8s-master k8s]# helm version
Client: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
With that done, the rest of the experiments can proceed.
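As a final smoke test, listing releases exercises the whole client-to-Tiller path; on a fresh install the list is simply empty:
helm ls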