Requirements
- You must have Kubernetes installed. We recommend version 1.4.1 or later.
- You should also have a locally configured copy of kubectl.
Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses.
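A quick way to see which cluster that is (standard kubectl commands, nothing Helm-specific):
# Show the context Helm will read from your Kubernetes configuration file
kubectl config current-context
# Confirm that cluster is actually reachable
kubectl cluster-info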
Download
HELM_VERSION=${HELM_VERSION:-"2.5.0"}
HELM="helm-v${HELM_VERSION}-linux-amd64"
curl -L https://storage.googleapis.com/kubernetes-helm/$HELM.tar.gz -o $HELM.tar.gz
tar -xvzf $HELM.tar.gz -C /tmp
mv /tmp/linux-amd64/helm /usr/local/bin/helm
All releases:
https://github.com/kubernetes/helm/releases
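A quick sanity check after installing (assumes /usr/local/bin is on $PATH):
# Should report the client version, e.g. SemVer:"v2.5.0"
helm version --client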
Tiller (the Helm server side)
Installation
- In-cluster installation (installed on the k8s cluster)
helm init
If everything goes well, a tiller pod is installed in the kube-system namespace of the k8s cluster.
By default, the CurrentContext in ~/.kube/config determines which k8s cluster Tiller is deployed to. You can point to a different kubectl config file via the $KUBECONFIG environment variable, and select a context with --kube-context.
- Local installation
/bin/tiller
In this case, Tiller connects by default to the k8s cluster associated with the CurrentContext of the default kubectl config file ($HOME/.kube/config), which it uses to store its data, among other things.
You can also use $KUBECONFIG to point it at a different cluster's config file.
You must tell helm not to connect to the in-cluster Tiller but to the locally installed one. There are two ways:
- helm --host=<ip>
- export HELM_HOST=localhost:44134
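Putting the above together, a minimal local-Tiller session could look like this (a sketch; it assumes the tiller binary is on PATH and listening on its default port 44134):
# Start Tiller locally; it stores data in the cluster from $HOME/.kube/config
tiller &
# Point helm at the local Tiller instead of the in-cluster one
export HELM_HOST=localhost:44134
helm version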
Specifying the target cluster
As with the rest of the Helm commands, 'helm init' discovers Kubernetes clusters
by reading $KUBECONFIG (default '~/.kube/config') and using the default context.
To make helm deploy to the cluster described by a particular context (dev) in a particular kubectl config file:
export KUBECONFIG="/path/to/kubeconfig"
helm init --kube-context="dev"
Storage
Tiller supports two storage backends:
- memory
- configmap (the default)
Both backends work regardless of how Tiller is deployed. With memory storage, releases and other data are lost when Tiller restarts.
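The backend is selected with Tiller's storage flag (the same flag used later in these notes; omitting it gives configmap):
# In-memory driver: releases are lost when Tiller restarts
./tiller -storage=memory
# ConfigMap driver (the default): releases persist as configmaps in the cluster
./tiller -storage=configmap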
What happens
After running helm init, you will see:
root@node01:~# helm init
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/repository/repositories.yaml
$HELM_HOME has been configured at /root/.helm.
Tiller (the helm server side component) has been installed into your Kubernetes Cluster.
This installs the deployment tiller-deploy and the service tiller-deploy in the kube-system namespace of the k8s cluster.
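Both objects can be verified with kubectl:
kubectl -n kube-system get deploy/tiller-deploy svc/tiller-deploy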
Notes:
- If you run helm init --client-only, Tiller is not installed; only the files under helm home are created and $HELM_HOME is configured.
- If a file to be created already exists under $HELM_HOME, it is neither recreated nor modified; missing files/directories are created.
Troubleshooting
Context deadline exceeded
root@node01:~# helm version --debug
[debug] SERVER: "localhost:44134"
Client: &version.Version{SemVer:"v2.5.0", GitCommit:"012cb0ac1a1b2f888144ef5a67b8dab6c2d45be6", GitTreeState:"clean"}
[debug] context deadline exceeded
Error: cannot connect to Tiller
https://github.com/kubernetes/helm/issues/2409
Unresolved.
After retrying a few times, it succeeded again. What I did in between:
- unset HELM_HOST (HELM_HOST had previously been set to 127.0.0.1:44134, and the svc tiller-deploy had been changed to NodePort)
- Uninstalled (removed the Tiller-related svc and deploy, plus the /root/.helm directory), then reinstalled
- After that, it worked.
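That cleanup path, as a sketch (assuming Tiller lives in kube-system and helm home is /root/.helm):
# Remove the Tiller service and deployment, plus the helm home directory
kubectl -n kube-system delete svc/tiller-deploy deploy/tiller-deploy
rm -rf /root/.helm
# Reinstall
helm init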
socat not found
root@node01:~# helm version
Client: &version.Version{SemVer:"v2.5.0", GitCommit:"012cb0ac1a1b2f888144ef5a67b8dab6c2d45be6", GitTreeState:"clean"}
E0711 10:09:50.160064 10916 portforward.go:332] an error occurred forwarding 33491 -> 44134: error forwarding port 44134 to pod tiller-deploy-542252878-15h67_kube-system, uid : unable to do port forwarding: socat not found.
Error: cannot connect to Tiller
Resolved:
Install socat on the kubelet node. See https://github.com/kubernetes/helm/issues/966
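For example, on an apt-based node (an assumption; use your distribution's package manager):
apt-get install -y socat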
Uninstall
helm reset
This removes the pods that Tiller created in the k8s cluster. If the context deadline exceeded error above occurs, helm reset fails with the same error; helm reset -f force-deletes the pod from the cluster. To also remove the directories and other data created by helm init, run:
helm reset --remove-helm-home
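The two flags can be combined, so a full forced teardown is a single command:
helm reset --force --remove-helm-home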
Note
For a Tiller installed by helm 2.5, when context deadline exceeded occurs, running helm reset --remove-helm-home --force with the 2.4 client does not remove the pod and configuration that Tiller created. This is a bug in 2.4.
Test environment
Local Tiller
tiller
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./tiller
[main] 2017/07/26 14:59:54 Starting Tiller v2.5+unreleased (tls=false)
[main] 2017/07/26 14:59:54 GRPC listening on :44134
[main] 2017/07/26 14:59:54 Probes listening on :44135
[main] 2017/07/26 14:59:54 Storage driver is ConfigMap
Reference:
https://docs.helm.sh/using_helm/#running-tiller-locally
When Tiller is running locally, it will attempt to connect to the Kubernetes cluster that is configured by kubectl. (Run kubectl config view to see which cluster that is.)
- kubectl config view reads the ~/.kube/config file
helm
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ export HELM_HOST=localhost:44134
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm init --client-only
Creating /home/wwh/.helm
Creating /home/wwh/.helm/repository
Creating /home/wwh/.helm/repository/cache
Creating /home/wwh/.helm/repository/local
Creating /home/wwh/.helm/plugins
Creating /home/wwh/.helm/starters
Creating /home/wwh/.helm/cache/archive
Creating /home/wwh/.helm/repository/repositories.yaml
$HELM_HOME has been configured at /home/wwh/.helm.
Not installing Tiller due to 'client-only' flag having been set
Happy Helming!
You must run helm init --client-only to initialize the directory structure under helm home; otherwise helm repo list fails with the following error:
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm repo list
Error: open /home/wwh/.helm/repository/repositories.yaml: no such file or directory
Warning
With this approach, if there is no k8s cluster available, there is no way to test commands like helm install ./testChart --dry-run, even if you configure in-memory storage with ./tiller -storage=memory.
Local Tiller, with a specified backend k8s cluster
tiller
Run Tiller locally, but point it at a backend k8s cluster:
# Point to the backend k8s cluster's kubeconfig; Tiller uses this file
# when it initializes its kube client.
export KUBECONFIG=/tmp/k8sconfig-688597196
./tiller
helm
The helm side is the same as before.
Tiller storage test
Experiment: configmap
# Install Tiller locally, pointing at the backend k8s cluster; storage is configmap (the default)
export KUBECONFIG=/tmp/k8sconfig-688597196
./tiller
# In another shell
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ export HELM_HOST=localhost:44134
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm init --client-only
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm install stable/wordpress --debug
# The release installed successfully
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm list
NAME REVISION UPDATED STATUS CHART NAMESPACE
tinseled-warthog 1 Fri Aug 25 17:13:53 2017 DEPLOYED wordpress-0.6.8 default
# Inspect the cluster's configmaps
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ kubectl --kubeconfig=/tmp/k8sconfig-688597196 get configmap --all-namespaces
NAMESPACE NAME DATA AGE
kube-public cluster-info 2 6d
kube-system calico-config 3 6d
kube-system extension-apiserver-authentication 6 6d
kube-system kube-proxy 1 6d
kube-system tinseled-warthog.v1 1 1m
# Delete the release; the configmap still exists at this point
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm delete tinseled-warthog
release "tinseled-warthog" deleted
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ kubectl --kubeconfig=/tmp/k8sconfig-688597196 get configmap --all-namespaces
NAMESPACE NAME DATA AGE
kube-public cluster-info 2 6d
kube-system calico-config 3 6d
kube-system extension-apiserver-authentication 6 6d
kube-system kube-proxy 1 6d
kube-system tinseled-warthog.v1
# Run helm delete <release> --purge
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm delete tinseled-warthog --purge
release "tinseled-warthog" deleted
# The data in the configmap has now been removed.
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ kubectl --kubeconfig=/tmp/k8sconfig-688597196 get configmap --all-namespaces
NAMESPACE NAME DATA AGE
kube-public cluster-info 2 6d
kube-system calico-config 3 6d
kube-system extension-apiserver-authentication 6 6d
kube-system kube-proxy 1 6d