Installation steps:
The system is CentOS 7.4.
1. Disable swap
First, disable the swap file; this is a mandatory step for Kubernetes. It is simple: run sudo swapoff -a to turn swap off immediately, then edit /etc/fstab and comment out the line that references swap so it stays off after a reboot (see the sketch below).
If you are wondering why swap must be disabled, see the GitHub issue "Kubelet/Kubernetes should work with Swap Enabled".
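A minimal sketch of both actions (the sed pattern is an assumption; inspect /etc/fstab afterwards to confirm the right line was commented):
sudo swapoff -a                               # turn swap off for the running system
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab    # comment out the swap entry so it stays off after reboot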
2. Configure /etc/hosts
cat >> /etc/hosts << EOF
192.168.56.101 master
192.168.56.102 node1
192.168.56.103 node2
EOF
(1) Turn off swap
swapoff -a
(2) Disable SELinux by changing the SELINUX setting
#setenforce 0
#vi /etc/sysconfig/selinux
SELINUX=disabled
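The same change as a one-liner, if you prefer not to open an editor (a sketch that assumes the default SELINUX=enforcing value is present):
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/sysconfig/selinux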
(3) Set the iptables-related kernel parameters
#vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
Apply the settings:
sysctl --system
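If sysctl --system complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet; loading it (and making that persistent) is an extra step these notes originally skipped:
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load on every boot
sysctl --system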
3. Install the required yum packages
First configure the Kubernetes yum repository, using the Aliyun mirror:
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
Install the packages:
yum install kubelet-1.11.1 kubeadm-1.11.1 kubectl-1.11.1 kubernetes-cni
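If yum cannot find the pinned versions, list what the mirror actually provides and pick from there:
yum list kubelet kubeadm kubectl --showduplicates | sort -r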
4. Download the Kubernetes dependency images
Note: this step is critical. kubeadm's initialization depends on these images, and Google's registry (k8s.gcr.io) is unreachable from mainland China, so you must pull them to the local Docker cache in advance or kubeadm will not get past this point. The image lists needed by the master and worker nodes are given below.
A contributor syncs the gcr.io images daily to https://github.com/anjia0532/gcr.io_mirror, so any gcr.io image can be fetched with a script like the following.
Components required on the master node:
$ kubeadm config images list
k8s.gcr.io/kube-apiserver-amd64:v1.11.4
k8s.gcr.io/kube-controller-manager-amd64:v1.11.4
k8s.gcr.io/kube-scheduler-amd64:v1.11.4
k8s.gcr.io/kube-proxy-amd64:v1.11.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd-amd64:3.2.18
k8s.gcr.io/coredns:1.1.3
vim pullimages.sh
#!/bin/bash
images=(
kube-apiserver-amd64:v1.11.4
kube-controller-manager-amd64:v1.11.4
kube-scheduler-amd64:v1.11.4
kube-proxy-amd64:v1.11.4
pause:3.1
etcd-amd64:3.2.18
coredns:1.1.3
)
for imageName in ${images[@]} ; do
    docker pull anjia0532/google-containers.$imageName
    docker tag anjia0532/google-containers.$imageName k8s.gcr.io/$imageName
    docker rmi anjia0532/google-containers.$imageName
done
sh pullimages.sh
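When the script finishes, confirm that every required image is tagged under k8s.gcr.io:
docker images | grep k8s.gcr.io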
Alternatively, pull from one of these mirror registries (the second form is a naming pattern; substitute the image name for *):
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.12.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/*
A Kubernetes cluster does not allow swap to be enabled; if you keep swap on, make the kubelet ignore this error:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
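Note that kubeadm runs its own swap preflight check, independent of the kubelet flag; if you really keep swap on, you will most likely also have to skip that check when initializing (a hedged sketch):
kubeadm init --ignore-preflight-errors=Swap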
5. Install Kubernetes v1.11.4 with kubeadm init
Write kubeadm.yaml:
[root@master ~]# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"
kubernetesVersion: "v1.11.4"
[root@master ~]# kubeadm init --config kubeadm.yaml
[init] using Kubernetes version: v1.11.4
[preflight] running pre-flight checks
I1113 23:10:20.954974 3516 kernel_validator.go:81] Validating kernel version
I1113 23:10:20.955095 3516 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 41.501829 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: jhla37.mllhf316c5q7b9lk
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 10.0.2.15:6443 --token jhla37.mllhf316c5q7b9lk --discovery-token-ca-cert-hash sha256:f760528cd6221deee37025376101c58d493b745ef3ef4fc9ee996106657e0095
Configure the kubeconfig used for authentication.
As root:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
As a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
echo "export KUBECONFIG=$HOME/.kube/config" >> $HOME/.bash_profile
source $HOME/.bash_profile
Tokens expire after 24 hours by default. If the token has expired, create a new one with the following command:
[stadmin@master ~]$ kubeadm token create
j2oyxt.rb8ei1avfmkltnls
The kubeadm token list command shows the existing tokens:
[root@master ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
jhla37.mllhf316c5q7b9lk 22h 2018-11-14T23:11:09+08:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
Get the SHA-256 hash of the CA certificate; it is the same value that kubeadm init printed:
[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
cc16dc7e829c136e45db13cbd18753a938594e3894f9f399ff64bc50243328be
Rebuild the join command with the new token:
[root@node1 ~]# kubeadm join 192.168.56.101:6443 --token j2oyxt.rb8ei1avfmkltnls --discovery-token-ca-cert-hash sha256:cc16dc7e829c136e45db13cbd18753a938594e3894f9f399ff64bc50243328be
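A shortcut that regenerates the token and prints the complete join command in one step (supported by kubeadm releases in this range, but verify on your version):
kubeadm token create --print-join-command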
Kubernetes is now deployed; check the status:
kubectl get cs
kubectl get nodes
kubectl describe node master
[stadmin@master ~]$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcdf6894-m8nd8 0/1 Pending 0 21m
coredns-78fcdf6894-vq884 0/1 Pending 0 21m
etcd-master 1/1 Running 0 3s
kube-apiserver-master 1/1 Running 0 3s
kube-controller-manager-master 1/1 Running 0 3s
kube-proxy-pp2lk 1/1 Running 0 21m
kube-scheduler-master 1/1 Running 0 3s
On the master node, the last step is to install a network plugin:
$ kubectl apply -f https://git.io/weave-kube-1.6
Or download the manifest first and apply it locally:
$ kubectl apply -f weave-kube-1.6.yaml
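To confirm the network plugin came up, watch for the Weave pod to reach Running (the name=weave-net label is assumed from the Weave manifest):
kubectl get pods -n kube-system -l name=weave-net -o wide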
This completes the master deployment, but I then noticed that the API server was advertising the IP of the NAT adapter, which cannot be used for communication between cluster nodes. The only fix was to reinstall and make the API server advertise the Host-Only adapter's IP.
The official documentation explains it as follows:
The Kubernetes API server process serves the Kubernetes API; typically a single instance runs on the kubernetes-master node.
By default, the Kubernetes API server listens on two ports:
1. Localhost port
Serves HTTP
Default port 8080, change with --insecure-port
Default IP is localhost, change with --insecure-bind-address
No authentication or authorization checks on this port
Protected by requiring access to the host itself
2. Secure port
Default port 6443, change with --secure-port
Default IP is the first non-localhost network interface, change with --bind-address
Serves HTTPS; set the certificate and key with --tls-cert-file and --tls-private-key-file
Authentication via token file or client certificates
Policy-based authorization
3. Removed: the read-only port
For security reasons, the read-only port has been removed; use Service Accounts instead.
So the existing cluster has to be reconfigured. Modify kubeadm.yaml and add:
api:
  advertiseAddress: 192.168.56.101
If you configure kubeadm via command-line flags instead, add --apiserver-advertise-address=192.168.56.101.
[root@master ~]# cat kubeadm.yaml
api:
  advertiseAddress: 192.168.56.101
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-sync-period: "10s"
  node-monitor-grace-period: "10s"
apiServerExtraArgs:
  runtime-config: "api/all=true"
kubernetesVersion: "v1.11.4"
Tear down the existing cluster:
[root@master ~]# kubeadm reset
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] removing kubernetes-managed containers
[reset] cleaning up running containers using crictl with socket /var/run/dockershim.sock
[reset] failed to list running pods using crictl: exit status 1. Trying to use docker instead
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
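kubeadm reset does not clean up iptables or IPVS rules; when reinstalling on the same box it is common practice to flush them manually (only do this on a dedicated cluster node):
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X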
Finally, run init again and reinstall the network plugin:
[root@master ~]# kubeadm init --config kubeadm.yaml
[init] using Kubernetes version: v1.11.4
[preflight] running pre-flight checks
I1114 23:27:38.266833 6682 kernel_validator.go:81] Validating kernel version
I1114 23:27:38.266938 6682 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 43.502879 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: k3uv7z.nbu8jzxdfl3gs4ui
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.56.101:6443 --token k3uv7z.nbu8jzxdfl3gs4ui --discovery-token-ca-cert-hash sha256:cc16dc7e829c136e45db13cbd18753a938594e3894f9f399ff64bc50243328be
kubectl apply -f https://git.io/weave-kube-1.6
Or download the manifest first and apply it locally:
$ kubectl apply -f weave-kube-1.6.yaml
Deploying the worker nodes:
Worker nodes need far fewer images; only the following:
[root@node1 ~]# cat pullimages.sh
#!/bin/bash
images=(
kube-proxy-amd64:v1.11.4
pause:3.1
coredns:1.1.3
)
for imageName in ${images[@]} ; do
    docker pull anjia0532/google-containers.$imageName
    docker tag anjia0532/google-containers.$imageName k8s.gcr.io/$imageName
    docker rmi anjia0532/google-containers.$imageName
done
[root@node1 ~]# kubeadm join 192.168.56.101:6443 --token k3uv7z.nbu8jzxdfl3gs4ui --discovery-token-ca-cert-hash sha256:cc16dc7e829c136e45db13cbd18753a938594e3894f9f399ff64bc50243328be
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I1114 23:37:24.655522 1713 kernel_validator.go:81] Validating kernel version
I1114 23:37:24.655617 1713 kernel_validator.go:96] Validating kernel config
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[discovery] Trying to connect to API Server "192.168.56.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.56.101:6443"
[discovery] Requesting info from "https://192.168.56.101:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.56.101:6443"
[discovery] Successfully established connection with API Server "192.168.56.101:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
At this point kubectl get nodes still fails on the worker node; configure the following kubeconfig environment first:
# cp /etc/kubernetes/kubelet.conf $HOME/
# chown $(id -u):$(id -g) $HOME/kubelet.conf
# export KUBECONFIG=$HOME/kubelet.conf
Finally, the node also needs the network plugin installed:
$ kubectl apply -f https://git.io/weave-kube-1.6
Or download it first:
$ kubectl apply -f weave-kube-1.6.yaml
On the master node, remove the taint so that it can be scheduled (needed before installing the storage components):
kubectl taint nodes --all node-role.kubernetes.io/master-
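Verify that the taint is gone:
kubectl describe node master | grep -i taints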
Deploy the dashboard (visualization component) on the master node:
# wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
docker pull anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0
docker tag anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker rmi anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0
Modify kubernetes-dashboard.yaml so the dashboard is reachable from outside and you can log in with token authentication:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  # Set the Service type to NodePort
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      # Port exposed on the host; the NodePort range starts at 30000
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
kubectl apply -f kubernetes-dashboard.yaml
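Confirm that the Service now exposes the NodePort:
kubectl get svc kubernetes-dashboard -n kube-system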
Check the installation status:
kubectl get pods -n kube-system
Start the proxy service:
nohup kubectl proxy --address='0.0.0.0' --accept-hosts='^*$' --disable-filter=true &
Get a login token:
kubectl -n kube-system describe $(kubectl -n kube-system get secret -o name | grep namespace) | grep token
Access the dashboard
Use a node's IP plus the nodePort we configured above:
https://192.168.56.101:30001/
Notes:
List all pods across all namespaces:
kubectl get pods --all-namespaces
Describe the pods in a namespace:
kubectl describe pod -n kube-system
Inspect a specific pod's problem:
kubectl describe pod kubernetes-dashboard-767dc7d4d-mg5gw -n kube-system
Logs for pods that fail to start:
Pods in Error, Pending, ImagePullBackOff, or CrashLoopBackOff state have all failed to start; the cause needs careful investigation:
a. Check the system log /var/log/messages
b. kubectl describe pod kube-flannel-ds-2wk55 --namespace=kube-system
c. kubectl logs -f kube-dns-2425271678-37lf7 -n kube-system kubedns
Deploy the storage plugin (Rook + Ceph):
kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/operator.yaml
kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/cluster.yaml
Check the installation:
kubectl get pods -n rook-ceph-system
kubectl get pods -n rook-ceph
Below is the record of the failed v1.12.2 install attempt; afterwards I switched to the lower version, the 1.11.1 packages used above.
[root@master containers]# kubeadm init \
> --kubernetes-version=v1.12.2 \
> --pod-network-cidr=10.244.0.0/16 \
> --apiserver-advertise-address=192.168.56.101
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
Enable kubelet to start on boot:
systemctl enable kubelet
Then start the kubelet service. Because of the firewall, the image pulls will fail; pull the images named in the error messages from a domestic mirror first, then start it:
systemctl start kubelet
Use the kubelet flag --fail-swap-on=false to lift the requirement that swap be disabled. Edit /etc/sysconfig/kubelet and add:
KUBELET_EXTRA_ARGS=--fail-swap-on=false
References:
https://www.datayang.com/article/45
https://www.kubernetes.org.cn/4619.html