Deploying Kubernetes v1.12.1 with kubeadm


References:
https://blog.csdn.net/jq0123/article/details/80471625
https://www.kubernetes.org.cn/3808.html
http://blog.51cto.com/devingeng/2096495
https://www.linuxidc.com/Linux/2018-10/154548.htm

Notes:

  1. At the time of writing, not all k8s components installed cleanly: DNS showed a running status but could not serve requests.
  2. All other services work normally.
  3. For reference only.

============
Update 2018-10-16:
Following this post as written, the installation succeeds. The DNS failure mentioned above was a false alarm:
the DNS service was actually working from the start; the problem was with the testing tool.
============

1. Environment Overview

OS               IP             Role    CPU  Memory  Hostname
CentOS 7.4.1708  172.16.91.135  master  4    2G      master
CentOS 7.4.1708  172.16.91.136  worker  2    1G      slave1
CentOS 7.4.1708  172.16.91.137  worker  2    1G      slave2

二恨溜、 安裝環(huán)境準備工作

2.1 Host mappings (all nodes)

Make sure /etc/hosts on every node contains the following entries:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

172.16.91.135   master
172.16.91.136   slave1
172.16.91.137   slave2
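
A minimal way to append these entries on every node (a sketch; assumes the entries are not already present):

cat >> /etc/hosts <<EOF
172.16.91.135   master
172.16.91.136   slave1
172.16.91.137   slave2
EOF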

2.2 Passwordless SSH login (on the master node)

  • ssh-keygen
  • ssh-copy-id root@slave1
  • ssh-copy-id root@slave2
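
To verify that passwordless login works, each command below should print the remote hostname without asking for a password:

ssh root@slave1 hostname
ssh root@slave2 hostname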

2.3 Disable the firewall (all nodes)

  • systemctl stop firewalld
  • systemctl disable firewalld

2.4 Disable swap (all nodes)

  • swapoff -a
  • sed -i 's/.*swap.*/#&/' /etc/fstab
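
To confirm swap is really off, check the Swap line in free (it should show 0 everywhere):

free -h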

2.5 Kernel settings (all nodes)

2.5.1 Enable the netfilter bridge module

modprobe br_netfilter
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
ls /proc/sys/net/bridge
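
As a quick check, both keys should report 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables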

2.5.2 Enable IPv4 forwarding (all nodes)

If forwarding is not enabled, joining a worker node to the cluster fails a kubeadm preflight check (screenshot omitted; typically it complains that the contents of /proc/sys/net/ipv4/ip_forward are not set to 1).

Fix:

# run the following command
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf  
# then apply it
sysctl -p  

2.5.3 Update system resource limits

echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536"  >> /etc/security/limits.conf
echo "* hard nproc 65536"  >> /etc/security/limits.conf
echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf
echo "* hard memlock  unlimited"  >> /etc/security/limits.conf  
or, equivalently, run the same six commands chained together with &&.

2.5.4 Disable SELinux (all nodes)

setenforce  0 
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux 
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config 
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux 
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config  

2.6 Update the yum repos and install dependencies (all nodes)

2.6.1 Add the Kubernetes yum repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2.6.2 Install the dependency packages

yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim  ntpdate libseccomp libtool-ltdl 

2.7 Configure NTP (all nodes)

systemctl enable ntpdate.service
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service

2.7.1 If the system time does not match the actual time, fix it like this

cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime  
ntpdate us.pool.ntp.org  
date

三项戴、 部署

3.1 Install Docker

Refer to other articles for the installation details.
The version used here is:

[root@slave2 ~]# docker version
Client:
 Version:      17.03.2-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 02:21:36 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.2-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 02:21:36 2017
 OS/Arch:      linux/amd64
 Experimental: false

3.1.1 Add a registry mirror accelerator (all nodes)

If you do not have one, you can register on Alibaba Cloud to obtain your own mirror accelerator address:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://xxxxx.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
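
To confirm the mirror is active, docker info lists the configured mirrors:

docker info | grep -A 1 'Registry Mirrors'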

3.2 Install kubelet, kubeadm, kubectl and kubernetes-cni from RPM packages (all nodes)

yum install -y kubelet kubeadm kubectl kubernetes-cni
systemctl enable kubelet   

After a successful installation, the installed versions are printed:

kubeadm.x86_64 0:1.12.0-0              kubectl.x86_64 0:1.12.0-0              kubelet.x86_64 0:1.12.0-0              kubernetes-cni.x86_64 0:0.6.0-0

A specific version of the binaries can be installed as follows:

yum install -y kubelet-1.12.0 kubeadm-1.12.0 kubectl-1.12.0 kubernetes-cni-0.6.0
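
Either way, a quick version check confirms what landed on the node:

kubeadm version -o short
kubelet --version
kubectl version --client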

3.3 Update the kubelet config file (all nodes)

  1. Update the drop-in configuration file (optional)
    vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    # change this line
    Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
    # add this line
    Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.0"


    Why add pause-amd64:3.0? With this flag, kubelet no longer tries to pull the pause-amd64:3.0 image from the unreachable upstream k8s registry when starting pods.
    You can update the file on the master node first and then push it to slave1 and slave2 with scp:

scp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf root@slave1:/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
scp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf root@slave2:/etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Note:
If the pause image has already been pulled locally, this setting is not strictly required.

  2. Reload and restart the kubelet service
    systemctl daemon-reload
    systemctl enable kubelet
    systemctl restart kubelet

  3. Set up kubectl command completion (master node only, optional):
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc 

3.4 Initialize the cluster (master node)

3.4.1 About the images

For reasons specific to mainland China, k8s.gcr.io is not reachable, so the images need special handling.

  1. List the images kubeadm needs:
    kubeadm config images list

  2. Download the required images from Alibaba Cloud, Docker Hub, or another registry.
    I used the Alibaba Cloud mirrors (the screenshot, omitted here, showed version v1.12.0).

  3. Image download addresses
    The images have been pushed to Alibaba Cloud and can be pulled directly:

    Name                    Address
    kube-proxy              registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-proxy:v1.12.1
    kube-scheduler          registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-scheduler:v1.12.1
    kube-controller-manager registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-controller-manager:v1.12.1
    kube-apiserver          registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-apiserver:1.12.1
    coredns                 registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/coredns:1.2.2
    pause                   registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/pause:3.1
    etcd                    registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/etcd:3.2.24
    calico-cni              registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/calico-cni:v3.1.3
    calico-node             registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/calico-node:v3.1.3

    For example:
    docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/pause:3.1

  4. Re-tag the downloaded images with the names kubeadm expects (the k8s.gcr.io/... names).

    Afterwards, remove the now-redundant mirror tags (optional).

    Run the script below on the master node.
    Note: the etcd, kube-apiserver, kube-controller-manager and kube-scheduler images run only on the master, so they do not need to be copied to the other nodes.

#!/bin/bash

docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-proxy:v1.12.1
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-scheduler:v1.12.1
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-controller-manager:v1.12.1
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-apiserver:1.12.1
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/coredns:1.2.2
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/pause:3.1
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/etcd:3.2.24
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/calico-cni:v3.1.3
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/calico-node:v3.1.3

docker tag registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-proxy:v1.12.1 k8s.gcr.io/kube-proxy:v1.12.1
docker tag registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-scheduler:v1.12.1  k8s.gcr.io/kube-scheduler:v1.12.1
docker tag registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-controller-manager:v1.12.1 k8s.gcr.io/kube-controller-manager:v1.12.1
docker tag registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-apiserver:1.12.1 k8s.gcr.io/kube-apiserver:v1.12.1
docker tag registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
docker tag registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/pause:3.1 k8s.gcr.io/pause:3.1 
docker tag registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24


docker rmi -f registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-controller-manager:v1.12.1
docker rmi -f registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-proxy:v1.12.1
docker rmi -f registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-scheduler:v1.12.1
docker rmi -f registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-apiserver:1.12.1
docker rmi -f registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/etcd:3.2.24
docker rmi -f registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/coredns:1.2.2
docker rmi -f registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/pause:3.1

  5. The pause and kube-proxy images must also be copied to the worker nodes,
    re-tagged as k8s.gcr.io/pause:3.1 and k8s.gcr.io/kube-proxy:v1.12.1,
    because these images are needed whenever a pod is created, the pause image in particular. A sketch for the workers follows.
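
A minimal pull-and-retag sketch for slave1/slave2, using the mirror addresses from the table above:

#!/bin/bash
# worker nodes only need the pause and kube-proxy images
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/pause:3.1
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-proxy:v1.12.1
docker tag registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/kube-proxy:v1.12.1 k8s.gcr.io/kube-proxy:v1.12.1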

3.4.2 Write the kubeadm config file config.yaml (master node)

The config file name is arbitrary:
vim kubeadm-config-v1.12.1.yaml

apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
etcd:
  external: 
    endpoints:
    - https://172.16.91.222:2379
    caFile: /root/ca-k8s/ca.pem
    certFile: /root/ca-k8s/etcd.pem
    keyFile: /root/ca-k8s/etcd-key.pem
    dataDir: /var/lib/etcd
networking:
  podSubnet: 192.168.0.0/16
kubernetesVersion: v1.12.1
api:
  advertiseAddress: "172.16.91.135"
token: "ivxcdu.wub0bx69mk91qo6w"
tokenTTL: "0s"
apiServerCertSANs:
- master
- slave1
- slave2
- 172.16.91.135
- 172.16.91.136
- 172.16.91.137
- 172.16.91.222
featureGates:
  CoreDNS: true

Note:

  • Different kubeadm versions expect different attributes in the config file, so be careful to match your version.
  • In the config above, the etcd section must be adapted to your own environment. My local environment already has an etcd, so kubeadm's bundled etcd is not needed; hence etcd is declared with the external attribute here. A connectivity check is sketched below.
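
Before running kubeadm init against an external etcd, it is worth checking connectivity. A sketch, assuming etcdctl (v3 API) is installed and the certificate paths match the config above:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://172.16.91.222:2379 \
  --cacert=/root/ca-k8s/ca.pem \
  --cert=/root/ca-k8s/etcd.pem \
  --key=/root/ca-k8s/etcd-key.pem \
  endpoint health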

3.4.3 Initialize the master node (master node)

Option 1: without a config file (this is the option I finally adopted)

kubeadm init --kubernetes-version=v1.12.1 --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12 

(If you plan to use the Calico manifest from section 3.4.10 with its default IP pool, pass --pod-network-cidr=192.168.0.0/16 instead so the pod CIDR matches.)

Specifying a mirror image repository:
When the images are published under a different prefix, e.g. registry.aliyuncs.com/google_containers/kube-proxy:v1.15.1, you can point kubeadm at that repository:

 kubeadm init --kubernetes-version="v1.15.1" --pod-network-cidr=192.168.0.0/16 --image-repository=registry.aliyuncs.com/google_containers | tee kubeadm-init.log

(Reference: https://www.cnblogs.com/AutoSmart/p/11230268.html)

Option 2: with a config file (this option may have problems)

kubeadm init --config kubeadm-config-v1.12.1.yaml

It failed with:

error ensuring dns addon: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "10.96.0.10": field is immutable

Cause?
Evidently the value's type does not meet the schema's requirements.
Fix?
Change the kind to InitConfiguration.
The final config file content is shown below:
vim kubeadm-config-v1.12.1.yaml

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
etcd:
  external:
    endpoints:
    - https://172.16.91.222:2379
    caFile: /root/ca-k8s/ca.pem
    certFile: /root/ca-k8s/etcd.pem
    keyFile: /root/ca-k8s/etcd-key.pem
    dataDir: /var/lib/etcd
networking:
  podSubnet: 192.168.0.0/16
token: "ivxcdu.wub0bx69mk91qo6w"
tokenTTL: "0"
apiServerCertSANs:
- master
- slave1
- slave2
- 172.16.91.135
- 172.16.91.136
- 172.16.91.137
- 172.16.91.222
- 127.0.0.1

When initialization succeeds, the output looks like this:

[init] using Kubernetes version: v1.12.1
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [172.16.91.135 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.91.135]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 25.504521 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: yj2qxf.s4fjen6wgcmo506w
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.16.91.135:6443 --token yj2qxf.s4fjen6wgcmo506w --discovery-token-ca-cert-hash sha256:6d7d90a6ce931a63c96dfe9327691e0e6caa3f69082a9dc374c3643d0d685eb9

3.4.4 Recovering from a failed initialization (two ways) (master node)

  • Way 1 (recommended: simple and clear):

    kubeadm reset

  • Way 2:

    rm -rf /etc/kubernetes/*.conf
    rm -rf /etc/kubernetes/manifests/*.yaml
    docker ps -a |awk '{print $1}' |xargs docker rm -f
    systemctl stop kubelet

3.4.5 Configure kubectl credentials (master node)

Set up the kubectl config file:

  • For a non-root user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • For root:

    echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
    source ~/.bash_profile

After this step, you will no longer see errors like this:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

3.4.6 Quick sanity checks

  • Check the master node's status
    kubectl get node

  • Check the pods
    kubectl get pod -n kube-system -o wide

  • Check component health
    kubectl get componentstatus

  • Check how kubelet is running
    systemctl status kubelet

3.4.7 Allow the master to run pods as well (by default the master does not schedule pods) (optional)

kubectl taint nodes --all node-role.kubernetes.io/master-
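
If you later want to restore the default behaviour, re-apply the taint (master is the node name from section 1):

kubectl taint nodes master node-role.kubernetes.io/master=:NoSchedule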

3.4.8 Join the worker nodes slave1 and slave2 to the cluster

Log in to slave1 and slave2 and run the command below (note: substitute your own token and hash):

  kubeadm join 172.16.91.135:6443 --token yj2qxf.s4fjen6wgcmo506w --discovery-token-ca-cert-hash sha256:6d7d90a6ce931a63c96dfe9327691e0e6caa3f69082a9dc374c3643d0d685eb9

If you have forgotten the token above, you can recover the full join command with the following (run on the master):

kubeadm token create --print-join-command

3.4.9 Check the pod status again

kubectl get pods --all-namespaces -owide

The coredns pods were still not ready at this point (screenshot omitted).

To find out why, inspect the pod:
kubectl describe pod coredns-576cbf47c7-4nd5t -nkube-system

3.4.10 Install the Calico plugin to enable pod-to-pod networking

Get the Calico YAML from the Kubernetes website (screenshot omitted).

Note:
The Calico version matching Kubernetes v1.12.1 is Calico v3.1.3.
Run the following commands directly:

kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml  
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
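
To watch the Calico pods come up (the k8s-app=calico-node label is the one used by that manifest):

kubectl get pods -n kube-system -l k8s-app=calico-node -o wide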

四靠瞎、 問題說明:

  1. Checking pod status, only the coredns pods stayed in ContainerCreating and were never assigned an IP.
    The kubelet log showed the CNI network plugin status as false.
    This was probably because the podSubnet property defined in the config file did not take effect when the master was initialized,
    so I switched to plain command-line flags instead.
  2. If the coredns pods keep landing in Running/Error states, check the logs:
[root@master /]#kubectl logs coredns-55f86bf584-7sbtj -n kube-system
.:53
2018/10/09 10:20:15 [INFO] CoreDNS-1.2.2
2018/10/09 10:20:15 [INFO] linux/amd64, go1.11, eb51e8b
CoreDNS-1.2.2
linux/amd64, go1.11, eb51e8b
2018/10/09 10:20:15 [INFO] plugin/reload: Running configuration MD5 = 86e5222d14b17c8b907970f002198e96
2018/10/09 10:20:15 [FATAL] plugin/loop: Seen "HINFO IN 2050421060481615995.5620656063561519376." more than twice, loop detected

How to fix it?

The fix shown in the (omitted) screenshots was to delete the loop plugin line from the coredns Corefile. Edit the ConfigMap with:
kubectl edit cm coredns -oyaml -nkube-system

A non-interactive sketch of the same edit follows.
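
The sketch below assumes the default Corefile layout, where loop sits on a line of its own; it strips that line and recreates the coredns pods so they pick up the change:

kubectl -n kube-system get cm coredns -o yaml | sed '/^\s*loop$/d' | kubectl replace -f -
kubectl -n kube-system delete pod -l k8s-app=kube-dns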

I did not chase the root cause further; my main interest right now is Calico's network policy, so this issue is set aside for the moment.

Related discussions:
https://github.com/coredns/coredns/issues/2087
https://github.com/kubernetes/kubeadm/issues/998
They contain proposed fixes, though for some reason they did not work on my cluster.

Appendix: how to test the DNS addon

  1. Prepare a test manifest, pod-for-dns.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-test
      namespace: default
    spec:
      containers:
      - image: busybox:1.28.4
        command:
        - sleep
        - "3600"
        imagePullPolicy: IfNotPresent
        name: dns-test
      restartPolicy: Always

    Note: the busybox version matters here; nslookup is broken in some releases, so stick with 1.28.4.
  2. Create the pod:
    kubectl create -f pod-for-dns.yaml

  3. Test the DNS service:
    [root@master ~]# kubectl exec dns-test -- nslookup kubernetes
    Server:    10.96.0.10
    Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
    
    Name:      kubernetes
    Address 1: 10.96.0.1 webapp.default.svc.cluster.local
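
A fully qualified service name should resolve too (same test pod, assuming the default cluster.local domain):

kubectl exec dns-test -- nslookup kubernetes.default.svc.cluster.local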
    
    