k8s certificate configuration, DNS, and the dashboard

I. Issuing certificates

Mutual TLS authentication requires certificates issued by a self-built CA. Certificates from a public CA are generally not usable here: most Kubernetes clusters are deployed on internal networks that communicate over private IP addresses, and public CAs only sign certificates for domain names, so getting a certificate for a (private) IP is usually not possible.

master:192.168.23.128
node1:192.168.23.129
node2:192.168.23.131
node3:192.168.23.130

1. Self-sign a CA

To issue private certificates, first self-sign a CA root certificate:
create a directory to hold the certificates,
create the CA private key,
and self-sign the CA certificate.

[root@master ~]# mkdir /etc/kubernetes/ssl && cd /etc/kubernetes/ssl
[root@master ssl]# openssl genrsa -out ca-key.pem 2048
[root@master ssl]# openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
[root@master ssl]# ls
ca-key.pem  ca.pem
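To confirm the CA came out right, inspect the new root certificate. A minimal sketch, assuming only openssl is installed; it works in a throwaway temp directory so it will not disturb /etc/kubernetes/ssl:

```shell
# Scratch directory so this does not touch the real cert directory
WORKDIR=$(mktemp -d)
cd "$WORKDIR"

# Same commands as above: CA private key, then a self-signed root cert
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 \
  -out ca.pem -subj "/CN=kube-ca"

# Inspect the result: the subject should be CN=kube-ca, and a
# self-signed root must verify against itself
openssl x509 -in ca.pem -noout -subject
openssl verify -CAfile ca.pem ca.pem
```

The `openssl verify` line should print `ca.pem: OK`; if it does not, the key and certificate do not match and the rest of the chain will fail.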

2. Sign the apiserver certificate

After self-signing the CA, use this root CA to sign the apiserver's certificate. First, adjust the openssl configuration.

# vim openssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.254.0.1   # k8s cluster Service IP
IP.2 = 192.168.23.128  # k8s master IP

Then sign the apiserver certificate:

# Generate the apiserver private key
openssl genrsa -out apiserver-key.pem 2048
# Generate the certificate signing request
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
# Sign it with the self-built CA
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
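The `-extensions v3_req -extfile openssl.cnf` flags on the signing step matter: `openssl x509 -req` does not copy extensions from the CSR by default, so without them the signed certificate ends up with no SANs and clients will reject it during hostname/IP verification. A self-contained sketch that runs the same flow in a temp directory and checks the SANs made it in (it regenerates a throwaway CA purely for the demo):

```shell
WORKDIR=$(mktemp -d)
cd "$WORKDIR"

# Throwaway CA for the demo (in the real setup, reuse the CA from step 1)
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 365 -out ca.pem -subj "/CN=kube-ca"

# Same SAN config as above, abbreviated
cat > openssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
IP.1 = 10.254.0.1
IP.2 = 192.168.23.128
EOF

# Same three commands as above
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr \
  -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf

# The DNS and IP SANs should now appear in the signed certificate
openssl x509 -in apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"
```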

3. Generate the cluster admin certificate

openssl genrsa -out admin-key.pem 2048
openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=kube-admin"
openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365

4. Sign the node certificates

First adjust the openssl configuration:

[root@master ssl]#  cp openssl.cnf worker-openssl.cnf
[root@master ssl]# cat worker-openssl.cnf 
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.23.129
IP.2 = 192.168.23.131
IP.3 = 192.168.23.130

Generate a certificate for each node and copy it to that node's directory:

[root@master ssl]# for i in {node1,node2,node3}
> do
> openssl  genrsa -out $i-worker-key.pem 2048
>  openssl req -new -key $i-worker-key.pem -out $i-worker.csr -subj "/CN=$i" -config worker-openssl.cnf
> openssl x509 -req -in $i-worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out $i-worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf
> ssh root@$i "mkdir /etc/kubernetes/ssl;chown kube:kube -R /etc/kubernetes/ssl"
> scp /etc/kubernetes/ssl/ca.pem /etc/kubernetes/ssl/$i* root@$i:/etc/kubernetes/ssl
> done
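Before copying the certificates out, it is worth checking that each worker certificate chains back to the CA and carries the node name as its CN, since the kubelet identifies itself by that CN. A minimal sketch for one node, run in a temp directory with a throwaway CA so it cannot disturb the real files:

```shell
WORKDIR=$(mktemp -d)
cd "$WORKDIR"

# Throwaway CA for the demo (in the real setup, reuse the existing ca.pem)
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 365 -out ca.pem -subj "/CN=kube-ca"

# Same worker config as above, shortened to one SAN
cat > worker-openssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.23.129
EOF

# One worker cert, same commands as in the loop above
openssl genrsa -out node1-worker-key.pem 2048
openssl req -new -key node1-worker-key.pem -out node1-worker.csr \
  -subj "/CN=node1" -config worker-openssl.cnf
openssl x509 -req -in node1-worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out node1-worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf

# The cert must verify against the CA and its subject must be CN=node1
openssl verify -CAfile ca.pem node1-worker.pem
openssl x509 -in node1-worker.pem -noout -subject
```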

II. Configuring Kubernetes

1. Configure the master

The apiserver file:

KUBE_API_ADDRESS="--bind-address=192.168.23.128 --insecure-bind-address=127.0.0.1 "

# The port on the local server to listen on.
KUBE_API_PORT="--secure-port=6443 --insecure-port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.23.128:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--tls-cert-file=/etc/kubernetes/ssl/apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem"

The config file:

KUBE_MASTER="--master=https://192.168.23.128:6443"

The scheduler file:

KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/cm-kubeconfig.yaml --master=http://127.0.0.1:8080"

The controller-manager file:

KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem  --root-ca-file=/etc/kubernetes/ssl/ca.pem --master=http://127.0.0.1:8080 --kubeconfig=/etc/kubernetes/cm-kubeconfig.yaml"

Create a /etc/kubernetes/cm-kubeconfig.yaml file:

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: controllermanager
  user:
    client-certificate: /etc/kubernetes/ssl/apiserver.pem
    client-key: /etc/kubernetes/ssl/apiserver-key.pem
contexts:
- context:
    cluster: local
    user: controllermanager
  name: kubelet-context
current-context: kubelet-context

Restart the services:

systemctl restart etcd kube-apiserver.service kube-controller-manager.service kube-scheduler.service

2. Configure the nodes (using node1 as an example)

The config file:

KUBE_MASTER="--master=https://192.168.23.128:6443"

The kubelet file:

KUBELET_ADDRESS="--address=192.168.23.129"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node1"

# location of the api-server
KUBELET_API_SERVER="--api-servers=https://192.168.23.128:6443"

# pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=kubernetes/pause:latest"
# Add your own!
KUBELET_ARGS="--cluster_dns=10.254.0.3 --cluster_domain=cluster.local --tls-cert-file=/etc/kubernetes/ssl/node1-worker.pem --tls-private-key-file=/etc/kubernetes/ssl/node1-worker-key.pem --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml --allow-privileged=true"

The proxy file:

KUBE_PROXY_ARGS="--proxy-mode=iptables --master=https://192.168.23.128:6443 --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml"

Create a worker-kubeconfig.yaml file:

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://192.168.23.128:6443
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/node1-worker.pem
    client-key: /etc/kubernetes/ssl/node1-worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context

On node1:

# Restart the services
systemctl restart kubelet kube-proxy
# Check their status
systemctl status kubelet kube-proxy -l
# Verify the certificate
curl https://192.168.23.128:6443/api/v1/nodes --cert /etc/kubernetes/ssl/node1-worker.pem --key /etc/kubernetes/ssl/node1-worker-key.pem --cacert /etc/kubernetes/ssl/ca.pem

Type the curl command out by hand rather than pasting it; otherwise it may complain about the CA.
PS: if you see "certificate signed by unknown authority", check the configuration in the kubelet file.

III. Configuring DNS

skydns-rc.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v9
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v9
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v9
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v9
        kubernetes.io/cluster-service: "true"
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: etcd
        image: test-registry:5000/etcd
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: test-registry:5000/kube2sky
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        args:
        - -domain=cluster.local                        # domain for Services in the k8s cluster
        - -kube_master_url=https://192.168.23.128:6443   # master IP and the secure port configured on the apiserver
        - -kubecfg_file=/etc/kubernetes/worker-kubeconfig.yaml
        volumeMounts:
        - mountPath: /etc/kubernetes/ssl
          name: ssl-certs-kubernetes
        - mountPath: /etc/ssl/certs
          name: ssl-certs-host
        - mountPath: /etc/kubernetes/worker-kubeconfig.yaml
          name: config

      - name: skydns
        image: test-registry:5000/skydns
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        args:
        - -machines=http://localhost:4001
        - -addr=0.0.0.0:53
        - -domain=cluster.local
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      - hostPath:
          path: /etc/kubernetes/ssl
        name: ssl-certs-kubernetes
      - hostPath:
          path: /etc/pki/tls/certs
        name: ssl-certs-host
      - hostPath:
          path: /etc/kubernetes/worker-kubeconfig.yaml
        name: config


skydns-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.3           # must match the cluster DNS IP already set in /etc/kubernetes/kubelet
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

Start the RC and the Service:

kubectl create -f skydns-rc.yaml,skydns-svc.yaml

Troubleshooting

kubectl  describe pod name --namespace=kube-system
kubectl  logs podname -c containersname --namespace=kube-system
docker run  test-registry:5000/kube2sky --help 

Verification

Start a container that has the nslookup command and use it to resolve a Service in the same namespace.
Then exec into the etcd container to confirm that the Service records have been written to etcd:

[root@master build]# kubectl  exec -it kube-dns-v9-b8a4z --namespace=kube-system -c etcd etcdctl ls /skydns/local/cluster
/skydns/local/cluster/default
/skydns/local/cluster/svc
/skydns/local/cluster/kube-system
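For the nslookup check mentioned above, a throwaway pod works well. A sketch, assuming a busybox image is reachable (swap in whatever registry/image is available in your environment):

```yaml
# busybox-dns-test.yaml -- throwaway pod for resolving Services from inside the cluster
apiVersion: v1
kind: Pod
metadata:
  name: busybox-dns-test
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
```

Create it with kubectl create -f busybox-dns-test.yaml, then run kubectl exec busybox-dns-test -- nslookup kubernetes.default; with DNS working, the name resolves to the Service cluster IP (10.254.0.1 in this setup).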

IV. Configuring the dashboard

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: index.tenxcloud.com/google_containers/kubernetes-dashboard-amd64:v1.4.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          - --apiserver-host=https://192.168.23.128:6443
          - --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
        volumeMounts:
        - mountPath: /etc/kubernetes/ssl
          name: ssl-certs-kubernetes
        - mountPath: /etc/ssl/certs
          name: ssl-certs-host
        - mountPath: /etc/kubernetes/worker-kubeconfig.yaml
          name: config
      volumes:
      - hostPath:
          path: /etc/kubernetes/ssl
        name: ssl-certs-kubernetes
      - hostPath:
          path: /etc/pki/tls/certs
        name: ssl-certs-host
      - hostPath:
          path: /etc/kubernetes/worker-kubeconfig.yaml
        name: config
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30010
    targetPort: 9090
  selector:
    app: kubernetes-dashboard

Access it at http://nodeip:30010 (any node's IP).
