k8s binary deployment 01

Building a k8s cluster from release binaries

Cluster setup:


1. Production k8s platform planning

Old Boy course notes: https://www.cnblogs.com/yanyanqaq/p/12607713.html

Instructor's blog: https://blog.stanley.wang/categories/Kubernetes%E5%AE%B9%E5%99%A8%E4%BA%91%E6%8A%80%E6%9C%AF%E4%B8%93%E9%A2%98/

https://blog.stanley.wang/

https://www.cnblogs.com/yanyanqaq/p/12607713.html#242%E9%85%8D%E7%BD%AE


Production planning:

master: 3 machines recommended

etcd: at least 3 machines, always an odd count (3, 5, 7) so a leader vote can never split 1:1


1. Lab environment plan and cluster node initialization

Cluster plan:

3 machines: one master, two nodes

master: k8s-master1 192.168.31.63

worker: k8s-node1 192.168.31.65

worker: k8s-node2 192.168.31.66

k8s version: 1.16

OS version: CentOS 7.7

Install method: binary

(This plan follows the Old Boy course layout.)

Host configuration is as follows:

192.168.208.200 cahost
192.168.208.11 lb1
192.168.208.12 lb2
192.168.208.21 node1
192.168.208.22 node2

Base initialization on all nodes:

1. Stop the firewall

systemctl stop firewalld

systemctl disable firewalld

2. Disable selinux

setenforce 0

vim /etc/selinux/config, set SELINUX=disabled

3. Set hostnames

4. Name resolution

/etc/hosts

5. Time synchronization

Pick one node as the time server; the others are clients.

master is the server (port 123/udp):

# yum install chrony -y

vim /etc/chrony.conf
 server 127.127.1.0 iburst    # upstream server (the local clock)
 allow 192.168.31.0/24        # clients allowed to sync
 local stratum 10

# systemctl start chronyd
# systemctl enable chronyd


Client configuration on the nodes:

# yum install chrony -y

# vim /etc/chrony.conf

server 192.168.31.63 iburst

# systemctl start chronyd

# systemctl enable chronyd

# chronyc sources    # verify the sync status

6. Disable swap

(an active swap partition can keep the services from starting)

swapoff -a

vim /etc/fstab and comment out the swap line at the end

verify with free -m

From here on the Old Boy lab environment is used:

Admin host: 192.168.208.200

LB1: 192.168.208.11

LB2: 192.168.208.12

node1: 192.168.208.21

node2: 192.168.208.22

CA certificate authority

CA certificates (a common interview topic)

Encryption schemes:

Symmetric: the same key encrypts and decrypts

Asymmetric: the sender encrypts with the public key and the matching private key decrypts, i.e. a key pair

One-way (hashing): can only be computed forward, never reversed, e.g. MD5

A certificate system (PKI) includes:

end entities

a registration authority (RA)

a certification authority (CA)

a certificate revocation list (CRL)

a certificate repository

Where SSL certificates come from:

bought from a third-party authority

self-signed: build your own CA with openssl or cfssl and issue certificates from it (a sketch follows)
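For the self-signed route, a minimal openssl sketch of the idea (file names and subjects are illustrative only; the cluster build below uses cfssl instead):

# build a CA key and a self-signed root certificate, valid 10 years
openssl genrsa -out demo-ca-key.pem 2048
openssl req -x509 -new -key demo-ca-key.pem -days 3650 -subj "/CN=demo-ca" -out demo-ca.pem
# issue a server certificate signed by that CA
openssl genrsa -out server-key.pem 2048
openssl req -new -key server-key.pem -subj "/CN=demo-server" -out server.csr
openssl x509 -req -in server.csr -CA demo-ca.pem -CAkey demo-ca-key.pem -CAcreateserial -days 365 -out server.pem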


2. Prepare the certificate environment

Download on 192.168.208.200 from https://pkg.cfssl.org/

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo

chmod a+x /usr/bin/cfssl*
mkdir -p /opt/certs

Create the JSON config for the CA certificate signing request (CSR). Note: the # annotations in this and the JSON blocks below are explanatory only; strip them before saving, since JSON itself has no comments.

vim /opt/certs/ca-csr.json
{
    "CN":"Tjcom", #域名
    "hosts":[
    
    ],
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",    #國(guó)家
            "ST": "guangdong", #州 省
            "L": "shengzheng",  # 市
            "O": "tj",            # 組織
            "OU": "ops"             #單位
        }
    ],
    "ca":{
        "expiry": "175200h"
    }
}

Generate the root certificate

[root@CA-Host certs]# pwd
/opt/certs
[root@CA-Host certs]# cfssl gencert -initca ca-csr.json  |cfssl-json -bare ca
[root@CA-Host certs]# ll
total 16
-rw-r--r--. 1 root root  997 May  7 14:06 ca.csr
-rw-r--r--. 1 root root  221 May  7 14:02 ca-csr.json
-rw-------. 1 root root 1679 May  7 14:06 ca-key.pem
-rw-r--r--. 1 root root 1346 May  7 14:06 ca.pem
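To double-check what was issued, the cfssl-certinfo tool downloaded above can decode a certificate:

[root@CA-Host certs]# cfssl-certinfo -cert ca.pem    # prints the CN, validity window and key usage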

3. Prepare the Docker environment: install on every service node

curl -fsSL https://get.docker.com |bash -s docker --mirror Aliyun
mkdir -p /data/docker  /etc/docker

node01: 192.168.208.21

vim /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","192.168.208.200"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "bip": "172.7.21.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}

node02: 192.168.208.22

vim /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","192.168.208.200"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "bip": "172.7.22.1/24",  #這里地址不一樣
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}

Admin node: 192.168.208.200

vim /etc/docker/daemon.json
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","192.168.208.200"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "bip": "172.7.200.1/24",  #這里地址不一樣
  "exec-opts": ["vative.cgroupdriver=systemd"],
  "live-restore": true
}
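After writing each daemon.json, restart docker and confirm the settings took effect (a quick check, not part of the original notes):

systemctl restart docker
docker info | grep -i 'cgroup driver'   # expect: Cgroup Driver: systemd
ip addr show docker0                    # expect this host's bip, e.g. 172.7.21.1/24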

4. Set up the Harbor private registry on the admin host

Version 1.7.6 or newer is recommended: https://github.com/goharbor/harbor/releases

https://github.com/goharbor/harbor/releases/download/v1.10.0/harbor-offline-installer-v1.10.0.tgz

# yum install docker-compose

Extract the tarball, change the HTTP port to 180, set the hostname, set the admin password to 123456, comment out the https/443 section, then run the installer; a sketch of those edits follows.
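A sketch of those edits, assuming the v1.10.0 offline tarball layout:

tar -xzvf harbor-offline-installer-v1.10.0.tgz -C /opt/
cd /opt/harbor
# in harbor.yml:
#   hostname: 192.168.208.200    <- this host
#   http: port: 180              <- moved off 80 so nginx can sit in front
#   harbor_admin_password: 123456
#   comment out the whole https: block (port 443, certificate, private_key)
./install.sh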

Once Harbor is up, put an nginx reverse proxy in front of it:

/etc/nginx/conf.d/harbor.conf
server {
  listen      80;
  server_name localhost;
  client_max_body_size 1000m;
  location / {
     proxy_pass http://127.0.0.1:180;
  }

}

After Harbor is up, create a project named public.

docker pull nginx:1.7.9
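To smoke-test the registry end to end, one option is to retag that image and push it (the tag name is illustrative):

docker tag nginx:1.7.9 192.168.208.200/public/nginx:v1.7.9
docker login 192.168.208.200        # admin / 123456
docker push 192.168.208.200/public/nginx:v1.7.9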

Start installing k8s

1. Deploy the etcd cluster

Hostname   Role       IP
LB2        leader     192.168.208.12
node01     follower   192.168.208.21
node02     follower   192.168.208.22

etcd download: https://github.com/etcd-io/etcd/releases (versions above 3.3 are not recommended here)

https://github.com/etcd-io/etcd/releases/download/v3.2.30/etcd-v3.2.30-linux-amd64.tar.gz

Step 1: create the etcd certificate files on the 200 node
[root@CA-Host certs]# pwd
/opt/certs
[root@CA-Host certs]# vim ca-config.json
{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer":{
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
         }
    }

}
[root@CA-Host certs]# vim etcd-peer-csr.json
{
    "CN":"k8s-etcd",
    "hosts": [
        "192.168.208.11",
        "192.168.208.12",
        "192.168.208.21",
        "192.168.208.22"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "guangdong",
            "L": "shengzheng",
            "O": "tj",
            "OU": "OPS"
        }
    ]
}

Generate the etcd certificate files

[root@CA-Host certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json|cfssl-json -bare etcd-peer
[root@CA-Host certs]# ll
total 36
-rw-r--r--. 1 root root  654 May  8 15:24 ca-config.json
-rw-r--r--. 1 root root  997 May  7 14:06 ca.csr
-rw-r--r--. 1 root root  221 May  7 14:02 ca-csr.json
-rw-------. 1 root root 1679 May  7 14:06 ca-key.pem
-rw-r--r--. 1 root root 1346 May  7 14:06 ca.pem
-rw-r--r--. 1 root root 1070 May  8 15:43 etcd-peer.csr
-rw-r--r--. 1 root root  266 May  8 15:38 etcd-peer-csr.json
-rw-------. 1 root root 1675 May  8 15:43 etcd-peer-key.pem
-rw-r--r--. 1 root root 1436 May  8 15:43 etcd-peer.pem
Step 2: install etcd on the first machine, 192.168.208.12
Extract into /opt:
[root@k8s-L2 ~]# useradd -s /sbin/nologin -M etcd
[root@k8s-L2 tool]# tar -xzvf etcd-v3.2.30-linux-amd64.tar.gz -C /opt/
[root@k8s-L2 opt]# mv etcd-v3.2.30-linux-amd64 etcd-v3.2.30
[root@k8s-L2 opt]# ln -s /opt/etcd-v3.2.30/ /opt/etcd
[root@k8s-L2 opt]# ll
total 0
lrwxrwxrwx. 1 root      root       18 May  8 16:03 etcd -> /opt/etcd-v3.2.30/
drwxr-xr-x. 3 630384594 600260513 123 Apr  2 03:01 etcd-v3.2.30
Create three directories
[root@k8s-L2 opt]# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
[root@k8s-L2 certs]# pwd
/opt/etcd/certs
Copy the certificates
[root@k8s-L2 certs]# scp 192.168.208.200:/opt/certs/ca.pem ./
[root@k8s-L2 certs]# scp 192.168.208.200:/opt/certs/etcd-peer.pem ./
[root@k8s-L2 certs]# scp 192.168.208.200:/opt/certs/etcd-peer-key.pem ./
[root@k8s-L2 certs]# ll
total 12
-rw-r--r--. 1 root root 1346 May  8 16:06 ca.pem
-rw-------. 1 root root 1675 May  8 16:10 etcd-peer-key.pem
-rw-r--r--. 1 root root 1436 May  8 16:08 etcd-peer.pem

Create the startup script
vim /opt/etcd/etcd-server-startup.sh
#!/bin/sh
./etcd --name etcd-server-208-12 \
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://192.168.208.12:2380 \
       --listen-client-urls https://192.168.208.12:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 800000000 \
       --initial-advertise-peer-urls https://192.168.208.12:2380 \
       --advertise-client-urls https://192.168.208.12:2379,http://127.0.0.1:2379 \
       --initial-cluster etcd-server-208-12=https://192.168.208.12:2380,etcd-server-208-21=https://192.168.208.21:2380,etcd-server-208-22=https://192.168.208.22:2380 \
       --ca-file ./certs/ca.pem \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --client-cert-auth \
       --trusted-ca-file ./certs/ca.pem \
       --peer-ca-file ./certs/ca.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-output stdout
 
Fix ownership
[root@k8s-L2 etcd]# chown -R etcd.etcd /opt/etcd-v3.2.30
[root@k8s-L2 etcd]# chown -R etcd.etcd /data/etcd/
[root@k8s-L2 etcd]# chown -R etcd.etcd /data/logs/

Use supervisor to run it
[root@k8s-L2 etcd]# yum install supervisor -y
[root@k8s-L2 etcd]# systemctl start supervisord
[root@k8s-L2 etcd]# systemctl enable supervisord
Create the supervisord ini config

[root@k8s-L2 etcd]# vim /etc/supervisord.d/etcd-server.ini
[program:etcd-server-208-12]
command=/opt/etcd/etcd-server-startup.sh
numprocs=1
directory=/opt/etcd
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=etcd
redirect_stderr=true
stdout_logfile=/data/logs/etcd-server/etcd/stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false

[root@k8s-L2 etcd]# supervisorctl update

This step may fail on the first try; you can also just run the script directly. Whenever you add a new ini file, be sure to reload supervisord.


Another way to manage the startup script

Service-manager unit locations:
centos7 (systemctl):
/usr/lib/systemd/system

centos6:
/etc/rc.d/rcN.d

A systemd sketch follows.
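If you would rather skip supervisor entirely, a minimal systemd unit along these lines should also work (a sketch, not part of the original course):

cat > /usr/lib/systemd/system/etcd.service <<'EOF'
[Unit]
Description=etcd server
After=network.target

[Service]
Type=simple
User=etcd
WorkingDirectory=/opt/etcd
ExecStart=/opt/etcd/etcd-server-startup.sh
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd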

Step 3: copy the finished etcd tree to the other nodes

To avoid repeating the setup, copy /opt/etcd as deployed on 192.168.208.12 to 192.168.208.21 and 192.168.208.22:

[root@k8s-L2 opt]# scp -r etcd-v3.2.30 192.168.208.21:/opt/
[root@k8s-L2 opt]# scp -r etcd-v3.2.30 192.168.208.22:/opt/

On 192.168.208.21 the additional steps are:

[root@node01 ~]# useradd -s /sbin/nologin -M etcd
[root@node01 ~]# cd /opt
[root@node01 opt]# ls
containerd  etcd-v3.2.30  rh
[root@node01 opt]# ln -s /opt/etcd-v3.2.30/ /opt/etcd
[root@node01 opt]# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
[root@node01 opt]# chown -R etcd.etcd /opt/etcd-v3.2.30
[root@node01 opt]# chown -R etcd.etcd /data/etcd/
[root@node01 opt]# chown -R etcd.etcd /data/logs/
Adjust the startup script (member name and IPs):
vim /opt/etcd/etcd-server-startup.sh
#!/bin/sh
./etcd --name etcd-server-208-21 \
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://192.168.208.21:2380 \
       --listen-client-urls https://192.168.208.21:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 800000000 \
       --initial-advertise-peer-urls https://192.168.208.21:2380 \
       --advertise-client-urls https://192.168.208.21:2379,http://127.0.0.1:2379 \
       --initial-cluster etcd-server-208-12=https://192.168.208.12:2380,etcd-server-208-21=https://192.168.208.21:2380,etcd-server-208-22=https://192.168.208.22:2380 \
       --ca-file ./certs/ca.pem \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --client-cert-auth \
       --trusted-ca-file ./certs/ca.pem \
       --peer-ca-file ./certs/ca.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-output stdout

On the third etcd node, 192.168.208.22:

[root@node02 ~]# useradd -s /sbin/nologin -M etcd
[root@node02 ~]# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
[root@node02 ~]# 
[root@node02 ~]# cd /opt
[root@node02 opt]# ln -s /opt/etcd-v3.2.30/ /opt/etcd
[root@node02 opt]# chown -R etcd.etcd /opt/etcd-v3.2.30
[root@node02 opt]# chown -R etcd.etcd /data/etcd/
[root@node02 opt]# chown -R etcd.etcd /data/logs/
[root@node02 opt]# 

Adjust the startup script:
vim /opt/etcd/etcd-server-startup.sh
#!/bin/sh
./etcd --name etcd-server-208-22 \
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://192.168.208.22:2380 \
       --listen-client-urls https://192.168.208.22:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 800000000 \
       --initial-advertise-peer-urls https://192.168.208.22:2380 \
       --advertise-client-urls https://192.168.208.22:2379,http://127.0.0.1:2379 \
       --initial-cluster etcd-server-208-12=https://192.168.208.12:2380,etcd-server-208-21=https://192.168.208.21:2380,etcd-server-208-22=https://192.168.208.22:2380 \
       --ca-file ./certs/ca.pem \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --client-cert-auth \
       --trusted-ca-file ./certs/ca.pem \
       --peer-ca-file ./certs/ca.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-output stdout

Now run sh /opt/etcd/etcd-server-startup.sh on every node.

[root@k8s-L2 opt]# netstat -luntp |grep etcd
tcp        0      0 192.168.208.12:2379     0.0.0.0:*               LISTEN      13592/./etcd        
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      13592/./etcd        
tcp        0      0 192.168.208.12:2380     0.0.0.0:*               LISTEN      13592/./etcd  
[root@node01 ~]# netstat -luntp |grep etcd
tcp        0      0 192.168.208.21:2379     0.0.0.0:*               LISTEN      13732/./etcd        
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      13732/./etcd        
tcp        0      0 192.168.208.21:2380     0.0.0.0:*               LISTEN      13732/./etcd   

[root@node02 ~]# netstat -luntp |grep etcd
tcp        0      0 192.168.208.22:2379     0.0.0.0:*               LISTEN      14118/./etcd        
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      14118/./etcd        
tcp        0      0 192.168.208.22:2380     0.0.0.0:*               LISTEN      14118/./etcd 

Check etcd cluster health (any node will do):

[root@k8s-L2 etcd]# pwd
/opt/etcd
[root@k8s-L2 etcd]# ./etcdctl cluster-health
member 27335ed5e116ecf is healthy: got healthy result from http://127.0.0.1:2379
member 9fa9f37eb6f9bb63 is healthy: got healthy result from http://127.0.0.1:2379
member e00eea0c411d3da4 is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy

All three members are healthy; in the member list below, isLeader=true marks the leader.

[root@k8s-L2 etcd]# ./etcdctl member list
27335ed5e116ecf: name=etcd-server-208-22 peerURLs=https://192.168.208.22:2380 clientURLs=http://127.0.0.1:2379,https://192.168.208.22:2379 isLeader=false
9fa9f37eb6f9bb63: name=etcd-server-208-12 peerURLs=https://192.168.208.12:2380 clientURLs=http://127.0.0.1:2379,https://192.168.208.12:2379 isLeader=true
e00eea0c411d3da4: name=etcd-server-208-21 peerURLs=https://192.168.208.21:2380 clientURLs=http://127.0.0.1:2379,https://192.168.208.21:2379 isLeader=false
ln -s /opt/etcd/etcdctl /usr/sbin/
Debugging supervisor failures:
[root@node001 etcd]# supervisorctl tail -f etcd-server-208-21

2. Deploy the k8s api-server

Download link (the release tarball is often slow to fetch):

https://dl.k8s.io/v1.15.11/kubernetes-server-linux-amd64.tar.gz

Step 1:

Issue a client certificate for api-server-to-etcd traffic (etcd is the server side, api-server is the client):

[root@CA-Host certs]# vim client-csr.json
{
        "CN":"k8s-node",
        "hosts":[
        
        ],
        "key":{
                "algo":"rsa",
                "size":2048
        },
        "names":[
                {
                        "C":"CN",
                        "ST": "guangdong",
                        "L": "shengzheng",
                        "O": "tj",
                        "OU": "ops"
                }
        ]
}

Generate the client certificate

[root@CA-Host certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json |cfssl-json -bare client

Inspect the client certificate; this is the one api-server presents when talking to etcd.

[root@CA-Host certs]# ll
total 52
-rw-r--r--. 1 root root  654 May  8 15:24 ca-config.json
-rw-r--r--. 1 root root  997 May  7 14:06 ca.csr
-rw-r--r--. 1 root root  221 May  7 14:02 ca-csr.json
-rw-------. 1 root root 1679 May  7 14:06 ca-key.pem
-rw-r--r--. 1 root root 1346 May  7 14:06 ca.pem
-rw-r--r--. 1 root root 1001 May  8 20:40 client.csr
-rw-r--r--. 1 root root  190 May  8 20:37 client-csr.json
-rw-------. 1 root root 1679 May  8 20:40 client-key.pem
-rw-r--r--. 1 root root 1371 May  8 20:40 client.pem

Step 2:

Generate the api-server serving certificate

[root@CA-Host certs]# vim apiserver-csr.json
{
    "CN": "k8s-apiserver",
    "hosts": [
        "127.0.0.1",
        "192.168.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "192.168.208.10",      #api-server 可能存在的地址生音, 也可以沒(méi)有宁否, 可以寫(xiě)上去,有是一定要寫(xiě)的
        "192.168.208.21",
        "192.168.208.22",
        "192.168.208.23"        
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C":"CN",
            "ST": "guangdong",
            "L": "shengzheng",
            "O": "tj",
            "OU": "ops"
        }
    ]

}

Generate the api-server certificate; every component that talks to the server uses this same set.

[root@CA-Host certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare apiserver

[root@CA-Host certs]# ll
total 68
-rw-r--r--. 1 root root 1257 May  8 20:54 apiserver.csr
-rw-r--r--. 1 root root  488 May  8 20:52 apiserver-csr.json
-rw-------. 1 root root 1679 May  8 20:54 apiserver-key.pem
-rw-r--r--. 1 root root 1606 May  8 20:54 apiserver.pem
-rw-r--r--. 1 root root  654 May  8 15:24 ca-config.json
-rw-r--r--. 1 root root  997 May  7 14:06 ca.csr
-rw-r--r--. 1 root root  221 May  7 14:02 ca-csr.json
-rw-------. 1 root root 1679 May  7 14:06 ca-key.pem
-rw-r--r--. 1 root root 1346 May  7 14:06 ca.pem
Step 3:

Extract kubernetes on node01:

[root@node01 tool]# tar -xzvf kubernetes-server-linux-amd64.tar.gz -C /opt   
[root@node01 tool]# cd /opt/
[root@node01 opt]# ls
containerd  etcd  etcd-v3.2.30  kubernetes  rh
[root@node01 opt]# mv kubernetes kubernetes.v1.15.11
[root@node01 opt]# ln -s /opt/kubernetes.v1.15.11/ /opt/kubernetes
[root@node01 opt]# cd /opt/kubernetes/server/bin
[root@node01 bin]# pwd
/opt/kubernetes/server/bin
[root@node01 bin]# mkdir cert
[root@node01 bin]# cd cert
# copy the certificates
[root@node01 cert]# scp 192.168.208.200:/opt/certs/ca.pem ./
[root@node01 cert]# scp 192.168.208.200:/opt/certs/ca-key.pem ./
[root@node01 cert]# scp 192.168.208.200:/opt/certs/client.pem ./
[root@node01 cert]# scp 192.168.208.200:/opt/certs/client-key.pem ./
[root@node01 cert]# scp 192.168.208.200:/opt/certs/apiserver.pem ./ 
[root@node01 cert]# scp 192.168.208.200:/opt/certs/apiserver-key.pem ./

# create the apiserver config files
[root@node01 conf]# pwd
/opt/kubernetes/server/bin/conf
vim audit.yaml
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
------------------------------------------------
Create the api-server startup script
vim /opt/kubernetes/server/bin/kube-apiserver.sh
#!/bin/bash
./kube-apiserver \
  --apiserver-count 2 \
  --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
  --audit-policy-file ./conf/audit.yaml \
  --authorization-mode RBAC \
  --client-ca-file ./cert/ca.pem \
  --requestheader-client-ca-file ./cert/ca.pem \
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
  --etcd-cafile ./cert/ca.pem \
  --etcd-certfile ./cert/client.pem \
  --etcd-keyfile ./cert/client-key.pem \
  --etcd-servers https://192.168.208.12:2379,https://192.168.208.21:2379,https://192.168.208.22:2379 \
  --service-account-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --service-node-port-range 3000-29999 \
  --target-ram-mb=1024 \
  --kubelet-client-certificate ./cert/client.pem \
  --kubelet-client-key ./cert/client-key.pem \
  --log-dir  /data/logs/kubernetes/kube-apiserver \
  --tls-cert-file ./cert/apiserver.pem \
  --tls-private-key-file ./cert/apiserver-key.pem \
  --v 2
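Before launching, make the script executable and create the log directory it references up front (the script does not create it itself):

[root@node01 bin]# chmod a+x kube-apiserver.sh
[root@node01 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver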

Step 4:

Copy the configured kubernetes tree from node01 to node02:

[root@node01 opt]# scp -r kubernetes.v1.15.11 192.168.208.22:/opt

On node02:

[root@node02 opt]# ln -s /opt/kubernetes.v1.15.11/ /opt/kubernetes
[root@node02 opt]# cd /opt/kubernetes/server/bin

# start the apiserver
[root@node02 bin]# sh kube-apiserver.sh

Verify the service is listening

node01

[root@node01 cfg]# netstat -luntp |grep kube-apiser
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      18688/./kube-apiser 
tcp6       0      0 :::6443                 :::*                    LISTEN      18688/./kube-apiser
[root@node02 bin]# netstat -luntp |grep kube-apiser
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      18688/./kube-apiser 
tcp6       0      0 :::6443                 :::*                    LISTEN      18688/./kube-apiser

3. Build the reverse proxy (layer 4 and layer 7)


etcd: ports 2379/2380. Proxy nodes: 192.168.208.11 and 192.168.208.12.

The VIP 192.168.208.10 fronts the apiservers' port 6443 (6443 is kube-apiserver's secure port).

[root@k8s_lb1]# netstat -luntp |grep 6443
tcp6       0      0 :::6443                 :::*                    LISTEN      18688/./kube-apiser 
Step 1:
[root@k8s_lb1]# yum install nginx
[root@k8s_lb2]# yum install nginx
Configure the layer-4 proxy
[root@k8s_lb1]# vim /etc/nginx/nginx.conf
# append at the end of nginx.conf
stream {
    upstream kube-apiserver {
        server 192.168.208.21:6443     max_fails=3 fail_timeout=30s;
        server 192.168.208.22:6443     max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}
The identical stream block goes at the end of /etc/nginx/nginx.conf on k8s_lb2 as well.

[root@k8s_lb1]# systemctl restart nginx
[root@k8s_lb2]# systemctl restart nginx
[root@k8s_lb1]# netstat -luntp |grep nginx
tcp        0      0 0.0.0.0:7443            0.0.0.0:*               LISTEN      19031/nginx: master
[root@k8s_lb2]# netstat -luntp |grep nginx
tcp        0      0 0.0.0.0:7443            0.0.0.0:*               LISTEN      19031/nginx: master
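A quick sanity check from either LB (this assumes the stream module is available, e.g. via the nginx-mod-stream package on CentOS 7, and that the apiservers are already up):

[root@k8s_lb1]# nginx -t                                  # config syntax check
[root@k8s_lb1]# curl -sk https://127.0.0.1:7443/healthz   # through the L4 proxy; should print ok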
Step 2:

Install keepalived

[root@k8s_lb1]#  yum install keepalived
[root@k8s_lb2]# yum install keepalived

Create the port-check script

[root@k8s_lb1]# vi /etc/keepalived/check_port.sh
#!/bin/bash
# keepalived port-monitoring script
# usage, in keepalived.conf:
# vrrp_script check_port {                        # define a vrrp_script
#     script "/etc/keepalived/check_port.sh 6379" # port to watch
#     interval 2                                  # check interval, in seconds
# }
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
        PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
        if [ $PORT_PROCESS -eq 0 ];then
                echo "Port $CHK_PORT Is Not Used,End."
                exit 1
        fi
else
        echo "Check Port Cant Be Empty!"
fi

[root@k8s_lb1]# chmod a+x /etc/keepalived/check_port.sh
-------------------------------------------------------------------
The identical script goes onto k8s_lb2 as well:
[root@k8s_lb2]# chmod a+x /etc/keepalived/check_port.sh
Step 3:

Configure the keepalived high-availability instances

[root@k8s_lb1]#  vi /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id k8s
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    mcast_src_ip 192.168.208.21
    nopreempt   # non-preemptive failback
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.208.10

    }
}

-------------------------------------------------
[root@k8s_lb2]# vi /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id k8s
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 50
    advert_int 1
    mcast_src_ip 192.168.208.22
    nopreempt  # non-preemptive failback
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.208.10

    }
}

[root@k8s_lb1]#  systemctl restart keepalived
[root@k8s_lb2]# systemctl restart keepalived

Check that the virtual IP is up
[root@k8s_lb1]# ip addr
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.208.21/24 brd 192.168.208.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.208.10/32 scope global ens33
       valid_lft forever preferred_lft forever

192.168.208.10 is present, so high availability is in place; a short failover drill follows below.

Remember that SELinux must stay off.
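A note on the numbers above: the check script subtracts 20 from the priority when port 7443 stops answering, so with priorities 100 and 50 the MASTER would still win (80 > 50) and the VIP would only move on a host failure, not an nginx failure. Bringing the priorities within 20 of each other (say 100 and 90) makes a port-level drill work:

[root@k8s_lb1]# systemctl stop nginx    # check_port.sh fails, lb1's priority drops by 20
[root@k8s_lb2]# ip addr show ens33      # 192.168.208.10 should appear here within seconds
[root@k8s_lb1]# systemctl start nginx   # if nopreempt is honored (it requires state BACKUP on both peers), the VIP stays on lb2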

4. Deploy kube-controller-manager

Cluster plan. Note: neither this component nor the scheduler needs its own certificate, since both talk to the apiserver on the same host.

Hostname   Role                 IP
node01     controller-manager   192.168.208.21
node02     controller-manager   192.168.208.22

node01 is shown here; node02 is deployed the same way.

Create the startup script on node01

[root@node01 bin]# pwd
/opt/kubernetes/server/bin
[root@node01 bin]# ll
total 885348
-rwxr-xr-x. 1 root root  43551200 Mar 13 05:49 apiextensions-apiserver
drwxr-xr-x. 2 root root       124 May  8 21:11 cert
-rwxr-xr-x. 1 root root 100655136 Mar 13 05:49 cloud-controller-manager
drwxr-xr-x. 2 root root        24 May  9 01:12 conf
-rwxr-xr-x. 1 root root 200816272 Mar 13 05:49 hyperkube
-rwxr-xr-x. 1 root root  40198592 Mar 13 05:49 kubeadm
-rwxr-xr-x. 1 root root 164616608 Mar 13 05:49 kube-apiserver
-rw-r--r--. 1 root root      1093 May  9 01:14 kube-apiserver.sh
-rwxr-xr-x. 1 root root 116532256 Mar 13 05:49 kube-controller-manager
-rwxr-xr-x. 1 root root  42997792 Mar 13 05:49 kubectl
-rwxr-xr-x. 1 root root 119755824 Mar 13 05:49 kubelet
-rwxr-xr-x. 1 root root  36995680 Mar 13 05:49 kube-proxy
-rwxr-xr-x. 1 root root  38794336 Mar 13 05:49 kube-scheduler
-rwxr-xr-x. 1 root root   1648224 Mar 13 05:49 mounter


[root@node01 bin]# mkdir -p /data/logs/kubernetes/
[root@node01 bin]# vim kube-controller-manager.sh
#!/bin/sh
./kube-controller-manager \
  --cluster-cidr 172.7.0.0/16 \
  --leader-elect true \
  --log-dir /data/logs/kubernetes/kube-controller-manager \
  --master http://127.0.0.1:8080 \
  --service-account-private-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --root-ca-file ./cert/ca.pem \
  --v 2
  
  [root@node01 bin]#chmod a+x kube-controller-manager.sh
  [root@node02 bin]# mkdir -p /data/logs/kubernetes/kube-controller-manager
# copy the same script to node02
[root@node01 bin]# scp kube-controller-manager.sh 192.168.208.22:/opt/kubernetes/server/bin 
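The reason no certificate is needed: the manager points --master at the apiserver's local insecure port (127.0.0.1:8080). A quick way to confirm that port answers before starting it:

[root@node01 bin]# curl -s http://127.0.0.1:8080/healthz   # expect: ok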

On node02:

[root@node02 /]# cd /opt/kubernetes/server/bin/
[root@node02 bin]# pwd
/opt/kubernetes/server/bin
[root@node02 bin]# ll
total 885352
-rwxr-xr-x. 1 root root  43551200 May  9 01:16 apiextensions-apiserver
drwxr-xr-x. 2 root root       124 May  9 01:17 cert
-rwxr-xr-x. 1 root root 100655136 May  9 01:17 cloud-controller-manager
drwxr-xr-x. 2 root root        24 May  9 01:21 conf
-rwxr-xr-x. 1 root root 200816272 May  9 01:17 hyperkube
-rwxr-xr-x. 1 root root  40198592 May  9 01:17 kubeadm
-rwxr-xr-x. 1 root root 164616608 May  9 01:17 kube-apiserver
-rwxr-xr-x. 1 root root      1093 May  9 01:17 kube-apiserver.sh
-rwxr-xr-x. 1 root root 116532256 May  9 01:17 kube-controller-manager
-rwxr-xr-x. 1 root root       334 May  9 12:14 kube-controller-manager.sh
-rwxr-xr-x. 1 root root  42997792 May  9 01:17 kubectl
-rwxr-xr-x. 1 root root 119755824 May  9 01:17 kubelet
-rwxr-xr-x. 1 root root  36995680 May  9 01:17 kube-proxy
-rwxr-xr-x. 1 root root  38794336 May  9 01:17 kube-scheduler
-rwxr-xr-x. 1 root root   1648224 May  9 01:17 mounter
[root@node02 bin]# mkdir -p /data/logs/kubernetes/
[root@node02 bin]# mkdir -p /data/logs/kubernetes/kube-controller-manager

Then just start the script on both nodes.

Starting under supervisorctl:

[program:kube-controller-manager-208-22]
command=/usr/bin/sh /opt/kubernetes/server/bin/kube-controller-manager.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                                        ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                              ; directory to cwd to before exec (def no cwd)
autostart=true                                                                    ; start at supervisord start (default: true)
autorestart=true                                                                  ; retstart at unexpected quit (default: true)
startsecs=30                                                                      ; number of secs prog must stay running (def. 1)
startretries=3                                                                    ; max # of serial start failures (default 3)
exitcodes=0,2                                                                     ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                                   ; signal used to kill process (default TERM)
stopwaitsecs=10                                                                   ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                                         ; setuid to this UNIX account to run the program
redirect_stderr=true                                                              ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log  ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                                      ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                                          ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                                       ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                                       ; emit events on stdout writes (default false)
[root@lb002 bin]# supervisorctl update
kube-controller-manager-208-22: added process group
[root@lb002 bin]# 
[root@lb002 bin]# supervisorctl status
etcd-server-208-22               RUNNING   pid 13331, uptime 3:23:13
kube-apiserver-7-21              RUNNING   pid 13998, uptime 2:44:22
kube-controller-manager-208-22   STARTING

5. Deploy kube-scheduler

Like controller-manager it needs no certificate, because it talks to the api-server on the same host.

Hostname   Role             IP
node01     kube-scheduler   192.168.208.21
node02     kube-scheduler   192.168.208.22

node01 is shown here; node02 is deployed the same way.

[root@node01 ~]# cd /opt/kubernetes/server/bin/
[root@node01 bin]# ll
total 885352
-rwxr-xr-x. 1 root root  43551200 Mar 13 05:49 apiextensions-apiserver
drwxr-xr-x. 2 root root       124 May  8 21:11 cert
-rwxr-xr-x. 1 root root 100655136 Mar 13 05:49 cloud-controller-manager
drwxr-xr-x. 2 root root        24 May  9 01:12 conf
-rwxr-xr-x. 1 root root 200816272 Mar 13 05:49 hyperkube
-rwxr-xr-x. 1 root root  40198592 Mar 13 05:49 kubeadm
-rwxr-xr-x. 1 root root 164616608 Mar 13 05:49 kube-apiserver
-rw-r--r--. 1 root root      1093 May  9 01:14 kube-apiserver.sh
-rwxr-xr-x. 1 root root 116532256 Mar 13 05:49 kube-controller-manager
-rwxr-xr-x. 1 root root       334 May  9 12:11 kube-controller-manager.sh
-rwxr-xr-x. 1 root root  42997792 Mar 13 05:49 kubectl
-rwxr-xr-x. 1 root root 119755824 Mar 13 05:49 kubelet
-rwxr-xr-x. 1 root root  36995680 Mar 13 05:49 kube-proxy
-rwxr-xr-x. 1 root root  38794336 Mar 13 05:49 kube-scheduler
-rwxr-xr-x. 1 root root   1648224 Mar 13 05:49 mounter

# create the startup script
[root@node01 bin]# vim kube-scheduler.sh
#!/bin/sh
./kube-scheduler \
  --leader-elect \
  --log-dir /data/logs/kubernetes/kube-scheduler \
  --master http://127.0.0.1:8080 \
  --v 2
  
[root@node01 bin]#mkdir -p  /data/logs/kubernetes/kube-scheduler
[root@node01 bin]# 
[root@node01 bin]# chmod a+x kube-scheduler.sh 

# copy to node02
[root@node01 bin]# scp kube-scheduler.sh 192.168.208.22:/opt/kubernetes/server/bin/

On node02:

[root@node02 ~]# cd /opt/kubernetes/server/bin/
[root@node02 bin]# ll
total 885352
-rwxr-xr-x. 1 root root  43551200 Mar 13 05:49 apiextensions-apiserver
drwxr-xr-x. 2 root root       124 May  8 21:11 cert
-rwxr-xr-x. 1 root root 100655136 Mar 13 05:49 cloud-controller-manager
drwxr-xr-x. 2 root root        24 May  9 01:12 conf
-rwxr-xr-x. 1 root root 200816272 Mar 13 05:49 hyperkube
-rwxr-xr-x. 1 root root  40198592 Mar 13 05:49 kubeadm
-rwxr-xr-x. 1 root root 164616608 Mar 13 05:49 kube-apiserver
-rw-r--r--. 1 root root      1093 May  9 01:14 kube-apiserver.sh
-rwxr-xr-x. 1 root root 116532256 Mar 13 05:49 kube-controller-manager
-rwxr-xr-x. 1 root root       334 May  9 12:11 kube-controller-manager.sh
-rwxr-xr-x. 1 root root  42997792 Mar 13 05:49 kubectl
-rwxr-xr-x. 1 root root 119755824 Mar 13 05:49 kubelet
-rwxr-xr-x. 1 root root  36995680 Mar 13 05:49 kube-proxy
-rwxr-xr-x. 1 root root  38794336 Mar 13 05:49 kube-scheduler
-rwxr-xr-x. 1 root root       143 May  9 12:30 kube-scheduler.sh
-rwxr-xr-x. 1 root root   1648224 Mar 13 05:49 mounter

[root@node02 bin]#mkdir -p  /data/logs/kubernetes/kube-scheduler

Start it: sh kube-scheduler.sh

Here supervisorctl is used instead:

[root@lb002 bin]# vi /etc/supervisord.d/kube-scheduler.ini      

[program:scheduler-208-22]
command=/usr/bin/sh /opt/kubernetes/server/bin/kube-scheduler.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                               ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                     ; directory to cwd to before exec (def no cwd)
autostart=true                                                           ; start at supervisord start (default: true)
autorestart=true                                                         ; retstart at unexpected quit (default: true)
startsecs=30                                                             ; number of secs prog must stay running (def. 1)
startretries=3                                                           ; max # of serial start failures (default 3)
exitcodes=0,2                                                            ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                          ; signal used to kill process (default TERM)
stopwaitsecs=10                                                          ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                                ; setuid to this UNIX account to run the program
redirect_stderr=true                                                     ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                             ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                                 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                              ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                              ; emit events on stdout writes (default false)
[root@node001 bin]# supervisorctl status
etcd-server-208-21               RUNNING   pid 15493, uptime 3:37:12
kube-apiserver-208-21            RUNNING   pid 16089, uptime 2:52:09
kube-controller-manager-208-22   RUNNING   pid 17631, uptime 0:07:01
scheduler-208-21                 STARTING

Now the cluster state can be checked.

Create a symlink for the kubectl command

[root@node01 ~]# ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl
[root@node02 ~]# ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl

Check cluster health

[root@node01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
etcd-2               Healthy   {"health": "true"}   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
controller-manager   Healthy   ok                   
etcd-1               Healthy   {"health": "true"} 
[root@node02 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
etcd-2               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}   
scheduler            Healthy   ok                   
controller-manager   Healthy   ok 

At this point the control plane is fully deployed.

Worker node services

1. Deploy kubelet on the nodes

Cluster plan

Hostname   Role      IP
node01     kubelet   192.168.208.21
node02     kubelet   192.168.208.22

Deploy kubelet on node01 first; it needs ca.pem, the server certificate, and kubelet.pem.

Step 1: issue the certificate on the CA server

Generate the certificate:

vi kubelet-csr.json
{
    "CN": "k8s-kubelet",
    "hosts": [
    "127.0.0.1",
    "192.168.208.21",
    "192.168.208.22",
    "192.168.208.23",
    "192.168.208.24"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[ certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet

1狠角、生成kubectl.kubeconfig 配置文件

? 先拷貝證書(shū)

[root@CA-Host certs]# scp kubelet.pem kubelet-key.pem node001:/opt/kubernetes/server/bin/cert
root@node001's password: 
kubelet.pem                                                                                                                 100% 1476    68.0KB/s   00:00    
kubelet-key.pem                                                                                                             100% 1679   211.0KB/s   00:00    
[root@CA-Host certs]# scp kubelet.pem kubelet-key.pem node002:/opt/kubernetes/server/bin/cert
root@node002's password: 
kubelet.pem                                                                                                                 100% 1476     1.6MB/s   00:00    
kubelet-key.pem                                                                                                             100% 1679     1.4MB/s   00:00

In the conf directory, generate the kubeconfig in 4 steps:

#1. set-cluster
[root@lb002 conf]# pwd
/opt/kubernetes/server/bin/conf
[root@lb002 conf]# kubectl config set-cluster myk8s \
   --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
   --embed-certs=true \
   --server=https://192.168.208.10:7443 \
   --kubeconfig=kubelet.kubeconfig
Cluster "myk8s" set.
[root@lb002 conf]# ls
audit.yaml  kubelet.kubeconfig
#2. set-credentials

[root@lb002 conf]# kubectl config set-credentials k8s-node --client-certificate=/opt/kubernetes/server/bin/cert/client.pem --client-key=/opt/kubernetes/server/bin/cert/client-key.pem --embed-certs=true --kubeconfig=kubelet.kubeconfig 

User "k8s-node" set.
#3. set-context
 
[root@lb002 conf]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=k8s-node \
  --kubeconfig=kubelet.kubeconfig
#4. use-context

[root@lb002 conf]#kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
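To confirm what those four steps produced, the kubeconfig can be inspected (certificate data is elided in the output):

[root@lb002 conf]# kubectl config view --kubeconfig=kubelet.kubeconfig   # check the cluster, user and current-context wiring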

2. Create the k8s-node.yaml RBAC resource

[root@lb002 conf]# vi k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node

Create the k8s-node user binding so it has compute rights in the cluster (clusterrolebinding):
[root@lb002 conf]# kubectl create -f k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created

Query the binding:
[root@lb002 conf]# kubectl get clusterrolebinding k8s-node
NAME       AGE
k8s-node   93s

......

6. Copy the kubelet.kubeconfig generated above into node02's conf directory

[root@node01 conf]# scp kubelet.kubeconfig 192.168.208.22:/opt/kubernetes/server/bin/conf

7. Check on node02

[root@node02 conf]# ll
total 12
-rw-r--r--. 1 root root 2289 May  9 01:17 audit.yaml
-rw-------. 1 root root 6212 May  9 13:38 kubelet.kubeconfig

There is no need to create the k8s-node resource again; it already exists in the cluster.

8. Create the kubelet startup script

1.1 Prepare the pause base image

Work on the admin host, CA-Host 192.168.208.200.

Pull the pause base image, retag it, and push it to harbor:

[root@CA-Host certs]# docker pull kubernetes/pause
[root@CA-Host certs]# docker images |grep kubernetes/pause
[root@CA-Host certs]# docker tag kubernetes/pause:latest 192.168.208.200/public/pause:latest
[root@CA-Host certs]# docker login 192.168.208.200  #admin/123456
[root@CA-Host certs]# docker push 192.168.208.200/public/pause:latest

pause bootstraps each business container's network, IPC, and UTS namespaces.

1.2 Create the kubelet startup script on node01

[root@node01 bin]# pwd
/opt/kubernetes/server/bin
[root@node01 bin]# vim kubelet.sh
#!/bin/sh
# --hostname-override must be this host's name;
# --pod-infra-container-image points at the pause image pushed above.
# (inline comments after a trailing backslash would break the line continuation, so they live up here)
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./cert/ca.pem \
  --tls-cert-file ./cert/kubelet.pem \
  --tls-private-key-file ./cert/kubelet-key.pem \
  --hostname-override node01 \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image 192.168.208.200/public/pause:latest \
  --root-dir /data/kubelet

Make it executable and create the directories it needs:

[root@node01 bin]# chmod +x /opt/kubernetes/server/bin/kubelet.sh 
[root@node01 bin]# mkdir -p /data/logs/kubernetes/kube-kubelet   /data/kubelet

1.3 Create node02's kubelet startup script

[root@node02 bin]# pwd
/opt/kubernetes/server/bin
[root@node02 bin]# vim kubelet.sh
#!/bin/sh
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./cert/ca.pem \
  --tls-cert-file ./cert/kubelet.pem \
  --tls-private-key-file ./cert/kubelet-key.pem \
  --hostname-override node02 \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image 192.168.208.200/public/pause:latest \
  --root-dir /data/kubelet

Make it executable and create the directories it needs:

[root@node02 bin]# chmod +x /opt/kubernetes/server/bin/kubelet.sh 
[root@node02 bin]# mkdir -p /data/logs/kubernetes/kube-kubelet   /data/kubelet

1.4 Start kubelet.sh

Here supervisorctl is used:

[root@node002 bin]# cat /etc/supervisord.d/kube-kubelet.ini 
[program:kube-kubelet-208-22]
command=/usr/bin/sh /opt/kubernetes/server/bin/kubelet.sh     ; the program (relative uses PATH, can take args)
numprocs=1                                        ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin              ; directory to cwd to before exec (def no cwd)
autostart=true                                    ; start at supervisord start (default: true)
autorestart=true                                  ; retstart at unexpected quit (default: true)
startsecs=30                                      ; number of secs prog must stay running (def. 1)
startretries=3                                    ; max # of serial start failures (default 3)
exitcodes=0,2                                     ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                   ; signal used to kill process (default TERM)
stopwaitsecs=10                                   ; max num secs to wait b4 SIGKILL (default 10)
user=root                                         ; setuid to this UNIX account to run the program
redirect_stderr=true                              ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log   ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                      ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                          ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                       ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                       ; emit events on stdout writes (default false)

Start it

[root@node002 bin]# supervisorctl update
[root@node002 bin]# supervisorctl status
etcd-server-208-22               RUNNING   pid 13331, uptime 4:20:24
kube-apiserver-7-21              RUNNING   pid 13998, uptime 3:41:33
kube-controller-manager-208-22   RUNNING   pid 15506, uptime 0:57:15
kube-kubelet-208-22              STARTING  
scheduler-208-22                 RUNNING   pid 15626, uptime 0:49:14

The other node is the same: copy the startup files over and start it.

[root@node002 bin]# scp kubelet.sh node001:/opt/kubernetes/server/bin/
root@node001's password: 
kubelet.sh

Edit kubelet.sh to use this host's name/IP.

[root@node002 bin]# scp -r /etc/supervisord.d/kube-kubelet.ini node001:/etc/supervisord.d/ 
root@node001's password: 
kube-kubelet.ini    

Rename the program section in the ini:
[root@node001 bin]# vim /etc/supervisord.d/kube-kubelet.ini
[program:kube-kubelet-208-21]

Start it
[root@node001 bin]# supervisorctl update
kube-kubelet-208-21: added process group
[root@node001 bin]#
[root@node001 bin]# supervisorctl status
etcd-server-208-21               RUNNING   pid 15493, uptime 4:30:39
kube-apiserver-208-21            RUNNING   pid 16089, uptime 3:45:36
kube-controller-manager-208-22   RUNNING   pid 17631, uptime 1:00:28
kube-kubelet-208-21              STARTING
scheduler-208-21                 RUNNING   pid 17743, uptime 0:53:34

Verify the cluster nodes:

[root@node01 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
node01   Ready    <none>   10m     v1.15.11
node02   Ready    <none>   9m22s   v1.15.11
[root@node02 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
node01   Ready    <none>   10m     v1.15.11
node02   Ready    <none>   9m22s   v1.15.11

Set the ROLES field; it is just a label and can be set to anything:

[root@node01 ~]# kubectl label node node01 node-role.kubernetes.io/master=
node/node01 labeled
[root@node01 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
node01   Ready    master   15m   v1.15.11
node02   Ready    <none>   15m   v1.15.11

[root@node01 ~]# kubectl label node node01 node-role.kubernetes.io/node=  
node/node01 labeled
[root@node01 ~]# kubectl get nodes
NAME     STATUS   ROLES         AGE   VERSION
node01   Ready    master,node   16m   v1.15.11
node02   Ready    <none>        15m   v1.15.11

A startup error you may hit:

failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

Fix:
Check /etc/docker/daemon.json:
the error came from native.cgroupdriver=systemd being misspelled (native written as vative).
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","192.168.208.200"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "bip": "172.7.21.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}

2. Deploy kube-proxy on the nodes

kube-proxy connects pods to the cluster network.

Hostname   Role         IP address
node01     kube-proxy   192.168.208.21
node02     kube-proxy   192.168.208.22

Its traffic also needs a certificate.

Work on ca-host, 192.168.208.200.

1. Create the CSR file

[root@CA-Host certs]# pwd
/opt/certs
[root@CA-Host certs]# vi kube-proxy-csr.json
{
    "CN": "system:kube-proxy",    #  這里就是這樣寫(xiě)的描融, 不能改
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "guangdong",
            "L": "shengzheng",
            "O": "tj",
            "OU": "ops"
        }
    ]
}

2. Generate the certificate

[root@CA-Host certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json |cfssl-json -bare kube-proxy-client
[root@CA-Host certs]# ll
total 100
-rw-r--r--. 1 root root 1257 May  8 20:54 apiserver.csr
-rw-r--r--. 1 root root  488 May  8 20:52 apiserver-csr.json
-rw-------. 1 root root 1679 May  8 20:54 apiserver-key.pem
-rw-r--r--. 1 root root 1606 May  8 20:54 apiserver.pem
-rw-r--r--. 1 root root  654 May  8 15:24 ca-config.json
-rw-r--r--. 1 root root  997 May  7 14:06 ca.csr
-rw-r--r--. 1 root root  221 May  7 14:02 ca-csr.json
-rw-------. 1 root root 1679 May  7 14:06 ca-key.pem
-rw-r--r--. 1 root root 1346 May  7 14:06 ca.pem
-rw-r--r--. 1 root root 1001 May  8 20:40 client.csr
-rw-r--r--. 1 root root  190 May  8 20:37 client-csr.json
-rw-------. 1 root root 1679 May  8 20:40 client-key.pem
-rw-r--r--. 1 root root 1371 May  8 20:40 client.pem
-rw-r--r--. 1 root root 1070 May  8 15:43 etcd-peer.csr
-rw-r--r--. 1 root root  266 May  8 15:38 etcd-peer-csr.json
-rw-------. 1 root root 1675 May  8 15:43 etcd-peer-key.pem
-rw-r--r--. 1 root root 1436 May  8 15:43 etcd-peer.pem
-rw-r--r--. 1 root root 1123 May  9 12:59 kubelet.csr
-rw-r--r--. 1 root root  502 May  9 12:57 kubelet-csr.json
-rw-------. 1 root root 1679 May  9 12:59 kubelet-key.pem
-rw-r--r--. 1 root root 1476 May  9 12:59 kubelet.pem
-rw-r--r--. 1 root root 1013 May  9 15:28 kube-proxy-client.csr
-rw-------. 1 root root 1675 May  9 15:28 kube-proxy-client-key.pem
-rw-r--r--. 1 root root 1383 May  9 15:28 kube-proxy-client.pem
-rw-r--r--. 1 root root  272 May  9 15:26 kube-proxy-csr.json

3. Distribute the certificates

Copy kube-proxy-client.pem and kube-proxy-client-key.pem into /opt/kubernetes/server/bin/cert on each node:

[root@CA-Host certs]# scp kube-proxy-client.pem 192.168.208.21:/opt/kubernetes/server/bin/cert
[root@CA-Host certs]# scp kube-proxy-client-key.pem 192.168.208.21:/opt/kubernetes/server/bin/cert
[root@CA-Host certs]# scp kube-proxy-client.pem 192.168.208.22:/opt/kubernetes/server/bin/cert
[root@CA-Host certs]# scp kube-proxy-client-key.pem 192.168.208.22:/opt/kubernetes/server/bin/cert

4. Create the kubeconfig on node01 and node02

node01

[root@node01 conf]# pwd
/opt/kubernetes/server/bin/conf

#set-cluster
[root@node01 conf]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.208.10:7443 \
--kubeconfig=kube-proxy.kubeconfig

#set-credentials
[root@node01 conf]# kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
  --client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

#set-context
[root@node01 conf]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

#use-context
[root@node01 conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig

# inspect the generated kube-proxy.kubeconfig
[root@node01 conf]# ll
total 24
-rw-r--r--. 1 root root 2289 May  9 01:11 audit.yaml
-rw-r--r--. 1 root root  258 May  9 13:31 k8s-node.yaml
-rw-------. 1 root root 6212 May  9 13:28 kubelet.kubeconfig
-rw-------. 1 root root 6228 May  9 15:45 kube-proxy.kubeconfig

5. Copy the generated kube-proxy.kubeconfig to node02

[root@node01 conf]# pwd
/opt/kubernetes/server/bin/conf
[root@node01 conf]# 
scp kube-proxy.kubeconfig 192.168.208.22:/opt/kubernetes/server/bin/conf

6. Use IPVS to schedule traffic

[root@node01 ~]# vim ipvs.sh

#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
  /sbin/modinfo -F filename $i &>/dev/null
  if [ $? -eq 0 ];then
    /sbin/modprobe $i
  fi
done
[root@node01 ~]# chmod a+x ipvs.sh 
[root@node01 ~]# sh ipvs.sh 
[root@node01 ~]# lsmod |grep ip_vs
ip_vs_wrr              12697  0 
ip_vs_wlc              12519  0 
ip_vs_sh               12688  0 
ip_vs_sed              12519  0 
ip_vs_rr               12600  0 
ip_vs_pe_sip           12740  0 
nf_conntrack_sip       33860  1 ip_vs_pe_sip
ip_vs_nq               12516  0 
ip_vs_lc               12516  0 
ip_vs_lblcr            12922  0

7让歼、將腳本拷到 192.168.208.22 node02

[root@node01 ~]# scp ipvs.sh 192.168.208.22:/root/

node02
[root@node02 ~]# sh ipvs.sh 
[root@node02 ~]# lsmod |grep ip_vs
ip_vs_wrr              12697  0     # weighted round-robin
ip_vs_wlc              12519  0     # weighted least-connection
ip_vs_sh               12688  0 
ip_vs_sed              12519  0 
ip_vs_rr               12600  0 
ip_vs_pe_sip           12740  0 
nf_conntrack_sip       33860  1 ip_vs_pe_sip
ip_vs_nq               12516  0 
ip_vs_lc               12516  0 
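
Note that these modprobe calls do not persist across a reboot. One simple approach (my assumption, not part of the original setup) is to hook the script into rc.local on both nodes:

cp /root/ipvs.sh /etc/rc.d/ipvs.sh
chmod +x /etc/rc.d/rc.local          # rc.local is not executable by default on CentOS 7
echo 'sh /etc/rc.d/ipvs.sh' >> /etc/rc.d/rc.local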

8、Finally, create the kube-proxy startup scripts

node01:
[root@node01 bin]# pwd
/opt/kubernetes/server/bin
[root@node01 bin]# vim kube-proxy.sh

#!/bin/sh
# --cluster-cidr is the pod network CIDR; ipvs mode with the nq ("never queue") scheduler
./kube-proxy \
  --cluster-cidr 172.7.0.0/16 \
  --hostname-override node01 \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --kubeconfig ./conf/kube-proxy.kubeconfig
[root@node01 bin]# chmod a+x kube-proxy.sh
[root@node01 bin]# mkdir -p /data/logs/kubernetes/kube-proxy
node02:
[root@node02 bin]# pwd
/opt/kubernetes/server/bin
[root@node02 bin]# vim kube-proxy.sh
#!/bin/sh
./kube-proxy \
  --cluster-cidr 172.7.0.0/16 \
  --hostname-override node02 \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --kubeconfig ./conf/kube-proxy.kubeconfig
[root@node02 bin]# chmod a+x kube-proxy.sh
[root@node02 bin]# mkdir -p /data/logs/kubernetes/kube-proxy

啟動(dòng) 執(zhí)行腳本

sh kube-proxy.sh

Or run it under supervisord:

[root@node002 bin]# cat /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-208-22]
command=/usr/bin/sh /opt/kubernetes/server/bin/kube-proxy.sh                     ; the program (relative uses PATH, can take args)
numprocs=1                                                           ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin                                 ; directory to cwd to before exec (def no cwd)
autostart=true                                                       ; start at supervisord start (default: true)
autorestart=true                                                     ; retstart at unexpected quit (default: true)
startsecs=30                                                         ; number of secs prog must stay running (def. 1)
startretries=3                                                       ; max # of serial start failures (default 3)
exitcodes=0,2                                                        ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                      ; signal used to kill process (default TERM)
stopwaitsecs=10                                                      ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                            ; setuid to this UNIX account to run the program
redirect_stderr=true                                                 ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log     ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                         ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                             ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                          ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                          ; emit events on stdout writes (default false)

Load the new program and start it:

[root@node002 bin]# supervisorctl update
kube-proxy-208-22: added process group
[root@node002 bin]# supervisorctl status
etcd-server-208-22               RUNNING   pid 13331, uptime 4:41:52
kube-apiserver-7-21              RUNNING   pid 13998, uptime 4:03:01
kube-controller-manager-208-22   RUNNING   pid 15506, uptime 1:18:43
kube-kubelet-208-22              RUNNING   pid 16520, uptime 0:21:31
kube-proxy-208-22                STARTING  
scheduler-208-22                 RUNNING   pid 15626, uptime 1:10:42

將啟動(dòng)配置文件拷到另一個(gè)節(jié)點(diǎn)

更改配置文件名稱(chēng)

[root@node001 bin]# vim /etc/supervisord.d/kube-proxy.ini
[root@node001 bin]# cat /etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-208-21]

執(zhí)行第二個(gè)節(jié)點(diǎn)的啟動(dòng)

[root@node001 bin]# supervisorctl update
kube-proxy-208-21: added process group
[root@node001 bin]# supervisorctl status
etcd-server-208-21               RUNNING   pid 15493, uptime 4:48:58
kube-apiserver-208-21            RUNNING   pid 16089, uptime 4:03:55
kube-controller-manager-208-22   RUNNING   pid 17631, uptime 1:18:47
kube-kubelet-208-21              RUNNING   pid 20696, uptime 0:08:32
kube-proxy-208-21                RUNNING   pid 22309, uptime 0:01:29
scheduler-208-21                 RUNNING   pid 17743, uptime 1:11:53
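
With kube-proxy running on both nodes, the active proxy mode can be read back from its metrics endpoint (port 10249 with default flags), which confirms ipvs actually took effect:

[root@node01 ~]# curl -s 127.0.0.1:10249/proxyMode
# should print "ipvs"; if it prints "iptables", the ipvs modules were not
# loaded when kube-proxy started and it fell back to iptables mode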

9丽啡、安裝 ipvsadm 使用 ipvs 調(diào)度流量

[root@node02 ~]# yum install ipvsadm
[root@node02 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 192.168.208.21:6443          Masq    1      0          0         
  -> 192.168.208.22:6443          Masq    1      0          0         

[root@node01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 192.168.208.21:6443          Masq    1      0          0         
  -> 192.168.208.22:6443          Masq    1      0          0         
[root@node01 ~]

已經(jīng)對(duì) 21,22 節(jié)點(diǎn)上的 6443 進(jìn)行調(diào)度

10谋右、驗(yàn)證集群運(yùn)行狀態(tài)

[root@node002 bin]# kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
192.168.208.21   Ready    <none>   10m   v1.15.11
192.168.208.22   Ready    <none>   25m   v1.15.11

? 集群節(jié)點(diǎn)需要登陸下 harbor ,docker login 192.168.208.200/public # admin/123456

1补箍、下載nginx鏡像  上傳到 harbor
[root@CA-Host certs]# docker pull nginx
[root@CA-Host certs]# docker tag 602e111c06b6 192.168.208.200/public/nginx:latest
[root@CA-Host certs]# docker push 192.168.208.200/public/nginx:latest
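
Before the DaemonSet tries to pull, it is worth confirming a node can reach Harbor itself (same credentials as above):

[root@node01 ~]# docker login 192.168.208.200/public    # admin/123456
[root@node01 ~]# docker pull 192.168.208.200/public/nginx:latest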

2改执、編寫(xiě)資源配置清單
[root@node01 ~]# vim nginx-ds.yml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: 192.168.208.200/public/nginx:latest
        ports:
        - containerPort: 80
        
[root@node01 ~]# kubectl create -f nginx-ds.yml 
daemonset.extensions/nginx-ds created        
[root@node01 ~]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
nginx-ds-zf7hv   1/1     Running   0          11m
nginx-ds-ztcgn   1/1     Running   0          11m  
[root@node01 ~]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-ds-zf7hv   1/1     Running   0          12m   172.7.21.2   node01   <none>           <none>
nginx-ds-ztcgn   1/1     Running   0          12m   172.7.22.2   node02   <none>           <none>

集群驗(yàn)證成功了 nginx 跑起來(lái)了

總結(jié):

基礎(chǔ)環(huán)境: centos 7.6 內(nèi)核 3.8以上

關(guān)閉selinux

時(shí)間同步

調(diào)整epeo源

內(nèi)核優(yōu)化 (文件描述符大小 內(nèi)核轉(zhuǎn)發(fā))

安裝 bind dns

安裝 docker 和 harbor

k8s:

? etcd集群

? apisever

? contorller-manager

? scheduler

? kubelet

? kube-proxy

證書(shū)相關(guān), 查看證書(shū)過(guò)期時(shí)間

[root@CA-Host certs]# cfssl-certinfo -cert apiserver.pem
{
  "subject": {
    "common_name": "k8s-apiserver",
    "country": "CN",
    "organization": "tj",
    "organizational_unit": "ops",
    "locality": "shengzheng",
    "province": "guangdong",
    "names": [
      "CN",
      "guangdong",
      "shengzheng",
      "tj",
      "ops",
      "k8s-apiserver"
    ]
  },
  "issuer": {
    "common_name": "Tjcom",
    "country": "CN",
    "organization": "tj",
    "organizational_unit": "ops",
    "locality": "shengzheng",
    "province": "guangdong",
    "names": [
      "CN",
      "guangdong",
      "shengzheng",
      "tj",
      "ops",
      "Tjcom"
    ]
  },
  "serial_number": "406292899824335029439592092319769160797618648263",
  "sans": [
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local",
    "127.0.0.1",
    "192.168.0.1",
    "192.168.208.10",
    "192.168.208.21",
    "192.168.208.22",
    "192.168.208.23"
  ],
  "not_before": "2020-05-08T12:50:00Z",      #簽發(fā)時(shí)間   
  "not_after": "2040-05-03T12:50:00Z",       #證書(shū)過(guò)期時(shí)間
  "sigalg": "SHA256WithRSA",
  "authority_key_id": "FC:F5:C0:6E:ED:50:8F:51:FF:93:FB:8D:29:C2:AD:D7:8E:78:1B:43",
  "subject_key_id": "D4:8E:CA:0:58:1E:5D:F5:D4:6D:1B:68:C9:2F:A4:31:B2:75:7E:F6",

}
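
To sweep the expiry dates of every certificate under /opt/certs in one pass, a small loop over cfssl-certinfo is enough (a sketch; private keys are skipped since they are not certificates):

for c in /opt/certs/*.pem; do
  case $c in *-key.pem) continue ;; esac   # keys are not parseable as certs
  echo "== $c"
  cfssl-certinfo -cert "$c" | grep not_after
done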

Inspect another site's certificate by domain:
[root@CA-Host certs]# cfssl-certinfo -domain www.baidu.com

Recover a certificate from a kubeconfig file

/opt/kubernetes/server/bin/conf
[root@node01 conf]# ll
total 24
-rw-r--r--. 1 root root 2289 5月   9 01:11 audit.yaml
-rw-r--r--. 1 root root  258 5月   9 13:31 k8s-node.yaml
-rw-------. 1 root root 6212 5月   9 13:28 kubelet.kubeconfig
-rw-------. 1 root root 6228 5月   9 15:45 kube-proxy.kubeconfig
[root@node01 conf]# cat kubelet.kubeconfig
Copy the value of the client-certificate-data field out on its own and base64-decode it:
echo '<client-certificate-data value>' | base64 -d > 123.pem
Copy 123.pem to the 200 host, where it can be inspected:
[root@CA-Host tool]# cfssl-certinfo -cert 123.pem
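
The copy-paste step can be scripted; a sketch that extracts the field with awk and decodes it in one shot (field name as it appears in the kubeconfig):

[root@node01 conf]# grep client-certificate-data kubelet.kubeconfig | awk '{print $2}' | base64 -d > 123.pem
[root@node01 conf]# scp 123.pem 192.168.208.200:/root/    # destination dir is arbitrary; inspect on the 200 host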


證書(shū)過(guò)期坑雅, kubeconfig kube-proxy 配置文件要重新生成 證書(shū)也要替換

?著作權(quán)歸作者所有,轉(zhuǎn)載或內(nèi)容合作請(qǐng)聯(lián)系作者
  • 序言:七十年代末辈挂,一起剝皮案震驚了整個(gè)濱河市,隨后出現(xiàn)的幾起案子裹粤,更是在濱河造成了極大的恐慌终蒂,老刑警劉巖,帶你破解...
    沈念sama閱讀 216,496評(píng)論 6 501
  • 序言:濱河連續(xù)發(fā)生了三起死亡事件,死亡現(xiàn)場(chǎng)離奇詭異拇泣,居然都是意外死亡噪叙,警方通過(guò)查閱死者的電腦和手機(jī),發(fā)現(xiàn)死者居然都...
    沈念sama閱讀 92,407評(píng)論 3 392
  • 文/潘曉璐 我一進(jìn)店門(mén)霉翔,熙熙樓的掌柜王于貴愁眉苦臉地迎上來(lái)睁蕾,“玉大人,你說(shuō)我怎么就攤上這事债朵∽涌簦” “怎么了?”我有些...
    開(kāi)封第一講書(shū)人閱讀 162,632評(píng)論 0 353
  • 文/不壞的土叔 我叫張陵序芦,是天一觀的道長(zhǎng)臭杰。 經(jīng)常有香客問(wèn)我,道長(zhǎng)芝加,這世上最難降的妖魔是什么硅卢? 我笑而不...
    開(kāi)封第一講書(shū)人閱讀 58,180評(píng)論 1 292
  • 正文 為了忘掉前任,我火速辦了婚禮藏杖,結(jié)果婚禮上将塑,老公的妹妹穿的比我還像新娘。我一直安慰自己蝌麸,他們只是感情好点寥,可當(dāng)我...
    茶點(diǎn)故事閱讀 67,198評(píng)論 6 388
  • 文/花漫 我一把揭開(kāi)白布。 她就那樣靜靜地躺著来吩,像睡著了一般敢辩。 火紅的嫁衣襯著肌膚如雪。 梳的紋絲不亂的頭發(fā)上弟疆,一...
    開(kāi)封第一講書(shū)人閱讀 51,165評(píng)論 1 299
  • 那天戚长,我揣著相機(jī)與錄音,去河邊找鬼怠苔。 笑死同廉,一個(gè)胖子當(dāng)著我的面吹牛,可吹牛的內(nèi)容都是我干的柑司。 我是一名探鬼主播迫肖,決...
    沈念sama閱讀 40,052評(píng)論 3 418
  • 文/蒼蘭香墨 我猛地睜開(kāi)眼,長(zhǎng)吁一口氣:“原來(lái)是場(chǎng)噩夢(mèng)啊……” “哼攒驰!你這毒婦竟也來(lái)了蟆湖?” 一聲冷哼從身側(cè)響起,我...
    開(kāi)封第一講書(shū)人閱讀 38,910評(píng)論 0 274
  • 序言:老撾萬(wàn)榮一對(duì)情侶失蹤玻粪,失蹤者是張志新(化名)和其女友劉穎隅津,沒(méi)想到半個(gè)月后诬垂,有當(dāng)?shù)厝嗽跇?shù)林里發(fā)現(xiàn)了一具尸體,經(jīng)...
    沈念sama閱讀 45,324評(píng)論 1 310
  • 正文 獨(dú)居荒郊野嶺守林人離奇死亡饥瓷,尸身上長(zhǎng)有42處帶血的膿包…… 初始之章·張勛 以下內(nèi)容為張勛視角 年9月15日...
    茶點(diǎn)故事閱讀 37,542評(píng)論 2 332
  • 正文 我和宋清朗相戀三年剥纷,在試婚紗的時(shí)候發(fā)現(xiàn)自己被綠了。 大學(xué)時(shí)的朋友給我發(fā)了我未婚夫和他白月光在一起吃飯的照片呢铆。...
    茶點(diǎn)故事閱讀 39,711評(píng)論 1 348
  • 序言:一個(gè)原本活蹦亂跳的男人離奇死亡晦鞋,死狀恐怖,靈堂內(nèi)的尸體忽然破棺而出棺克,到底是詐尸還是另有隱情悠垛,我是刑警寧澤,帶...
    沈念sama閱讀 35,424評(píng)論 5 343
  • 正文 年R本政府宣布娜谊,位于F島的核電站确买,受9級(jí)特大地震影響,放射性物質(zhì)發(fā)生泄漏纱皆。R本人自食惡果不足惜湾趾,卻給世界環(huán)境...
    茶點(diǎn)故事閱讀 41,017評(píng)論 3 326
  • 文/蒙蒙 一、第九天 我趴在偏房一處隱蔽的房頂上張望派草。 院中可真熱鬧搀缠,春花似錦、人聲如沸近迁。這莊子的主人今日做“春日...
    開(kāi)封第一講書(shū)人閱讀 31,668評(píng)論 0 22
  • 文/蒼蘭香墨 我抬頭看了看天上的太陽(yáng)鉴竭。三九已至歧譬,卻和暖如春,著一層夾襖步出監(jiān)牢的瞬間搏存,已是汗流浹背瑰步。 一陣腳步聲響...
    開(kāi)封第一講書(shū)人閱讀 32,823評(píng)論 1 269
  • 我被黑心中介騙來(lái)泰國(guó)打工, 沒(méi)想到剛下飛機(jī)就差點(diǎn)兒被人妖公主榨干…… 1. 我叫王不留璧眠,地道東北人面氓。 一個(gè)月前我還...
    沈念sama閱讀 47,722評(píng)論 2 368
  • 正文 我出身青樓,卻偏偏與公主長(zhǎng)得像蛆橡,于是被迫代替她去往敵國(guó)和親。 傳聞我的和親對(duì)象是個(gè)殘疾皇子掘譬,可洞房花燭夜當(dāng)晚...
    茶點(diǎn)故事閱讀 44,611評(píng)論 2 353