Kubernetes Binary Installation

Architecture

(Lab architecture diagram omitted)

Preparation

Prepare the virtual machines

  • 5 servers

Operating system

  • CentOS 7.6 (1810), minimal install

Adjust the yum sources

Install epel-release

# yum install -y epel-release

Disable SELinux and firewalld
  1. # sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
  2. # setenforce 0    # check the current state with getenforce
  3. # systemctl stop firewalld

Install the necessary tools

# yum install -y wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils

Initialize the DNS service

On pl66-240.host.com:

Install bind9

# yum install -y bind

Configure bind9

Main configuration file

# vim /etc/named.conf

listen-on port 53 { 10.10.66.240; };
# delete or comment out: listen-on-v6 port 53 { ::1; };
allow-query     { any; };        # was: allow-query { localhost; };
forwarders      { 8.8.8.8; };
dnssec-enable no;
dnssec-validation no;

Check that the configuration syntax is correct:

# named-checkconf

Zone configuration file

# vim /etc/named.rfc1912.zones

zone "host.com" IN {
type master;
file "host.com.zone";
allow-update { 10.10.66.240; };
};
zone "yw.com" IN {
type master;
file "yw.com.zone";
allow-update { 10.10.66.240; };
};

Configure the zone data files

  • Host-domain data file

# vim /var/named/host.com.zone

$ORIGIN host.com.
$TTL 600        ; 10 minutes
@   IN SOA dns.host.com. dnsadmin.host.com. (
                20200306    ; serial
                10800       ; refresh (3 hours)
                900     ; retry (15 minutes)
                604800      ; expire (1 week)
                86400       ; minimum (1 day)
                )
            NS  dns.host.com.
$TTL 60 ; 1 minute
dns     A   10.10.66.240
pl66-240        A   10.10.66.240
pl66-241        A   10.10.66.241
pl66-242        A   10.10.66.242
pl66-243        A   10.10.66.243
pl66-245        A   10.10.66.245

# vim /var/named/yw.com.zone

$ORIGIN yw.com.
$TTL 600        ; 10 minutes
@   IN SOA dns.yw.com. dnsadmin.yw.com. (
                20200306    ; serial
                10800       ; refresh (3 hours)
                900     ; retry (15 minutes)
                604800      ; expire (1 week)
                86400       ; minimum (1 day)
                )
            NS  dns.yw.com.
$TTL 60 ; 1 minute
dns     A   10.10.66.240
Check that the configuration syntax is correct:

# named-checkconf
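named-checkconf only validates named.conf itself; the zone data files can be validated too with named-checkzone (a quick optional check, using the zone names defined above):

# named-checkzone host.com /var/named/host.com.zone
# named-checkzone yw.com /var/named/yw.com.zone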

Start bind9

# named-checkconf
# systemctl start named
# systemctl enable named

Verify

[root@pl66-240 ~]# dig -t A pl66-242.host.com @10.10.66.240 +short
10.10.66.242
[root@pl66-240 ~]# dig -t A dns.yw.com @10.10.66.240 +short
10.10.66.240

Update the DNS settings on the other hosts in bulk

[root@pl66-240 ~]# cat /etc/resolv.conf
search host.com
nameserver 10.10.66.240

Copy the resolv.conf from the 240 host to the other hosts

# ansible server -m copy -a 'src=/etc/resolv.conf dest=/etc/resolv.conf force=yes backup=yes'
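To confirm the change took effect everywhere, a quick check against the same ansible group used above (the group name server comes from that command):

# ansible server -m command -a 'cat /etc/resolv.conf'
# ansible server -m command -a 'dig -t A pl66-245.host.com +short'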

Prepare the certificate-signing environment

On the ops host pl66-245.host.com:

Install CFSSL

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
chmod +x /usr/bin/cfssl*
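A quick sanity check that the three binaries are installed and executable:

# cfssl version
# ls -l /usr/bin/cfssl*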

Create the JSON config file for generating CA-signed certificates

mkdir -p /opt/certs
vim /opt/certs/ca-config.json

{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

Certificate types

  • client certificate: used by a client so the server can authenticate it, e.g. etcdctl, etcd proxy, fleetctl, the docker client
  • server certificate: used by the server side; clients use it to verify the server's identity, e.g. the docker daemon, kube-apiserver
  • peer certificate: a dual-purpose certificate used between etcd cluster members

Create the JSON config file for the CA certificate signing request (CSR)

vim /opt/certs/ca-csr.json

{
    "CN": "pinnet",
    "hosts": [ 
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "chengdu",
            "L": "chengdu",
            "O": "pl",
            "OU": "ops"
        }
    ],
    "ca": {
        "expiry": "175200h"
    }
}

CN: Common Name; browsers use this field to check whether a site is legitimate, so it is usually the domain name.
C: Country
ST: State or province
L: Locality (city)
O: Organization Name (company)
OU: Organization Unit Name (department)

Generate the CA certificate and private key

[root@66-245 certs]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca

The output looks like this:

[root@66-245 certs]#  cfssl gencert -initca ca-csr.json | cfssl-json -bare ca
2020/03/07 05:58:05 [INFO] generating a new CA key and certificate from CSR
2020/03/07 05:58:05 [INFO] generate received request
2020/03/07 05:58:05 [INFO] received CSR
2020/03/07 05:58:05 [INFO] generating key: rsa-2048
2020/03/07 05:58:06 [INFO] encoded CSR
2020/03/07 05:58:06 [INFO] signed certificate with serial number 64696289091365665227482443074556056282272288290
[root@66-245 certs]# ll
total 16
-rw-r--r-- 1 root root  989 Mar  7 05:58 ca.csr
-rwxr-xr-x 1 root root  224 Mar  7 05:56 ca-csr.json
-rw------- 1 root root 1675 Mar  7 05:58 ca-key.pem
-rw-r--r-- 1 root root 1338 Mar  7 05:58 ca.pem
[root@66-245 certs]# 
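Optionally, inspect the freshly generated CA certificate with the cfssl-certinfo binary installed earlier:

[root@66-245 certs]# cfssl-certinfo -cert ca.pem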

Deploy the Docker environment

Install

On pl66-242, pl66-243, and pl66-245:

# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

Configure

Create the docker configuration file

# vim /etc/docker/daemon.json

{
    "graph": "/data/docker",
    "storage-driver": "overlay2",
    "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.yw.com"],
    "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
    "bip": "172.16.242.1/24",
    "exec-opts": ["native.cgroupdriver=systemd"],
    "live-restore": true
}
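Two per-host details worth noting: the graph directory must exist before docker starts, and bip must be unique per host so the container subnets don't collide. Judging from the pod IPs that appear later in this guide, the convention is 172.16.<host-octet>.1/24 (so 172.16.243.1/24 on pl66-243 and 172.16.245.1/24 on pl66-245; this is an inference, not stated explicitly):

# mkdir -p /data/docker
# sed -i 's/172.16.242/172.16.243/' /etc/docker/daemon.json    # on pl66-243 only, for example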

Systemd unit file

/usr/lib/systemd/system/docker.service


[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

Start docker

# systemctl start docker
# systemctl enable docker
# docker version

Deploy the private Docker image registry (Harbor)

On pl66-245:

Harbor is developed on GitHub (https://github.com/goharbor/harbor); the offline installers are published on its releases page.

Download the offline installer, version 1.9.3 (harbor-offline-installer-v1.9.3.tgz).

Unpack the archive

[root@66-245 opt]# tar -xf harbor-offline-installer-v1.9.3.tgz

Rename the directory to carry the version number

[root@66-245 opt]# mv harbor/ harbor1.9.3

Create a symlink, so that future upgrades only need the link repointed

[root@66-245 opt]# ln -s /opt/harbor1.9.3/ /opt/harbor

Modify the Harbor parameters (harbor.yml)

hostname: harbor.yw.com
http:
  port: 180
harbor_admin_password: Harbor12345    ## change this in production
data_volume: /data/harbor
log:
  level: info
  rotate_count: 50
  rotate_size: 200M
  location: /data/harbor/logs

Create the directories Harbor needs

# mkdir -p /data/harbor
# mkdir -p /data/harbor/logs

Install docker-compose

Harbor relies on docker-compose for single-host orchestration

[root@66-245 harbor]# yum install -y docker-compose

Run the Harbor installer

[root@66-245 harbor]# /opt/harbor/install.sh

Check after the installation completes

[root@66-245 harbor]# docker-compose ps

Install nginx as a reverse proxy

[root@66-245 harbor]# yum -y install nginx

Add the nginx reverse-proxy configuration

vim /etc/nginx/conf.d/harbor.yw.com.conf

server {
    listen  80;
    server_name harbor.yw.com;

    client_max_body_size 1000m;

    location / {
        proxy_pass http://127.0.0.1:180;
    }
}

On the DNS server (pl66-240), add an A record for harbor to /var/named/yw.com.zone (remember to bump the zone serial), then restart named for it to take effect:

harbor          A   10.10.66.240

systemctl restart named

Verify:

dig -t A harbor.yw.com +short
10.10.66.240

Check the added configuration (original screenshots omitted)
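A minimal way to verify the proxy end to end, assuming nginx is started and enabled (the last command prints the HTTP status returned through the DNS → nginx → Harbor chain):

# nginx -t
# systemctl start nginx && systemctl enable nginx
# curl -s -o /dev/null -w '%{http_code}\n' http://harbor.yw.com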

Verify Harbor

Open a browser and go to http://harbor.yw.com

Log in with the default account

  • Username: admin
  • Password: Harbor12345

Create a new project

Create a project named public in the Harbor UI (original screenshots omitted); the image tags below push into it.

Pull the nginx image

[root@pl66-245 nginx]# docker pull nginx

Find the nginx image ID

[root@pl66-245 nginx]# docker images

Tag the image

[root@pl66-245 nginx]# docker tag 6678c7c2e56c harbor.yw.com/public/nginx:1.9.1

Push the local nginx image to Harbor

  1. Log in to Harbor

docker login harbor.yw.com

[root@pl66-245 nginx]# docker login harbor.yw.com
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@pl66-245 nginx]# 

  2. Push the image to Harbor

docker push harbor.yw.com/public/nginx:1.9.1

[root@pl66-245 nginx]# docker push harbor.yw.com/public/nginx:1.9.1
The push refers to repository [harbor.yw.com/public/nginx]
55a77731ed26: Pushed 
71f2244bc14d: Pushed 
f2cb0ecef392: Pushed 
1.9.1: digest: sha256:3936fb3946790d711a68c58be93628e43cbca72439079e16d154b5db216b58da size: 948
[root@pl66-245 nginx]# 

Install the Master node services

Deploy the etcd cluster

Cluster plan

Hostname   Role            IP
pl66-241   etcd leader     10.10.66.241
pl66-242   etcd follower   10.10.66.242
pl66-243   etcd follower   10.10.66.243

Note: this walkthrough uses pl66-241 as the example; the other two hosts are deployed the same way.

Create the JSON config file for the certificate signing request (CSR)

vi /opt/certs/etcd-peer-csr.json

{
    "CN": "k8s-etcd",
    "hosts": [ 
        "10.10.66.240",
        "10.10.66.241",
        "10.10.66.242",
        "10.10.66.243"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "chengdu",
            "L": "chengdu",
            "O": "pl",
            "OU": "ops"
        }
    ]
}

Generate the etcd certificate and private key

[root@pl66-245 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer

[root@pl66-245 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer
2020/03/10 02:26:28 [INFO] generate received request
2020/03/10 02:26:28 [INFO] received CSR
2020/03/10 02:26:28 [INFO] generating key: rsa-2048
2020/03/10 02:26:28 [INFO] encoded CSR
2020/03/10 02:26:28 [INFO] signed certificate with serial number 643611486410713894911975662668229763052251494279
2020/03/10 02:26:28 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

Check the generated certificate and private key

[root@pl66-245 certs]# ll
total 36
-rw-r--r-- 1 root root  918 Mar  9 10:26 ca-config.json
-rw-r--r-- 1 root root  989 Mar  7 05:58 ca.csr
-rwxr-xr-x 1 root root  224 Mar  7 05:56 ca-csr.json
-rw------- 1 root root 1675 Mar  7 05:58 ca-key.pem
-rw-r--r-- 1 root root 1338 Mar  7 05:58 ca.pem
-rw-r--r-- 1 root root 1062 Mar 10 02:26 etcd-peer.csr
-rw-r--r-- 1 root root  262 Mar 10 01:36 etcd-peer-csr.json
-rw------- 1 root root 1675 Mar 10 02:26 etcd-peer-key.pem
-rw-r--r-- 1 root root 1424 Mar 10 02:26 etcd-peer.pem
[root@pl66-245 certs]# 

Create the etcd user

On pl66-241:

useradd -s /sbin/nologin -M etcd

[root@pl66-241 ~]# useradd -s /sbin/nologin -M etcd
[root@pl66-241 ~]# id etcd
uid=1004(etcd) gid=1004(etcd) groups=1004(etcd)

Download the software, unpack it, and create a symlink

etcd downloads: https://github.com/etcd-io/etcd/releases
On pl66-241:

[root@pl66-241 opt]# tar xvf etcd-v3.1.20-linux-amd64.tar.gz -C /opt
[root@pl66-241 opt]# mv etcd-v3.1.20-linux-amd64 etcd-v3.1.20
[root@pl66-241 opt]# ln -s /opt/etcd-v3.1.20 /opt/etcd

Create directories and copy in the certificates and private key

On pl66-241:

  • Create directories

[root@pl66-241 opt]# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server

  • Copy the certificates
    Copy the ca.pem, etcd-peer-key.pem, and etcd-peer.pem generated on the ops host into /opt/etcd/certs; note the private key must stay mode 600.

[root@pl66-241 certs]# scp pl66-245:/opt/certs/ca.pem .
[root@pl66-241 certs]# scp pl66-245:/opt/certs/etcd-peer-key.pem .
[root@pl66-241 certs]# scp pl66-245:/opt/certs/etcd-peer.pem .

  • Verify the permissions
[root@pl66-241 certs]# ll
total 12
-rw-r--r--. 1 root root 1338 Mar 10 11:19 ca.pem
-rw-------. 1 root root 1675 Mar 10 11:20 etcd-peer-key.pem
-rw-r--r--. 1 root root 1424 Mar 10 11:20 etcd-peer.pem
[root@pl66-241 certs]# 

Create the etcd service startup script

On pl66-241:

vi /opt/etcd/etcd-server-startup.sh

#!/bin/sh
./etcd --name etcd-server-66-241 \
    --data-dir /data/etcd/etcd-server \
    --listen-peer-urls https://10.10.66.241:2380 \
    --listen-client-urls https://10.10.66.241:2379,http://127.0.0.1:2379 \
    --quota-backend-bytes 800000000 \
    --initial-advertise-peer-urls https://10.10.66.241:2380 \
    --advertise-client-urls https://10.10.66.241:2379,http://127.0.0.1:2379 \
    --initial-cluster etcd-server-66-241=https://10.10.66.241:2380,etcd-server-66-242=https://10.10.66.242:2380,etcd-server-66-243=https://10.10.66.243:2380 \
    --ca-file ./certs/ca.pem \
    --cert-file ./certs/etcd-peer.pem \
    --key-file ./certs/etcd-peer-key.pem \
    --client-cert-auth \
    --trusted-ca-file ./certs/ca.pem \
    --peer-ca-file ./certs/ca.pem \
    --peer-cert-file ./certs/etcd-peer.pem \
    --peer-key-file ./certs/etcd-peer-key.pem \
    --peer-client-cert-auth \
    --peer-trusted-ca-file ./certs/ca.pem \
    --log-output stdout

Adjust permissions and directories

On pl66-241:

[root@pl66-241 etcd]# chmod +x /opt/etcd/etcd-server-startup.sh
[root@pl66-241 etcd]# mkdir -p /data/logs/etcd-server
[root@pl66-241 etcd]# chown -R etcd.etcd /opt/etcd-v3.1.20/
[root@pl66-241 etcd]# chown -R etcd.etcd /data/etcd/
[root@pl66-241 etcd]# chown -R etcd.etcd /data/logs/etcd-server/

Install supervisor

On pl66-241:
supervisor manages background processes and automatically restarts them if they die.

[root@pl66-241 etcd]# yum -y install supervisor
[root@pl66-241 etcd]# systemctl start supervisord
[root@pl66-241 etcd]# systemctl enable supervisord

Create the etcd-server startup configuration

On pl66-241:

/etc/supervisord.d/etcd-server.ini

[program:etcd-server-66-241]
command=/opt/etcd/etcd-server-startup.sh
numprocs=1
directory=/opt/etcd
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=etcd
redirect_stderr=true
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false

Note: the startup configuration differs slightly on each etcd host; adjust it when configuring the other nodes.

Start the etcd service and check it

[root@pl66-241 supervisord.d]# supervisorctl update
etcd-server-66-241: added process group

View the startup log

[root@pl66-241 supervisord.d]# tail -fn 200 /data/logs/etcd-server/etcd.stdout.log

Status checks

[root@pl66-241 member]# supervisorctl status
etcd-server-66-241               RUNNING   pid 28649, uptime 0:00:36
[root@pl66-241 member]# netstat -ntlp | grep etcd
tcp        0      0 10.10.66.241:2379       0.0.0.0:*               LISTEN      28650/./etcd        
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      28650/./etcd        
tcp        0      0 10.10.66.241:2380       0.0.0.0:*               LISTEN      28650/./etcd       

Configure the other two servers
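The per-node differences are small; a sketch for pl66-242 (pl66-243 is analogous, with .243):

# In /opt/etcd/etcd-server-startup.sh on pl66-242:
#   --name etcd-server-66-242
#   --listen-peer-urls / --initial-advertise-peer-urls -> https://10.10.66.242:2380
#   --listen-client-urls / --advertise-client-urls     -> https://10.10.66.242:2379,http://127.0.0.1:2379
#   (--initial-cluster stays identical on all three nodes)
# In /etc/supervisord.d/etcd-server.ini on pl66-242:
#   [program:etcd-server-66-242]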

Check the cluster status

[root@pl66-241 etcd]# ./etcdctl cluster-health
member f8d0e74dd98768e is healthy: got healthy result from http://127.0.0.1:2379
member 53fdb991bce71f1c is healthy: got healthy result from http://127.0.0.1:2379
member 690d0b927b2d3fb7 is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy
[root@pl66-241 etcd]# 
[root@pl66-241 etcd]# ./etcdctl member list
f8d0e74dd98768e: name=etcd-server-66-242 peerURLs=https://10.10.66.242:2380 clientURLs=http://127.0.0.1:2379,https://10.10.66.242:2379 isLeader=false
53fdb991bce71f1c: name=etcd-server-66-243 peerURLs=https://10.10.66.243:2380 clientURLs=http://127.0.0.1:2379,https://10.10.66.243:2379 isLeader=false
690d0b927b2d3fb7: name=etcd-server-66-241 peerURLs=https://10.10.66.241:2380 clientURLs=http://127.0.0.1:2379,https://10.10.66.241:2379 isLeader=true
[root@pl66-241 etcd]# 

Deploy the kube-apiserver cluster

Cluster plan

Hostname   Role               IP
pl66-242   kube-apiserver     10.10.66.242
pl66-243   kube-apiserver     10.10.66.243
pl66-240   L4 load balancer   10.10.66.240
pl66-241   L4 load balancer   10.10.66.241

Note: 10.10.66.240 and 10.10.66.241 run nginx as L4 load balancers, and keepalived provides a VIP (10.10.66.250) in front of them that proxies the two kube-apiservers for high availability.

This walkthrough uses pl66-242 as the example; the other compute node is deployed the same way.

Download the software, unpack it, and create a symlink

On pl66-242:
Kubernetes official GitHub: https://github.com/kubernetes/kubernetes
Download the kubernetes-server-linux-amd64.tar.gz tarball from the v1.15.2 release.

[root@pl66-242 opt]# tar xvf kubernetes-server-linux-amd64.tar.gz
[root@pl66-242 opt]# mv kubernetes kubernetes-v1.15.2
[root@pl66-242 opt]# ln -s kubernetes-v1.15.2 kubernetes

[root@pl66-242 opt]# ll
total 442992
drwx--x--x. 4 root root        28 Mar  7 14:43 containerd
lrwxrwxrwx. 1 root root        17 Mar 10 16:46 etcd -> /opt/etcd-v3.1.20
drwxr-xr-x. 4 etcd etcd       166 Mar 10 16:48 etcd-v3.1.20
-rw-r--r--. 1 root root   9850227 Mar 10 16:29 etcd-v3.1.20-linux-amd64.tar.gz
lrwxrwxrwx. 1 root root        18 Mar 13 10:44 kubernetes -> kubernetes-v1.15.2
-rw-r--r--. 1 root root 443770238 Mar 13 09:29 kubernetes-server-linux-amd64.tar.gz
drwxr-xr-x. 4 root root        79 Aug  5  2019 kubernetes-v1.15.2

Sign the client certificate

On pl66-245:
Create the JSON config file for the certificate signing request (CSR)

/opt/certs/client-csr.json

{
    "CN": "k8s-node",
    "hosts": [ 
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "chengdu",
            "L": "chengdu",
            "O": "pl",
            "OU": "ops"
        }
    ]
}

Generate the client certificate and private key

[root@pl66-245 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssl-json -bare client

Check the generated certificate and private key

[root@pl66-245 certs]# ls -l | grep client
-rw-r--r-- 1 root root  993 Mar 11 07:58 client.csr
-rw-r--r-- 1 root root  192 Mar 11 07:58 client-csr.json
-rw------- 1 root root 1675 Mar 11 07:58 client-key.pem
-rw-r--r-- 1 root root 1359 Mar 11 07:58 client.pem
[root@pl66-245 certs]# 

Sign the kube-apiserver certificate

On pl66-245:

Create the JSON config file for the certificate signing request (CSR)

/opt/certs/apiserver-csr.json

{
    "CN": "k8s-apiserver",
    "hosts": [ 
        "127.0.0.1",
        "192.168.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.cluster",
        "kubernetes.default.cluster.local",
        "10.10.66.241",
        "10.10.66.242",
        "10.10.66.243",
        "10.10.66.250"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "chengdu",
            "L": "chengdu",
            "O": "pl",
            "OU": "ops"
        }
    ]
}

Generate the kube-apiserver certificate and private key

[root@pl66-245 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json | cfssl-json -bare apiserver

Check the generated certificate and private key

[root@pl66-245 certs]# ls -l | grep apiserver
-rw-r--r-- 1 root root 1240 Mar 11 08:47 apiserver.csr
-rw-r--r-- 1 root root  421 Mar 11 06:38 apiserver-csr.json
-rw------- 1 root root 1679 Mar 11 08:47 apiserver-key.pem
-rw-r--r-- 1 root root 1582 Mar 11 08:47 apiserver.pem
[root@pl66-245 certs]# 

Copy the certificates to each compute node and create the configuration

On pl66-242:

Copy the certificates and private keys; note that private key files must be mode 600

/opt/kubernetes/server/bin/cert

[root@pl66-242 cert]# scp pl66-245:/opt/certs/ca.pem .
The authenticity of host 'pl66-245 (10.10.66.245)' can't be established.
ECDSA key fingerprint is SHA256:2YOuINoiCs2y07VJzw8hwpc4pbPES7BNYU1c01zdoBg.
ECDSA key fingerprint is MD5:63:11:13:4d:18:eb:fa:2c:9e:21:73:43:5a:51:e9:5e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'pl66-245,10.10.66.245' (ECDSA) to the list of known hosts.
root@pl66-245's password: 
ca.pem                                                                          100% 1338   796.4KB/s   00:00    
[root@pl66-242 cert]# scp pl66-245:/opt/certs/apiserver-key.pem .
root@pl66-245's password: 
apiserver-key.pem                                                               100% 1679   998.9KB/s   00:00    
[root@pl66-242 cert]# scp pl66-245:/opt/certs/apiserver.pem .
root@pl66-245's password: 
apiserver.pem                                                                   100% 1582     1.0MB/s   00:00    
[root@pl66-242 cert]# scp pl66-245:/opt/certs/ca-key.pem .
root@pl66-245's password: 
ca-key.pem                                                                      100% 1675   913.5KB/s   00:00    
[root@pl66-242 cert]# 
[root@pl66-242 cert]# 
[root@pl66-242 cert]# scp pl66-245:/opt/certs/client-key.pem .
root@pl66-245's password: 
client-key.pem                                                                  100% 1675   848.3KB/s   00:00    
[root@pl66-242 cert]# scp pl66-245:/opt/certs/client.pem .
root@pl66-245's password: 
client.pem                                                                      100% 1359   773.5KB/s   00:00    
[root@pl66-242 cert]# ls -l 
total 24
-rw-------. 1 root root 1679 Mar 11 08:53 apiserver-key.pem
-rw-r--r--. 1 root root 1582 Mar 11 08:53 apiserver.pem
-rw-------. 1 root root 1675 Mar 11 08:53 ca-key.pem
-rw-r--r--. 1 root root 1338 Mar 11 08:53 ca.pem
-rw-------. 1 root root 1675 Mar 11 08:54 client-key.pem
-rw-r--r--. 1 root root 1359 Mar 11 08:54 client.pem
[root@pl66-242 cert]# 

Create the configuration

/opt/kubernetes/server/bin/conf/audit.yaml

apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

Create the startup script

On pl66-242:

vi /opt/kubernetes/server/bin/kube-apiserver.sh

#!/bin/bash
./kube-apiserver \
        --apiserver-count 2 \
        --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
        --audit-policy-file ./conf/audit.yaml \
        --authorization-mode RBAC \
        --client-ca-file ./cert/ca.pem \
        --requestheader-client-ca-file ./cert/ca.pem \
        --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
        --etcd-cafile ./cert/ca.pem \
        --etcd-certfile ./cert/client.pem \
        --etcd-keyfile ./cert/client-key.pem \
        --etcd-servers https://10.10.66.241:2379,https://10.10.66.242:2379,https://10.10.66.243:2379 \
        --service-account-key-file ./cert/ca-key.pem \
        --service-cluster-ip-range 192.168.0.0/16 \
        --service-node-port-range 3000-29999 \
        --target-ram-mb=1024 \
        --kubelet-client-certificate ./cert/client.pem \
        --kubelet-client-key ./cert/client-key.pem \
        --log-dir /data/logs/kubernetes/kube-apiserver \
        --tls-cert-file ./cert/apiserver.pem \
        --tls-private-key-file ./cert/apiserver-key.pem \
        --v 2

Adjust permissions and directories

On pl66-242:

chmod +x /opt/kubernetes/server/bin/kube-apiserver.sh
mkdir -p /data/logs/kubernetes/kube-apiserver

Create the supervisor configuration

On pl66-242:

vi /etc/supervisord.d/kube-apiserver.ini

[program:kube-apiserver-66-242]
command=/opt/kubernetes/server/bin/kube-apiserver.sh
numprocs=1
directory=/opt/kubernetes/server/bin/
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false

Start the service and check it

[root@pl66-242 supervisord.d]# supervisorctl update

[root@pl66-242 supervisord.d]# supervisorctl status

[root@pl66-242 supervisord.d]# supervisorctl status
etcd-server-66-242               RUNNING   pid 10902, uptime 1:49:11
kube-apiserver-66-242            RUNNING   pid 10901, uptime 1:49:11
[root@pl66-242 supervisord.d]# 
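A quick liveness probe against the apiserver's local insecure port (8080, the same endpoint the controller-manager and scheduler below point at); it should return ok:

[root@pl66-242 supervisord.d]# curl http://127.0.0.1:8080/healthz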

The pl66-243 configuration is identical to 242; only the program name in its supervisor ini needs to change.

Configure the L4 reverse proxy

On pl66-240 and pl66-241:

Install nginx

[root@pl66-240 ~]# yum -y install nginx

Nginx configuration

/etc/nginx/nginx.conf

stream {
    upstream kube-apiserver {
        server 10.10.66.242:6443    max_fails=3 fail_timeout=30s;
        server 10.10.66.243:6443    max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}

Note: append the stream block at the end of /etc/nginx/nginx.conf, outside the http block.
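After that, confirm the stream listener comes up:

# nginx -t
# systemctl start nginx && systemctl enable nginx
# ss -lnt | grep 7443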

Configure keepalived

Install keepalived

[root@pl66-241 etcd]# yum -y install keepalived

Create the check script check_port.sh

vim /etc/keepalived/check_port.sh
chmod +x /etc/keepalived/check_port.sh

#!/bin/bash
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then 
    PORT_PROCESS=`ss -lnt | grep $CHK_PORT | wc -l`
    if [ $PORT_PROCESS -eq 0 ];then
        echo "Port $CHK_PORT is Not Used,End."
        exit 1
    fi
else
    echo "Check Port Cant Be Empty!"
fi
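A quick manual test of the check script (it should exit 0 when port 7443 is listening, which keepalived treats as healthy):

/etc/keepalived/check_port.sh 7443
echo $?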

keepalived master

On pl66-240:

yum -y install keepalived
[root@pl66-240 keepalived]# rpm -qa keepalived
keepalived-1.3.5-16.el7.x86_64

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    router_id 10.10.66.240

}

vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface enp2s0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.10.66.240
    nopreempt

    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.10.66.250
    }
}

keepalived backup

On pl66-241:

yum -y install keepalived
[root@pl66-241 keepalived]# rpm -qa keepalived
keepalived-1.3.5-16.el7.x86_64
vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    router_id 10.10.66.241

}

vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state BACKUP
    interface enp3s0
    virtual_router_id 251
    mcast_src_ip 10.10.66.241
    priority 90
    advert_int 1

    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.10.66.250
    }
}

Start the proxies and check

On pl66-240 and pl66-241:

systemctl start keepalived
systemctl enable keepalived
nginx -s reload

[root@pl66-240 keepalived]# ip add 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether f4:4d:30:14:4d:75 brd ff:ff:ff:ff:ff:ff
    inet 10.10.66.240/24 brd 10.10.66.255 scope global noprefixroute enp2s0
       valid_lft forever preferred_lft forever
    inet 10.10.66.250/32 scope global enp2s0
       valid_lft forever preferred_lft forever
    inet6 fe80::f64d:30ff:fe14:4d75/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:35:68:1c:de brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
[root@pl66-240 keepalived]# 

Deploy controller-manager

Cluster plan

Hostname   Role                 IP
pl66-242   controller-manager   10.10.66.242
pl66-243   controller-manager   10.10.66.243

Note: this walkthrough uses pl66-242 as the example; the other compute node is deployed the same way.

Create the startup script

On pl66-242:

/opt/kubernetes/server/bin/kube-controller-manager.sh

#!/bin/sh
./kube-controller-manager \
    --cluster-cidr  172.16.0.0/16 \
    --leader-elect true \
    --log-dir /data/kubernetes/kube-controller-manager \
    --master http://127.0.0.1:8080 \
    --service-account-private-key-file ./cert/ca-key.pem \
    --service-cluster-ip-range 192.168.0.0/16 \
    --root-ca-file ./cert/ca.pem \
    --v 2

Adjust file permissions and create directories

On pl66-242:

chmod +x /opt/kubernetes/server/bin/kube-controller-manager.sh
mkdir -p /data/kubernetes/kube-controller-manager
mkdir -p /data/logs/kubernetes/kube-controller-manager

Create the supervisor configuration

/etc/supervisord.d/kube-controller-manager.ini

[program:kube-controller-manager--66-242]
command=/opt/kubernetes/server/bin/kube-controller-manager.sh
numprocs=1
directory=/opt/kubernetes/server/bin/
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false

Start the service and check it

On pl66-242:

[root@pl66-242 supervisord.d]# supervisorctl update

[root@pl66-242 supervisord.d]# supervisorctl status
etcd-server-66-242                RUNNING   pid 10902, uptime 8:05:27
kube-apiserver-66-242             RUNNING   pid 10901, uptime 8:05:27
kube-controller-manager--66-242   RUNNING   pid 11558, uptime 0:06:13

Deploy kube-scheduler

Cluster plan

Hostname   Role             IP
pl66-242   kube-scheduler   10.10.66.242
pl66-243   kube-scheduler   10.10.66.243

Note: this walkthrough uses pl66-242 as the example; the other compute node is deployed the same way.

Create the startup script

On pl66-242:

vim /opt/kubernetes/server/bin/kube-scheduler.sh

#!/bin/bash
./kube-scheduler \
    --leader-elect \
    --log-dir /data/logs/kubernetes/kube-scheduler \
    --master http://127.0.0.1:8080 \
    --v 2

Adjust file permissions and create directories

On pl66-242:

chmod +x /opt/kubernetes/server/bin/kube-scheduler.sh
mkdir -p /data/logs/kubernetes/kube-scheduler

Create the supervisor configuration

vim /etc/supervisord.d/kube-scheduler.ini

[program:kube-scheduler--66-242]
command=/opt/kubernetes/server/bin/kube-scheduler.sh
numprocs=1
directory=/opt/kubernetes/server/bin/
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false

Update supervisor and check

[root@pl66-242 supervisord.d]# supervisorctl update

[root@pl66-242 supervisord.d]# supervisorctl status
etcd-server-66-242                RUNNING   pid 11782, uptime 0:01:23
kube-apiserver-66-242             RUNNING   pid 11762, uptime 0:01:23
kube-controller-manager--66-242   RUNNING   pid 11807, uptime 0:00:31
kube-scheduler--66-242            RUNNING   pid 11763, uptime 0:01:23
[root@pl66-242 supervisord.d]# 

Create a kubectl symlink

[root@pl66-242 supervisord.d]# ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl

[root@pl66-242 supervisord.d]# which kubectl
/usr/bin/kubectl

Check the cluster status

[root@pl66-242 supervisord.d]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   
[root@pl66-242 supervisord.d]# 

Deploy the Node services

Deploy kubelet

Cluster plan

Hostname            Role      IP
pl66-242.host.com   kubelet   10.10.66.242
pl66-243.host.com   kubelet   10.10.66.243

Note: this walkthrough uses pl66-242 as the example; the other compute node is deployed the same way.

Sign the kubelet certificate

On pl66-245:

Create the JSON config file for the certificate signing request (CSR)

vi /opt/certs/kubelet-csr.json

{
    "CN": "k8s-kubelet",
    "hosts": [ 
        "127.0.0.1",
        "10.10.66.240",
        "10.10.66.241",
        "10.10.66.242",
        "10.10.66.243",
        "10.10.66.250",
                "10.10.66.251",
                "10.10.66.252",
                "10.10.66.253"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "chengdu",
            "L": "chengdu",
            "O": "pl",
            "OU": "ops"
        }
    ]
}

Generate the kubelet certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet

[root@pl66-245 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
2020/03/16 02:18:20 [INFO] generate received request
2020/03/16 02:18:20 [INFO] received CSR
2020/03/16 02:18:20 [INFO] generating key: rsa-2048
2020/03/16 02:18:20 [INFO] encoded CSR
2020/03/16 02:18:20 [INFO] signed certificate with serial number 411291623634880987451147311712722127071427596871
2020/03/16 02:18:20 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

Check the generated certificate and private key

[root@pl66-245 certs]# ll /opt/certs/ | grep kubelet
-rw-r--r-- 1 root root 1106 Mar 16 02:18 kubelet.csr
-rw-r--r-- 1 root root  394 Mar 16 02:15 kubelet-csr.json
-rw------- 1 root root 1679 Mar 16 02:18 kubelet-key.pem
-rw-r--r-- 1 root root 1456 Mar 16 02:18 kubelet.pem
[root@pl66-245 certs]# 

Copy the certificates to each compute node and create the configuration

On pl66-242:

Copy the certificate and private key; note that private key files must be mode 600

/opt/kubernetes/server/bin/cert
[root@pl66-242 cert]# scp pl66-245:/opt/certs/kubelet-key.pem .
[root@pl66-242 cert]# scp pl66-245:/opt/certs/kubelet.pem .

[root@pl66-242 cert]# ll
total 32
-rw-------. 1 root root 1679 Mar 12 09:53 apiserver-key.pem
-rw-r--r--. 1 root root 1582 Mar 12 09:53 apiserver.pem
-rw-------. 1 root root 1675 Mar 12 09:53 ca-key.pem
-rw-r--r--. 1 root root 1338 Mar 12 09:53 ca.pem
-rw-------. 1 root root 1675 Mar 12 09:53 client-key.pem
-rw-r--r--. 1 root root 1359 Mar 12 09:53 client.pem
-rw-------. 1 root root 1679 Mar 16 02:25 kubelet-key.pem
-rw-r--r--. 1 root root 1456 Mar 16 02:25 kubelet.pem
[root@pl66-242 cert]#

Create the configuration

set-cluster
Note: run the following commands in the conf directory, /opt/kubernetes/server/bin/conf

[root@pl66-242 conf]# kubectl config set-cluster myk8s \
 --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
 --embed-certs=true \
 --server=https://10.10.66.250:7443 \
 --kubeconfig=kubelet.kubeconfig

Cluster "myk8s" set.
[root@pl66-242 conf]# 

set-credentials

[root@pl66-242 conf]# kubectl config set-credentials k8s-node \
> --client-certificate=/opt/kubernetes/server/bin/cert/client.pem \
> --client-key=/opt/kubernetes/server/bin/cert/client-key.pem \
> --embed-certs=true \
> --kubeconfig=kubelet.kubeconfig
User "k8s-node" set.

set-context

[root@pl66-242 conf]# kubectl config set-context myk8s-context \
> --cluster=myk8s \
> --user=k8s-node \
> --kubeconfig=kubelet.kubeconfig
Context "myk8s-context" created.
[root@pl66-242 conf]# 

use-context

[root@pl66-242 conf]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
Switched to context "myk8s-context".
[root@pl66-242 conf]# 
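To confirm the kubeconfig was assembled correctly (cluster endpoint, embedded certificates, active context):

[root@pl66-242 conf]# kubectl config view --kubeconfig=kubelet.kubeconfig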

k8s-node.yaml

  • Create the resource configuration file

/opt/kubernetes/server/bin/conf/k8s-node.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef: 
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node

  • Apply the resource configuration file

[root@pl66-242 conf]# kubectl create -f k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created

  • Check

[root@pl66-242 cert]# kubectl get clusterrolebinding k8s-node
NAME       AGE
k8s-node   18m

Copy the kubelet config file and certificates to the pl66-243 host

[root@pl66-242 conf]# ll
total 20
-rw-r--r--. 1 root root 2223 Mar 13 01:14 audit.yaml
-rw-r--r--. 1 root root 1003 Mar 12 10:14 audit.yaml.bak
-rw-r--r--. 1 root root  259 Mar 16 06:03 k8s-node.yaml
-rw-------. 1 root root 6178 Mar 16 06:00 kubelet.kubeconfig
[root@pl66-242 conf]# pwd
/opt/kubernetes/server/bin/conf
[root@pl66-242 conf]# scp kubelet.kubeconfig pl66-243:/opt/kubernetes/server/bin/conf/
root@pl66-243's password: 
kubelet.kubeconfig                                                                                                 100% 6178     2.3MB/s   00:00    
[root@pl66-242 conf]# 

On pl66-245:

[root@pl66-245 certs]# scp kubelet.pem root@pl66-243:/opt/kubernetes/server/bin/cert
The authenticity of host 'pl66-243 (10.10.66.243)' can't be established.
ECDSA key fingerprint is SHA256:yghdzfvB+QjjAsNSdGlAOhu1cm2yEIVLRidqi2k3+QQ.
ECDSA key fingerprint is MD5:52:2b:f4:1b:d0:83:00:dd:62:b6:66:d2:9f:38:77:8b.
Are you sure you want to continue connecting (yes/no)? yes            
Warning: Permanently added 'pl66-243' (ECDSA) to the list of known hosts.
root@pl66-243's password: 
kubelet.pem                                                                                                        100% 1456   890.8KB/s   00:00    
[root@pl66-245 certs]# scp kubelet-key.pem root@pl66-243:/opt/kubernetes/server/bin/cert
root@pl66-243's password: 
kubelet-key.pem                                                                                                    100% 1679   442.0KB/s   00:00    
[root@pl66-245 certs]# 

Prepare the pause base image

On pl66-245:

  • Download
[root@pl66-245 certs]# docker pull kubernetes/pause
Using default tag: latest
latest: Pulling from kubernetes/pause
4f4fb700ef54: Pull complete 
b9c8ec465f6b: Pull complete 
Digest: sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105
Status: Downloaded newer image for kubernetes/pause:latest
docker.io/kubernetes/pause:latest
[root@pl66-245 certs]# 
  • Push it to the private registry (Harbor)
  • Log in to Harbor

[root@pl66-245 certs]# docker login harbor.yw.com

  • Tag the image

[root@pl66-245 certs]# docker tag f9d5de079539 harbor.yw.com/public/pause:latest

  • Push it to Harbor

[root@pl66-245 certs]# docker push harbor.yw.com/public/pause:latest

Create the kubelet startup script

On pl66-242:

/opt/kubernetes/server/bin/kubelet.sh

#!/bin/bash

./kubelet \
    --anonymous-auth=false \
    --cgroup-driver systemd \
    --cluster-dns 192.168.0.2 \
    --cluster-domain cluster.local \
    --runtime-cgroups=/systemd/system.slice \
    --kubelet-cgroups=/systemd/system.slice \
    --fail-swap-on="false" \
    --client-ca-file ./cert/ca.pem \
    --tls-cert-file ./cert/kubelet.pem \
    --tls-private-key-file ./cert/kubelet-key.pem \
    --hostname-override pl66-242.host.com \
    --image-gc-high-threshold 20 \
    --image-gc-low-threshold 10 \
    --kubeconfig ./conf/kubelet.kubeconfig \
    --log-dir /data/logs/kubernetes/kube-kubelet \
    --pod-infra-container-image harbor.yw.com/public/pause:latest \
    --root-dir /data/kubelet

Check the config and permissions, and create the log directories

On pl66-242:

[root@pl66-242 conf]# ll | grep kubelet
-rw-------. 1 root root 6178 Mar 16 06:00 kubelet.kubeconfig

chmod +x /opt/kubernetes/server/bin/kubelet.sh
mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet

Create the supervisor configuration

On pl66-242:
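The ini itself follows the same pattern as the etcd and kube-apiserver entries above; a minimal sketch (the program and file names kube-kubelet-66-242 / kube-kubelet.ini are my own choices, matching this guide's conventions), followed by loading it and watching the node register:

cat > /etc/supervisord.d/kube-kubelet.ini <<'EOF'
[program:kube-kubelet-66-242]
command=/opt/kubernetes/server/bin/kubelet.sh
numprocs=1
directory=/opt/kubernetes/server/bin/
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
EOF
supervisorctl update
supervisorctl status
kubectl get nodes    # the node appears once the kubelet registers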

Add node role labels

[root@pl66-242 ~]# kubectl label node pl66-242.host.com node-role.kubernetes.io/master=
[root@pl66-242 ~]# kubectl label node pl66-242.host.com node-role.kubernetes.io/node=

Deploy kube-proxy

Cluster plan

Hostname            Role         IP
pl66-242.host.com   kube-proxy   10.10.66.242
pl66-243.host.com   kube-proxy   10.10.66.243

The pl66-243 configuration is identical to 242; only the names in its supervisor ini need to change.

Create the JSON config file for the certificate signing request (CSR)

/opt/certs/kube-proxy-csr.json

{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "chengdu",
            "L": "chengdu",
            "O": "pl",
            "OU": "ops"
        }
    ]
}

Generate the certificate and private key

[root@pl66-245 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json | cfssl-json -bare kube-proxy-client 
2020/03/16 08:58:51 [INFO] generate received request
2020/03/16 08:58:51 [INFO] received CSR
2020/03/16 08:58:51 [INFO] generating key: rsa-2048
2020/03/16 08:58:51 [INFO] encoded CSR
2020/03/16 08:58:51 [INFO] signed certificate with serial number 601899238979766833696791320168818948790769415904
2020/03/16 08:58:51 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

Check

[root@pl66-245 certs]# ll | grep kube-proxy
-rw-r--r-- 1 root root 1005 Mar 16 08:58 kube-proxy-client.csr
-rw------- 1 root root 1675 Mar 16 08:58 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1371 Mar 16 08:58 kube-proxy-client.pem
-rw-r--r-- 1 root root  184 Mar 16 08:53 kube-proxy-csr.json
[root@pl66-245 certs]# 

Copy the certificate and private key; note that private key files must be mode 600

/opt/kubernetes/server/bin/cert

[root@pl66-242 cert]# scp pl66-245:/opt/certs/kube-proxy-client.pem .
root@pl66-245's password: 
kube-proxy-client.pem                                                                                                    100% 1371   737.8KB/s   00:00    
[root@pl66-242 cert]# scp pl66-245:/opt/certs/kube-proxy-client-key.pem .
root@pl66-245's password: 
kube-proxy-client-key.pem                                                                                                100% 1675   899.0KB/s   00:00    
[root@pl66-242 cert]# ll
total 40
-rw-------. 1 root root 1679 Mar 12 09:53 apiserver-key.pem
-rw-r--r--. 1 root root 1582 Mar 12 09:53 apiserver.pem
-rw-------. 1 root root 1675 Mar 12 09:53 ca-key.pem
-rw-r--r--. 1 root root 1338 Mar 12 09:53 ca.pem
-rw-------. 1 root root 1675 Mar 12 09:53 client-key.pem
-rw-r--r--. 1 root root 1359 Mar 12 09:53 client.pem
-rw-------. 1 root root 1679 Mar 16 02:25 kubelet-key.pem
-rw-r--r--. 1 root root 1456 Mar 16 02:25 kubelet.pem
-rw-------. 1 root root 1675 Mar 16 09:40 kube-proxy-client-key.pem
-rw-r--r--. 1 root root 1371 Mar 16 09:40 kube-proxy-client.pem
[root@pl66-242 cert]# 

Create the configuration

set-cluster

[root@pl66-242 conf]# kubectl config set-cluster myk8s \
 --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
 --embed-certs=true \
 --server=https://10.10.66.250:7443 \
 --kubeconfig=kube-proxy.kubeconfig

Cluster "myk8s" set.
[root@pl66-242 conf]# 

set-credentials

[root@pl66-242 conf]# kubectl config set-credentials kube-proxy \
> --client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
> --client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
> --embed-certs=true \
> --kubeconfig=kube-proxy.kubeconfig

User "k8s-node" set.

set-context

[root@pl66-242 conf]# kubectl config set-context myk8s-context \
> --cluster=myk8s \
> --user=kube-proxy \
> --kubeconfig=kube-proxy.kubeconfig

Context "myk8s-context" created.

[root@pl66-242 conf]# 

use-context

[root@pl66-242 conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
Switched to context "myk8s-context".

Copy the config file to pl66-243.

[root@pl66-242 conf]# ll
total 28
-rw-r--r--. 1 root root 2223 Mar 13 01:14 audit.yaml
-rw-r--r--. 1 root root 1003 Mar 12 10:14 audit.yaml.bak
-rw-r--r--. 1 root root  259 Mar 16 06:03 k8s-node.yaml
-rw-------. 1 root root 6178 Mar 16 06:00 kubelet.kubeconfig
-rw-------. 1 root root 6197 Mar 16 10:06 kube-proxy.kubeconfig
[root@pl66-242 conf]# scp kube-proxy.kubeconfig root@10.10.66.243:/opt/kubernetes/server/bin/conf/
root@10.10.66.243's password: 
kube-proxy.kubeconfig                                                                                                    100% 6197     2.3MB/s   00:00    
[root@pl66-242 conf]# 

Create the kube-proxy startup script

On pl66-242:

  • Load the ipvs kernel modules

/root/ipvs.sh

#!/bin/bash
# Load all ipvs-related kernel modules shipped with the running kernel.
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for mod in $(ls $ipvs_mods_dir | grep -o "^[^.]*")
do
  # Only modprobe names that modinfo recognizes as real modules.
  /sbin/modinfo -F filename $mod &>/dev/null
  if [ $? -eq 0 ]; then
    /sbin/modprobe $mod
  fi
done

Make the script executable

chmod +x ipvs.sh

Run the script (/root/ipvs.sh); the loaded modules should then look like this:

[root@pl66-242 ~]# lsmod | grep ip_vs
ip_vs_wrr              12697  0 
ip_vs_wlc              12519  0 
ip_vs_sh               12688  0 
ip_vs_sed              12519  0 
ip_vs_rr               12600  0 
ip_vs_pe_sip           12740  0 
nf_conntrack_sip       33860  1 ip_vs_pe_sip
ip_vs_nq               12516  0 
ip_vs_lc               12516  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_ftp              13079  0 
ip_vs_dh               12688  0 
ip_vs                 145497  24 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_pe_sip,ip_vs_lblcr,ip_vs_lblc
nf_nat                 26787  3 ip_vs_ftp,nf_nat_ipv4,nf_nat_masquerade_ipv4
nf_conntrack          133095  8 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_sip,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

Copy ipvs.sh to pl66-243 and run it there as well.

Scheduling algorithms from the LVS project (the short codes are the ip_vs module names seen above):

  • Round-Robin Scheduling (rr)
  • Weighted Round-Robin Scheduling (wrr)
  • Least-Connection Scheduling (lc)
  • Weighted Least-Connection Scheduling (wlc)
  • Locality-Based Least Connections Scheduling (lblc)
  • Locality-Based Least Connections with Replication Scheduling (lblcr)
  • Destination Hashing Scheduling (dh)
  • Source Hashing Scheduling (sh)
  • Shortest Expected Delay Scheduling (sed)
  • Never Queue Scheduling (nq)
  • Create the startup script

/opt/kubernetes/server/bin/kube-proxy.sh

[root@pl66-242 bin]# cat kube-proxy.sh 
#!/bin/bash
./kube-proxy \
    --cluster-cidr 172.16.0.0/16 \
    --hostname-override pl66-242.host.com \
    --proxy-mode=ipvs \
    --ipvs-scheduler=nq \
    --kubeconfig ./conf/kube-proxy.kubeconfig

Note: the kube-proxy startup script differs slightly per host (the hostname-override flag); adjust it when deploying the other node.

Check the config and permissions, and create the log directory

On pl66-242:

[root@pl66-242 bin]# ll conf/ | grep kube-proxy
-rw-------. 1 root root 6197 Mar 16 10:06 kube-proxy.kubeconfig
[root@pl66-242 bin]# chmod +x /opt/kubernetes/server/bin/kube-proxy.sh
[root@pl66-242 bin]# mkdir -p /data/logs/kubernetes/kube-proxy

Create the supervisor configuration

On pl66-242:

/etc/supervisord.d/kube-proxy.ini

[program:kube-proxy--66-242]
command=/opt/kubernetes/server/bin/kube-proxy.sh
numprocs=1
directory=/opt/kubernetes/server/bin/
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
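Load the new program into supervisor, the same as for the earlier services:

[root@pl66-242 supervisord.d]# supervisorctl update
[root@pl66-242 supervisord.d]# supervisorctl status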

View the proxy log

[root@pl66-242 supervisord.d]# tail -fn 200 /data/logs/kubernetes/kube-proxy/proxy.stdout.log

Install ipvsadm

[root@pl66-242 supervisord.d]# yum -y install ipvsadm
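kube-proxy in ipvs mode programs the kernel's virtual-server table; listing it should show virtual servers on the service network (192.168.0.0/16, per the apiserver flags) using the nq scheduler:

[root@pl66-242 supervisord.d]# ipvsadm -Ln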

Validate the Kubernetes cluster

On any compute node, create a resource manifest

/root/nginx-ds.yaml

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.yw.com/public/nginx:1.9.1
        ports:
        - containerPort: 80

Apply the resource configuration and check

/root
[root@pl66-242 ~]# kubectl create -f nginx-ds.yaml

[root@pl66-242 ~]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
nginx-ds-mwp84   1/1     Running   0          <invalid>
nginx-ds-qck7g   1/1     Running   0          <invalid>
[root@pl66-242 ~]# 
[root@pl66-242 ~]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE         IP             NODE                NOMINATED NODE   READINESS GATES
nginx-ds-mwp84   1/1     Running   0          <invalid>   172.16.242.2   pl66-242.host.com   <none>           <none>
nginx-ds-qck7g   1/1     Running   0          <invalid>   172.16.243.2   pl66-243.host.com   <none>           <none>
[root@pl66-242 ~]# 

Verify

[root@pl66-242 ~]# curl 172.16.242.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@pl66-242 ~]# 