1. Environment Preparation
1.1 Role Assignment
10.8.13.80 vip
10.8.13.81 master01: haproxy, keepalived, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
10.8.13.82 master02: haproxy, keepalived, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
10.8.13.83 master03: haproxy, keepalived, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
10.8.13.84 node01: kubelet, docker, kube-proxy, flanneld
10.8.13.85 node02: kubelet, docker, kube-proxy, flanneld
1.2 Passwordless SSH between all hosts
#ssh-keygen
#ssh-copy-id 10.8.13.82   (repeat for 10.8.13.83, 10.8.13.84 and 10.8.13.85)
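To save typing, the key can also be pushed to all the other hosts in one loop; this is only a convenience sketch (it assumes root login with a password prompt on each host):
# for ip in 10.8.13.82 10.8.13.83 10.8.13.84 10.8.13.85; do ssh-copy-id root@$ip; done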
1.3 Environment Initialization
1.3.1 Stop and disable firewalld
systemctl stop firewalld.service
systemctl disable firewalld.service
1.3.2 Disable SELinux
# cat /etc/selinux/config
SELINUX=disabled
# setenforce 0
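If you prefer to make the change non-interactively, a one-liner such as the following does the same thing (assuming the stock /etc/selinux/config layout); setenforce 0 only takes effect until the next reboot:
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config && setenforce 0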
1.3.3 Configure sysctl and enable IP forwarding
# cat /etc/sysctl.conf
fs.file-max=1000000
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_max_syn_backlog = 16384
net.core.netdev_max_backlog = 32768
net.core.somaxconn = 32768
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.ip_local_port_range = 1024 65000
net.nf_conntrack_max = 6553500
net.netfilter.nf_conntrack_max = 6553500
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_established = 3600
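The bridge-nf-call settings only exist once the br_netfilter module is loaded, so after editing /etc/sysctl.conf load the module and apply the file (a minimal sequence for a CentOS 7 style host):
# modprobe br_netfilter
# sysctl -p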
1.3.4 Load the IPVS kernel modules
cat << EOF | tee /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
2. Cluster Component Overview
Master nodes:
A master node runs four main components: etcd, the APIServer, the scheduler and the controller-manager (the haproxy/keepalived high-availability pieces are covered separately below).
etcd:
etcd is a highly available key-value store. Kubernetes uses it to persist the state of every resource, which is what its RESTful API is built on.
APIServer:
The APIServer exposes the RESTful kubernetes API and is the single entry point for administrative operations: every create, delete, update or read of a resource goes through the APIServer before being persisted to etcd. kubectl (the client tool shipped with kubernetes, which is simply a wrapper around the kubernetes API) talks directly to the APIServer.
scheduler:
The scheduler assigns Pods to suitable Nodes. Treated as a black box, its input is a Pod plus a list of Nodes, and its output is a binding of that Pod to one Node. Kubernetes ships with default scheduling algorithms and also keeps the interface open, so users can plug in schedulers of their own.
controller manager:
If the APIServer does the front-office work, the controller manager handles the back office. Every resource type has a corresponding controller, and the controller manager is responsible for running these controllers. For example, once a Pod created through the APIServer exists, the APIServer's job is done; the controllers take it from there.
Node (worker) nodes:
Each node runs four main components: kubelet, kube-proxy, docker and flanneld.
kube-proxy:
kube-proxy implements service discovery and reverse proxying in kubernetes. It supports forwarding TCP and UDP connections and, by default, distributes client traffic across the backend pods of a service using a Round Robin algorithm. For service discovery, kube-proxy uses the watch mechanism to track changes to service and endpoint objects in the cluster and maintains a service-to-endpoint mapping, so backend pod IP changes are invisible to callers. kube-proxy also supports session affinity.
kubelet:
kubelet is the Master's agent on each Node and the most important component on the node. It maintains and manages all containers on that Node (containers not created through kubernetes are ignored). In essence, it drives each Pod's actual state toward its desired state.
flanneld:
The flanneld service on the source host encapsulates the original payload and, based on its own routing table, delivers it to the flanneld service on the destination node. There the packet is decapsulated, enters the destination node's flannel0 (or flannel.1 for the vxlan backend) virtual interface, is forwarded to the destination host's docker0 bridge, and is finally routed from docker0 to the target container just like local container-to-container traffic.
docker:
Needs no further explanation here.
3. Download Links
Client Binaries
https://dl.k8s.io/v1.14.1/kubernetes-client-linux-amd64.tar.gz
Server Binaries
https://dl.k8s.io/v1.14.1/kubernetes-server-linux-amd64.tar.gz
Node Binaries
https://dl.k8s.io/v1.14.1/kubernetes-node-linux-amd64.tar.gz
etcd
https://github.com/etcd-io/etcd/releases/download/v3.3.11/etcd-v3.3.11-linux-amd64.tar.gz
flannel
https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
4. Master Deployment
Unless noted otherwise, perform the following steps on master01; after the certificates are generated they are copied to master02 and master03.
4.1 Download the software
wget https://dl.k8s.io/v1.14.1/kubernetes-server-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.14.1/kubernetes-client-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.3.11/etcd-v3.3.11-linux-amd64.tar.gz
wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
4.2 Install cfssl (for TLS certificates)
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
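A quick sanity check that the three tools are installed and on the PATH:
# which cfssl cfssljson cfssl-certinfo
# cfssl version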
4.3 Create the etcd certificates
Create these directories on every node (master01-03, node01-02):
mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p
1) etcd CA configuration
cd /k8s/etcd/ssl/
cat << EOF | tee ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"etcd": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
2) etcd CA certificate signing request
cat << EOF | tee ca-csr.json
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
3) etcd server certificate
cat << EOF | tee server-csr.json
{
"CN": "etcd",
"hosts": [
"10.8.13.81",
"10.8.13.82",
"10.8.13.83"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
4) Generate the etcd CA certificate and private key
Initialize the CA:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@master01 ssl]# ls
ca-config.json ca-csr.json server-csr.json
[root@master01 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/05/01 16:13:54 [INFO] generating a new CA key and certificate from CSR
2019/05/01 16:13:54 [INFO] generate received request
2019/05/01 16:13:54 [INFO] received CSR
2019/05/01 16:13:54 [INFO] generating key: rsa-2048
2019/05/01 16:13:54 [INFO] encoded CSR
2019/05/01 16:13:54 [INFO] signed certificate with serial number 144752911121073185391033754516204538929473929443
[root@master01 ssl]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem server-csr.json
Generate the server certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
[root@master01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
2019/05/01 16:18:53 [INFO] generate received request
2019/05/01 16:18:53 [INFO] received CSR
2019/05/01 16:18:53 [INFO] generating key: rsa-2048
2019/05/01 16:18:54 [INFO] encoded CSR
2019/05/01 16:18:54 [INFO] signed certificate with serial number 388122587040599986639159163167557684970159030057
2019/05/01 16:18:54 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites.
For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master01 ssl]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem server.csr server-csr.json server-key.pem server.pem
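Optionally, inspect the freshly signed server certificate before distributing it, to confirm the three etcd IPs appear as SANs and the expiry matches the 87600h profile (cfssl-certinfo was installed in 4.2):
# cfssl-certinfo -cert server.pem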
4.4 Install etcd
1) Unpack
tar -zxf etcd-v3.3.11-linux-amd64.tar.gz
cd etcd-v3.3.11-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
mkdir /data1/etcd
2) etcd main configuration file
vim /k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/data1/etcd"
ETCD_LISTEN_PEER_URLS="https://10.8.13.81:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.8.13.81:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.8.13.81:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.8.13.81:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.8.13.81:2380,etcd02=https://10.8.13.82:2380,etcd03=https://10.8.13.83:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
#[Security]
ETCD_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
3) etcd systemd unit file
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
WorkingDirectory=/data1/etcd/
EnvironmentFile=-/k8s/etcd/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /k8s/etcd/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\" --cert-file=\"${ETCD_CERT_FILE}\" --key-file=\"${ETCD_KEY_FILE}\" --trusted-ca-file=\"${ETCD_TRUSTED_CA_FILE}\" --client-cert-auth=\"${ETCD_CLIENT_CERT_AUTH}\" --peer-cert-file=\"${ETCD_PEER_CERT_FILE}\" --peer-key-file=\"${ETCD_PEER_KEY_FILE}\" --peer-trusted-ca-file=\"${ETCD_PEER_TRUSTED_CA_FILE}\" --peer-client-cert-auth=\"${ETCD_PEER_CLIENT_CERT_AUTH}\""
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
4) Copy master01's etcd certificates, configuration and unit file to the same paths on master02 and master03
scp /k8s/etcd/ssl/* 10.8.13.82:/k8s/etcd/ssl/
scp /k8s/etcd/ssl/* 10.8.13.83:/k8s/etcd/ssl/
scp /k8s/etcd/cfg/* 10.8.13.82:/k8s/etcd/cfg/
scp /k8s/etcd/cfg/* 10.8.13.83:/k8s/etcd/cfg/
scp /k8s/etcd/bin/* 10.8.13.82:/k8s/etcd/bin/
scp /k8s/etcd/bin/* 10.8.13.83:/k8s/etcd/bin/
scp /usr/lib/systemd/system/etcd.service 10.8.13.82:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service 10.8.13.83:/usr/lib/systemd/system/etcd.service
5) Adjust the etcd.conf on master02 and master03
master02 etcd.conf:
ssh 10.8.13.82
vim /k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/data1/etcd"
ETCD_LISTEN_PEER_URLS="https://10.8.13.82:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.8.13.82:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.8.13.82:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.8.13.82:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.8.13.81:2380,etcd02=https://10.8.13.82:2380,etcd03=https://10.8.13.83:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
#[Security]
ETCD_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
master03 etcd.conf:
ssh 10.8.13.83
vim /k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/data1/etcd"
ETCD_LISTEN_PEER_URLS="https://10.8.13.83:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.8.13.83:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.8.13.83:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.8.13.83:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.8.13.81:2380,etcd02=https://10.8.13.82:2380,etcd03=https://10.8.13.83:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
#[Security]
ETCD_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
6) Start etcd and enable it at boot (run on all three masters)
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
7) Check the etcd cluster
/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.8.13.81:2379,https://10.8.13.82:2379,https://10.8.13.83:2379" cluster-health
Output:
member 262d942ab474feaa is healthy: got healthy result from https://10.8.13.82:2379
member 3e95c59733e7d54f is healthy: got healthy result from https://10.8.13.83:2379
member fe03446cb13e0221 is healthy: got healthy result from https://10.8.13.81:2379
cluster is healthy
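Besides cluster-health, listing the members is a quick way to confirm that all three nodes joined with the expected peer URLs (same TLS flags as above):
/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.8.13.81:2379,https://10.8.13.82:2379,https://10.8.13.83:2379" member list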
etcd installation is now complete.
4.5 Install and configure haproxy
1) master01 configuration (note that the frontend port is customized to 16443)
yum -y install haproxy
Install haproxy on master01, master02 and master03:
vim /etc/haproxy/haproxy.cfg
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
mode tcp
bind *:16443
option tcplog
default_backend kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
mode tcp
balance roundrobin
server k8s01 10.8.13.81:6443 check
server k8s02 10.8.13.82:6443 check
server k8s03 10.8.13.83:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
bind *:1080
stats auth admin:awesomePassword
stats refresh 5s
stats realm HAProxy\ Statistics
stats uri /admin?stats
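Before starting the service it is worth letting haproxy validate the file; this catches syntax errors without touching any running state:
haproxy -c -f /etc/haproxy/haproxy.cfg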
2) Copy master01's haproxy.cfg to master02 and master03
scp /etc/haproxy/haproxy.cfg 10.8.13.82:/etc/haproxy/haproxy.cfg
scp /etc/haproxy/haproxy.cfg 10.8.13.83:/etc/haproxy/haproxy.cfg
3) Start haproxy and enable it at boot (run on all three masters)
systemctl daemon-reload
systemctl enable haproxy
systemctl start haproxy
4.6 Install and configure keepalived
1) master01 configuration
yum -y install keepalived
Install keepalived on master01, master02 and master03:
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script check_haproxy {
script "killall -0 haproxy"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI_1 {
state MASTER
interface ens160
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.8.13.80
}
track_script {
check_haproxy
}
}
2) master02 configuration
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script check_haproxy {
script "killall -0 haproxy"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id 51
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.8.13.80
}
track_script {
check_haproxy
}
}
3) master03 configuration
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script check_haproxy {
script "killall -0 haproxy"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id 51
priority 98
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.8.13.80
}
track_script {
check_haproxy
}
}
4) Start keepalived
systemctl daemon-reload
systemctl enable keepalived
systemctl start keepalived
[root@master01 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: active (running) since 五 2019-05-10 20:33:33 CST; 3 days ago
Process: 992 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 1115 (keepalived)
CGroup: /system.slice/keepalived.service
├─1115 /usr/sbin/keepalived -D
├─1116 /usr/sbin/keepalived -D
└─1117 /usr/sbin/keepalived -D
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
[root@hwzx-test-cmpmaster01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:50:56:90:22:79 brd ff:ff:ff:ff:ff:ff
inet 10.8.13.81/24 brd 10.8.13.255 scope global ens160
valid_lft forever preferred_lft forever
inet 10.8.13.80/32 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::6772:8bb6:b50c:57fe/64 scope link
valid_lft forever preferred_lft forever
The VIP is bound on master01.
5) keepalived configuration notes
>1. killall -0 checks by process name whether a process is still alive; if the command is missing, install it with yum install psmisc -y
>2. The first master node uses state MASTER; the other master nodes use state BACKUP
>3. priority is each node's VRRP priority; the highest reachable priority wins the election (the exact values are not mandatory)
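A simple failover test (a sketch; do this in a maintenance window): stop haproxy on the current VIP holder and watch the VIP move to the next-highest-priority master once the check script marks haproxy as down (interval 3 with fall 10 here, so allow up to about 30 seconds).
# on master01
systemctl stop haproxy
# on master02: the VIP should appear on ens160
ip addr show ens160 | grep 10.8.13.80
# restore master01 afterwards
systemctl start haproxy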
4.7 Generate the kubernetes certificates and private keys
1) Create the kubernetes CA certificate
cd /k8s/kubernetes/ssl
cat << EOF | tee ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat << EOF | tee ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
[root@master01 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2019/05/01 09:47:08 [INFO] generating a new CA key and certificate from CSR
2019/05/01 09:47:08 [INFO] generate received request
2019/05/01 09:47:08 [INFO] received CSR
2019/05/01 09:47:08 [INFO] generating key: rsa-2048
2019/05/01 09:47:08 [INFO] encoded CSR
2019/05/01 09:47:08 [INFO] signed certificate with serial number 156611735285008649323551446985295933852737436614
[root@master01 ssl]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
2) Create the apiserver certificate
Note: every IP must be listed in the hosts field, including the VIP.
cat << EOF | tee server-csr.json
{
"CN": "kubernetes",
"hosts": [
"10.254.0.1",
"127.0.0.1",
"10.8.13.81",
"10.8.13.82",
"10.8.13.83",
"10.8.13.84",
"10.8.13.85",
"10.8.13.80",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
[root@master01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2019/05/01 09:51:56 [INFO] generate received request
2019/05/01 09:51:56 [INFO] received CSR
2019/05/01 09:51:56 [INFO] generating key: rsa-2048
2019/05/01 09:51:56 [INFO] encoded CSR
2019/05/01 09:51:56 [INFO] signed certificate with serial number 399376216731194654868387199081648887334508501005
2019/05/01 09:51:56 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master01 ssl]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem server.csr server-csr.json server-key.pem server.pem
3) Create the kube-proxy certificate
cat << EOF | tee kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@master01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2019/05/01 09:52:40 [INFO] generate received request
2019/05/01 09:52:40 [INFO] received CSR
2019/05/01 09:52:40 [INFO] generating key: rsa-2048
2019/05/01 09:52:40 [INFO] encoded CSR
2019/05/01 09:52:40 [INFO] signed certificate with serial number 633932731787505365511506755558794469389165123417
2019/05/01 09:52:40 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master01 ssl]# ls
ca-config.json ca-csr.json ca.pem kube-proxy-csr.json kube-proxy.pem server-csr.json server.pem
ca.csr ca-key.pem kube-proxy.csr kube-proxy-key.pem server.csr server-key.pem
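Before copying the certificates out, confirm that the apiserver certificate really carries every IP from the hosts list, including the VIP 10.8.13.80; a quick check with openssl:
openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'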
4.8 Deploy the kubernetes server components
The kubernetes master nodes run the following components:
kube-apiserver
kube-scheduler
kube-controller-manager
kube-scheduler and kube-controller-manager run in clustered mode: leader election picks one active process and the other instances stay blocked (standby).
1) Unpack the files
tar -zxf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/
2) Deploy kube-apiserver
Create the TLS bootstrapping token:
[root@master01 bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
af93a4194e7bcf7f05dc0bab3a6e97cd
vim /k8s/kubernetes/cfg/token.csv
af93a4194e7bcf7f05dc0bab3a6e97cd,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Create the apiserver configuration file.
Note: --bind-address and --advertise-address must be the current node's IP.
vim /k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.8.13.81:2379,https://10.8.13.82:2379,https://10.8.13.83:2379 \
--bind-address=10.8.13.81 \
--secure-port=6443 \
--advertise-address=10.8.13.81 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
Create the apiserver systemd unit file:
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Copy master01's kubernetes certificates, configuration files and unit file to the same paths on master02 and master03:
scp /k8s/kubernetes/ssl/* 10.8.13.82:/k8s/kubernetes/ssl/
scp /k8s/kubernetes/ssl/* 10.8.13.83:/k8s/kubernetes/ssl/
scp /k8s/kubernetes/cfg/* 10.8.13.82:/k8s/kubernetes/cfg/
scp /k8s/kubernetes/cfg/* 10.8.13.83:/k8s/kubernetes/cfg/
scp /k8s/kubernetes/bin/* 10.8.13.82:/k8s/kubernetes/bin/
scp /k8s/kubernetes/bin/* 10.8.13.83:/k8s/kubernetes/bin/
scp /usr/lib/systemd/system/kube-apiserver.service 10.8.13.82:/usr/lib/systemd/system
scp /usr/lib/systemd/system/kube-apiserver.service 10.8.13.83:/usr/lib/systemd/system
Adjust the kube-apiserver configuration on master02 and master03 (the bind/advertise addresses must be each node's own IP).
master02 kube-apiserver configuration:
ssh 10.8.13.82
vim /k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.8.13.81:2379,https://10.8.13.82:2379,https://10.8.13.83:2379 \
--bind-address=10.8.13.82 \
--secure-port=6443 \
--advertise-address=10.8.13.82 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
master03 kube-apiserver configuration:
ssh 10.8.13.83
vim /k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.8.13.81:2379,https://10.8.13.82:2379,https://10.8.13.83:2379 \
--bind-address=10.8.13.83 \
--secure-port=6443 \
--advertise-address=10.8.13.83 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
Start the service (run on all three masters):
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
[root@master01 bin]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since 五 2019-05-10 20:33:32 CST; 2 days ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 705 (kube-apiserver)
CGroup: /system.slice/kube-apiserver.service
└─705 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.8.13.81:2379,https://10.8.13.82:2379,https://10.8.13.83:2379 --bind-address=10.8.13.81 --secure-port=6443 --advertise-address=10.8.13.81 --allow-privileged=true --s...
5月 13 16:00:43 hwzx-test-cmpmaster01 kube-apiserver[705]: I0513 16:00:43.495504 705 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (3.700854ms) 200 [kube-apiserver/v1.13.1 (linux/amd64) kubernetes/eec55b9 10.8.13.81:56744]
5月 13 16:00:45 hwzx-test-cmpmaster01 kube-apiserver[705]: I0513 16:00:45.955530 705 wrap.go:47] GET /api/v1/services?resourceVersion=37540&timeout=6m29s&timeoutSeconds=389&watch=true: (6m29.001574609s) 200 [kube-proxy/v1.13.1 (linux/amd64) kub... 10.8.13.81:56844]
5月 13 16:00:45 hwzx-test-cmpmaster01 kube-apiserver[705]: I0513 16:00:45.958607 705 get.go:247] Starting watch for /api/v1/services, rv=37540 labels= fields= timeout=8m28s
5月 13 16:00:46 hwzx-test-cmpmaster01 kube-apiserver[705]: I0513 16:00:46.323978 705 wrap.go:47] GET /api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: (4.410282ms) 200 [kube-scheduler/v1.13.1 (linux/amd64) kubernetes/eec55b9/...n 127.0.0.1:43276]
5月 13 16:00:46 hwzx-test-cmpmaster01 kube-apiserver[705]: I0513 16:00:46.371766 705 wrap.go:47] GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: (3.606335ms) 200 [kube-controller-manager/v1.13.1 (linux/amd64) k...n 127.0.0.1:43776]
5月 13 16:00:46 hwzx-test-cmpmaster01 kube-apiserver[705]: I0513 16:00:46.376888 705 wrap.go:47] GET /apis/apiregistration.k8s.io/v1/apiservices?resourceVersion=32859&timeout=5m5s&timeoutSeconds=305&watch=true: (5m5.001015872s) 200 [kube-apiser... 10.8.13.81:56744]
5月 13 16:00:46 hwzx-test-cmpmaster01 kube-apiserver[705]: I0513 16:00:46.377312 705 reflector.go:357] k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:117: Watch close - *apiregistration.APIService total 0 items received
5月 13 16:00:46 hwzx-test-cmpmaster01 kube-apiserver[705]: I0513 16:00:46.378469 705 get.go:247] Starting watch for /apis/apiregistration.k8s.io/v1/apiservices, rv=32859 labels= fields= timeout=8m12s
5月 13 16:00:49 hwzx-test-cmpmaster01 kube-apiserver[705]: I0513 16:00:49.206602 705 wrap.go:47] GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: (4.541086ms) 200 [kube-controller-manager/v1.13.1 (linux/amd64) k...n 127.0.0.1:43776]
5月 13 16:00:50 hwzx-test-cmpmaster01 kube-apiserver[705]: I0513 16:00:50.027213 705 wrap.go:47] GET /api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: (4.418662ms) 200 [kube-scheduler/v1.13.1 (linux/amd64) kubernetes/eec55b9/...n 127.0.0.1:43276]
Hint: Some lines were ellipsized, use -l to show in full.
[root@master01 bin]# ps -ef |grep kube-apiserver
root 705 1 3 5月10 ? 02:35:10 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.8.13.81:2379,https://10.8.13.82:2379,https://10.8.13.83:2379 --bind-address=10.8.13.81 --secure-port=6443 --advertise-address=10.8.13.81 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/k8s/etcd/ssl/ca.pem --etcd-certfile=/k8s/etcd/ssl/server.pem --etcd-keyfile=/k8s/etcd/ssl/server-key.pem
root 7098 24767 0 15:57 pts/0 00:00:00 grep --color=auto kube-apiserver
[root@master01 bin]# netstat -tulpn |grep kube-apiserve
tcp 0 0 10.8.13.81:6443 0.0.0.0:* LISTEN 705/kube-apiserver
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 705/kube-apiserver
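Once haproxy and keepalived are up as well, the apiserver should also be reachable through the VIP. A quick probe (depending on the anonymous-auth/RBAC settings this returns either ok or an authorization error; either answer proves that the VIP path reaches a live apiserver):
curl -k https://10.8.13.80:16443/healthz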
3) Deploy kube-scheduler
Create the kube-scheduler configuration file:
vim /k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
Parameter notes:
--address: serve http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet accept https requests;
--kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;
--leader-elect=true: run with leader election enabled; the elected leader does the work while the other instances stay blocked (standby);
Create the kube-scheduler systemd unit file:
vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Copy master01's kube-scheduler configuration and unit file to master02 and master03:
scp /k8s/kubernetes/cfg/kube-scheduler 10.8.13.82:/k8s/kubernetes/cfg/kube-scheduler
scp /k8s/kubernetes/cfg/kube-scheduler 10.8.13.83:/k8s/kubernetes/cfg/kube-scheduler
scp /usr/lib/systemd/system/kube-scheduler.service 10.8.13.82:/usr/lib/systemd/system/kube-scheduler.service
scp /usr/lib/systemd/system/kube-scheduler.service 10.8.13.83:/usr/lib/systemd/system/kube-scheduler.service
Start the service (run on all three masters):
systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
[root@master01 bin]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
Active: active (running) since 五 2019-05-10 20:33:32 CST; 2 days ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 693 (kube-scheduler)
CGroup: /system.slice/kube-scheduler.service
└─693 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
5月 13 16:10:49 hwzx-test-cmpmaster01 kube-scheduler[693]: I0513 16:10:49.024121 693 leaderelection.go:289] lock is held by hwzx-test-cmpmaster03_7601efea-7319-11e9-8964-0050569059b4 and has not yet expired
5月 13 16:10:49 hwzx-test-cmpmaster01 kube-scheduler[693]: I0513 16:10:49.024161 693 leaderelection.go:210] failed to acquire lease kube-system/kube-scheduler
5月 13 16:10:51 hwzx-test-cmpmaster01 kube-scheduler[693]: I0513 16:10:51.151743 693 leaderelection.go:289] lock is held by hwzx-test-cmpmaster03_7601efea-7319-11e9-8964-0050569059b4 and has not yet expired
5月 13 16:10:51 hwzx-test-cmpmaster01 kube-scheduler[693]: I0513 16:10:51.151799 693 leaderelection.go:210] failed to acquire lease kube-system/kube-scheduler
5月 13 16:10:53 hwzx-test-cmpmaster01 kube-scheduler[693]: I0513 16:10:53.434965 693 leaderelection.go:289] lock is held by hwzx-test-cmpmaster03_7601efea-7319-11e9-8964-0050569059b4 and has not yet expired
5月 13 16:10:53 hwzx-test-cmpmaster01 kube-scheduler[693]: I0513 16:10:53.434999 693 leaderelection.go:210] failed to acquire lease kube-system/kube-scheduler
5月 13 16:10:57 hwzx-test-cmpmaster01 kube-scheduler[693]: I0513 16:10:57.571674 693 leaderelection.go:289] lock is held by hwzx-test-cmpmaster03_7601efea-7319-11e9-8964-0050569059b4 and has not yet expired
5月 13 16:10:57 hwzx-test-cmpmaster01 kube-scheduler[693]: I0513 16:10:57.571707 693 leaderelection.go:210] failed to acquire lease kube-system/kube-scheduler
5月 13 16:11:01 hwzx-test-cmpmaster01 kube-scheduler[693]: I0513 16:11:01.914369 693 leaderelection.go:289] lock is held by hwzx-test-cmpmaster03_7601efea-7319-11e9-8964-0050569059b4 and has not yet expired
5月 13 16:11:01 hwzx-test-cmpmaster01 kube-scheduler[693]: I0513 16:11:01.914411 693 leaderelection.go:210] failed to acquire lease kube-system/kube-scheduler
4) Deploy kube-controller-manager
Create the kube-controller-manager configuration file:
vim /k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"
Create the kube-controller-manager systemd unit file:
vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Copy master01's kube-controller-manager configuration and unit file to master02 and master03:
scp /k8s/kubernetes/cfg/kube-controller-manager 10.8.13.82:/k8s/kubernetes/cfg/kube-controller-manager
scp /k8s/kubernetes/cfg/kube-controller-manager 10.8.13.83:/k8s/kubernetes/cfg/kube-controller-manager
scp /usr/lib/systemd/system/kube-controller-manager.service 10.8.13.82:/usr/lib/systemd/system/kube-controller-manager.service
scp /usr/lib/systemd/system/kube-controller-manager.service 10.8.13.83:/usr/lib/systemd/system/kube-controller-manager.service
Start the service (run on all three masters):
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
[root@master01 bin]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
Active: active (running) since 五 2019-05-10 20:33:32 CST; 2 days ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 685 (kube-controller)
CGroup: /system.slice/kube-controller-manager.service
└─685 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/k8s/kubernetes/ssl/ca...
5月 13 16:16:45 hwzx-test-cmpmaster01 kube-controller-manager[685]: I0513 16:16:45.539102 685 leaderelection.go:289] lock is held by hwzx-test-cmpmaster03_823f19e6-7319-11e9-94be-0050569059b4 and has not yet expired
5月 13 16:16:45 hwzx-test-cmpmaster01 kube-controller-manager[685]: I0513 16:16:45.539136 685 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager
5月 13 16:16:48 hwzx-test-cmpmaster01 kube-controller-manager[685]: I0513 16:16:48.767187 685 leaderelection.go:289] lock is held by hwzx-test-cmpmaster03_823f19e6-7319-11e9-94be-0050569059b4 and has not yet expired
5月 13 16:16:48 hwzx-test-cmpmaster01 kube-controller-manager[685]: I0513 16:16:48.767221 685 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager
5月 13 16:16:50 hwzx-test-cmpmaster01 kube-controller-manager[685]: I0513 16:16:50.939294 685 leaderelection.go:289] lock is held by hwzx-test-cmpmaster03_823f19e6-7319-11e9-94be-0050569059b4 and has not yet expired
5月 13 16:16:50 hwzx-test-cmpmaster01 kube-controller-manager[685]: I0513 16:16:50.939329 685 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager
5月 13 16:16:53 hwzx-test-cmpmaster01 kube-controller-manager[685]: I0513 16:16:53.212185 685 leaderelection.go:289] lock is held by hwzx-test-cmpmaster03_823f19e6-7319-11e9-94be-0050569059b4 and has not yet expired
5月 13 16:16:53 hwzx-test-cmpmaster01 kube-controller-manager[685]: I0513 16:16:53.212218 685 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager
5月 13 16:16:57 hwzx-test-cmpmaster01 kube-controller-manager[685]: I0513 16:16:57.291399 685 leaderelection.go:289] lock is held by hwzx-test-cmpmaster03_823f19e6-7319-11e9-94be-0050569059b4 and has not yet expired
5月 13 16:16:57 hwzx-test-cmpmaster01 kube-controller-manager[685]: I0513 16:16:57.291430 685 leaderelection.go:210] failed to acquire lease kube-system/kube-controller-manager
4.9 Verify the master components
Set the PATH environment variable (run this step on every server):
vim /etc/profile
PATH=/k8s/kubernetes/bin:$PATH
source /etc/profile
Check the master component status:
[root@master01 ~]# kubectl get cs,nodes
NAME STATUS MESSAGE ERROR
componentstatus/scheduler Healthy ok
componentstatus/controller-manager Healthy ok
componentstatus/etcd-0 Healthy {"health":"true"}
componentstatus/etcd-1 Healthy {"health":"true"}
componentstatus/etcd-2 Healthy {"health":"true"}
The master components are now installed.
5. Node Deployment (install on node01 and node02)
The kubernetes worker nodes run the following components:
docker
kubelet
kube-proxy
flannel
5.1 Install Docker
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker
5.2 Deploy kubelet
kubelet runs on every worker node: it receives requests from kube-apiserver, manages the Pod containers, and handles interactive commands such as exec, run and logs.
On startup, kubelet automatically registers the node with kube-apiserver, and its built-in cAdvisor collects and monitors the node's resource usage.
For security, only the https port that authenticates and authorizes requests is opened, so unauthorized access (e.g. from apiserver or heapster) is rejected.
1) Install the binaries
wget https://dl.k8s.io/v1.14.1/kubernetes-node-linux-amd64.tar.gz
tar zxvf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin/
cp kube-proxy kubelet kubectl /k8s/kubernetes/bin/
2) Copy the relevant certificates from master01 to node01 and node02
[root@master01 ssl]# cd /k8s/kubernetes/ssl/
[root@master01 ssl]# scp *.pem 10.8.13.84:/k8s/kubernetes/ssl/
root@10.8.13.84's password:
ca-key.pem 100% 1679 914.6KB/s 00:00
ca.pem 100% 1359 1.0MB/s 00:00
kube-proxy-key.pem 100% 1675 1.2MB/s 00:00
kube-proxy.pem 100% 1403 1.1MB/s 00:00
server-key.pem 100% 1679 809.1KB/s 00:00
server.pem 100% 1675 1.2MB/s 00:00
[root@master01 ssl]# scp /k8s/etcd/ssl/* 10.8.13.84:/k8s/etcd/ssl/
[root@master01 ssl]# scp /k8s/etcd/bin/* 10.8.13.84:/k8s/etcd/bin/
[root@master01 ssl]# scp *.pem 10.8.13.85:/k8s/kubernetes/ssl/
root@10.8.13.85's password:
ca-key.pem 100% 1679 914.6KB/s 00:00
ca.pem 100% 1359 1.0MB/s 00:00
kube-proxy-key.pem 100% 1675 1.2MB/s 00:00
kube-proxy.pem 100% 1403 1.1MB/s 00:00
server-key.pem 100% 1679 809.1KB/s 00:00
server.pem 100% 1675 1.2MB/s 00:00
[root@master01 ssl]# scp /k8s/etcd/ssl/* 10.8.13.85:/k8s/etcd/ssl/
[root@master01 ssl]# scp /k8s/etcd/bin/* 10.8.13.85:/k8s/etcd/bin/
3) Create the kubelet bootstrap kubeconfig files
This is done with a script, where:
KUBE_APISERVER = the VIP plus the haproxy port defined earlier (16443)
BOOTSTRAP_TOKEN = the token generated when deploying kube-apiserver
vim /k8s/kubernetes/cfg/environment.sh
#!/bin/bash
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=af93a4194e7bcf7f05dc0bab3a6e97cd
KUBE_APISERVER="https://10.8.13.80:16443"
# Set the cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/k8s/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# Set the context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=/k8s/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=/k8s/kubernetes/ssl/kube-proxy.pem \
--client-key=/k8s/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Run the script:
[root@node01 cfg]# cd /k8s/kubernetes/cfg/
[root@node01 cfg]# sh environment.sh
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@node01 cfg]# ls
bootstrap.kubeconfig environment.sh kube-proxy.kubeconfig
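To double-check what the script produced, view the embedded cluster settings; the server field should point at the VIP on port 16443:
kubectl config view --kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig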
4) Create the kubelet parameter configuration template file
Note: address is the node's own IP.
vim /k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.8.13.84
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
5) Create the kubelet options file
Note: --hostname-override is the node's own IP.
vim /k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.8.13.84 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
6) Create the kubelet systemd unit file
vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
7) Bind the kubelet-bootstrap user to the system cluster role (run on master01)
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
Note that this uses the default localhost:8080 connection, so it must be run on a master.
[root@master01 ssl]# kubectl create clusterrolebinding kubelet-bootstrap \
> --clusterrole=system:node-bootstrapper \
> --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
8) Copy node01's kubelet configuration and unit file to the same paths on node02
scp /k8s/kubernetes/cfg/* 10.8.13.85:/k8s/kubernetes/cfg/
scp /usr/lib/systemd/system/kubelet.service 10.8.13.85:/usr/lib/systemd/system/kubelet.service
9) Change the node IP in node02's kubelet.config and kubelet files
node02 kubelet.config (address is the node's own IP):
vim /k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.8.13.85
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
node02 kubelet options (--hostname-override is the node's own IP):
vim /k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.8.13.85 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
10) Start the service
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
[root@node01 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2019-05-10 20:31:30 CST; 3 days ago
Main PID: 8583 (kubelet)
Memory: 45.5M
CGroup: /system.slice/kubelet.service
└─8583 /k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=10.8.13.84 --kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig --config=/k8s/kubernetes/cfg/kubelet.config --cer...
11) Approve the kubelet CSR requests on the master (run on master01; approve both nodes)
CSRs can be approved manually or automatically. The automatic approach is recommended because, starting with v1.8, the certificates issued after CSR approval can be rotated automatically. The manual procedure is shown below.
List the CSRs:
[root@master01 ssl]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc 102s kubelet-bootstrap Pending
Approve the node:
[root@master01 ssl]# kubectl certificate approve node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc
certificatesigningrequest.certificates.k8s.io/node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc approved
Check the CSRs again:
[root@master01 ssl]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc 5m13s kubelet-bootstrap Approved,Issued
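When several nodes register at once, the Pending requests can also be approved in one shot; this is only a convenience sketch that filters on the Pending state:
kubectl get csr --no-headers | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve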
5.3 Deploy kube-proxy (run on node01)
kube-proxy runs on every node; it watches the apiserver for changes to services and endpoints and creates forwarding rules to load-balance service traffic.
1) Create the kube-proxy configuration file
Note: --hostname-override is the node's own IP.
vim /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.8.13.84 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
2) Create the kube-proxy systemd unit file
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
3) Copy node01's kube-proxy configuration and unit file to the same paths on node02
scp /k8s/kubernetes/cfg/kube-proxy 10.8.13.85:/k8s/kubernetes/cfg/kube-proxy
scp /usr/lib/systemd/system/kube-proxy.service 10.8.13.85:/usr/lib/systemd/system/kube-proxy.service
4) Change node02's kube-proxy configuration as follows (--hostname-override is the node's own IP):
vim /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.8.13.85 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
5) Start the service
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
[root@node01 ~]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2019-05-10 20:31:31 CST; 3 days ago
Main PID: 8669 (kube-proxy)
Memory: 9.9M
CGroup: /system.slice/kube-proxy.service
└─8669 /k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=10.8.13.84 --cluster-cidr=10.254.0.0/16 --kubeconfig...
May 14 09:07:50 hwzx-test-cmpnode01 kube-proxy[8669]: I0514 09:07:50.634641 8669 config.go:141] Calling handler.OnEndpointsUpdate
May 14 09:07:51 hwzx-test-cmpnode01 kube-proxy[8669]: I0514 09:07:51.365166 8669 config.go:141] Calling handler.OnEndpointsUpdate
May 14 09:07:52 hwzx-test-cmpnode01 kube-proxy[8669]: I0514 09:07:52.647317 8669 config.go:141] Calling handler.OnEndpointsUpdate
May 14 09:07:53 hwzx-test-cmpnode01 kube-proxy[8669]: I0514 09:07:53.375833 8669 config.go:141] Calling handler.OnEndpointsUpdate
May 14 09:07:54 hwzx-test-cmpnode01 kube-proxy[8669]: I0514 09:07:54.658691 8669 config.go:141] Calling handler.OnEndpointsUpdate
May 14 09:07:55 hwzx-test-cmpnode01 kube-proxy[8669]: I0514 09:07:55.387881 8669 config.go:141] Calling handler.OnEndpointsUpdate
May 14 09:07:56 hwzx-test-cmpnode01 kube-proxy[8669]: I0514 09:07:56.670562 8669 config.go:141] Calling handler.OnEndpointsUpdate
May 14 09:07:57 hwzx-test-cmpnode01 kube-proxy[8669]: I0514 09:07:57.398763 8669 config.go:141] Calling handler.OnEndpointsUpdate
May 14 09:07:58 hwzx-test-cmpnode01 kube-proxy[8669]: I0514 09:07:58.682049 8669 config.go:141] Calling handler.OnEndpointsUpdate
May 14 09:07:59 hwzx-test-cmpnode01 kube-proxy[8669]: I0514 09:07:59.411141 8669 config.go:141] Calling handler.OnEndpointsUpdate
6) Check the cluster status
[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.8.13.84 Ready <none> 3d13h v1.14.1
10.8.13.85 Ready <none> 3d13h v1.14.1
The node components are now installed.
6. Flanneld Network Deployment (node01 shown; repeat on node02)
Without the flanneld network, Pods on different Nodes cannot communicate (only Pods on the same Node can). To keep the earlier steps simple, flanneld is installed last.
The flannel service must start before docker. On startup it mainly does the following:
fetch the network configuration from etcd;
allocate a subnet and register it in etcd;
write the subnet information to /run/flannel/subnet.env.
6.1 Register the network range in etcd
[root@node01 ~]# /k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.8.13.81:2379,https://10.8.13.82:2379,https://10.8.13.83:2379,https://10.8.13.84:2379,https://10.8.13.85:2379" set /k8s/network/config '{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}
The current flanneld release (v0.11.0) does not support etcd v3, so the configuration key and network data are written with the etcd v2 API;
the Pod network ${CLUSTER_CIDR} written here must be a /16 range and must match kube-controller-manager's --cluster-cidr value;
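To confirm the key landed where flanneld will look for it (the -etcd-prefix configured below is /k8s/network), read it back with the v2 API:
/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.8.13.81:2379,https://10.8.13.82:2379,https://10.8.13.83:2379" get /k8s/network/config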
6.2 Install flannel
1) Unpack and install
tar -zxf flannel-v0.11.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/
2) Configure flanneld
vim /k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://10.8.13.81:2379,https://10.8.13.82:2379,https://10.8.13.83:2379,https://10.8.13.84:2379,https://10.8.13.85:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem -etcd-prefix=/k8s/network"
3) Create the flanneld systemd unit file
vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
Notes:
the mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into an environment file (here /run/flannel/subnet.env, set by -d); docker later uses the variables in that file to configure the docker0 bridge;
flanneld communicates with other nodes over the interface that carries the system default route; on nodes with several interfaces (e.g. internal and public) use the -iface flag to choose the interface;
flanneld must run as root;
4) Configure docker to start on the flannel subnet
Add EnvironmentFile=/run/flannel/subnet.env and change ExecStart to /usr/bin/dockerd $DOCKER_NETWORK_OPTIONS:
vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
5) Start the services
Note: stop docker (and kubelet, which depends on it) before starting flannel, so that the flannel settings take effect on the docker0 bridge.
systemctl daemon-reload
systemctl stop docker
systemctl start flanneld
systemctl enable flanneld
systemctl start docker
systemctl restart kubelet
systemctl restart kube-proxy
6) Verify the services
[root@node01 bin]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=10.254.88.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.254.88.1/24 --ip-masq=false --mtu=1450"
Check that docker0 and flannel.1 are on the same subnet:
[root@node01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:50:56:90:67:d1 brd ff:ff:ff:ff:ff:ff
inet 10.8.13.84/24 brd 10.8.13.255 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::802:2c0f:a197:38a7/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:5c:18:5b:93 brd ff:ff:ff:ff:ff:ff
inet 10.254.88.1/24 brd 10.254.88.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:5cff:fe18:5b93/64 scope link
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether 8e:f6:f8:87:47:ee brd ff:ff:ff:ff:ff:ff
inet 10.254.88.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::8cf6:f8ff:fe87:47ee/64 scope link
valid_lft forever preferred_lft forever
flannel installation is now complete.
[root@hwzx-test-cmpmaster01 ~]# kubectl get nodes,cs
NAME STATUS ROLES AGE VERSION
node/10.8.13.84 Ready <none> 3d13h v1.14.1
node/10.8.13.85 Ready <none> 3d13h v1.14.1
NAME STATUS MESSAGE ERROR
componentstatus/controller-manager Healthy ok
componentstatus/scheduler Healthy ok
componentstatus/etcd-1 Healthy {"health":"true"}
componentstatus/etcd-0 Healthy {"health":"true"}
componentstatus/etcd-2 Healthy {"health":"true"}
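As an optional end-to-end check (a sketch, assuming the nodes can pull the busybox image), start two test pods and verify that pods on different nodes can reach each other across the flannel overlay:
kubectl run net-test --image=busybox --replicas=2 --command -- sleep 3600   # deprecated run syntax, still works with a v1.14 kubectl
kubectl get pods -o wide          # note each pod's IP and the node it landed on
kubectl exec <pod-on-node01> -- ping -c 3 <ip-of-pod-on-node02>   # substitute the actual pod name and IP
kubectl delete deployment net-test   # clean up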