K8s Installation
Host deployment and component layout (per the architecture document):
master-1: ETCD_NAME="etcd01", IP 192.168.174.30, components: kube-apiserver, kube-controller-manager, kube-scheduler, etcd
master-2: ETCD_NAME="etcd02", IP 192.168.174.31, components: kube-apiserver, kube-controller-manager, kube-scheduler, etcd
node-1: ETCD_NAME="etcd03", IP 192.168.174.40, components: kubelet, kube-proxy, docker, flannel, etcd
node-2: IP 192.168.174.41, components: kubelet, kube-proxy, docker, flannel
Master component roles
kube-apiserver:
kube-apiserver exposes the Kubernetes API; it is the frontend of the Kubernetes control plane. It is designed to scale horizontally, i.e. to scale by deploying more instances.
kube-controller-manager:
kube-controller-manager runs the controllers, the background threads that handle routine tasks in the cluster. Logically each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.
etcd:
etcd is the backing store for Kubernetes. All cluster data is stored here; always have a backup plan for the etcd data of a Kubernetes cluster.
kube-scheduler:
kube-scheduler watches for newly created pods that have no node assigned and selects a node for them to run on.
Node component roles:
kubelet:
The kubelet is the primary node agent. It watches the Pods assigned to its node (via the apiserver or a local configuration file) and:
* mounts the volumes the Pod requires
* downloads the Pod's secrets
* runs the Pod's containers via Docker (or rkt)
* periodically probes container lifecycle state
* if necessary, reports the Pod's status back to the rest of the system by creating a mirror Pod
* reports the node's status back to the rest of the system
kube-proxy:
kube-proxy implements the Kubernetes Service abstraction by maintaining network rules on the host and performing connection forwarding.
flannel:
flannel is the network plugin; it currently supports UDP, VxLAN, AWS VPC, GCE routing and other data-forwarding backends.
How it works: a packet leaving a source container is forwarded from the host's docker0 virtual interface to the flannel virtual interface; the flannel interfaces on the nodes act as gateways for each other, so a pod on node1 can reach a pod on node2. If one of two nodes cannot reach the public network while the other can, the one without access can reach the public network through the node that has it.
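Once Flannel is running (section III below), this forwarding setup can be observed directly; an inspection sketch, assuming the vxlan backend configured later in this document (the device is then named flannel.1 rather than flannel0):
#ip -d link show flannel.1    #shows the VXLAN ID and the local VTEP address
#ip route | grep flannel    #each remote node's pod subnet is routed via flannel.1
#bridge fdb show dev flannel.1    #MAC-to-node forwarding entries for remote pods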
Master load balancer: 10.206.176.19 maps to 192.168.176.19; component: LVS
1. The files needed to generate self-signed certificates with cfssl have already been downloaded into the directory; to download them again, open the links below in a browser or wget them directly:
https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
#chmod +x cfssl*
#mv cfssl_linux-amd64 /usr/local/bin/cfssl
#mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
#mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
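A quick sanity check that the three tools are on PATH and executable (cfssl R1.2 should print its version):
#cfssl version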
2. Create the CA certificate configuration, then generate the CA certificate and key
Certificates are needed for both etcd and kubernetes.
etcd certificate directory: mkdir -p /etc/etcd/ssl
kubernetes certificate directory: mkdir -p /etc/kubernetes/ssl
temporary working directory for creating certificates: mkdir /root/ssl
The files have a required format, so first generate the default file, then create ca-config.json following the format of config.json, with the expiry set to 87600h.
Create the CA configuration file
#cd /root/ssl
#cfssl print-defaults config > config.json
#vim config.json
{
? ? "signing": {
? ? ? ? "default": {
? ? ? ? ? ? "expiry": "87600h"
? ? ? ? },
? ? ? ? "profiles": {
? ? ? ? ? ? "www": {
? ? ? ? ? ? ? ? "expiry": "87600h",
? ? ? ? ? ? ? ? "usages": [
? ? ? ? ? ? ? ? ? ? "signing",
? ? ? ? ? ? ? ? ? ? "key encipherment",
? ? ? ? ? ? ? ? ? ? "server auth",
? ? ? ? ? ? ? ? ? ? "client auth"
? ? ? ? ? ? ? ? ]
? ? ? ? ? ? }
? ? ? ? }
? ? }
}
#mv config.json ca-config.json
ca-config.json: multiple profiles can be defined, each with its own expiry, usage scenario and other parameters; a specific profile is selected later when signing certificates;
signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;
server auth: a client may use this CA to verify certificates presented by servers;
client auth: a server may use this CA to verify certificates presented by clients;
Create the CA certificate signing request (CSR)
#cfssl print-defaults csr > csr.json
#vim csr.json
{
? ? "CN": "etcd CA",
? ? "key": {
? ? ? ? "algo": "rsa",
? ? ? ? "size": 2048
? ? },
? ? "names": [
? ? ? ? {
? ? ? ? ? ? "C": "CN",
? ? ? ? ? ? "L": "BeiJing",
? ? ? ? ? ? "ST": "BeiJing"
? ? ? ? }
? ? ]
}
#mv csr.json ca-csr.json
"CN": Common Name; kube-apiserver extracts this field from the certificate as the requesting User Name; browsers use this field to check whether a site is legitimate;
"O": Organization; kube-apiserver extracts this field from the certificate as the requesting user's Group;
Generate the CA certificate and private key
#cfssl gencert -initca ca-csr.json | cfssljson -bare ca
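On success cfssljson writes the CA key pair and CSR next to the two JSON files; a quick check:
#ls ca*
Expect ca-key.pem, ca.pem and ca.csr in addition to ca-config.json and ca-csr.json.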
Create the etcd server certificate
Create the etcd server certificate signing request (the CN and hosts below are for etcd; the kubernetes certificates themselves are generated in section IV)
#vim server-csr.json
{
? ? "CN": "etcd",
? ? "hosts": [
? ? ? "192.168.174.30",
? ? ? "192.168.174.31",
? ? ? "192.168.174.40"
? ? ],
? ? "key": {
? ? ? ? "algo": "rsa",
? ? ? ? "size": 2048
? ? },
? ? "names": [
? ? ? ? {
? ? ? ? ? ? "C": "CN",
? ? ? ? ? ? "L": "BeiJing",
? ? ? ? ? ? "ST": "BeiJing"
? ? ? ? }
? ? ]
}
Generate the etcd server certificate and private key
#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2019/03/01 10:55:26 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
The warning above can be ignored.
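To confirm the issued certificate really carries the three etcd host IPs in its SANs, a quick inspection with the cfssl-certinfo tool installed earlier (openssl x509 -in server.pem -noout -text works as well):
#cfssl-certinfo -cert server.pem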
一涝登、安裝Etcd
以下配置需要在三臺服務(wù)器上配置,參考excel文檔中的部署架構(gòu)
這里對應(yīng)的主機分別是:
master-1主機對應(yīng)的是ETCE_NAME="etcd01" IP為192.168.174.30
master-2主機對應(yīng)的是ETCE_NAME="etcd02" IP為192.168.174.31
node-1主機對應(yīng)的是ETCE_NAME="etcd03" IP為192.168.174.40
除了配置文件需要三臺不同效诅,其余操作一致胀滚。
1. Binary package download location
https://github.com/etcd-io/etcd/releases/tag/v3.2.12
Download etcd-v3.2.12-linux-amd64.tar.gz
#mkdir /opt/etcd/{bin,cfg,ssl} -p
#tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
#mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
2乱投、創(chuàng)建etcd配置文件
#cd /opt/etcd/cfg/
#vim etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.174.30:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.174.30:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.174.30:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.174.30:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.174.30:2380,etcd02=https://192.168.174.31:2380,etcd03=https://192.168.174.40:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Configuration file annotated:
ETCD_NAME="etcd01"    #node name
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"    #data directory
ETCD_LISTEN_PEER_URLS="https://192.168.174.30:2380"    #cluster peer listen address; use this server's own IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.174.30:2379"    #client listen address; use this server's own IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.174.30:2380"    #peer advertise address; use this server's own IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.174.30:2379"    #client advertise address; use this server's own IP
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.174.30:2380,etcd02=https://192.168.174.31:2380,etcd03=https://192.168.174.40:2380"    #cluster member addresses
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"    #cluster token
ETCD_INITIAL_CLUSTER_STATE="new"    #state when joining: new for a new cluster, existing to join an existing cluster
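Since only ETCD_NAME and the host IP differ between the three machines, the file can also be generated per host; a minimal bash sketch, assuming it is run as root on each host with the two variables adjusted (etcd01/192.168.174.30, etcd02/192.168.174.31, etcd03/192.168.174.40):
#NODE_NAME=etcd01; NODE_IP=192.168.174.30
#cat > /opt/etcd/cfg/etcd <<EOF
#[Member]
ETCD_NAME="${NODE_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${NODE_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${NODE_IP}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${NODE_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${NODE_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.174.30:2380,etcd02=https://192.168.174.31:2380,etcd03=https://192.168.174.40:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF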
Manage etcd with systemd
#vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Copy the certificates to the locations referenced in the unit file
#cp ca*pem server*pem /opt/etcd/ssl/
Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service #prevent firewalld from starting at boot
setenforce 0
Start etcd
systemctl start etcd
systemctl enable etcd
Check etcd cluster health
#/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.174.31:2379,https://192.168.174.30:2379,https://192.168.174.40:2379" cluster-health
The following output indicates success:
member 21cdde3b45cde04f is healthy: got healthy result from https://192.168.174.40:2379
member ca2bc200e194cb1c is healthy: got healthy result from https://192.168.174.30:2379
member dd8d169df81310cc is healthy: got healthy result from https://192.168.174.31:2379
Viewing logs
/var/log/messages
or
journalctl -u etcd
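Besides cluster-health, the bundled v2 etcdctl can also list the members (run from /opt/etcd/ssl so the relative certificate paths resolve, as in the health check above):
#/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.174.30:2379" member list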
二施掏、在Node節(jié)點安裝Docker
#yum -y install yum-utils device-mapper-persistent-data lvm2
#yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
#yum install docker-ce -y
#curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io
#systemctl start docker
#systemctl enable docker
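The set_mirror.sh script above writes a registry mirror into the Docker daemon configuration; whether it took effect can be checked with (output layout varies by Docker version):
#docker info | grep -A1 'Registry Mirrors'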
三钮惠、部署Flannel網(wǎng)絡(luò)
Falnnel要用etcd存儲自身一個子網(wǎng)信息,所以要保證能成功連接Etcd七芭,寫入預(yù)定義字網(wǎng)段
在master-1上執(zhí)行:
#/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.174.30:2379,https://192.168.174.31:2379,https://192.168.174.40:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
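To verify the key was written, read it back (same etcdctl, absolute certificate paths so it runs from any directory):
#/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.174.30:2379" get /coreos.com/network/config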
Perform the following steps on every node:
1. Download the binary package
#wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
#tar -zxvf flannel-v0.10.0-linux-amd64.tar.gz
#mkdir -p /opt/kubernetes/bin
#mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/
Copy the certificates to the locations referenced in the configuration
#cp ca*pem server*pem /opt/etcd/ssl/    (all of these certificates are the ones generated on master-1)
2. Configure Flannel
#mkdir /opt/kubernetes/cfg
#vim /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.174.30:2379,https://192.168.174.31:2379,https://192.168.174.40:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
3狸驳、systemd管理Flannel
#vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
4毁菱、配置docker啟動指定子網(wǎng)段
#cp /usr/lib/systemd/system/docker.service? /usr/lib/systemd/system/docker.service.bak
#vim /usr/lib/systemd/system/docker.service
#cat /usr/lib/systemd/system/docker.service? |egrep -v '^#|^$'
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutStartSec=0
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
5. Restart Flannel and Docker
#systemctl daemon-reload
#systemctl start flanneld
#systemctl enable flanneld
#systemctl restart docker
6锌历、檢查是否生效
#ps -ef |grep docker
root? ? ? 3761? ? 1? 1 15:27 ?? ? ? ? 00:00:00 /usr/bin/dockerd --bip=172.17.44.1/24 --ip-masq=false --mtu=1450
#ip addr
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
? ? link/ether 02:42:3a:13:4d:7f brd ff:ff:ff:ff:ff:ff
? ? inet 172.17.44.1/24 brd 172.17.44.255 scope global docker0
? ? ? valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
? ? link/ether 02:0e:32:58:7f:75 brd ff:ff:ff:ff:ff:ff
? ? inet 172.17.44.0/32 scope global flannel.1
? ? ? valid_lft forever preferred_lft forever
? ? inet6 fe80::e:32ff:fe58:7f75/64 scope link
? ? ? valid_lft forever preferred_lft forever
docker0 and the flannel interface are on the same subnet, and the two nodes should be able to reach each other's pod network.
If pings fail and /var/log/messages contains iptables-like warnings, stop the firewall again with systemctl stop firewalld.service and then restart docker. A quick cross-node connectivity test is sketched below.
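A sketch of such a test (the container name pingtest and the address 172.17.44.2 are illustrative; substitute the address actually reported on node-1):
node-1 #docker run -d --name pingtest busybox sleep 3600
node-1 #docker inspect -f '{{.NetworkSettings.IPAddress}}' pingtest
node-2 #docker run --rm busybox ping -c 3 172.17.44.2
node-1 #docker rm -f pingtest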
==========================================================================================================================================
IV. Deploy Components on the Master Nodes
1. Generate certificates
Create the CA certificate
mkdir /root/kubernetes-ssl
cd /root/kubernetes-ssl
# cat ca-config.json
{
? ? "signing": {
? ? ? ? "default": {
? ? ? ? ? ? "expiry": "87600h"
? ? ? ? },
? ? ? ? "profiles": {
? ? ? ? ? ? "kubernetes": {
? ? ? ? ? ? ? ? "expiry": "87600h",
? ? ? ? ? ? ? ? "usages": [
? ? ? ? ? ? ? ? ? ? "signing",
? ? ? ? ? ? ? ? ? ? "key encipherment",
? ? ? ? ? ? ? ? ? ? "server auth",
? ? ? ? ? ? ? ? ? ? "client auth"
? ? ? ? ? ? ? ? ]
? ? ? ? ? ? }
? ? ? ? }
? ? }
}
# cat ca-csr.json
{
? ? "CN": "kubernetes",
? ? "key": {
? ? ? ? "algo": "rsa",
? ? ? ? "size": 2048
? ? },
? ? "names": [
? ? ? ? {
? ? ? ? ? ? "C": "CN",
? ? ? ? ? ? "L": "BeiJing",
? ? ? ? ? ? "ST": "BeiJing",
? ? ? ? ? ? "O": "k8s",
? ? ? ? ? ? "OU": "System"
? ? ? ? }
? ? ]
}
# cfssl gencert -initca ca-csr.json |cfssljson -bare ca
Generate the apiserver certificate
# cat server-csr.json
{
? ? "CN": "kubernetes",
? ? "hosts": [
? ? ? "10.0.0.1",
? ? ? "127.0.0.1",
? ? ? "192.168.176.19",
? ? ? "192.168.174.30",
? ? ? "192.168.174.31",
? ? ? "192.168.174.40",
? ? ? "kubernetes",
? ? ? "kubernetes.default",
? ? ? "kubernetes.default.svc",
? ? ? "kubernetes.default.svc.cluster",
? ? ? "kubernetes.default.svc.cluster.local"
? ? ],
? ? "key": {
? ? ? ? "algo": "rsa",
? ? ? ? "size": 2048
? ? },
? ? "names": [
? ? ? ? {
? ? ? ? ? ? "C": "CN",
? ? ? ? ? ? "L": "BeiJing",
? ? ? ? ? ? "ST": "BeiJing",
? ? ? ? ? ? "O": "k8s",
? ? ? ? ? ? "OU": "System"
? ? ? ? }
? ? ]
}
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
Generate the kube-proxy certificate
# cat kube-proxy-csr.json
{
? ? "CN": "system:kube-proxy",
? ? "hosts": [],
? ? "key": {
? ? ? ? "algo": "rsa",
? ? ? ? "size": 2048
? ? },
? ? "names": [
? ? ? ? {
? ? ? ? ? ? "C": "CN",
? ? ? ? ? ? "L": "BeiJing",
? ? ? ? ? ? "ST": "BeiJing",
? ? ? ? ? ? "O": "k8s",
? ? ? ? ? ? "OU": "System"
? ? ? ? }
? ? ]
}
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
The certificate files generated so far:
# ls *pem
ca-key.pem? ca.pem? kube-proxy-key.pem? kube-proxy.pem? server-key.pem? server.pem
2遮斥、部署apiserver組件
下載二進(jìn)制包:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md? (只下載kubernetes-server-linux-amd64.tar.gz)
# mkdir /opt/kubernetes/{bin,cfg,ssl} -p
# tar zxvf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin/
# cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin/
Create the token file
# cat /opt/kubernetes/cfg/token.csv
674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Column 1: a random string; you can generate your own (see the one-liner below)
Column 2: user name
Column 3: UID
Column 4: user group
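Any 32-character hex string works for column 1; a common way to generate one:
#head -c 16 /dev/urandom | od -An -t x | tr -d ' '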
Create the apiserver configuration file
vim /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.174.30:2379,https://192.168.174.31:2379,https://192.168.174.40:2379 \
--bind-address=192.168.174.30 \
--secure-port=6443 \
--advertise-address=192.168.174.30 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
Annotated:
cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \    #log to stderr
--v=4 \    #log verbosity level
--etcd-servers=https://192.168.174.30:2379,https://192.168.174.31:2379,https://192.168.174.40:2379 \    #etcd cluster addresses
--bind-address=192.168.174.30 \    #listen address
--secure-port=6443 \    #secure port
--advertise-address=192.168.174.30 \    #address advertised to the cluster
--allow-privileged=true \    #allow privileged containers
--service-cluster-ip-range=10.0.0.0/24 \    #Service virtual IP range
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \    #admission control plugins
--authorization-mode=RBAC,Node \    #authorization modes: enable RBAC and Node self-management
--enable-bootstrap-token-auth \    #enable TLS bootstrap token authentication
--token-auth-file=/opt/kubernetes/cfg/token.csv \    #token file used for bootstrap authentication
--service-node-port-range=30000-50000 \    #default NodePort range for Services
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
Manage the apiserver with systemd
# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
#cd /root/kubernetes-ssl/
#cp *.pem /opt/kubernetes/ssl/
Start:
#systemctl daemon-reload
#systemctl enable kube-apiserver
#systemctl restart kube-apiserver
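A quick check that the apiserver is up: it should be listening on the secure port 6443 and, with this 1.11 configuration, on the localhost insecure port 8080 that the scheduler and controller-manager below connect to:
#ss -lntp | grep -E '6443|8080'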
Deploy the scheduler component
Create the scheduler configuration file
# vim /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
Annotated:
--master connects to the local apiserver
--leader-elect enables automatic leader election when multiple instances of the component run (HA)
Manage the scheduler with systemd
# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start:
#systemctl daemon-reload
#systemctl enable kube-scheduler
#systemctl restart kube-scheduler
Deploy the controller-manager component
Create the controller-manager configuration file:
# vim /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
Manage the controller-manager with systemd
# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start:
#systemctl daemon-reload
#systemctl enable kube-controller-manager
#systemctl restart kube-controller-manager
Check cluster component status:
# /opt/kubernetes/bin/kubectl get cs
NAME? ? ? ? ? ? ? ? STATUS? ? MESSAGE? ? ? ? ? ? ? ERROR
controller-manager? Healthy? ok? ? ? ? ? ? ? ? ?
scheduler? ? ? ? ? ? Healthy? ok? ? ? ? ? ? ? ? ?
etcd-2? ? ? ? ? ? ? Healthy? {"health": "true"}?
etcd-1? ? ? ? ? ? ? Healthy? {"health": "true"}?
etcd-0? ? ? ? ? ? ? Healthy? {"health": "true"}
Output like the above indicates success.
Deploy all of the above on both master nodes.
V. Deploy Components on the Node Nodes
Once the master apiserver has TLS authentication enabled, a Node's kubelet must use a valid CA-signed certificate to communicate with the apiserver. Signing certificates by hand becomes tedious when there are many Nodes, hence the TLS Bootstrapping mechanism: the kubelet automatically requests a certificate from the apiserver as a low-privilege user, and the apiserver signs the kubelet's certificate dynamically.
On the Master node:
1. Bind the kubelet-bootstrap user to the system cluster role
/opt/kubernetes/bin/kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
2. Create the kubeconfig files
Change into the directory where the kubernetes certificates were generated
#cd /root/kubernetes-ssl
Define environment variables
#KUBE_APISERVER="https://192.168.176.19:6443"    (use the load-balancer address if one is configured; otherwise use a master node address)
#BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc
Create the bootstrap.kubeconfig file
Set cluster parameters (run directly in the terminal)
#/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
? ? --certificate-authority=./ca.pem \
? ? --embed-certs=true \
? ? --server=${KUBE_APISERVER} \
? ? --kubeconfig=bootstrap.kubeconfig
Set client authentication parameters (run directly in the terminal)
#/opt/kubernetes/bin/kubectl config set-credentials kubelet-bootstrap \
? ? --token=${BOOTSTRAP_TOKEN} \
? ? --kubeconfig=bootstrap.kubeconfig
Set context parameters (run directly in the terminal)
#/opt/kubernetes/bin/kubectl config set-context default \
? ? --cluster=kubernetes \
? ? --user=kubelet-bootstrap \
? ? --kubeconfig=bootstrap.kubeconfig
Set the default context (run directly in the terminal)
#/opt/kubernetes/bin/kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Create the kube-proxy.kubeconfig file (run directly in the terminal)
#/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
? ? --certificate-authority=./ca.pem \
? ? --embed-certs=true \
? ? --server=${KUBE_APISERVER} \
? ? --kubeconfig=kube-proxy.kubeconfig
#/opt/kubernetes/bin/kubectl config set-credentials kube-proxy \
? ? --client-certificate=./kube-proxy.pem \
? ? --client-key=./kube-proxy-key.pem \
? ? --embed-certs=true \
? ? --kubeconfig=kube-proxy.kubeconfig
#/opt/kubernetes/bin/kubectl config set-context default \
? ? --cluster=kubernetes \
? ? --user=kube-proxy \
? ? --kubeconfig=kube-proxy.kubeconfig
#/opt/kubernetes/bin/kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
#ls
bootstrap.kubeconfig  kube-proxy.kubeconfig
Copy these two files to /opt/kubernetes/cfg on each Node.
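For example, a sketch assuming root SSH access to the nodes (any other copy method works):
#scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.174.40:/opt/kubernetes/cfg/
#scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.174.41:/opt/kubernetes/cfg/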
3. Deploy the kubelet component
Copy kubelet and kube-proxy from the binary package downloaded earlier to /opt/kubernetes/bin on the Node nodes; the package was extracted under /root/kubernetes/server/bin (use find to locate them if needed).
Create the kubelet configuration file on the node (the IPs in it are the local host's IP)
#cat /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.174.41 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
Parameter notes
--hostname-override=192.168.174.41 \    #host name shown in the cluster
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \    #kubeconfig path; generated automatically
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \    #the bootstrap.kubeconfig file generated above
--cert-dir=/opt/kubernetes/ssl \    #where issued certificates are stored
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"    #infrastructure (pause) image that manages the Pod network
Write the kubelet.config configuration file
# cat /opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.174.41
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
? anonymous:
? ? enabled: true
? webhook:
? ? enabled: false
Manage the kubelet with systemd
# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
Start
#systemctl daemon-reload
#systemctl enable kubelet
#systemctl restart kubelet
Approve the Node's cluster-join request on the Master
Nodes must be allowed in manually
#/opt/kubernetes/bin/kubectl get csr
#/opt/kubernetes/bin/kubectl certificate approve XXXXXID
#/opt/kubernetes/bin/kubectl get node
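If several nodes are waiting, a sketch to approve every Pending request at once (assumes the default English CONDITION column in the kubectl output):
#/opt/kubernetes/bin/kubectl get csr | grep Pending | awk '{print $1}' | xargs /opt/kubernetes/bin/kubectl certificate approve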
4. Deploy the kube-proxy component
Create the kube-proxy configuration file:
# cat /opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.174.40 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
Manage kube-proxy with systemd
cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start:
#systemctl daemon-reload
#systemctl enable kube-proxy
#systemctl restart kube-proxy
If it reports an error after starting:
kube-proxy: W0305 10:54:29.666610  31207 server.go:605] Failed to retrieve node info: nodes "192.168.174.40" not found
Check:
whether the IP addresses in /opt/kubernetes/cfg/kubelet.config and /opt/kubernetes/cfg/kube-proxy are both the local host's address.
Check cluster status
# /opt/kubernetes/bin/kubectl get node
NAME? ? ? ? ? ? STATUS? ? ROLES? ? AGE? ? ? VERSION
192.168.174.40? Ready? ? <none>? ? 6h? ? ? ? v1.11.8
192.168.174.41? Ready? ? <none>? ? 1d? ? ? ? v1.11.8
# /opt/kubernetes/bin/kubectl get cs
NAME? ? ? ? ? ? ? ? STATUS? ? MESSAGE? ? ? ? ? ? ? ERROR
scheduler? ? ? ? ? ? Healthy? ok? ? ? ? ? ? ? ? ?
controller-manager? Healthy? ok? ? ? ? ? ? ? ? ?
etcd-1? ? ? ? ? ? ? Healthy? {"health": "true"}?
etcd-2? ? ? ? ? ? ? Healthy? {"health": "true"}?
etcd-0? ? ? ? ? ? ? Healthy? {"health": "true"}
5. Run a test example
Create an nginx web deployment to check that the cluster works
# /opt/kubernetes/bin/kubectl run nginx --image=nginx --replicas=3
# /opt/kubernetes/bin/kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
service/nginx exposed
View the Pod and Service:
# /opt/kubernetes/bin/kubectl get pods
NAME? ? ? ? ? ? ? ? ? ? READY? ? STATUS? ? RESTARTS? AGE
nginx-64f497f8fd-9ccgb? 1/1? ? ? Running? 0? ? ? ? ? 6h
nginx-64f497f8fd-bbw97? 1/1? ? ? Running? 0? ? ? ? ? 6h
nginx-64f497f8fd-pxkh8? 1/1? ? ? Running? 0? ? ? ? ? 6h
# /opt/kubernetes/bin/kubectl get svc
NAME? ? ? ? TYPE? ? ? ? CLUSTER-IP? EXTERNAL-IP? PORT(S)? ? ? ? AGE
kubernetes? ClusterIP? 10.0.0.1? ? <none>? ? ? ? 443/TCP? ? ? ? 4d
nginx? ? ? ? NodePort? ? 10.0.0.16? ? <none>? ? ? ? 88:39427/TCP? 57s
Note the NodePort in the last line, 39427.
Access: http://192.168.174.40:39427
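The same check from the command line (the NodePort, 39427 here, differs per cluster):
#curl -I http://192.168.174.40:39427
An HTTP/1.1 200 OK response with a Server: nginx header means the Service and the Pod network are working.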
6. Deploy the Dashboard (Web UI)
dashboard-deployment.yaml deploys the Pod that provides the web service
dashboard-rbac.yaml grants it access to the apiserver to fetch information
dashboard-service.yaml publishes the service for external access
# cat dashboard-deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
? name: kubernetes-dashboard
? namespace: kube-system
? labels:
? ? k8s-app: kubernetes-dashboard
? ? kubernetes.io/cluster-service: "true"
? ? addonmanager.kubernetes.io/mode: Reconcile
spec:
? selector:
? ? matchLabels:
? ? ? k8s-app: kubernetes-dashboard
? template:
? ? metadata:
? ? ? labels:
? ? ? ? k8s-app: kubernetes-dashboard
? ? ? annotations:
? ? ? ? scheduler.alpha.kubernetes.io/critical-pod: ''
? ? spec:
? ? ? serviceAccountName: kubernetes-dashboard
? ? ? containers:
? ? ? - name: kubernetes-dashboard
? ? ? ? image: registry.cn-hangzhou.aliyuncs.com/kube_containers/kubernetes-dashboard-amd64:v1.8.1
? ? ? ? resources:
? ? ? ? ? limits:
? ? ? ? ? ? cpu: 100m
? ? ? ? ? ? memory: 300Mi
? ? ? ? ? requests:
? ? ? ? ? ? cpu: 100m
? ? ? ? ? ? memory: 100Mi
? ? ? ? ports:
? ? ? ? - containerPort: 9090
? ? ? ? ? protocol: TCP
? ? ? ? livenessProbe:
? ? ? ? ? httpGet:
? ? ? ? ? ? path: /
? ? ? ? ? ? port: 9090
? ? ? ? ? initialDelaySeconds: 30
? ? ? ? ? timeoutSeconds: 30
? ? ? tolerations:
? ? ? - key: "CriticalAddonsOnly"
? ? ? ? operator: "Exists"
# cat dashboard-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
? labels:
? ? k8s-app: kubernetes-dashboard
? ? addonmanager.kubernetes.io/mode: Reconcile
? name: kubernetes-dashboard
? namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
? name: kubernetes-dashboard-minimal
? namespace: kube-system
? labels:
? ? k8s-app: kubernetes-dashboard
? ? addonmanager.kubernetes.io/mode: Reconcile
roleRef:
? kind: ClusterRole
? name: cluster-admin
? apiGroup: rbac.authorization.k8s.io
subjects:
? - kind: ServiceAccount
? ? name: kubernetes-dashboard
? ? namespace: kube-system
# cat dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
? name: kubernetes-dashboard
? namespace: kube-system
? labels:
? ? k8s-app: kubernetes-dashboard
? ? kubernetes.io/cluster-service: "true"
? ? addonmanager.kubernetes.io/mode: Reconcile
spec:
? type: NodePort
? selector:
? ? k8s-app: kubernetes-dashboard
? ports:
? - port: 80
? ? targetPort: 9090
Create:
/opt/kubernetes/bin/kubectl create -f dashboard-rbac.yaml
/opt/kubernetes/bin/kubectl create -f dashboard-deployment.yaml
/opt/kubernetes/bin/kubectl create -f dashboard-service.yaml
Wait a moment, then check resource status
# /opt/kubernetes/bin/kubectl get all -n kube-system
NAME? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? READY? ? STATUS? ? RESTARTS? AGE
pod/kubernetes-dashboard-d9545b947-jrmmc? 1/1? ? ? Running? 0? ? ? ? ? 16m
NAME? ? ? ? ? ? ? ? ? ? ? ? ? TYPE? ? ? CLUSTER-IP? EXTERNAL-IP? PORT(S)? ? ? ? AGE
service/kubernetes-dashboard? NodePort? 10.0.0.6? ? <none>? ? ? ? 80:41545/TCP? 16m
NAME? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? DESIRED? CURRENT? UP-TO-DATE? AVAILABLE? AGE
deployment.apps/kubernetes-dashboard? 1? ? ? ? 1? ? ? ? 1? ? ? ? ? ? 1? ? ? ? ? 16m
NAME? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? DESIRED? CURRENT? READY? ? AGE
replicaset.apps/kubernetes-dashboard-d9545b947? 1? ? ? ? 1? ? ? ? 1? ? ? ? 16m
# /opt/kubernetes/bin/kubectl get svc -n kube-system
NAME? ? ? ? ? ? ? ? ? TYPE? ? ? CLUSTER-IP? EXTERNAL-IP? PORT(S)? ? ? ? AGE
kubernetes-dashboard? NodePort? 10.0.0.6? ? <none>? ? ? ? 80:41545/TCP? 16m
Access: http://192.168.174.40:41545