Goal
Build a 1-master, 2-node Kubernetes cluster without certificate authentication.
Prepare 3 hosts:
- 9.1 is the k8s cluster master; 9.2 and 9.3 are nodes:
- 192.168.9.1
- 192.168.9.2
- 192.168.9.3
- Preparing the etcd cluster: detailed guide
- Installing and configuring Flannel: detailed guide
- Download k8s_v1.7.4: download link
Disable SELinux
- Check the status: /usr/sbin/sestatus -v
- Set it to disabled: vi /etc/selinux/config
- Reboot Linux
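Instead of editing the file by hand, the SELINUX line can be flipped with sed; a minimal sketch, run here against a temporary copy for safety:

```shell
# Flip SELINUX=enforcing to SELINUX=disabled. Demonstrated on a temp
# copy; on the real host the target file is /etc/selinux/config.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"    # prints: SELINUX=disabled
```

On the real host, `setenforce 0` additionally turns enforcement off immediately (Permissive mode); the config change takes effect after the reboot mentioned above.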
Overall layout
Install on the master:
- kube-apiserver
- kube-scheduler
- kube-controller-manager
Install on the nodes:
- kubelet
- kube-proxy
Directory layout
/app/k8s/bin holds all k8s executables; after creating it, add it to the system PATH
/app/k8s/conf holds the k8s configuration files
/app/k8s/kubelet_data holds the kubelet data files
/app/k8s/certs holds the certificate files; this guide does not set up certificate authentication, but the directory still needs to exist
Initialize the directories on all three servers
mkdir -p /app/k8s/{bin,conf,kubelet_data,certs}
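Note that there must be no spaces inside the braces; with spaces, bash skips brace expansion and creates oddly named literal directories. A quick check of the expansion in a throwaway directory:

```shell
# Brace expansion with an unspaced list creates all four directories
# in one command. Using a temp dir so this can be run anywhere.
root=$(mktemp -d)
mkdir -p "$root"/{bin,conf,kubelet_data,certs}
ls "$root"
```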
Installing the executables
- Extract the downloaded kubernetes-server-linux-amd64.tar.gz and move kube-apiserver, kube-controller-manager, kubectl, and kube-scheduler into /app/k8s/bin on the master node 192.168.9.1
- Extract the downloaded kubernetes-server-linux-amd64.tar.gz and move kubelet and kube-proxy into /app/k8s/bin on the node machines 192.168.9.2/3
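The server tarball unpacks its binaries under kubernetes/server/bin; a sketch of the copy steps, using this guide's directory layout:

```shell
tar -xzf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
# on the master 192.168.9.1:
cp kube-apiserver kube-controller-manager kube-scheduler kubectl /app/k8s/bin/
# on each node 192.168.9.2 / 192.168.9.3:
cp kubelet kube-proxy /app/k8s/bin/
chmod +x /app/k8s/bin/*
# add the bin directory to PATH, as planned in the directory layout
echo 'export PATH=$PATH:/app/k8s/bin' >> /etc/profile
```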
Shared k8s configuration:
vi /app/k8s/conf/config
Create the /app/k8s/conf and /app/k8s/certs directories and the /app/k8s/conf/config file on all three machines
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.9.1:8080"
kube-apiserver configuration
vi /app/k8s/conf/apiserver
Configured only on the master node 192.168.9.1
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
# KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_API_ADDRESS="--insecure-bind-address=192.168.9.1"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# The dir of cert files
KUBE_CERT_DIR="--cert-dir=/app/k8s/certs"
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.9.1:2379,http://192.168.9.2:2379,http://192.168.9.3:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
systemd unit file for kube-apiserver
vi /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
EnvironmentFile=-/app/k8s/conf/config
EnvironmentFile=-/app/k8s/conf/apiserver
ExecStart=/app/k8s/bin/kube-apiserver \
$KUBE_CERT_DIR \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start kube-apiserver
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
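Beyond systemctl status, the insecure port can be probed directly; a sketch, assuming the apiserver is reachable from this host:

```shell
curl http://192.168.9.1:8080/healthz    # a healthy apiserver answers: ok
curl http://192.168.9.1:8080/version    # returns the build version as JSON
```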
Configure controller-manager
vi /app/k8s/conf/controller-manager
Configured only on the master node 192.168.9.1
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""
systemd unit file for kube-controller-manager
vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/app/k8s/conf/config
EnvironmentFile=-/app/k8s/conf/controller-manager
ExecStart=/app/k8s/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start kube-controller-manager
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
Configure the scheduler
vi /app/k8s/conf/scheduler
Configured only on the master node
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS=""
systemd unit file for kube-scheduler:
vi /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/app/k8s/conf/config
EnvironmentFile=-/app/k8s/conf/scheduler
ExecStart=/app/k8s/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start kube-scheduler
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
Verify that the master components are healthy
kubectl -s 192.168.9.1:8080 get componentstatuses
Node configuration
Configure kubeconfig
vi /app/k8s/conf/kubeconfig
Every node needs a kubeconfig. It can be generated with kubectl on the master and copied to each node; we will not go into that here, so just create the YAML file with vi.
apiVersion: v1
clusters:
- cluster:
    server: http://192.168.9.1:8080
  name: default
- cluster:
    server: http://192.168.9.1:8080
  name: kubernetes
contexts:
- context:
    cluster: default
    user: ""
  name: default
- context:
    cluster: kubernetes
    user: ""
  name: kubernetes
current-context: default
kind: Config
preferences: {}
users: []
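The same file can also be produced on the master with kubectl and pushed out instead of typed by hand. A sketch that generates only the default cluster/context (the one current-context points at), assuming password-less scp to the nodes:

```shell
kubectl config set-cluster default --server=http://192.168.9.1:8080 \
  --kubeconfig=/app/k8s/conf/kubeconfig
kubectl config set-context default --cluster=default \
  --kubeconfig=/app/k8s/conf/kubeconfig
kubectl config use-context default --kubeconfig=/app/k8s/conf/kubeconfig
# copy to the nodes
scp /app/k8s/conf/kubeconfig 192.168.9.2:/app/k8s/conf/
scp /app/k8s/conf/kubeconfig 192.168.9.3:/app/k8s/conf/
```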
Configure kubelet
vi /app/k8s/conf/kubelet
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.9.2"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
# Set hostname-override according to each node's actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-9.2"
# pod infrastructure container
# Set pod-infra-container-image to the actual image address in your private registry
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS="--cert-dir=/app/k8s/certs --kubeconfig=/app/k8s/conf/kubeconfig --require-kubeconfig=true --root-dir=/app/k8s/kubelet_data --container-runtime-endpoint=unix:///app/k8s/kubelet_data/dockershim.sock"
systemd unit file for kubelet:
vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=-/app/k8s/conf/config
EnvironmentFile=-/app/k8s/conf/kubelet
ExecStart=/app/k8s/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start kubelet
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
Configure kube-proxy
vi /app/k8s/conf/proxy
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS=""
systemd unit file for kube-proxy:
vi /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
EnvironmentFile=-/app/k8s/conf/config
EnvironmentFile=-/app/k8s/conf/proxy
ExecStart=/app/k8s/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
Check node status from the master node
kubectl -s 192.168.9.1:8080 get nodes
If every node's status is Ready, the k8s cluster is complete.