Operating system
# Server 1
[root@VM_0_12_centos ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
# Server 2
[root@VM_0_3_centos ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
Server 1 acts as the Master; its IP is 193.112.177.239
Server 2 acts as Node1; its IP is 123.207.26.143
Preparations before deployment
Installing docker-ce
# step 1: Install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: Add the repository information
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: Refresh the cache and install Docker-CE
sudo yum makecache fast
sudo yum -y install docker-ce
# Step 4: Start the Docker service
sudo service docker start
# Step 5: Check the installed docker-ce version
docker version
# Configure the Alibaba Cloud registry mirror
sudo mkdir -p /etc/docker
# Log in at https://cr.console.aliyun.com to obtain your own mirror address and substitute it below
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://xxxx.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
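To confirm the mirror configuration took effect, `docker info` can be consulted; the address shown should be whatever was written into daemon.json:

```shell
# The configured mirrors are listed under "Registry Mirrors" in docker info
docker info | grep -A 1 "Registry Mirrors"
```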
Installing etcd
etcd is a distributed, consistent key-value store for shared configuration and service discovery. It provides CRUD operations, security and authentication, clustering, leader election, transactions, distributed locks, a Watch mechanism, and more, and implements the Raft protocol, making it quite powerful.
We have two cloud hosts.
# step 1: Add both hosts' information to the hosts file on each host; edit /etc/hosts and append the following
123.207.26.143 node1
193.112.177.239 etcd2
# step 2: Install etcd on each host
yum install etcd -y
# step 3: Configure the cluster information on each host; edit /etc/etcd/etcd.conf
#node1
# [member]
# Node name
ETCD_NAME=node1
# Data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# Address to listen on for traffic from other etcd members
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
# Address to listen on for client traffic
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#[cluster]
# Peer URLs advertised to the other etcd members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://node1:2380"
# Addresses of the initial cluster members
ETCD_INITIAL_CLUSTER="node1=http://node1:2380,etcd2=http://etcd2:2380"
# Initial cluster state; new means creating a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# Initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="mritd-etcd-cluster"
# Client URLs advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://node1:2379,http://node1:4001"
#etcd2
# [member]
ETCD_NAME=etcd2
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://etcd2:2380"
ETCD_INITIAL_CLUSTER="node1=http://node1:2380,etcd2=http://etcd2:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="mritd-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd2:2379,http://etcd2:4001"
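The configuration above only edits the files; etcd still has to be started on both hosts and the cluster verified. A possible sequence, using the v2-era etcdctl that ships with the CentOS etcd package:

```shell
# On each host: enable and start the etcd service
systemctl enable etcd
systemctl start etcd

# On either host: check membership and overall cluster health
etcdctl member list
etcdctl cluster-health
```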
Installing flannel
# Install flannel on each host as the component for cross-host docker communication
curl -L https://github.com/coreos/flannel/releases/download/v0.7.0/flannel-v0.7.0-linux-amd64.tar.gz -o flannel.tar.gz
mkdir -p /opt/flannel
tar xzf flannel.tar.gz -C /opt/flannel
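The tarball only unpacks the flanneld binary. Before containers can talk across hosts, an overlay network range has to be written into etcd and flanneld started against the cluster. The sketch below uses flannel's default /coreos.com/network etcd key; the 172.17.0.0/16 range is an illustrative assumption:

```shell
# Write the overlay network config into etcd (run once, on either host)
etcdctl set /coreos.com/network/config '{"Network": "172.17.0.0/16"}'

# On each host: start flanneld against the etcd cluster
/opt/flannel/flanneld --etcd-endpoints=http://node1:2379,http://etcd2:2379 &

# flanneld records the subnet it leased here; docker's bridge must be
# restarted with the bip/mtu values from this file to use the overlay
cat /run/flannel/subnet.env
```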
Installing k8s
Installing the k8s server components on the Master
The API server is the registration center, traffic hub, and security gateway of the whole k8s cluster.
#step 1: Download k8s-server
wget https://dl.k8s.io/v1.10.0/kubernetes-server-linux-amd64.tar.gz
#step 2: Unpack it
tar xvf kubernetes-server-linux-amd64.tar.gz
#step 3: Copy kube-apiserver to /usr/local/bin/
cp ./kubernetes/server/bin/kube-apiserver /usr/local/bin/
#step 4: Create kube-apiserver.service under /usr/lib/systemd/system/ with the following content
[Unit]
Description=Kube API Server
After=etcd.service
Wants=etcd.service
[Service]
Type=notify
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
#step 5: Create the apiserver config file under /etc/kubernetes/ with the following content
KUBE_API_ARGS="--etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=170.170.0.0/16 --service-node-port-range=1-65535 --admission-control=NamespaceLifecycle,LimitRanger,ResourceQuota --logtostderr=false --log-dir=/home/chen/log/kubernetes --v=2"
#step 6: Run kube-apiserver
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver.service
sudo systemctl start kube-apiserver.service
#step 7: Check that the installation succeeded
curl http://localhost:8080/api/
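If the apiserver is up, the curl above returns a small JSON document listing the supported API versions; the /healthz endpoint offers an even simpler smoke test:

```shell
# A healthy apiserver answers "ok" on its health endpoint
curl http://localhost:8080/healthz
```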
Installing kube-controller-manager on the Master
The Kube Controller Manager is the management and control center inside the cluster. It manages Nodes, Pod replicas, service endpoints (Endpoint), namespaces (Namespace), service accounts (ServiceAccount), and resource quotas (ResourceQuota). When a Node unexpectedly goes down, the Kube Controller Manager detects it promptly and runs automated repair, keeping the cluster in its desired state.
#step 1: Copy kube-controller-manager to /usr/local/bin/
cp ./kubernetes/server/bin/kube-controller-manager /usr/local/bin/
#step 2: Create kube-controller-manager.service under /usr/lib/systemd/system/ with the following content
[Unit]
Description=Kube Controller Manager
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
#step 3: Create the controller-manager config file under /etc/kubernetes/ with the following content
KUBE_CONTROLLER_MANAGER_ARGS="--master=http://127.0.0.1:8080 --logtostderr=false --log-dir=/home/chen/log/kubernetes --v=2"
#step 4: Start kube-controller-manager
sudo systemctl daemon-reload
sudo systemctl enable kube-controller-manager.service
sudo systemctl start kube-controller-manager.service
Installing kube-scheduler on the Master
The Kube Scheduler is responsible for scheduling Pods onto specific Nodes. Through the API Server it watches Pods, picks up the ones awaiting scheduling, scores and ranks the Nodes with a series of predicate and priority policies, and binds each Pod to the highest-scoring Node.
#step 1: Copy kube-scheduler to /usr/local/bin/
cp ./kubernetes/server/bin/kube-scheduler /usr/local/bin/
#step 2: Create kube-scheduler.service under /usr/lib/systemd/system/ with the following content
[Unit]
Description=Kube Scheduler
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
#step 3: Create the scheduler config file under /etc/kubernetes/ with the following content
KUBE_SCHEDULER_ARGS="--master=http://127.0.0.1:8080 --logtostderr=false --log-dir=/home/chen/log/kubernetes --v=2"
#step 4: Start kube-scheduler
sudo systemctl daemon-reload
sudo systemctl enable kube-scheduler.service
sudo systemctl restart kube-scheduler.service
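With the apiserver, controller-manager, and scheduler all running, their health can be checked in one shot with kubectl, which is also shipped in ./kubernetes/server/bin/ of the server tarball:

```shell
# Copy the CLI next to the other binaries (one-time)
cp ./kubernetes/server/bin/kubectl /usr/local/bin/

# All three components should report Healthy
kubectl -s http://127.0.0.1:8080 get componentstatuses
```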
Installing kubernetes-node on the Nodes
The following installation steps must be performed on every node.
In a k8s cluster, every Node runs a kubelet process that handles the tasks the Master dispatches to that node and manages the Pods and the containers inside them. The kubelet registers the node's information with the API Server and periodically reports the node's resource usage to the Master.
#step 1: Download k8s-node
wget https://dl.k8s.io/v1.10.0/kubernetes-node-linux-amd64.tar.gz
tar xvf kubernetes-node-linux-amd64.tar.gz
#step 2: Copy kubelet to /usr/local/bin/
cp ./kubernetes/node/bin/kubelet /usr/local/bin/
#step 3: Create kubelet.service under /usr/lib/systemd/system/ with the following content
[Unit]
Description=Kube Kubelet Server
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/local/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.yaml --fail-swap-on=false --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
#step 4: Create the config files under /etc/kubernetes/, including kubelet and kubelet.yaml. kubelet.yaml reads as follows
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: http://193.112.177.239:8080
users:
- name: kubelet
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
#step 5: Start kubelet
sudo systemctl daemon-reload
sudo systemctl enable kubelet.service
sudo systemctl start kubelet.service
#step 6: Check that it started successfully
sudo systemctl status kubelet.service
Installing and configuring kube-proxy on the Nodes
kube-proxy manages the access entry point for Services, covering both Pod-to-Service access inside the cluster and access to Services from outside the cluster. For the concepts of Service and Pod, you can look them up online.
#step 2: Copy kube-proxy to /usr/local/bin/
cp ./kubernetes/node/bin/kube-proxy /usr/local/bin/
#step 3: Create kube-proxy.service under /usr/lib/systemd/system/ with the following content
[Unit]
Description=Kube Proxy Server
After=network.target
[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
#step 4: Create the proxy config file under /etc/kubernetes/ with the following content
KUBE_PROXY_ARGS="--master=http://193.112.177.239:8080 --logtostderr=false --log-dir=/home/chen/log/kubernetes --v=2"
#step 5: Start kube-proxy
sudo systemctl daemon-reload
sudo systemctl enable kube-proxy.service
sudo systemctl start kube-proxy.service
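Once kubelet and kube-proxy are both running on the node, its registration can be confirmed from the Master, using the kubectl binary found in ./kubernetes/server/bin/ of the server tarball:

```shell
# The node should appear with STATUS Ready after a short while
kubectl -s http://127.0.0.1:8080 get nodes
```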
(Note: the hostname must not contain an underscore (_), otherwise the node cannot be created. For how to change the hostname, see https://www.cnblogs.com/zhaojiedi1992/p/zhaojiedi_linux_043_hostname.html)
If a node cannot connect to the apiserver, run the following command, where apiserverIP is the apiserver's IP address:
kubectl -s http://apiserverIP:8080 version
References
https://www.cnblogs.com/zhenyuyaodidiao/p/6237019.html
https://blog.csdn.net/weixin_39686421/article/details/80333015