1. Docker image building
1.1 Dockerfile instructions
FROM: base image
MAINTAINER: author information (deprecated in favor of LABEL)
LABEL: image metadata
RUN: command(s) executed while building the image; join multiple commands with &&
COPY <src> <dest>: copies files only; archives are not extracted
ADD <src> <dest>: copies files and extracts local tar archives; files downloaded from a URL are not extracted
EXPOSE: service port(s) of the application
CMD ["command or script to run when the container starts"], e.g. CMD ["/apps/nginx/sbin/nginx","-g","daemon off;"]
WORKDIR: set the working directory
USER: the user the container runs commands as
VOLUME: create a mount point used to mount external storage
ENTRYPOINT: similar to CMD, it specifies the service or command the container runs; it can be combined with CMD, but once ENTRYPOINT is set the meaning of CMD changes: CMD is no longer executed as a command on its own, its contents are passed as arguments to ENTRYPOINT, and ENTRYPOINT is what actually runs
ENTRYPOINT? ["/apps/nginx/sbin/nginx"]
CMD ["-g","daemon off;"]
The two lines above are equivalent to running /apps/nginx/sbin/nginx -g 'daemon off;'
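Because ENTRYPOINT is fixed and CMD only supplies default arguments, those arguments can be replaced on the docker run command line. A minimal sketch (the image name mynginx:v1 is hypothetical):
docker run --rm mynginx:v1 -t
# runs /apps/nginx/sbin/nginx -t instead of the default -g 'daemon off;'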
二督函、nginx鏡像構(gòu)建
2.1嘀粱、Dockerfile文件
cat??Dockerfile
FROM centos:7.9.2009
maintainer "dongxikui"
RUN yum -y install epel-release && yum install -y vim wget tree lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop &&useradd nginx -u 2001
ADD nginx-1.18.0.tar.gz /usr/local/src/
RUN cd /usr/local/src/nginx-1.18.0 && ./configure --prefix=/apps/nginx --with-http_sub_module && make && make install
# nginx.conf has been modified to add daemon off;
ADD nginx.conf /apps/nginx/conf/nginx.conf
ADD code.tar.gz /data/nginx/html
ADD run_nginx.sh /apps/nginx/sbin/
RUN chmod a+x /apps/nginx/sbin/run_nginx.sh
EXPOSE 80 443
CMD ["/apps/nginx/sbin/run_nginx.sh"]
2.2辰狡、nginx啟動(dòng)腳本
cat? run_nginx.sh
#!/bin/bash
echo "223.5.5.5" >> /etc/hosts
/apps/nginx/sbin/nginx
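The script ends by starting nginx in the foreground, which is what keeps the container running; this only works because nginx.conf was modified as noted in the Dockerfile. The change is assumed to be a single top-level directive in nginx.conf:
daemon off;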
2.3锋叨、鏡像構(gòu)建
docker build -t reg.fchiaas.local/nginx-18:v2 ./
Sending build context to Docker daemon  7.638MB
Step 1/11 : FROM centos:7.9.2009
---> eeb6ee3f44bd
Step 2/11 : maintainer "dongxikui"
---> Using cache
---> d7e456fd95cf
Step 3/11 : RUN yum -y install epel-release && yum install -y vim wget tree lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop &&useradd nginx -u 2001
---> Using cache
---> 2834a890f1dd
Step 4/11 : ADD nginx-1.18.0.tar.gz /usr/local/src/
---> Using cache
---> a6d1ac56319b
Step 5/11 : RUN cd /usr/local/src/nginx-1.18.0 && ./configure --prefix=/apps/nginx --with-http_sub_module && make && make install
---> Using cache
---> dd0e7464a1db
Step 6/11 : ADD nginx.conf /apps/nginx/conf/nginx.conf
---> Using cache
---> cc8e9d500708
Step 7/11 : ADD code.tar.gz /data/nginx/html
---> Using cache
---> b1df48e10565
Step 8/11 : ADD run_nginx.sh /apps/nginx/sbin/
---> Using cache
---> 3e4be150267a
Step 9/11 : RUN chmod a+x /apps/nginx/sbin/run_nginx.sh
---> Using cache
---> 6ee59cac7ed5
Step 10/11 : EXPOSE 80 443
---> Using cache
---> 554949fa23a3
Step 11/11 : CMD ["/apps/nginx/sbin/run_nginx.sh"]
---> Using cache
---> 6021b711445b
Successfully built 6021b711445b
Successfully tagged reg.fchiaas.local/nginx-18:v2
2.4 Running a container for a quick test
docker run --rm -p 8081:80 reg.fchiaas.local/nginx-18:v2
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c7c088919e3f  reg.fchiaas.local/nginx-18:v2  "/apps/nginx/sbin/ru…"  21 seconds ago  Up 20 seconds  443/tcp, 0.0.0.0:8081->80/tcp, :::8081->80/tcp  naughty_tereshkova
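A quick check against the published port from the Docker host (assuming 127.0.0.1:8081 is reachable; output not captured here):
curl -I http://127.0.0.1:8081/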
三宛篇、容器的cpu和內(nèi)存的資源限制
--cpus 核數(shù):限制容器使用的CPU核數(shù)
-m xxM/G:限制使用的內(nèi)襯的大小
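A minimal sketch combining both flags and reading the applied limits back from the container metadata (the container name limit-test is just an example):
docker run -d --name limit-test --cpus 1.2 -m 256m lorel/docker-stress-ng --cpu 2
docker inspect -f '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' limit-test
# expected: 1200000000 268435456 (CPU is stored in nano-CPUs, memory in bytes)
docker rm -f limit-test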
CPU test
Before the test:
top - 21:03:48 up 58 min, 3 users, load average: 1.07, 1.05, 0.48
Tasks: 214 total,   1 running, 213 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   3907.9 total,   2889.6 free,    324.1 used,    694.3 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   3349.5 avail Mem
During the test:
root@cncf-docker:~# docker run -it --cpus 1.2 lorel/docker-stress-ng --cpu 2
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 2 cpu
top - 21:10:04 up 1:04, 3 users, load average: 0.08, 0.33, 0.32
Tasks: 222 total,   4 running, 218 sleeping,   0 stopped,   0 zombie
%Cpu(s): 60.0 us,  0.2 sy,  0.0 ni, 39.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   3907.9 total,   2833.6 free,    357.0 used,    717.4 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   3313.6 avail Mem
Memory test
Test (first without a limit, then with -m 20m):
root@cncf-docker:~# docker run -it lorel/docker-stress-ng --vm 2
root@cncf-docker:~# docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
5085d3b6638d  agitated_rosalind  198.52%  1.006GiB / 3.8GiB  26.47%   946B / 0B  0B / 0B   9
root@cncf-docker:~# docker run -it -m 20m lorel/docker-stress-ng --vm 4
root@cncf-docker:~# docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
c7f76bb3e9b9  bold_chatelet  146.01%  20MiB / 20MiB  100.00%  876B / 0B  0B / 0B   9
四娃磺、kuberntes各個(gè)主要組件
4.1 kube-apiserver
The Kubernetes API Server exposes HTTP REST interfaces for creating, deleting, updating, querying and watching all kinds of Kubernetes resource objects (Pod, RC, Service, etc.); it is the data bus and data hub of the whole system.
Functions of the Kubernetes API Server:
Provides the cluster-management REST API (including authentication/authorization, data validation and cluster state changes);
Acts as the hub for data exchange and communication between the other components (other components query or modify data through the API Server, and only the API Server talks to etcd directly);
Is the entry point for resource quota control;
Provides a complete set of cluster security mechanisms.
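As a quick illustration of the REST interface, kubectl can issue raw requests against the API Server (run anywhere with a working kubeconfig; the paths below are standard API paths, output not shown):
kubectl get --raw /healthz
kubectl get --raw /api/v1/namespaces/default/pods | head -c 200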
4.2 kube-controller-manager
The Controller Manager is the management and control center inside the cluster. It is responsible for managing Nodes, Pod replicas, service endpoints (Endpoints), Namespaces, ServiceAccounts and ResourceQuotas. When a Node goes down unexpectedly, the Controller Manager detects this in time and runs the automated repair flow, keeping the cluster in its desired working state.
4.3 kube-scheduler
The Kubernetes scheduler is a control-plane process that assigns Pods to nodes. Based on constraints and available resources, the scheduler determines, for every Pod in the scheduling queue, the nodes on which that Pod can validly be placed. It then ranks the valid nodes and binds the Pod to a suitable one. Multiple different schedulers may be used within the same cluster; kube-scheduler is the reference implementation. See the scheduling documentation for more information about scheduling and the kube-scheduler component.
4.4 kubelet
The kubelet is the primary "node agent" that runs on every Node. It can register the node with the apiserver using one of the following: the hostname, a flag that overrides the hostname, or logic specific to a cloud provider.
The kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object that describes a Pod. The kubelet takes a set of PodSpecs provided through various mechanisms (primarily through the apiserver) and ensures that the containers described in those PodSpecs are running and healthy. The kubelet does not manage containers that were not created by Kubernetes.
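For reference, a PodSpec as consumed by the kubelet is just an ordinary Pod manifest. A minimal sketch (names and image are arbitrary); such a file can be submitted with kubectl apply -f or dropped into the kubelet's static Pod directory:
apiVersion: v1
kind: Pod
metadata:
  name: podspec-demo
spec:
  containers:
  - name: web
    image: nginx:1.18.0
    ports:
    - containerPort: 80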
4.5 kube-proxy
The Kubernetes network proxy runs on every node. It reflects the Services defined in the Kubernetes API on each node and can perform simple TCP, UDP and SCTP stream forwarding, or round-robin TCP, UDP and SCTP forwarding across a set of backends. Service cluster IPs and ports are currently found through Docker-links-compatible environment variables that specify the ports opened by the service proxy. An optional addon provides cluster DNS for these cluster IPs. Services must be created through the apiserver API for the proxy to be configured.
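The Docker-links-compatible environment variables mentioned above can be inspected from inside any Pod created after the Service; a sketch for a Service hypothetically named nginx-svc (actual values depend on the cluster):
kubectl exec <pod-name> -- env | grep SERVICE
# e.g.
# NGINX_SVC_SERVICE_HOST=10.100.x.x
# NGINX_SVC_SERVICE_PORT=80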
4.6 etcd
etcd is a consistent and highly available key-value store used as the backing store for all Kubernetes cluster data.
4.7 Network components
CNI plugins: conform to the Container Network Interface (CNI) specification and are designed with interoperability in mind.
Kubernetes follows the v0.4.0 release of the CNI specification.
Kubenet plugin: implements a basic cbr0 using the bridge and host-local CNI plugins.
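For reference, a bridge + host-local CNI network configuration of the kind mentioned above looks roughly like this (a sketch only; the name and subnet are arbitrary):
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cbr0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.200.0.0/16"
  }
}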
五倍阐、kubernetes集群二進(jìn)制安裝
5.1概疆、IP地址信息
10.0.0.21 master1   master1.fchiaas.local
10.0.0.22 master2   master2.fchiaas.local
10.0.0.23 master3   master3.fchiaas.local
10.0.0.24 node1     node1.fchiaas.local
10.0.0.25 node2     node2.fchiaas.local
10.0.0.26 node3     node3.fchiaas.local
10.0.0.27 haproxy1  haproxy1.fchiaas.local
10.0.0.28 haproxy2  haproxy2.fchiaas.local
10.0.0.29 etcd01    etcd01.fchiaas.local
10.0.0.30 etcd02    etcd02.fchiaas.local
10.0.0.31 etcd03    etcd03.fchiaas.local
10.0.0.20 myk8s-api.fchiaas.local  # VIP
10.0.0.240 reg.fchiaas.local
5.2 Software download
Installer project: https://github.com/easzlab/kubeasz/
wget https://github.com/easzlab/kubeasz/releases/download/3.1.1/ezdown
chmod +x ezdown
Change the Docker version in ezdown:
DOCKER_VER=19.03.15
Download all components and install Docker and the other base software:
./ezdown -D
cd /etc/kubeasz
root@master1:/etc/kubeasz# ./ezctl new k8-cluster1
2022-01-03 01:03:38 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8-cluster1
2022-01-03 01:03:38 DEBUG set version of common plugins
2022-01-03 01:03:38 DEBUG cluster k8-cluster1: files successfully created.
2022-01-03 01:03:38 INFO next steps 1: to config '/etc/kubeasz/clusters/k8-cluster1/hosts'
2022-01-03 01:03:38 INFO next steps 2: to config '/etc/kubeasz/clusters/k8-cluster1/config.yml'
root@master1:/etc/kubeasz# cd /etc/kubeasz/clusters/k8-cluster1
Edit the hosts file:
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
10.0.0.29
10.0.0.30
10.0.0.31
# master node(s)
[kube_master]
10.0.0.21
10.0.0.22
# work node(s)
[kube_node]
10.0.0.24
10.0.0.25
# [optional] loadbalance for accessing k8s from outside
[ex_lb]
10.0.0.28 LB_ROLE=backup EX_APISERVER_VIP=10.0.0.20 EX_APISERVER_PORT=6443
10.0.0.27 LB_ROLE=master EX_APISERVER_VIP=10.0.0.20 EX_APISERVER_PORT=6443
# [optional] ntp server for the cluster
# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"
# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.100.0.0/16"
# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.200.0.0/16"
# NodePort Range
NODE_PORT_RANGE="30000-40000"
# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="fchiaas.local"
# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/usr/bin"
Edit config.yml:
# NTP servers (important: time must be synchronized across all machines in the cluster)
ntp_servers:
? - "ntp1.aliyun.com"
#? - "time1.cloud.tencent.com"
#? - "0.cn.pool.ntp.org"
# maximum number of pods per node
MAX_PODS: 400
# install coredns automatically
dns_install: "no"
ENABLE_LOCAL_DNS_CACHE: no
# install metrics-server automatically
metricsserver_install: "no"
# install dashboard automatically
dashboard_install: "no"
Edit 01.prepare.yml:
# [optional] to synchronize system time of nodes with 'chrony'
- hosts:
? - kube_master
? - kube_node
? - etcd
#  - ex_lb
#  - chrony
Run the installation steps below:
./ezctl setup k8-cluster1 01
./ezctl setup k8-cluster1 02
Run the following command on one of the etcd hosts to verify:
root@etcd03:~# NODE_IPS="10.0.0.29 10.0.0.30 10.0.0.31"
root@etcd03:~# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health; done
https://10.0.0.29:2379 is healthy: successfully committed proposal: took = 8.875052ms
https://10.0.0.30:2379 is healthy: successfully committed proposal: took = 9.369287ms
https://10.0.0.31:2379 is healthy: successfully committed proposal: took = 7.231524ms
Continue the installation:
./ezctl setup k8-cluster1 03
./ezctl setup k8-cluster1 04
Check the status of the master nodes:
root@master1:/etc/kubeasz# kubectl get node
NAME        STATUS                     ROLES   AGE  VERSION
10.0.0.21   Ready,SchedulingDisabled   master  42s  v1.22.2
10.0.0.22   Ready,SchedulingDisabled   master  42s  v1.22.2
Continue the installation:
./ezctl setup k8-cluster1 05
./ezctl setup k8-cluster1 06
After step 06 finishes, the network plugin is installed and the basic cluster installation is complete.
Test the network:
root@master1:/etc/kubeasz# calicoctl node status
Calico process is running.
IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.0.0.22    | node-to-node mesh | up    | 04:17:41 | Established |
| 10.0.0.24    | node-to-node mesh | up    | 04:17:42 | Established |
| 10.0.0.25    | node-to-node mesh | up    | 04:17:40 | Established |
+--------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
Test with containers:
# kubectl run test1 --image=alpine sleep 500000
# kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
test1   1/1     Running   0          20s
kubectl run test2 --image=centos sleep 500000
kubectl run test3 --image=centos sleep 500000
# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
test1   1/1     Running   0          4m37s   10.200.166.130   10.0.0.24   <none>           <none>
test2   1/1     Running   0          60s     10.200.166.131   10.0.0.24   <none>           <none>
test3   1/1     Running   0          45s     10.200.104.1     10.0.0.25   <none>           <none>
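Cross-node pod-to-pod connectivity can also be checked with the IPs above, e.g. by pinging test3 (on 10.0.0.25) from test2 (on 10.0.0.24); output not captured here:
kubectl exec -it test2 -- ping -c 2 10.200.104.1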
Test pinging an external address:
root@master1:/etc/kubeasz# kubectl exec -it test1 -- /bin/sh
sh-4.4# ping 223.5.5.5
PING 223.5.5.5 (223.5.5.5) 56(84) bytes of data.
64 bytes from 223.5.5.5: icmp_seq=1 ttl=113 time=11.1 ms
64 bytes from 223.5.5.5: icmp_seq=2 ttl=113 time=10.6 ms
^C
--- 223.5.5.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 10.558/10.830/11.103/0.291 ms
sh-4.4# exit
exit
root@master1:/etc/kubeasz# kubectl exec -it test3 -- /bin/sh
sh-4.4# ping 223.5.5.5
PING 223.5.5.5 (223.5.5.5) 56(84) bytes of data.
64 bytes from 223.5.5.5: icmp_seq=1 ttl=113 time=11.5 ms
64 bytes from 223.5.5.5: icmp_seq=2 ttl=113 time=10.2 ms
^C
--- 223.5.5.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 10.163/10.830/11.498/0.675 ms