Chapter 1: k8s System Architecture
From a system-architecture point of view, a k8s cluster has two kinds of nodes:
Master: the control-plane node, the commander
Node: the worker node, where the workloads actually run
1. Master node components
API Server: exposes the Kubernetes API
Handles REST operations and updates the corresponding objects in etcd
The single entry point for all create/read/update/delete operations on resources
Scheduler: the resource scheduler
Decides which Node a Pod is bound to, based on the node resource state recorded in etcd
Controller Manager
Responsible for keeping Pods alive and healthy
The automation hub for resource objects; a Kubernetes cluster runs many controllers
Etcd
The database of the Kubernetes cluster
All persistent state is stored in etcd
2. Node components
Docker Engine
Manages containers on the node; what ultimately gets created is a Docker container.
kubelet
The agent installed on each Node; manages Pods as well as containers, images, and Volumes, giving the cluster control over the node.
kube-proxy
The network proxy installed on each Node; provides proxying and load balancing so that Services can be reached.
Chapter 2: k8s Logical Architecture
Logically, k8s is organized into:
Pod
Controller
Service
1. Pod
A Pod is the smallest deployable unit in k8s
A Pod's IP address is assigned dynamically; deleting the Pod changes the IP
Every Pod has a root (pause) container
A Pod consists of one or more containers
The containers in a Pod share the root container's network namespace
A Pod's network address is provided by its root container
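The shared-network-namespace point can be sketched with a minimal two-container Pod (the names and the busybox sidecar are illustrative, not from the course): because both containers share the root container's network namespace, the sidecar can reach nginx on 127.0.0.1.

```yaml
# Hypothetical sketch: two containers in one Pod share the root (pause)
# container's network namespace, so they talk over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: web
    image: nginx:alpine
  - name: sidecar
    image: busybox
    command: ["/bin/sh","-c","wget -qO- http://127.0.0.1 && sleep 3600"]
```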
2. Controller
Controllers manage Pods; there are many kinds of controllers:
- RC  ReplicationController  keeps a given number of Pod replicas running
- RS  ReplicaSet             the upgraded successor to RC
- Deployment   recommended; more powerful, manages Pods through a ReplicaSet
- DaemonSet    ensures every Node runs exactly one copy of a Pod
- StatefulSet  for stateful applications; gives each Pod a stable identity and guarantees ordered deployment and scaling
3. Service
NodeIP     exposed externally for user access
ClusterIP  cluster-internal IP; dynamically tracks the Pod IPs behind it
Pod IP     the IP of a Pod
Chapter 3: Preparing the k8s Lab Environment
1. Host configuration
Hostname  IP address  Recommended spec  Minimum spec
node1     10.0.0.11   1C 4G 40G         1C 2G
node2     10.0.0.12   1C 2G 40G         1C 1G
node3     10.0.0.13   1C 2G 40G         1C 1G
2. Initialization steps
Start from a clean environment
Set the hostnames
Configure /etc/hosts resolution
Stop the firewall
Disable SELinux
Configure time synchronization
Switch to the Aliyun yum mirrors
Make sure the network is reachable
Disable the swap partition
Chapter 4: Installing a Specific Docker Version
1. Configure the Aliyun repo
cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
2. Install the pinned Docker version
yum -y install docker-ce-18.09.9-3.el7 docker-ce-cli-18.09.9-3.el7
3. Configure the Docker registry mirror
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://ig2l319y.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
4. Start Docker
systemctl enable docker && systemctl start docker
5. Check the version
docker -v
Chapter 5: Deploying kubeadm and kubelet
Note: run everything in this chapter on ALL machines!
1. Configure the domestic (Aliyun) k8s yum repo
cat >/etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. Install kubeadm
yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2 ipvsadm
3. Configure kubelet to tolerate swap
cat > /etc/sysconfig/kubelet<<EOF
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF
4. Set kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
5. Enable kubelet at boot
systemctl enable kubelet && systemctl start kubelet
6. Load the IPVS kernel modules
cat >/etc/sysconfig/modules/ipvs.modules<<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules
source /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv
Chapter 6: Initializing the Cluster and Deploying the Master
0. Installation plan
Node plan
node1  master node  API Server, controller-manager, scheduler, kube-proxy, kubelet, etcd
node2  worker node  Docker, kubelet, kube-proxy
node3  worker node  Docker, kubelet, kube-proxy
IP plan
Pod IP      10.2.0.0/16
Cluster IP  10.1.0.0/16
Node IP     10.0.0.0/24
1. Initialization command
Note: run this on node1 ONLY!
Official reference:
https://v1-16.docs.kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/
Initialization command:
kubeadm init \
--apiserver-advertise-address=10.0.0.11 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.16.2 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.2.0.0/16 \
--service-dns-domain=cluster.local \
--ignore-preflight-errors=Swap \
--ignore-preflight-errors=NumCPU
When it finishes, the output includes the command that worker nodes use to join the cluster:
kubeadm join 10.0.0.11:6443 --token 2an0sn.kykpta54fw6uftgq \
--discovery-token-ca-cert-hash sha256:e7d36e1fb53e59b12f0193f4733edb465d924321bcfc055f801cf1ea59d90aae
2. Prepare the kubeconfig for kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
3. Get node information
[root@node1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 NotReady master 15m v1.16.
4. Enable command completion
yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash >/etc/bash_completion.d/kubectl
5. Switch kube-proxy to ipvs mode
Run the command below, change mode: "" to mode: "ipvs", then save and exit
kubectl edit cm kube-proxy -n kube-system
Restart kube-proxy
kubectl get pod -n kube-system |grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'
Check the pod info
kubectl get -n kube-system pod|grep "kube-proxy"
Check the logs; seeing the IPVS proxier with the rr scheduler means it worked
[root@node1 ~]# kubectl -n kube-system logs -f kube-proxy-7cdbn
I0225 08:03:57.736191 1 node.go:135] Successfully retrieved node IP: 10.0.0.11
I0225 08:03:57.736249 1 server_others.go:176] Using ipvs Proxier.
W0225 08:03:57.736841 1 proxier.go:420] IPVS scheduler not specified, use rr by default
Check the IPVS rules
[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.0.1:443 rr
-> 10.0.0.11:6443 Masq 1 0 0
TCP 10.1.0.10:53 rr
TCP 10.1.0.10:9153 rr
UDP 10.1.0.10:53 rr
Chapter 7: Deploying the Network Plugin
Note: install and deploy on node1 ONLY!
1. Deploy the Flannel network plugin
git clone --depth 1 https://github.com/coreos/flannel.git
2. Edit the resource manifest
cd flannel/Documentation/
vim kube-flannel.yml
egrep -n "10.2.0.0|mirror|eth0" kube-flannel.yml
128: "Network": "10.2.0.0/16",
172: image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
186: image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
192: - --iface=eth0
3. Apply the manifest
kubectl create -f kube-flannel.yml
4. Check pod status; after a short wait everything should be Running
[root@node1 ~]# kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
coredns-58cc8c89f4-bzlkw 1/1 Running 0 77m
coredns-58cc8c89f4-sgs44 1/1 Running 0 77m
etcd-node1 1/1 Running 0 76m
kube-apiserver-node1 1/1 Running 0 76m
kube-controller-manager-node1 1/1 Running 0 76m
kube-flannel-ds-amd64-cc5g6 1/1 Running 0 3m10s
kube-proxy-7cdbn 1/1 Running 0 23m
kube-scheduler-node1 1/1 Running 0 76m
Chapter 8: Deploying the Worker Nodes
1. On the master, print the join command
kubeadm token create --print-join-command
2. On each worker node, run the join command
kubeadm join 10.0.0.11:6443 --token uqf018.mia8v3i1zcai19sj --discovery-token-ca-cert-hash sha256:e7d36e1fb53e59b12f0193f4733edb465d924321bcfc055f801cf1ea59d90aae
3. On node1, check the status
kubectl get nodes
4. Label the nodes
[root@node1 ~]# kubectl label nodes node2 node-role.kubernetes.io/node=
[root@node1 ~]# kubectl label nodes node3 node-role.kubernetes.io/node=
5. Check node status again
[root@node1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 Ready master 171m v1.16.2
node2 Ready node 27m v1.16.2
node3 Ready node 27m v1.16.2
Chapter 9: Common Resource Types
1. Workloads
RC  ReplicationController
RS  ReplicaSet
DP  Deployment
DS  DaemonSet
2. Service discovery and load balancing
Service
Ingress
3. Configuration and storage
ConfigMap  stores configuration files
Secret     stores sensitive data such as credentials
4. Cluster-level resources
Namespace
Node
Role
ClusterRole
RoleBinding
ClusterRoleBinding
Chapter 10: Resource Manifests
1. How resources are created
The apiserver accepts resource definitions only in JSON
A manifest supplied in YAML is automatically converted to JSON before the apiserver accepts it
2. Manifest overview
Inspect the fields a manifest requires:
kubectl explain pod
kubectl explain pod.spec
kubectl explain pod.spec.volumes
Manifest field overview:
apiVersion: v1   # which k8s API version/group this belongs to
kind: Pod        # resource type
metadata:        # metadata, a nested field
spec:            # the container spec: what properties the created containers should have
status:          # read-only, maintained by the system; shows the current state
3. Creating a Pod from a resource manifest
3.1 First create a pod from the command line
kubectl create deployment nginx --image=nginx:alpine
kubectl get pod -o wide
3.2 Export the pod's configuration to YAML
kubectl get pod -o yaml > nginx-pod.yaml
3.3 Trim the manifest, deleting the settings we don't need
cat nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
The same manifest in JSON:
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "nginx",
    "labels": {
      "app": "nginx"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "nginx",
        "image": "nginx:alpine",
        "imagePullPolicy": "IfNotPresent"
      }
    ]
  }
}
3.4 Delete the resources created from the command line
kubectl delete deployments.apps nginx
3.5 Apply the resource manifest
kubectl create -f nginx-pod.yaml
3.6 Check the pod
kubectl get pod -o wide
3.7 Check the pod in detail
kubectl describe pod nginx
4. Pod manifest recap
Declarative management: you state "I want an Nginx running" and k8s does the work
apiVersion: v1                      # API version
kind: Pod                           # resource type
metadata:                           # metadata
  name: nginx                       # object name
  labels:                           # pod labels
    app: nginx
spec:                               # container definition
  containers:                       # container properties
  - name: nginx                     # container name
    image: nginx:alpine             # image name
    imagePullPolicy: IfNotPresent   # image pull policy
    ports:                          # container ports
    - name: http
      containerPort: 80             # port the container exposes
Chapter 11: Node Labels
1. View node labels
kubectl get node --show-labels
2. Label the nodes
kubectl label nodes node2 CPU=Xeon
kubectl label nodes node3 disktype=SSD
3. Edit the Pod manifest to use a node label selector (labels are case-sensitive, so the value must match exactly)
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.0
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
  nodeSelector:
    #CPU: Xeon
    disktype: SSD
4. Delete the pod and recreate it
kubectl delete pod nginx
kubectl create -f nginx-pod.yaml
5. Check the result
kubectl get pod -o wide
6. Remove the node labels
kubectl label nodes node2 CPU-
kubectl label nodes node3 disktype-
Chapter 12: Pod Labels
1. About labels
One label can be shared by many Pods
One Pod can also carry many labels
2. View Pod labels
kubectl get pod --show-labels
3. Adding labels
3.1 Method 1: edit the resource manifest directly:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
    release: beta
3.2 Method 2: label from the command line
kubectl label pods nginx release=beta
kubectl label pods nginx job=linux
kubectl get pod --show-labels
4. Delete a label
kubectl label pod nginx job-
kubectl get pod --show-labels
5. Pod label exercise
5.1 Create two Pods with different labels
kubectl create deployment nginx --image=nginx:1.14.0
kubectl get pod --show-labels
kubectl label pods nginx-xxxxxxxx release=stable
kubectl get pod --show-labels
5.2 Query by label
kubectl get pods -l release=beta --show-labels
kubectl get pods -l release=stable --show-labels
5.3 Delete by label
kubectl delete pod -l app=nginx
Chapter 13: Running a Demo
1. Write the resource manifests
mysql-dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
mysql-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
tomcat-dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: kubeguide/tomcat-app:v1
        ports:
        - containerPort: 8080
        env:
        - name: MYSQL_SERVICE_HOST
          value: 'mysql'
        - name: MYSQL_SERVICE_PORT
          value: '3306'
tomcat-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
  selector:
    app: myweb
2. Apply the manifests
kubectl create -f ./
3. Check the created resources
kubectl get pod -o wide
kubectl get svc
4. Access it in a browser
Chapter 14: Using Harbor as a Private Registry
1. Clean up any previous Harbor installation (these pipelines only print the cleanup commands)
docker ps -a|grep "goharbor"|awk '{print "docker stop "$1}'
docker ps -a|grep "goharbor"|awk '{print "docker rm "$1}'
docker images|grep "goharbor"|awk '{print "docker rmi "$1":"$2}'
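The awk pipelines above print the cleanup commands rather than running them (pipe the output into bash to execute them). Feeding a made-up `docker ps` line through the same awk program shows what gets generated; the container ID below is fabricated for illustration:

```shell
# awk builds a "docker stop <id>" command string from column 1 of each line,
# exactly like the cleanup pipeline above; here we feed it a fake sample line.
printf 'abc123 goharbor/harbor-core:v1.9.0 start.sh\n' \
  | awk '{print "docker stop "$1}'
# -> docker stop abc123
```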
2. Unpack Harbor and edit its config file
cd /opt/
tar zxf harbor-offline-installer-v1.9.0-rc1.tgz
cd harbor/
vim harbor.yml
hostname: 10.0.0.11
port: 8888
harbor_admin_password: 123456
data_volume: /data/harbor
3. Run the installer and access the UI
./install.sh
Open in a browser:
http://10.0.0.11:8888
4. Create a private project named k8s
(done in the Harbor web UI)
5. Configure Docker to trust the registry and restart
Note: do this on ALL THREE servers!
cat >/etc/docker/daemon.json<<EOF
{
"registry-mirrors": ["https://ig2l319y.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"insecure-registries" : ["http://10.0.0.11:8888"]
}
EOF
systemctl restart docker
注意K嫔痢Q羲啤!node1重啟docker后harbor會失效铐伴,需要重啟harbor
cd /opt/harbor
docker-compose stop
docker-compose start
6. Log Docker in to Harbor
docker login 10.0.0.11:8888
7. Convert the Docker login credential to the base64 form that k8s understands
This only needs to be done on one node
[root@node1 ~]# cat /root/.docker/config.json|base64
ewoJImF1dGhzIjogewoJCSIxMC4wLjAuMTE6ODg4OCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZN
VEl6TkRVMiIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tl
ci1DbGllbnQvMTguMDkuOSAobGludXgpIgoJfQp9
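The `.dockerconfigjson` value is nothing more than the base64 encoding of the config file. The round trip can be sanity-checked with any string; the credentials below are the document's own example pair admin:123456 (which is also the `auth` value visible inside the decoded config):

```shell
# base64-encode a string, then decode it back; a Secret's data field expects
# the encoded form, and the apiserver decodes it on use.
enc=$(printf 'admin:123456' | base64)
echo "$enc"                       # YWRtaW46MTIzNDU2
printf '%s' "$enc" | base64 -d    # admin:123456
```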
8. Write the Secret manifest
[root@node1 ~/demo]# cat harbor-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-secret
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxMC4wLjAuMTE6ODg4OCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZNVEl6TkRVMiIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTguMDkuOSAobGludXgpIgoJfQp9
type: kubernetes.io/dockerconfigjson
9. Apply the Secret resource
kubectl delete -f harbor-secret.yaml
kubectl create -f harbor-secret.yaml
kubectl get secrets
10. Re-tag the images and push them to Harbor
docker tag kubeguide/tomcat-app:v1 10.0.0.11:8888/k8s/tomcat-app:v1
docker tag mysql:5.7 10.0.0.11:8888/k8s/mysql:5.7
docker push 10.0.0.11:8888/k8s/tomcat-app:v1
docker push 10.0.0.11:8888/k8s/mysql:5.7
11. Add the pull secret to the demo manifests (under the pod template's spec)
mysql-dp.yaml
imagePullSecrets:
- name: harbor-secret
tomcat-dp.yaml
imagePullSecrets:
- name: harbor-secret
12. Apply the manifests and check
kubectl apply -f ./
kubectl get pod
Chapter 15: Pod Controllers
1. What controllers do
1. A bare Pod resource is not recreated after it is deleted
2. A controller watches on the user's behalf and ensures the expected number of Pod replicas is always running on the relevant nodes
3. If more Pod replicas are running than desired, the controller deletes Pods until the count matches
4. If fewer Pod replicas are running than desired, the controller creates Pods until the count matches
2. Common controller types
ReplicationController (RC):
ReplicaSet (RS):
Creates Pods to match the desired replica count and always maintains that count
Deployment:
Keeps the Pod count at the desired number by managing an RS
Supports rolling updates and rollback; by default 10 revisions are kept for rollback
Declarative configuration, supports live modification
The ideal controller for managing stateless applications
A given node may run zero or more of its Pods
DaemonSet:
Runs exactly one Pod per node, kept running at all times
StatefulSet:
For stateful applications
Job:
A run-once task that does not need to keep running
Exits only once the task is confirmed complete
CronJob:
A periodic task
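Job and CronJob get no manifest elsewhere in these notes, so here is a minimal sketch (the name, image, and command are illustrative, not from the course): a Job runs its Pod to completion once, and its `restartPolicy` must be Never or OnFailure.

```yaml
# Hypothetical run-once Job: computes 100 digits of pi and exits.
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-once
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
      restartPolicy: Never
```

A CronJob wraps the same pod template in a `schedule` field using cron syntax.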
4. The ReplicaSet controller
4.1 Write the RS controller manifest
cat >nginx-rs.yaml <<EOF
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx-pod
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-containers
        image: nginx:1.14.0
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
EOF
4.2 Apply the RS manifest
kubectl create -f nginx-rs.yaml
4.3 Check the RS resources
kubectl get rs
kubectl get pod -o wide
4.4 Edit the yaml file and apply the change
vim nginx-rs.yaml
kubectl apply -f nginx-rs.yaml
4.5 Live changes: scale out, scale in, upgrade
kubectl edit rs nginx-rs
kubectl scale rs nginx-rs --replicas=5
5. The Deployment controller
5.1 Resource manifest
cat >nginx-dp.yaml<<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx-pod
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-containers
        image: nginx:1.14.0
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
EOF
5.2 Apply the manifest
kubectl create -f nginx-dp.yaml
5.3 Inspect
kubectl get pod -o wide
kubectl get deployments.apps
kubectl describe deployments.apps nginx-deployment
5.4 Update the image version
Method 1: change the image via the manifest file on the command line
kubectl set image -f nginx-dp.yaml nginx-containers=nginx:1.16.0
Check whether it updated
kubectl get pod
kubectl describe deployments.apps nginx-deployment
kubectl describe pod nginx-deployment-7c596b4d95-6ztld
Method 2: change the image by resource type on the command line
Open two terminals:
in the first, watch pod status:
kubectl get pod -w
in the second, run the update:
kubectl set image deployment nginx-deployment nginx-containers=nginx:1.14.0
View the updated deployment info
kubectl describe deployments.apps nginx-deployment
----------------------------------------------------
Normal ScalingReplicaSet 14m deployment-controller Scaled up replica set nginx-deployment-7c596b4d95 to 1
Normal ScalingReplicaSet 14m deployment-controller Scaled down replica set nginx-deployment-9c74bb6c7 to 1
Normal ScalingReplicaSet 14m deployment-controller Scaled up replica set nginx-deployment-7c596b4d95 to 2
Normal ScalingReplicaSet 13m deployment-controller Scaled down replica set nginx-deployment-9c74bb6c7 to 0
Normal ScalingReplicaSet 8m30s deployment-controller Scaled up replica set nginx-deployment-9c74bb6c7 to 1
Normal ScalingReplicaSet 8m29s (x2 over 32m) deployment-controller Scaled up replica set nginx-deployment-9c74bb6c7 to 2
Normal ScalingReplicaSet 8m29s deployment-controller Scaled down replica set nginx-deployment-7c596b4d95 to 1
Normal ScalingReplicaSet 8m28s deployment-controller Scaled down replica set nginx-deployment-7c596b4d95 to 0
Update process:
nginx-deployment-7c596b4d95-8z7kf #old version
nginx-deployment-7c596b4d95-6ztld #old version
nginx-deployment-9c74bb6c7-pgfxz 0/1 Pending
nginx-deployment-9c74bb6c7-pgfxz 0/1 Pending
nginx-deployment-9c74bb6c7-pgfxz 0/1 ContainerCreating #pulling the new image
nginx-deployment-9c74bb6c7-pgfxz 1/1 Running #new Pod running
nginx-deployment-7c596b4d95-8z7kf 1/1 Terminating #stopping one old Pod
nginx-deployment-9c74bb6c7-h7mk2 0/1 Pending
nginx-deployment-9c74bb6c7-h7mk2 0/1 Pending
nginx-deployment-9c74bb6c7-h7mk2 0/1 ContainerCreating #pulling the new image
nginx-deployment-9c74bb6c7-h7mk2 1/1 Running #new Pod running
nginx-deployment-7c596b4d95-6ztld 1/1 Terminating #stopping one old Pod
nginx-deployment-7c596b4d95-8z7kf 0/1 Terminating #waiting for the old Pod to finish
nginx-deployment-7c596b4d95-6ztld 0/1 Terminating #waiting for the old Pod to finish
Watch the rolling-update status:
kubectl rollout status deployment nginx-deployment
Rolling-update diagram: (figure not included)
5.5 Roll back to the previous revision
kubectl describe deployments.apps nginx-deployment
kubectl rollout undo deployment nginx-deployment
kubectl describe deployments.apps nginx-deployment
5.6 Roll back to a specific revision
v1  1.14.0
v2  1.15.0
v3  1.16.0
Goal: roll back to v1
Create the first revision (1.14.0):
kubectl create -f nginx-dp.yaml --record
Update to the second revision (1.15.0):
kubectl set image deployment nginx-deployment nginx-containers=nginx:1.15.0
Update to the third revision (1.16.0):
kubectl set image deployment nginx-deployment nginx-containers=nginx:1.16.0
View all revision history
kubectl rollout history deployment nginx-deployment
View a specific revision
kubectl rollout history deployment nginx-deployment --revision=1
Roll back to the specified revision
kubectl rollout undo deployment nginx-deployment --to-revision=1
5.7 Scaling out and in
kubectl scale deployment nginx-deployment --replicas=5
kubectl scale deployment nginx-deployment --replicas=2
Chapter 16: Service
1. About Service
A Service is independent of any Pod controller
A Service can select the Pods created by Pod controllers (via labels)
2. The three kinds of IP
NodeIP: the node IP through which external users gain access
ClusterIP: the IP used to dynamically discover and load-balance Pods
PodIP: the IP a Pod uses
3. Create a ClusterIP Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default
spec:
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  type: ClusterIP
4. Check the ClusterIP
kubectl get svc
5. Create a NodePort Service manifest
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default
spec:
  selector:
    app: nginx
  ports:
  - name: http
    port: 8080        # the ClusterIP port
    protocol: TCP
    targetPort: 80    # the port the Pod exposes
    nodePort: 30000   # the NodeIP port, i.e. what external users access
  type: NodePort
6. Check the created resource
kubectl get svc
7. Diagram (figure not included)
Chapter 17: Ingress
1. Drawbacks of NodePort
1. Before ingress, a pod could only be exposed as NodeIP:NodePort, and a port on a node cannot be reused: if one service takes 80, no other service can use that port.
2. NodePort is a layer-4 proxy; it cannot parse layer-7 HTTP and cannot split traffic by domain name.
3. To solve this we use the Ingress resource, which provides a single unified entry point and works at layer 7.
4. nginx/haproxy could achieve a similar effect, but a traditional deployment cannot dynamically discover newly created resources; the config file has to be edited and reloaded by hand.
5. The mainstream ingress controllers for k8s are ingress-nginx and traefik.
2. Install and deploy traefik
2.1 traefik_dp.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      tolerations:
      - operator: "Exists"
      nodeSelector:
        kubernetes.io/hostname: node1
      containers:
      - image: traefik:v1.7.17
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
2.2 traefik_rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
2.3 traefik_svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
  type: NodePort
3. Apply the manifests
kubectl create -f ./
4. Check and access
kubectl -n kube-system get svc
5. Create an ingress rule for the traefik web UI
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-ui
  namespace: kube-system
spec:
  rules:
  - host: traefik.ui.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-ingress-service
          servicePort: 8080
6. Access test
traefik.ui.com
7. Ingress exercise
7.1 Goal
Before ingress, access is only via IP + port:
tomcat 8080
nginx  8090
With ingress, plain domain names work:
traefik.nginx.com:80 --> nginx 8090
traefik.tomcat.com:80 --> tomcat 8080
7.2 Create the pods and services (reusing the demo manifests)
mysql-dp.yaml
mysql-svc.yaml
tomcat-dp.yaml
tomcat-svc.yaml
nginx-dp.yaml
nginx-svc-clusterip.yaml
7.3 Write and apply the ingress manifests
cat >nginx-ingress.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-nginx
  namespace: default
spec:
  rules:
  - host: traefik.nginx.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
EOF
cat >tomcat-ingress.yaml<<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-tomcat
  namespace: default
spec:
  rules:
  - host: traefik.tomcat.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myweb
          servicePort: 8080
EOF
kubectl apply -f nginx-ingress.yaml
kubectl apply -f tomcat-ingress.yaml
7.4 Check the created resources
kubectl get svc
kubectl get ingresses
kubectl describe ingresses traefik-nginx
kubectl describe ingresses traefik-tomcat
7.5 Access test
traefik.nginx.com
traefik.tomcat.com
Chapter 18: Data Persistence
1. About Volumes
A Volume is a shared directory in a Pod that multiple containers can access
A Kubernetes Volume has the same lifecycle as its Pod, but is independent of the lifecycles of the individual containers
Kubernetes supports many Volume types, and a Pod can use any number of Volumes at once
Volume types include:
emptyDir: created when the Pod is scheduled, allocated automatically by k8s, and emptied when the Pod is removed. For scratch space and the like.
hostPath: mounts a host directory into the Pod. For persistent data.
nfs: mounts an NFS share.
2. emptyDir exercise
cat >emptyDir.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-empty
spec:
  containers:
  - name: busybox-pod
    image: busybox
    volumeMounts:
    - mountPath: /data/busybox/
      name: cache-volume
    command: ["/bin/sh","-c","while true;do echo $(date) >> /data/busybox/index.html;sleep 3;done"]
  volumes:
  - name: cache-volume
    emptyDir: {}
EOF
3. hostPath exercise
3.1 The type field
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
DirectoryOrCreate  create the directory if it does not exist
Directory          the directory must already exist
FileOrCreate       create the file if it does not exist
File               the file must already exist
3.2 Create a hostPath volume manifest
apiVersion: v1
kind: Pod
metadata:
  name: busybox-nodename
spec:
  nodeName: node2
  containers:
  - name: busybox-pod
    image: busybox
    volumeMounts:
    - mountPath: /data/pod/
      name: hostpath-volume
    command: ["/bin/sh","-c","while true;do echo $(date) >> /data/pod/index.html;sleep 3;done"]
  volumes:
  - name: hostpath-volume
    hostPath:
      path: /data/node/
      type: DirectoryOrCreate
4. Scheduling the Pod onto a specific Node
4.1 Method 1: pick the Node by name (nodeName)
apiVersion: v1
kind: Pod
metadata:
  name: busybox-nodename
spec:
  nodeName: node2
  containers:
  - name: busybox-pod
    image: busybox
    volumeMounts:
    - mountPath: /data/pod/
      name: hostpath-volume
    command: ["/bin/sh","-c","while true;do echo $(date) >> /data/pod/index.html;sleep 3;done"]
  volumes:
  - name: hostpath-volume
    hostPath:
      path: /data/node/
      type: DirectoryOrCreate
4.2 Method 2: pick the Node by label (nodeSelector)
Label the node:
kubectl label nodes node3 disktype=SSD
Resource manifest:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-nodename
spec:
  nodeSelector:
    disktype: SSD
  containers:
  - name: busybox-pod
    image: busybox
    volumeMounts:
    - mountPath: /data/pod/
      name: hostpath-volume
    command: ["/bin/sh","-c","while true;do echo $(date) >> /data/pod/index.html;sleep 3;done"]
  volumes:
  - name: hostpath-volume
    hostPath:
      path: /data/node/
      type: DirectoryOrCreate
5. A persistent MySQL Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-dp
  namespace: default
spec:
  selector:
    matchLabels:
      app: mysql
  replicas: 1
  template:
    metadata:
      name: mysql-pod
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql-pod
        image: mysql:5.7
        ports:
        - name: mysql-port
          containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mysql-volume
      volumes:
      - name: mysql-volume
        hostPath:
          path: /data/mysql
          type: DirectoryOrCreate
      nodeSelector:
        disktype: SSD
Chapter 19: PV and PVC
1. Introduction
A PV (PersistentVolume) is an abstraction over underlying shared network storage; it defines the storage as a "resource".
PVs are created and configured by the administrator
A PV can only be shared storage
A PVC (PersistentVolumeClaim) is a user's "request" for storage resources.
Just as a Pod consumes Node resources, a PVC "consumes" PV resources
A PVC can request a specific storage size and access mode
2. PV and PVC lifecycle
3. Exercise: create NFS and MySQL PV/PVC
3.1 Install NFS on the master
yum install nfs-utils -y
mkdir /data/nfs-volume/mysql -p
vim /etc/exports
/data/nfs-volume 10.0.0.0/24(rw,async,no_root_squash,no_all_squash)
systemctl start rpcbind
systemctl start nfs
showmount -e 127.0.0.1
3.2 Install the NFS utilities on all worker nodes
yum install nfs-utils.x86_64 -y
showmount -e 10.0.0.11
3.3 Write and create the nfs-pv resource
cat >nfs-pv.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /data/nfs-volume/mysql
    server: 10.0.0.11
EOF
kubectl create -f nfs-pv.yaml
kubectl get persistentvolume
3.4 Create the mysql-pvc resource
cat >mysql-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
EOF
kubectl create -f mysql-pvc.yaml
kubectl get pvc
3.5 Create the mysql-deployment resource
cat >mysql-dp.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
        volumeMounts:
        - name: mysql-pvc
          mountPath: /var/lib/mysql
        - name: mysql-log
          mountPath: /var/log/mysql
      volumes:
      - name: mysql-pvc
        persistentVolumeClaim:
          claimName: mysql-pvc
      - name: mysql-log
        hostPath:
          path: /var/log/mysql
      nodeSelector:
        disktype: SSD
EOF
kubectl create -f mysql-dp.yaml
kubectl get pod -o wide
3.6 How to test
1. Create the nfs-pv
2. Create the mysql-pvc
3. Create the mysql-deployment and mount the mysql-pvc
4. Exec into the mysql pod and create a database
5. Delete the pod; because the deployment sets a replica count, a new pod is created automatically
6. Log in to the new pod and check whether the database created earlier is still there
7. If it is, the data is persisted
3.7 The accessModes field
ReadWriteOnce  read-write by a single node
ReadOnlyMany   read-only by many nodes
ReadWriteMany  read-write by many nodes
resources: the resource request, e.g. at least 5G
3.8 PV matching and reclaim policy
capacity        limits the storage size
reclaim policy  what happens to a PV once released:
retain   data on the PV is kept after it is unbound
recycle  data on the PV is scrubbed
delete   the PV itself is deleted once the PVC is unbound
Note: a Pod can only get storage this way if a suitable PV already exists in advance, so user demand is not satisfied automatically; and since earlier versions still allowed a bound PV to be deleted, this model has data-safety gaps.
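For comparison with the Recycle policy used by pv01 above, here is a Retain-policy variant (the name pv02 is made up; the NFS export is the one configured earlier in this chapter): after the claim is released, both the data and the PV object are kept until an administrator removes them by hand.

```yaml
# Same shape as pv01, but with Retain: released PVs keep their data.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /data/nfs-volume/mysql
    server: 10.0.0.11
```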
Chapter 20: ConfigMap Resources
1. Why use a configMap?
To decouple configuration files from Pods
2. How does a configMap store configuration?
As key:value pairs
key: value
filename: contents of the config file
3. Supported configuration sources
Key-value pairs defined directly
Key-value pairs created from files
4. Ways to create a configMap
Command line
Resource manifest
5. How does the configuration reach the Pod?
Environment variables
Volume mounts
6. Create a configMap from the command line
kubectl create configmap --help
kubectl create configmap nginx-config --from-literal=nginx_port=80 --from-literal=server_name=nginx.cookzhang.com
kubectl get cm
kubectl describe cm nginx-config
7. Consume the configMap as Pod environment variables
kubectl explain pod.spec.containers.env.valueFrom.configMapKeyRef
cat >nginx-cm.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-cm
spec:
  containers:
  - name: nginx-pod
    image: nginx:1.14.0
    ports:
    - name: http
      containerPort: 80
    env:
    - name: NGINX_PORT
      valueFrom:
        configMapKeyRef:
          name: nginx-config
          key: nginx_port
    - name: SERVER_NAME
      valueFrom:
        configMapKeyRef:
          name: nginx-config
          key: server_name
EOF
kubectl create -f nginx-cm.yaml
8. Check that the pod picked up the variables
[root@node1 ~/confimap]# kubectl exec -it nginx-cm /bin/bash
root@nginx-cm:~# echo ${NGINX_PORT}
80
root@nginx-cm:~# echo ${SERVER_NAME}
nginx.cookzhang.com
root@nginx-cm:~# printenv |egrep "NGINX_PORT|SERVER_NAME"
NGINX_PORT=80
SERVER_NAME=nginx.cookzhang.com
Note:
With environment-variable delivery, editing the configMap does NOT take effect inside a running Pod
Variables are only resolved when the Pod is created; once it is up, its environment no longer changes
9. Create a configMap from a file
Create the config file:
cat >www.conf <<EOF
server {
    listen 80;
    server_name www.cookzy.com;
    location / {
        root /usr/share/nginx/html/www;
        index index.html index.htm;
    }
}
EOF
Create the configMap resource:
kubectl create configmap nginx-www --from-file=www.conf=./www.conf
View the cm resource
kubectl get cm
kubectl describe cm nginx-www
Write a pod that consumes the configMap as a volume mount
cat >nginx-cm-volume.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-cm
spec:
  containers:
  - name: nginx-pod
    image: nginx:1.14.0
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: nginx-www
      mountPath: /etc/nginx/conf.d/
  volumes:
  - name: nginx-www
    configMap:
      name: nginx-www
      items:
      - key: www.conf
        path: www.conf
EOF
Test:
1. Exec into the container and view the file
kubectl exec -it nginx-cm /bin/bash
cat /etc/nginx/conf.d/www.conf
2. Edit the configMap live
kubectl edit cm nginx-www
3. Re-enter the container and watch the config update automatically
cat /etc/nginx/conf.d/www.conf
nginx -T
10. Store config file contents directly as data in the configMap
Create the ConfigMap manifest:
cat >nginx-configMap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: default
data:
  www.conf: |
    server {
        listen 80;
        server_name www.cookzy.com;
        location / {
            root /usr/share/nginx/html/www;
            index index.html index.htm;
        }
    }
  blog.conf: |
    server {
        listen 80;
        server_name blog.cookzy.com;
        location / {
            root /usr/share/nginx/html/blog;
            index index.html index.htm;
        }
    }
EOF
Apply and inspect the manifest:
kubectl create -f nginx-configMap.yaml
kubectl get cm
kubectl describe cm nginx-config
Create the Pod manifest that consumes the configMap
cat >nginx-cm-volume-all.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-cm
spec:
  containers:
  - name: nginx-pod
    image: nginx:1.14.0
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: nginx-config
      mountPath: /etc/nginx/conf.d/
  volumes:
  - name: nginx-config
    configMap:
      name: nginx-config
      items:
      - key: www.conf
        path: www.conf
      - key: blog.conf
        path: blog.conf
EOF
Apply and inspect:
kubectl create -f nginx-cm-volume-all.yaml
kubectl get pod
kubectl describe pod nginx-cm
Exec into the container and inspect:
kubectl exec -it nginx-cm /bin/bash
ls /etc/nginx/conf.d/
cat /etc/nginx/conf.d/www.conf
Test whether a live edit of the configMap takes effect
kubectl edit cm nginx-config
kubectl exec -it nginx-cm /bin/bash
ls /etc/nginx/conf.d/
cat /etc/nginx/conf.d/www.conf
第21章 安全認(rèn)證和RBAC
API Server是訪問控制的唯一入口
在k8s平臺上的操作對象都要經(jīng)歷三種安全相關(guān)的操作
1.認(rèn)證操作
http協(xié)議 token 認(rèn)證令牌
ssl認(rèn)證 kubectl需要證書雙向認(rèn)證
2.授權(quán)檢查
RBAC 基于角色的訪問控制
3.準(zhǔn)入控制
進(jìn)一步補(bǔ)充授權(quán)機(jī)制桩卵,一般在創(chuàng)建验靡,刪除,代理操作時作補(bǔ)充
k8s的api賬戶分為2類
1.實實在在的用戶 人類用戶 userAccount
2.POD客戶端 serviceAccount 默認(rèn)每個POD都有認(rèn)真信息
RBAC就是基于角色的訪問控制
你這個賬號可以擁有什么權(quán)限
Using traefik as an example:
1. Create the account, ServiceAccount: traefik-ingress-controller
2. Create the role, ClusterRole: traefik-ingress-controller
Role: namespace-scoped permissions
ClusterRole: cluster-wide permissions (spans all namespaces)
3. Bind the account to the role: traefik-ingress-controller
RoleBinding
ClusterRoleBinding
4. Reference the ServiceAccount when creating the Pod:
serviceAccountName: traefik-ingress-controller
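The binding pattern in the steps above can be sketched as a minimal namespace-scoped Role plus RoleBinding. All names below are illustrative (not taken from the traefik manifests):

```shell
# Hypothetical example: grant a ServiceAccount read-only access to Pods
# in the "default" namespace. Every name here is made up for illustration.
cat > rbac-sketch.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # illustrative Role name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding    # illustrative binding name
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: my-app                # illustrative ServiceAccount
  namespace: default
EOF
```

Apply it with kubectl create -f rbac-sketch.yaml; swapping Role/RoleBinding for ClusterRole/ClusterRoleBinding grants the same verbs cluster-wide, which is what the traefik manifests do.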
Note!!!
In a kubeadm-installed k8s cluster, the certificates are valid for only 1 year by default.
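You can check when a certificate actually expires with openssl. The demo below generates a throwaway self-signed cert so the command is runnable anywhere; on a real kubeadm master you would point -in at /etc/kubernetes/pki/apiserver.crt (path assumed from a default kubeadm install):

```shell
# Generate a throwaway 365-day cert just to demonstrate the expiry check
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
# Print the notAfter (expiry) date; the same command works on apiserver.crt
openssl x509 -in /tmp/demo.crt -noout -enddate
```

If your kubeadm version provides it (roughly 1.15+ in this era), `kubeadm alpha certs check-expiration` reports the expiry of all cluster certificates at once.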
Chapter 22 k8s dashboard
1. Official project address
https://github.com/kubernetes/dashboard
2. Download the config file
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
3. Edit the config file (change the Service to NodePort; file line numbers shown)
 39 spec:
 40   type: NodePort
 41   ports:
 42     - port: 443
 43       targetPort: 8443
 44       nodePort: 30000
4. Apply the resource manifest
kubectl create -f recommended.yaml
5. Create an admin account and apply it
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
cat > dashboard-admin.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
kubectl create -f dashboard-admin.yaml
6. Inspect the resources and fetch the token
kubectl get pod -n kubernetes-dashboard -o wide
kubectl get svc -n kubernetes-dashboard
kubectl get secret -n kubernetes-dashboard
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
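The grep/awk pipeline above just pulls the secret's NAME column out of the table that `kubectl get secret` prints. Here is the same parsing run against canned output (the secret name is made up; real names carry a random suffix):

```shell
# Canned `kubectl get secret` output, piped through the same grep | awk
printf '%s\n' \
  'NAME                  TYPE                                  DATA   AGE' \
  'admin-user-token-abc  kubernetes.io/service-account-token   3      1m' \
  | grep admin-user | awk '{print $1}'
# → admin-user-token-abc
```

That extracted name is then fed to `kubectl describe secret`, which prints the bearer token to paste into the dashboard login page.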
7. Open in a browser
https://10.0.0.11:30000
If Chrome refuses to open the page, switch to Firefox (it tolerates the self-signed certificate).
8. Error summary
{"level":"error","msg":"Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)","time":"2020-03-03T09:57:00Z"}
Skipping metric because of error: Metric label not set
Cause:
The metrics monitoring component (metrics-server) is not installed.
Chapter 23 Prometheus
1. Official address
https://github.com/prometheus/prometheus
2. Components needed to monitor k8s
Use metrics-server to collect data for consumers inside the k8s cluster, such as kubectl top, HPA, and the scheduler
Use prometheus-operator to deploy Prometheus and store the monitoring data
Use kube-state-metrics to collect data about the resource objects in the k8s cluster
Use node_exporter to collect data from each cluster node
Use Prometheus to scrape the apiserver, scheduler, controller-manager, and kubelet components
Use Alertmanager for alerting
Use Grafana for visualization
metrics-server mainly implements the resource-metrics API: CPU, file descriptors, memory, request latency, and similar metrics.
kube-state-metrics mainly covers object-level metadata: Deployment, Pod, replica status, and so on.
3. Install and deploy Prometheus
Import the image:
docker load < prom-prometheusv2_2_1.tar
Create the namespace:
kubectl create namespace prom
Create the resources:
cd prometheus
kubectl create -f ./
Check the resources:
kubectl -n prom get all -o wide
View in a web browser:
http://10.0.0.11:30090/targets
4. Install and deploy metrics-server
Import the images:
docker load < k8s-gcr-io-addon-resizer1_8_6.tar
docker load < k8s-gcr-io-metrics-server-amd64v0-3-6.tar
Create the resources:
kubectl create -f ./
Check:
kubectl top node
kubectl top pod
5. Install node-exporter
Import the image:
docker load < prom-node-exporterv0_15_2.tar
Create the resources:
kubectl create -f ./
Check the resources:
kubectl -n prom get pod -o wide
kubectl -n prom get svc
View in a browser:
http://10.0.0.12:9100/metrics
http://10.0.0.13:9100/metrics
6. Install kube-state-metrics
Import the image:
docker load < gcr-io-google_containers-kube-state-metrics-amd64v1-3-1.tar
Create the resources:
kubectl create -f ./
Check:
kubectl -n prom get pod
kubectl -n prom get svc
curl 10.1.232.109:8080/metrics
7. Install grafana and k8s-prometheus-adapter
Import the images:
docker load < directxman12-k8s-prometheus-adapter-amd64-latest.tar
docker load < k8s-gcr-io-heapster-grafana-amd64v5_0_4.tar
Edit the grafana resource manifest (file line numbers shown):
  1 apiVersion: apps/v1
  2 kind: Deployment
  3 metadata:
  4   name: monitoring-grafana
  5   namespace: prom
  6 spec:
  7   selector:
  8     matchLabels:
  9       k8s-app: grafana
 10   replicas: 1
 11   template:
Create the resources:
cd k8s-prometheus-adapter
kubectl create -f ./
Check the created resources:
kubectl -n prom get pod -o wide
kubectl -n prom get svc
View in a browser:
http://10.0.0.11:32725
Import a dashboard:
https://grafana.com/grafana/dashboards/10000
Prometheus query statements:
sum by (name) (rate (container_cpu_usage_seconds_total{image!=""}[1m]))
container_cpu_usage_seconds_total{name =~ "^k8s_POD.*",namespace="default"}
Label matchers:
=~  matches the regular expression
=   exact match
!=  not equal
!~  does not match the regular expression
Query statement:
sum (container_memory_working_set_bytes{image!="",name=~"^k8s_.*",kubernetes_io_hostname=~".*",namespace="default"}) by (pod)
Reading it:
sum (metric{label1!="value1",label2!="value2"}) by (grouping label)
After adding the namespace label, update the Grafana panel query:
sum (container_memory_working_set_bytes{image!="",name=~"^k8s_.*",kubernetes_io_hostname=~"^$Node$",namespace=~"^$Namespace$"}) by (pod)
Chapter 24 HPA automatic Pod scaling
https://kubernetes.io/zh/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
1. Build a test image
Create the test index page:
cat index.php
<?php
$x = 0.0001;
for ($i = 0; $i <= 1000000; $i++) {
    $x += sqrt($x);
}
echo "OK!";
?>
Create the Dockerfile:
cat dockerfile
FROM php:5-apache
ADD index.php /var/www/html/index.php
RUN chmod a+rx index.php
Build the image:
docker build -t php:v1 .
2. Create the php Deployment resource
cat >php-dp.yaml<<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: php-apache
  name: php-apache
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: php-apache
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
      - image: php:v1
        imagePullPolicy: IfNotPresent
        name: php-apache
        ports:
        - containerPort: 80
          protocol: TCP
        resources:
          requests:
            cpu: 200m
EOF
3. Create the HPA resource
cat >php-hpa.yaml<<EOF
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: default
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  targetCPUUtilizationPercentage: 50
EOF
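For reference, the scaling rule the HPA applies (per the Kubernetes docs) is desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization). A quick shell check of one case against the 50% target configured above:

```shell
# 2 replicas averaging 100% CPU against a 50% target -> HPA scales to 4.
# Ceiling division via integer arithmetic: (a + b - 1) / b
current=2; util=100; target=50
echo $(( (current * util + target - 1) / target ))
# → 4
```

This is why the load test below makes the replica count climb step by step: each evaluation uses the current replica count and the freshly observed average utilization.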
4. Check
kubectl get svc
kubectl get pod
kubectl get hpa
5. Load test
while true; do wget -q -O- http://10.1.28.100; done
6. Watch the HPA react
kubectl get hpa -w
kubectl get pod -w
7. If the steps above feel tedious, the following commands achieve the same result
Create the Deployment:
kubectl run php-apache --image=php:v1 --requests=cpu=200m --expose --port=80
Create the HPA:
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
Load test:
while true; do wget -q -O- http://10.1.28.100; done