Installing a Kubernetes v1.28.x cluster on CentOS 7.9

1. System overview

Hypervisor: Proxmox VE (PVE)
OS: CentOS 7.9 (2009) x86_64
ISO mirror: http://isoredirect.centos.org/centos/7/isos/x86_64/
Specs: 4 cores / 8 GB RAM (the official minimum is 2 cores / 2 GB)

Host            Role
192.168.1.32    master node
192.168.1.33    node1
192.168.1.34    node2
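
Optionally, if you want the nodes to resolve each other by name, you can append entries like the following to /etc/hosts on every node. This is only a sketch; the hostnames k8s-master, k8s-node1, and k8s-node2 are placeholders, not part of the original setup:

cat >> /etc/hosts <<EOF
192.168.1.32 k8s-master
192.168.1.33 k8s-node1
192.168.1.34 k8s-node2
EOF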

2. Environment setup

Editor: out of habit I use nano; feel free to use vi or vim instead.

yum install -y nano   # install nano

2.1 Turn off the firewall on all nodes

This is a lab environment, so for convenience the firewall is simply turned off. In production, unless the public and internal networks are properly isolated, leave it on and just open the ports you need.

systemctl stop firewalld     # stop the firewall
systemctl disable firewalld  # do not start it on boot
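
A quick sanity check that the firewall is really off:

systemctl is-active firewalld    # expected: inactive
systemctl is-enabled firewalld   # expected: disabled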

2.2 Disable SELinux on all nodes

# set SELINUX=permissive in /etc/selinux/config
nano /etc/selinux/config
or
# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
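
A quick check that SELinux is now permissive (the config-file change takes full effect after the next reboot):

getenforce                           # expected: Permissive
grep ^SELINUX= /etc/selinux/config   # expected: SELINUX=permissive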

2.3 Disable the swap partition on all nodes

# To disable swap permanently, delete or comment out the swap entry in /etc/fstab
nano /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

Reboot the server after the change:

reboot
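
After the reboot, verify that no swap is active:

swapon -s   # should print nothing
free -h     # the Swap line should show 0B total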

2.4 Time synchronization on all nodes

yum -y install ntp
systemctl start ntpd
systemctl enable ntpd
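
To confirm that time synchronization is working, check the NTP peer list; once the node is synced, one peer is marked with a leading '*':

ntpq -p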

2.5 Enable bridge-nf-call-iptables

Run the following commands:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system

Confirm that the `br_netfilter` and `overlay` modules are loaded:

lsmod | grep br_netfilter
lsmod | grep overlay

Confirm that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward sysctl variables are set to 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

3. Install containerd on all nodes

3.1 Install containerd

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install containerd.io

3.2 Generate config.toml

containerd config default > /etc/containerd/config.toml

3.3 Configure the systemd cgroup driver

Set SystemdCgroup = true in /etc/containerd/config.toml, either by editing the file or with sed:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

The relevant section should then look like this:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

Also change sandbox_image to the Aliyun mirror (a sed sketch follows the snippet below):

  [plugins."io.containerd.grpc.v1.cri"]
    ...
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
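
If you prefer not to edit the file by hand, here is a sed sketch that rewrites whatever default value is present; the exact default image differs between containerd releases, so the pattern simply matches any value:

sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml   # verify the change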

3.4 Start containerd and enable it at boot

systemctl restart containerd && systemctl enable containerd
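
Quick checks that containerd is healthy:

systemctl is-active containerd   # expected: active
ctr version                      # client and server versions should both print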

4. Configure the Aliyun yum repository for Kubernetes

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name = Kubernetes
baseurl = https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled = 1
gpgcheck = 0
repo_gpgcheck = 0
gpgkey = https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
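
Optionally, list what the repository actually provides to confirm the newest available version (1.28.0 at the time of writing):

yum list kubelet kubeadm kubectl --showduplicates | tail -20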

5. Install kubeadm, kubelet, and kubectl with yum

Install kubeadm, kubelet, and kubectl on every node.

5.1 Remove old versions

Skip this step if they have never been installed.

yum -y remove kubelet kubeadm kubectl

5.2 Install kubeadm, kubelet, and kubectl

These instructions target Kubernetes 1.28. At the time of writing, the Aliyun yum repository only carries kubelet up to 1.28.0, so the commands below pin that version explicitly.

yum install -y kubelet-1.28.0 kubeadm-1.28.0 kubectl-1.28.0 --disableexcludes=kubernetes
systemctl enable kubelet
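
Verify the installed versions before going any further:

kubeadm version -o short   # expected: v1.28.0
kubelet --version          # expected: Kubernetes v1.28.0
kubectl version --client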

6. Initialize the master node
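
Optionally, pre-pull the control-plane images so that the init step itself is quicker; this uses the same mirror and version as the init command below:

kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.0

Then run the initialization itself: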

kubeadm init \
--apiserver-advertise-address=192.168.1.32 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.28.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

If you see output like the following, the initialization succeeded:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.32:6443 --token ew2io9.4iw5iw110z880t7f \
        --discovery-token-ca-cert-hash sha256:4d7754e0b61037862d8a6c7f07f6467d7c263e7443c38f1f7b57c1eb739d2fe7

Then follow the hints above and run the commands one by one (as a regular user the first three are enough; if you are root, the export alone also works):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

The master node is now visible:

kubectl get node

7. Join the worker nodes to the master

kubeadm join 192.168.1.32:6443 --token bhtq9s.dr8txafrpnncpfj8 \
        --discovery-token-ca-cert-hash sha256:875a3dad7491c653ab7cabcbd1e80cbcc2e91a42263bb09e9703d39cdc490b3c

A common problem at this step is the join command hanging, which usually means the token has expired. Go back to the master node and run:

kubeadm token create

to create a new token, substitute it into the join command, and run the join again (a variant that prints a ready-to-paste join command is sketched after the node listing below).
Both the master node and the worker nodes are now visible:

kubectl get node
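
For convenience, kubeadm can also print a complete join command together with a fresh token:

kubeadm token create --print-join-command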
節(jié)點(diǎn)信息

8. Deploy the CNI network

Although the master and worker nodes are now present, they all show NotReady status because no CNI network plugin has been installed yet.

8.1 Download the CNI plugins

wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
mkdir -pv /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v1.3.0.tgz -C /opt/cni/bin/
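
Confirm the plugins were extracted:

ls /opt/cni/bin/   # should list bridge, host-local, loopback, portmap, and others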

8.2 Install flannel (on the master)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
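
You can watch the flannel pods come up; depending on the manifest version they are created in either the kube-flannel or the kube-system namespace, so check both:

kubectl get pods -n kube-flannel
kubectl get pods -n kube-system | grep flannel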

Check the node status again with kubectl get node; every node should now show Ready. Then run the following on the master:

kubectl get pods -n kube-system

Check the pod status; once every pod in the kube-system namespace is Running, the cluster is ready to use.

9. Install the dashboard

9.1 Download and apply recommended.yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

GitHub can be slow to reach, so below is the manifest I currently use. The only difference from the upstream file is the Service section shown here, which adds a NodePort so the dashboard can be reached directly from outside the cluster:

spec:
  ports:
    - port: 443
      targetPort: 8443
      name: https     # not in the upstream file
      nodePort: 32001 # not in the upstream file
  type: NodePort      # not in the upstream file
Save the full manifest below to a local file, then apply it with:

kubectl apply -f [your local path]/recommended.yaml

Full recommended.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      name: https
      nodePort: 32001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
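
After applying the manifest, confirm that the Service really exposes the NodePort, then open the dashboard in a browser (any node IP works for a NodePort Service; 192.168.1.32 is the master in this setup):

kubectl get svc -n kubernetes-dashboard
# kubernetes-dashboard should show TYPE NodePort and PORT(S) 443:32001/TCP
# then browse to https://192.168.1.32:32001 and accept the self-signed certificate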

9.2 Create an admin user

Create dashboard-adminuser.yaml locally:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Apply it and create a login token:

kubectl apply -f [your file path]/dashboard-adminuser.yaml
kubectl -n kubernetes-dashboard create token admin-user
Paste the token into the dashboard login page and you will land on the overview page.

That completes the installation.
