- Assumptions
- Before you begin
- Install Docker on every node
- Install kubeadm, kubelet and kubectl on every node
- On the master node
- On the worker nodes
- Dashboard
- Istio
- MetalLB
- Example
- References
Assumptions
Role | IP | OS | RAM | CPU |
---|---|---|---|---|
Master | 172.16.50.146 | Ubuntu 20.04 | 4G | 2 |
Node1 | 172.16.50.147 | Ubuntu 20.04 | 4G | 2 |
Node2 | 172.16.50.148 | Ubuntu 20.04 | 4G | 2 |
Before you begin
Change the hostname
- Master
  $ sudo vim /etc/hostname
  $ cat /etc/hostname
  master
- Node1
  $ sudo vim /etc/hostname
  $ cat /etc/hostname
  node1
- Node2
  $ sudo vim /etc/hostname
  $ cat /etc/hostname
  node2
Note: running `sudo hostname xxx` only changes the hostname temporarily; after a reboot the machine reverts to the old hostname. Editing /etc/hostname is therefore recommended, since it changes the hostname permanently.
If a reboot changes the hostname and kubelet starts reporting `kubelet.go:2268] node "xxx" not found`, fix the hostname and then restart the kubelet with `systemctl restart kubelet`.
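As a one-step alternative (a sketch; the names follow the table above), hostnamectl writes /etc/hostname for you:
sudo hostnamectl set-hostname master    # use node1 / node2 on the workers
sudo systemctl restart kubelet          # only needed if kubelet is already running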
Verify that the MAC address and product_uuid are unique on every node
$ ip link
$ sudo cat /sys/class/dmi/id/product_uuid
Disable the firewall on every node
$ sudo ufw disable
Disable swap on every node
$ sudo swapoff -a; sudo sed -i '/swap/d' /etc/fstab
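An optional quick check that swap is really off:
free -h    # the Swap line should now show 0B total and 0B used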
Tip: if you need to run commands on several machines at the same time, you can use iTerm's ⌘ + Shift + I to broadcast input across all tabs.
Let iptables see bridged traffic on every node
Load br_netfilter
$ sudo modprobe br_netfilter
$ lsmod | grep br_netfilter
br_netfilter 28672 0
bridge 176128 1 br_netfilter
$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
> br_netfilter
> EOF
br_netfilter
Set net.bridge.bridge-nf-call-iptables
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
$ sudo sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
......
* Applying /etc/sysctl.conf ...
Install Docker on every node
Install Docker following https://docs.docker.com/engine/install/ubuntu/ and https://docs.docker.com/engine/install/linux-postinstall/.
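For reference, the convenience-script route from those pages looks roughly like this (a sketch only; the repository-based install described in the links above is the recommended path):
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Post-install: let the current user run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER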
Configuration
Configure the Docker daemon, in particular to use systemd for the management of the container's cgroups.
sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
Restart Docker and enable it to start on boot:
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
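To verify that the cgroup driver change took effect (expected output assumes the daemon.json above):
docker info | grep -i "cgroup driver"
# Cgroup Driver: systemd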
Install kubeadm, kubelet and kubectl on every node
Update the apt package index and install the packages needed to use the Kubernetes apt repository:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
Download the Google Cloud public signing key:
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
Add the Kubernetes apt repository:
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Update the apt package index, install kubelet, kubeadm and kubectl, and pin their versions:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
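A quick sanity check that the tools are installed (exact versions will differ):
kubeadm version -o short
kubelet --version
kubectl version --client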
On the master node
Initialize the Kubernetes cluster
Replace 172.16.50.146 in the following command with your master's IP address.
$ sudo kubeadm init --apiserver-advertise-address=172.16.50.146 --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.20.5
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 172.16.50.146]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [172.16.50.146 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [172.16.50.146 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 82.502493 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: en2kq9.2basuxxemkuv1yvu
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.50.146:6443 --token en2kq9.2basuxxemkuv1yvu \
--discovery-token-ca-cert-hash sha256:97e84ca61b5d888476f5cdfd36fa141eaf2631e78e7d32c8c3d209e54be72870
Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Deploy the Calico network
Install the Tigera Calico operator and custom resource definitions.
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
Install Calico by creating the necessary custom resource. For more information on the configuration options available in this manifest, see the installation reference.
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
Note: before creating this manifest, read its contents and make sure its settings are correct for your environment. For example, you may need to change the default IP pool CIDR to match your Pod network CIDR.
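The --pod-network-cidr=192.168.0.0/16 used in kubeadm init above matches Calico's default pool, so no change is needed here. If you chose a different Pod CIDR, a sketch of adjusting the manifest before applying it (10.244.0.0/16 below is only an illustrative value):
curl -fsSLO https://docs.projectcalico.org/manifests/custom-resources.yaml
# Change the pool under spec.calicoNetwork.ipPools to your Pod CIDR, for example:
sed -i 's|cidr: 192.168.0.0/16|cidr: 10.244.0.0/16|' custom-resources.yaml
kubectl create -f custom-resources.yaml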
Confirm that all of the Pods are running with the following command.
kubectl get pods -n calico-system -w
Wait until each Pod is Running.
Note: the Tigera operator installs resources in the calico-system namespace. Other install methods may use the kube-system namespace instead.
Remove the taints on the master node so that pods can be scheduled on it.
kubectl taint nodes --all node-role.kubernetes.io/master-
It should return the following.
node/master untainted
Confirm that you now have one node in the cluster. It should return something similar to the following.
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master Ready control-plane,master 8m57s v1.20.5 172.16.50.146 <none> Ubuntu 20.04.2 LTS 5.4.0-70-generic docker://20.10.5
On the worker nodes
Join the cluster
Run the join command (found in the output of the cluster initialization above) on each worker node.
$ sudo kubeadm join 172.16.50.146:6443 --token en2kq9.2basuxxemkuv1yvu \
> --discovery-token-ca-cert-hash sha256:97e84ca61b5d888476f5cdfd36fa141eaf2631e78e7d32c8c3d209e54be72870
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
If the token has expired, you can create a new one from the master node.
kubeadm token create --print-join-command
Verify the cluster
$ kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 19m v1.20.5
node1 Ready <none> 5m48s v1.20.5
node2 Ready <none> 4m57s v1.20.5
Control the cluster from another machine
On the master node, copy admin.conf to the $HOME directory.
$ sudo cp /etc/kubernetes/admin.conf $HOME
$ sudo chown {user} /home/{user}/admin.conf
Then scp $HOME/admin.conf to the other machine.
$ scp {user}@172.16.50.146:/home/{user}/admin.conf .
$ kubectl --kubeconfig ./admin.conf get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 45m v1.20.5
node1 Ready <none> 31m v1.20.5
node2 Ready <none> 31m v1.20.5
Metrics Server
Metrics Server is deployed so that simple metrics can be viewed with the top command.
Without Metrics Server, the top command returns an error.
$ kubectl top node
error: Metrics API not available
Install
$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
Configure
$ kubectl edit deploy -n kube-system metrics-server
This opens the Deployment manifest in a text editor, where you need to make the following change.
Add the following argument under spec.template.spec.containers:
- --kubelet-insecure-tls
After the change, the Deployment should look roughly like this:
containers:
- args:
  - --cert-dir=/tmp
  - --secure-port=4443
  # Add this line
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  - --kubelet-use-node-status-port
  image: k8s.gcr.io/metrics-server/metrics-server:v0.4.2
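Alternatively, the same argument can be added non-interactively with a JSON patch (a sketch; it assumes the container index 0 layout shown above):
kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'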
Wait for the metrics-server Pod status to become Running.
$ kubectl get pod -n kube-system -w
NAME READY STATUS RESTARTS AGE
...
metrics-server-76f8d9fc69-jb94v 0/1 ContainerCreating 0 43s
metrics-server-76f8d9fc69-jb94v 1/1 Running 0 81s
Run the top command again.
$ kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master 242m 12% 2153Mi 56%
node1 143m 7% 2158Mi 56%
node2 99m 4% 1665Mi 43%
Dashboard
Install
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
Confirm that all of the Pods are running with the following command.
$ kubectl get pod -n kubernetes-dashboard -w
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-79c5968bdc-w2gmc 1/1 Running 0 8m
kubernetes-dashboard-9f9799597-w9fbz 1/1 Running 0 8m
Access
To access the Dashboard from your local workstation, you must create a secure channel to the Kubernetes cluster. Run the following command:
kubectl proxy
Then access the Dashboard at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.
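If you prefer not to keep kubectl proxy running, port-forwarding the service works as well (a sketch; the service name comes from the recommended.yaml install above):
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard 8443:443
# Then open https://localhost:8443 and accept the self-signed certificate.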
Create a sample user
Create a Service Account
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF
Create a ClusterRoleBinding
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
Create a Bearer Token
$ kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
Now copy the token and paste it into the Enter token field on the login screen.
Click the Sign in button and that's it. You are now logged in as an admin.
Istio
Install istioctl
curl -L https://istio.io/downloadIstio | sh -
cd istio-1.9.2
export PATH=$PWD/bin:$PATH
Deploy the Istio operator
$ istioctl operator init
Installing operator controller in namespace: istio-operator using image: docker.io/istio/operator:1.9.2
Operator controller will watch namespaces: istio-system
✔ Istio operator installed
✔ Installation complete
Install Istio
$ kubectl create ns istio-system
namespace/istio-system created
$ kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: default
EOF
istiooperator.install.istio.io/example-istiocontrolplane created
Confirm that all of the Pods are running with the following command.
$ kubectl get pod -n istio-system -w
NAME READY STATUS RESTARTS AGE
istio-ingressgateway-7cc49dcd99-c4mtf 1/1 Running 0 94s
istiod-687f965684-n8rkv 1/1 Running 0 3m26s
MetalLB
Kubernetes does not offer an implementation of network load balancers (Services of type LoadBalancer) for bare-metal clusters. The network LB implementations that ship with Kubernetes are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure, and so on). If you are not running on a supported IaaS platform, LoadBalancers will remain in the "pending" state indefinitely after creation.
MetalLB solves the problem of the Istio ingress gateway EXTERNAL-IP staying in "pending".
Install
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
Configure
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.50.147-172.16.50.148 #Update this with your Nodes IP range
EOF
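A quick way to confirm MetalLB is handing out addresses (a sketch with hypothetical names; remember to clean up afterwards):
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --port=80 --type=LoadBalancer
kubectl get svc lb-test          # EXTERNAL-IP should come from 172.16.50.147-148
kubectl delete svc,deployment lb-test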
Example
Enable Istio automatic sidecar injection.
$ kubectl label namespace default istio-injection=enabled
namespace/default labeled
Deploy the Bookinfo example.
$ cd istio-1.9.2
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
Deploy the Bookinfo gateway.
$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created
Confirm that all of the Pods are running with the following command.
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
details-v1-79f774bdb9-62x6b 2/2 Running 0 19m
productpage-v1-6b746f74dc-4g4hk 2/2 Running 0 19m
ratings-v1-b6994bb9-rz6pq 2/2 Running 0 19m
reviews-v1-545db77b95-bcnd8 2/2 Running 0 19m
reviews-v2-7bf8c9648f-zcgfx 2/2 Running 0 19m
reviews-v3-84779c7bbc-78bk7 2/2 Running 0 19m
Get the EXTERNAL-IP of the Istio ingress gateway.
$ kubectl get service -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.99.204.213 172.16.50.147 15021:32373/TCP,80:30588/TCP,443:31095/TCP,15012:31281/TCP,15443:32738/TCP 73m
istiod ClusterIP 10.103.238.79 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 75m
Access the productpage at http://EXTERNAL-IP/productpage.
References
- https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/
- https://github.com/justmeandopensource/kubernetes/blob/master/docs/install-cluster-ubuntu-20.md
- https://docs.docker.com/engine/install/ubuntu/
- https://docs.projectcalico.org/getting-started/kubernetes/quickstart
- https://github.com/kubernetes/dashboard
- https://stackoverflow.com/questions/57137683/how-to-troubleshoot-metrics-server-on-kubeadm