Kubernetes, abbreviated k8s (k, 8 letters, s — hence the name) or "kube", is an open-source platform for automating the operation of Linux containers. It eliminates much of the manual work involved in deploying and scaling containerized applications: you can group multiple hosts into a cluster to run Linux containers, and Kubernetes helps you manage that cluster simply and efficiently. The hosts making up a cluster can span public, private, and hybrid clouds. Kubernetes is now the de facto standard for container orchestration, backed primarily by Google and Red Hat.
kubeadm is the official Kubernetes tool for quickly bootstrapping a k8s cluster. It is considerably simpler and faster than the other methods circulating online, and you will run into fewer problems. If you hit any issue while following this guide, leave a comment and I will try to reply promptly. This article is based mainly on the official Kubernetes documentation, Using kubeadm to Create a Cluster. A cluster built this way is neither hardened nor highly available; it is suitable for personal study and experimentation only, and should not be deployed to production.
Prerequisites
1. Two or more servers running CentOS 7 x64; physical machines, virtual machines, or VPS instances all work. One serves as the master node, the rest as worker nodes.
2. Each machine needs at least 2 GB of RAM, and the master needs at least 2 CPU cores.
3. All servers must be able to reach each other over the network. Hostnames must be unique and must not contain underscores.
4. The servers must sit outside the Great Firewall, because some of the files downloaded during setup are slow or entirely unreachable from inside China; this makes building on a domestic VPS such as Alibaba Cloud or Tencent Cloud painful. If you do not yet have an overseas server, take a look at Vultr, which is what I use: good value for money and stable. The Tokyo and US-West locations tend to have fast connections. Occasionally an IP becomes unpingable (presumably blocked); creating a new instance in another region fixes that.
5. You should be able to connect to servers over SSH, run basic commands, and edit and save files. Some of the commands below may require root privileges; if you get a permission error, rerun the command with sudo in front.
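You can quickly verify items 2 and 3 on each machine with standard Linux tools. A minimal check sketch (thresholds are the ones stated above; adjust to taste):

```shell
# Quick sanity check for the prerequisites above (standard coreutils/procps).
hostname | grep -q '_' && echo "WARNING: hostname contains an underscore"
echo "hostname:    $(hostname)"
echo "CPU cores:   $(nproc)"
echo "Memory (MB): $(free -m | awk '/^Mem:/{print $2}')"
```

Run it on every server and confirm the master reports at least 2 cores and every machine at least ~2000 MB of memory.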
Setup steps
Steps 1-6 must be performed on every server.
1. Upgrade the system. On the command line, run
yum update -y
2. Disable the firewall and swap. k8s runs multiple services that communicate across servers and needs many ports open; for simplicity we just turn the firewall off here, which is not recommended in production. Swap must be turned off for the kubelet component to work properly.
systemctl disable firewalld
systemctl stop firewalld
swapoff -a
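Note that swapoff -a only disables swap until the next reboot. To keep it off permanently, comment out the swap entries in /etc/fstab (this edits system configuration; the command below keeps a .bak backup of the original file):

```shell
# Comment out any swap line in /etc/fstab so swap stays off after reboot.
sed -i.bak '/\sswap\s/ s/^\([^#]\)/#\1/' /etc/fstab
```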
3. Install Docker
yum install -y docker
systemctl enable docker && systemctl start docker
4. Install kubeadm, kubelet, and kubectl
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
5. Disable SELinux, since kubelet's SELinux support still has some issues:
setenforce 0
To make the change permanent, open /etc/sysconfig/selinux:
vi /etc/sysconfig/selinux
find the SELINUX line and change it to
SELINUX=disabled
then save the file.
6. Set net.bridge.bridge-nf-call-iptables to 1
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
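These bridge sysctls only exist once the br_netfilter kernel module is loaded; on a minimal CentOS 7 install you may need to load it explicitly (and have it load on every boot) before sysctl --system can apply them:

```shell
# Load the bridge netfilter module now and on every boot.
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
```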
7. Initialize the master. Run this on the master node:
kubeadm init --pod-network-cidr=192.168.0.0/16
If you see output similar to the following, the master was initialized successfully:
[init] Using Kubernetes version: v1.8.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubeadm-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.138.0.4]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 39.511972 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token:
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
  kubeadm join --token : --discovery-token-ca-cert-hash sha256:
Copy and save the kubeadm join line at the end of the output above; you will need it later when joining the nodes to the cluster.
Run the following commands to initialize the kubectl config file:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
8. Install a pod network add-on so pods can communicate with each other; here we use Calico. On the master node, run
kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
Run the following to check that the kube-dns pod has reached the Running state; this usually takes a few tens of seconds:
kubectl get pods --all-namespaces
If the output shows a pod whose name starts with kube-dns in the Running state, the network add-on is working and you can join the nodes to the cluster.
[root@kube-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY     STATUS    RESTARTS   AGE
kube-system   calico-etcd-dfpnn                         1/1       Running   0          13h
kube-system   calico-kube-controllers-5449fdfcd-z8n45   1/1       Running   0          13h
kube-system   calico-node-8jmzt                         2/2       Running   0          13h
kube-system   calico-node-b4x99                         2/2       Running   0          13h
kube-system   etcd-kube-master                          1/1       Running   0          13h
kube-system   kube-apiserver-kube-master                1/1       Running   0          13h
kube-system   kube-controller-manager-kube-master       1/1       Running   0          13h
kube-system   kube-dns-86f4d74b45-v6qr5                 3/3       Running   0          14h
kube-system   kube-proxy-8nl2w                          1/1       Running   0          13h
kube-system   kube-proxy-klnjb                          1/1       Running   0          14h
kube-system   kube-scheduler-kube-master                1/1       Running   0          13h
9. Allow pods to run on the master. Run the following on the master; otherwise k8s will not schedule non-system pods onto the master node.
kubectl taint nodes --all node-role.kubernetes.io/master-
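If you later want the master to stop accepting regular workloads again, the taint can be restored (replace kube-master with your own master's node name; it requires a running cluster):

```shell
# Re-add the master taint so only system pods are scheduled there.
kubectl taint nodes kube-master node-role.kubernetes.io/master=:NoSchedule
```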
10. Once kube-dns is running, join the nodes. On each node, run the kubeadm join command you saved in step 7; it looks similar to the line below.
kubeadm join --token : --discovery-token-ca-cert-hash sha256:
If it succeeds, the output looks similar to:
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "10.138.0.4:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.138.0.4:6443"
[discovery] Requesting info from "https://10.138.0.4:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.138.0.4:6443"
[discovery] Successfully established connection with API Server "10.138.0.4:6443"
[bootstrap] Detected server version: v1.8.0
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
Node join complete:
* Certificate signing request sent to master and response
? received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
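As the init log above notes, bootstrap tokens expire after 24 hours by default. If the token has expired or you lost the saved join command, you can generate a fresh one on the master (the --print-join-command flag is available in kubeadm 1.9 and later):

```shell
# Print a ready-to-run kubeadm join command with a newly created token.
kubeadm token create --print-join-command
```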
11. Verify that the nodes joined the cluster. On the master, run
kubectl get node
If the cluster is running normally, the output looks like
NAME          STATUS    ROLES     AGE       VERSION
kube-master   Ready     master    1h        v1.10.0
kube-node     Ready     <none>    2m        v1.10.0
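As a final smoke test you can deploy something small, for example an nginx deployment (the image name and replica count here are arbitrary examples, and the commands assume the cluster built above is up):

```shell
# Create a 2-replica nginx deployment, check where the pods land, then clean up.
kubectl run nginx --image=nginx --replicas=2
kubectl get pods -o wide
kubectl delete deployment nginx
```

If the pods reach Running on both the master and the node, scheduling and the pod network are working end to end.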