This article installs Kubernetes on CentOS 7.3. The Kubernetes version is 1.11.1 and the etcd version is 3.3.9.
1. Download the required files
1.1 Download the Kubernetes files
Go to the kubernetes project on GitHub and click Releases. Find version 1.11.1 and open its CHANGELOG-1.11.md link; that leads to the download list. I didn't know exactly which packages would be needed, so I just downloaded them all first.
1.2 Download etcd
Go to the etcd project on GitHub and find the matching release (v3.3.9).
2. Pre-installation configuration
2.1 Firewall
Check the firewall state:
firewall-cmd --state
Stop and disable the firewall (the service on CentOS 7 is named firewalld):
systemctl stop firewalld
systemctl disable firewalld
2.2 SELinux
Disable SELinux:
vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled, then save and quit with :wq; the change takes effect after a reboot.
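The same change can be scripted non-interactively; a small convenience sketch, assuming the stock CentOS 7 config layout:
# put SELinux into permissive mode for the current session (no reboot needed)
setenforce 0
# persist the change for future boots
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# confirm: reports Permissive now, Disabled after a reboot
getenforce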
2.3 Swap
Turn off swap:
swapoff -a
vi /etc/fstab
Comment out the swap line so swap stays disabled across reboots.
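If you prefer not to edit the file by hand, here is a minimal sketch, assuming the swap entry is the only fstab line containing " swap ":
swapoff -a
# comment out the swap entry
sed -i '/ swap / s/^/#/' /etc/fstab
# verify: the Swap row should show 0 used / 0 total
free -m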
3. Install Docker
Remove any old Docker packages:
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
Install the required components:
sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
Configure the Docker repository address in yum:
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
If that host is unreachable, use the Aliyun mirror instead:
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Enable two more repositories:
sudo yum-config-manager --enable docker-ce-edge
sudo yum-config-manager --enable docker-ce-test
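Before installing, you can also check which docker-ce versions the repo actually offers; a quick sketch (the version strings shown will vary over time):
# list available docker-ce builds, newest first
yum list docker-ce --showduplicates | sort -r
# to pin a specific build instead of the latest, install it by name, e.g.
# sudo yum install docker-ce-<VERSION>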
Install Docker. This step takes a while, and the mirror site may occasionally be unreachable:
sudo yum install docker-ce
After installation completes, start Docker:
sudo systemctl start docker
Once it has started, test whether Docker came up successfully:
sudo docker run hello-world
On the first run it pulls the image; a "Hello from Docker!" message means the installation succeeded.
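It is also worth enabling the service so Docker survives a reboot:
sudo systemctl enable docker
# confirm both the client and the daemon respond
sudo docker version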
4. Install the Node components
I only have one server and this is my first install, so it may not work; in particular I don't know whether the Kubernetes master and node can be installed on the same machine. If it goes wrong, I'll deal with it then.
4.1 Preparation
A node mainly needs kubelet and kube-proxy. Both binaries are in the packages downloaded earlier; the server package seems to contain everything, so extract that one first:
tar -zxvf kubernetes-server-linux-amd64.tar.gz
After extracting, go into the server/bin directory and see what it contains:
We need to copy the kubelet and kube-proxy binaries to /usr/bin. Since the shell is already in the bin directory, the copy is simply:
cp kubelet kube-proxy /usr/bin/
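A quick sanity check that the two binaries are on the PATH and executable; both should report v1.11.1:
kubelet --version
kube-proxy --version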
4.2 Install kube-proxy
First edit the kube-proxy service unit file, /usr/lib/systemd/system/kube-proxy.service:
vi /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
Requires=network.service
[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Save and quit with :wq. Next come the two files named by the EnvironmentFile entries; before editing them, create the configuration directory that will hold all of the later configuration files:
mkdir -p /etc/kubernetes
vi /etc/kubernetes/proxy
KUBE_PROXY_ARGS=""
vi /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_LOG_DIR="--log-dir=/var/log/kubernetes"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://172.28.8.193:8080"
Save and quit each of these two files with :wq.
Next, start the service and verify that it came up:
[root@greenvm-y16558v2 kubernetes]# systemctl daemon-reload
[root@greenvm-y16558v2 kubernetes]# systemctl start kube-proxy.service
[root@greenvm-y16558v2 kubernetes]# netstat -lntp | grep kube-proxy
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 27624/kube-proxy
tcp6 0 0 :::10256 :::* LISTEN 27624/kube-proxy
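Since the ports are listening, the service works; you may also want it to start on boot. A small follow-up using the same systemd unit:
systemctl enable kube-proxy.service
systemctl status kube-proxy.service
# journalctl -u kube-proxy -f    # follow the logs if something looks off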
4.3 Install the kubelet service
Similar to the above, first edit the service unit file /usr/lib/systemd/system/kubelet.service:
vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
The directory named by the WorkingDirectory parameter above must be created:
mkdir -p /var/lib/kubelet
Next, edit the environment file:
vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=172.28.8.193"
KUBELET_API_SERVER="--api-servers=http://172.28.8.193:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=reg.docker.tb/harbor/pod-infrastructure:latest"
KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true --fail-swap-on=false --kubeconfig=/var/lib/kubelet/kubeconfig"
In the configuration above, hostname-override is the name of the current node.
Then edit the kubeconfig file:
vi /var/lib/kubelet/kubeconfig
apiVersion: v1
kind: Config
users:
- name: kubelet
clusters:
- name: kubernetes
  cluster:
    server: http://172.28.8.193:8080
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: service-account-context
current-context: service-account-context
Finally, start and verify the service:
[root@greenvm-y16558v2 kubernetes]# swapoff -a
[root@greenvm-y16558v2 kubernetes]# systemctl daemon-reload
[root@greenvm-y16558v2 kubernetes]# systemctl start kubelet.service
[root@greenvm-y16558v2 kubernetes]# netstat -tnlp | grep kubelet
tcp 0 0 127.0.0.1:38496 0.0.0.0:* LISTEN 27972/kubelet
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 27972/kubelet
tcp6 0 0 :::10250 :::* LISTEN 27972/kubelet
tcp6 0 0 :::10255 :::* LISTEN 27972/kubelet
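Port 10248 in the output above is the kubelet's local healthz endpoint, so it can be probed directly; enabling the unit at boot is also sensible:
# should print: ok
curl http://127.0.0.1:10248/healthz
systemctl enable kubelet.service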
5. Install the Master
5.1 Install etcd
Before installing the other Kubernetes components on the master, install etcd. We already downloaded it; unpack it:
tar -zxvf etcd-v3.3.9-linux-amd64.tar.gz
Then copy etcd and etcdctl to /usr/bin:
cp etcd etcdctl /usr/bin/
Next, edit the etcd service unit file:
vi /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd.service
[Service]
Type=notify
TimeoutStartSec=0
Restart=always
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
[Install]
WantedBy=multi-user.target
Create the two directories referenced in the configuration above:
mkdir -p /var/lib/etcd && mkdir -p /etc/etcd/
Edit the environment file:
vi /etc/etcd/etcd.conf
ETCD_NAME="ETCD Server"
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://172.28.8.193:2379"
Finally, start the etcd service and verify that it is healthy:
[root@greenvm-y16558v2 k8s]# systemctl daemon-reload
[root@greenvm-y16558v2 k8s]# systemctl start etcd.service
[root@greenvm-y16558v2 k8s]# etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://172.28.8.193:2379
cluster is healthy
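As an extra smoke test you can write and read back a key; etcdctl in 3.3 speaks the v2 API by default, so set/get/rm work as shown:
etcdctl set /test/hello world
etcdctl get /test/hello    # prints: world
etcdctl rm /test/hello     # clean up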
5.2 Install kube-apiserver
First, from the server/bin directory extracted earlier, copy the kube-apiserver binary to /usr/bin:
cp kube-apiserver /usr/bin/
Edit the service unit file:
vi /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_LOG \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Edit the environment file:
vi /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://172.28.8.193:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.0.0/24"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
Start the service and verify it:
[root@greenvm-y16558v2 bin]# systemctl daemon-reload
[root@greenvm-y16558v2 bin]# systemctl start kube-apiserver.service
[root@greenvm-y16558v2 bin]# netstat -tnlp | grep kube-api
tcp6 0 0 :::6443 :::* LISTEN 29228/kube-apiserve
tcp6 0 0 :::8080 :::* LISTEN 29228/kube-apiserve
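Because the apiserver is serving the insecure port 8080, it can be probed with plain curl; a quick check:
# should print: ok
curl http://172.28.8.193:8080/healthz
# prints the build version as JSON
curl http://172.28.8.193:8080/version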
5.3 Install kube-controller-manager
First copy the kube-controller-manager binary to /usr/bin:
cp kube-controller-manager /usr/bin/
Edit the service unit file:
vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Edit the environment file:
vi /etc/kubernetes/controller-manager
KUBE_MASTER="--master=http://172.28.8.193:8080"
KUBE_CONTROLLER_MANAGER_ARGS=""
Start the service and verify it:
[root@greenvm-y16558v2 bin]# systemctl daemon-reload
[root@greenvm-y16558v2 bin]# systemctl start kube-controller-manager.service
[root@greenvm-y16558v2 bin]# netstat -lntp | grep kube-controll
tcp6 0 0 :::10252 :::* LISTEN 29431/kube-controll
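The controller manager exposes its own health endpoint on the port shown above (10252):
# should print: ok
curl http://127.0.0.1:10252/healthz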
5.4 Install kube-scheduler
First copy the kube-scheduler binary to /usr/bin:
cp kube-scheduler /usr/bin/
Edit the service unit file:
vi /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
User=root
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Edit the environment file:
vi /etc/kubernetes/scheduler
KUBE_MASTER="--master=http://172.28.8.193:8080"
KUBE_SCHEDULER_ARGS="--logtostderr=true --log-dir=/var/log/kubernetes --v=2"
Start the service and verify it:
[root@greenvm-y16558v2 bin]# systemctl daemon-reload
[root@greenvm-y16558v2 bin]# systemctl start kube-scheduler.service
[root@greenvm-y16558v2 bin]# netstat -lntp | grep kube-schedule
tcp6 0 0 :::10251 :::* LISTEN 29629/kube-schedule
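Same pattern for the scheduler on port 10251:
# should print: ok
curl http://127.0.0.1:10251/healthz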
5.5 Configure the profile
Add server/bin to the default search path, much like setting environment variables for Java:
[root@greenvm-y16558v2 bin]# pwd
/home/software/k8s/kubernetes-server/server/bin
[root@greenvm-y16558v2 bin]# sed -i '$a export PATH=$PATH:/home/software/k8s/kubernetes-server/server/bin/' /etc/profile
[root@greenvm-y16558v2 bin]# source /etc/profile
5.6 Install kubectl
This one is the simplest: copy the kubectl binary from server/bin to /usr/bin:
[root@greenvm-y16558v2 bin]# cp kubectl /usr/bin/
[root@greenvm-y16558v2 bin]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
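With no kubeconfig present, kubectl defaults to http://localhost:8080, which is exactly the insecure port configured above; two more quick checks:
kubectl cluster-info
kubectl version --short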
6. Verification
Finally, check whether the installation succeeded:
[root@greenvm-y16558v2 bin]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
greenvm-y16558v2 Ready <none> 5m v1.11.1
7. Follow-up
I thought everything was fine, but it turned out that creating a pod always failed while pulling images, for example with this sample from the official site:
kubectl run kubernetes-bootcamp --image=jocatalin/kubernetes-bootcamp:v1 --port=8080
Checking with the command below, the pod stays stuck in ContainerCreating:
[root@greenvm-y16558v2 bin]# kubectl get pods
NAME READY STATUS RESTARTS AGE
kubernetes-bootcamp-7d48d75958-w9hgp 0/1 ContainerCreating 0 52s
Then kubectl describe pod xx shows the specific cause, namely the following error:
Warning FailedCreatePodSandBox ...failed pulling image "k8s.gcr.io/pause:3.1"...
This happens because hosts in China cannot reach k8s.gcr.io. The pause image can be thought of as the initial sandbox image: every user-created container is started on top of it, so if it cannot be pulled, nothing else works.
The fix is to pull it from a domestic registry and re-tag it:
[root@greenvm-y16558v2 docker]# docker pull registry.cn-qingdao.aliyuncs.com/minsec/pause-amd64:3.1
3.1: Pulling from minsec/pause-amd64
7675586df687: Pull complete
Digest: sha256:fcaff905397ba63fd376d0c3019f1f1cb6e7506131389edbcb3d22719f1ae54d
Status: Downloaded newer image for registry.cn-qingdao.aliyuncs.com/minsec/pause-amd64:3.1
[root@greenvm-y16558v2 docker]# docker tag registry.cn-qingdao.aliyuncs.com/minsec/pause-amd64:3.1 k8s.gcr.io/pause:3.1
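Verify that the re-tagged image is now present locally under both names:
docker images | grep pause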
Restart kubelet:
[root@greenvm-y16558v2 bin]# systemctl stop kubelet.service
[root@greenvm-y16558v2 bin]# systemctl daemon-reload
[root@greenvm-y16558v2 bin]# systemctl start kubelet.service
Re-create the pod, then check the pod and the deployment:
[root@greenvm-y16558v2 bin]# kubectl run kubernetes-bootcamp --image=jocatalin/kubernetes-bootcamp:v1 --port=8080
deployment.apps/kubernetes-bootcamp created
[root@greenvm-y16558v2 bin]# kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kubernetes-bootcamp 1 1 1 1 9s
[root@greenvm-y16558v2 bin]# kubectl get pods
NAME READY STATUS RESTARTS AGE
kubernetes-bootcamp-7d48d75958-nllfb 1/1 Running 0 15s
Everything works!