1. Install Docker
- Refer to the following link
https://docs.docker.com/install/linux/docker-ce/centos/#install-using-the-repository (the Aliyun mirror is recommended)
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum install epel-release
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io
- Change Docker's storage driver to devicemapper
cat /etc/docker/daemon.json
{
"storage-driver": "devicemapper"
}
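After changing the storage driver, restart Docker and confirm the change took effect (a quick sanity check):
systemctl restart docker
docker info | grep -i 'storage driver'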
2. Install kubeadm, kubelet, kubectl
- Use the Aliyun mirror repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- Install via yum
yum install -y kubelet kubeadm kubectl
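Then enable kubelet so it starts on boot:
systemctl enable --now kubelet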
3. Initialize the master with kubeadm
- Adjust system parameters
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
swapoff -a
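Note that neither setting survives a reboot; a sketch for making them persistent:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
# comment out the swap entry in /etc/fstab so swap stays off after reboot
sed -ri '/\sswap\s/s/^/#/' /etc/fstab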
- Initialize
kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers
(Note: troubleshoot any errors reported during init yourself.)
If you want the master to be schedulable, or you are deploying a single-node cluster, run the following command so that pods can be scheduled onto the master node:
kubectl taint node kvm-10-115-40-126 node-role.kubernetes.io/master-
To make the master unschedulable again, run:
kubectl taint node k8s-master node-role.kubernetes.io/master=:NoSchedule
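Either way, you can check the node's current taints with:
kubectl describe node k8s-master | grep -i taints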
- Install flannel
mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
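Wait for the flannel pods to become Running and the node to turn Ready:
kubectl get pods -n kube-system
kubectl get nodes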
- Install the dashboard
(https://github.com/kubernetes/dashboard)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Pulling the dashboard image may fail for network reasons, so you may need to swap it:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Edit the downloaded kubernetes-dashboard.yaml as follows, changing
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
to
image: loveone/kubernetes-dashboard-amd64:v1.10.1
Modify the Service to expose a NodePort:
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32765
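Apply the edited manifest and confirm the NodePort service exists:
kubectl apply -f kubernetes-dashboard.yaml
kubectl -n kube-system get svc kubernetes-dashboard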
- Access the dashboard
# Create a user token, then visit https://x.x.x.x:32765 and enter the token produced by the procedure in the link below
https://github.com/kubernetes/dashboard/wiki/Creating-sample-user
# YAML to create the user
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
# YAML to bind the user to the cluster-admin role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
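Save the two manifests above into one file separated by --- (admin-user.yaml is just an example name) and apply it:
kubectl apply -f admin-user.yaml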
# Command to fetch the token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
# Then log in at
https://10.115.40.126:32765
- Install helm
Official docs (https://helm.sh/docs/using_helm/#installing-helm)
- Install
curl -L https://git.io/get_helm.sh | bash
Or download a release package from https://github.com/helm/helm/releases
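The linked docs cover Helm 2, which also needs its server side (Tiller) deployed into the cluster; a minimal sketch that simply grants Tiller cluster-admin:
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller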
4. Add slave nodes to the cluster
- On the master, generate the join command
kubeadm token create --print-join-command
- Run the command returned above on the slave
kubeadm join 10.115.40.126:6443 --token yvfhlq.80ivjc67syz36msc --discovery-token-ca-cert-hash sha256:670236660c8d4d01b1a4fd6fab178276a05bac550183a6987062fd49a4cd3854
- Remove a slave node
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
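On the removed node itself, clear the kubeadm state so it can be re-joined cleanly later:
kubeadm reset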
5. Day-to-day usage
- Create a namespace
kind: Namespace
apiVersion: v1
metadata:
  name: ops-prod
- Create a ConfigMap
# Load each file under opscd/ into the ConfigMap: each key is a filename, each value the file's content
kubectl create configmap opscd-config --from-file=opscd/ -n ops-prod
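Verify which keys were loaded and their contents:
kubectl describe configmap opscd-config -n ops-prod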
- Configure a Deployment to use the ConfigMap
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opscd
  namespace: ops-prod
  labels:
    app: opscd
spec:
  replicas: 2
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: opscd
  minReadySeconds: 0
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: opscd
    spec:
      containers:
      - name: opscd
        image: 127.0.0.1/ops/opscd:0.4
        # To consume the ConfigMap as pod environment variables instead:
        # envFrom:
        # - configMapRef:
        #     name: example-configmap
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: "256Mi"
            cpu: "300m"
        readinessProbe:
          tcpSocket:
            port: 8001
          initialDelaySeconds: 60
          periodSeconds: 30
        livenessProbe:
          httpGet:
            path: /v1
            port: 8001
          initialDelaySeconds: 60
          periodSeconds: 60
        ports:
        - containerPort: 8001
        volumeMounts:
        - name: settings
          # This mounts the whole ConfigMap at the path below: each key becomes a file,
          # each value its content, replacing that directory's contents in the container
          mountPath: /app/opscd/config
      volumes:
      - name: settings
        configMap:
          name: opscd-config
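Apply it and watch the rollout (opscd-deployment.yaml is a hypothetical filename for the manifest above):
kubectl apply -f opscd-deployment.yaml
kubectl -n ops-prod rollout status deployment/opscd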
- On ConfigMap hot-reloading
First, after a ConfigMap is updated, environment variables sourced from it are NOT refreshed. With the file-mount approach, the files inside the pod do change shortly after the update; at that point all you need to do is restart the service.
1. Option one: run a sidecar that restarts the service when the files change, e.g. configmap-reload; combined with the application this is more flexible.
2. Option two: have a controller brute-force a rolling upgrade of the pods; simple and easy (see https://github.com/stakater/Reloader).
Out of curiosity I skimmed Reloader's source. Roughly: it uses the Kubernetes go-client to consume ConfigMap events, and for each event it walks the resources to find those that reference the ConfigMap and issues an update on them; the rest is handled by the Deployment controller.
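With Reloader deployed, opting a Deployment in is a single annotation (per the project's README):
metadata:
  annotations:
    reloader.stakater.com/auto: "true"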
- Use resources outside the k8s cluster
# Create an Endpoints object
apiVersion: v1
kind: Endpoints
metadata:
  name: opsdb-mysql
  namespace: istio-system
subsets:
- addresses:
  - ip: 10.115.254.14
  ports:
  - port: 3306
    protocol: TCP
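An Endpoints object by itself is not addressable; pair it with a selector-less Service of the same name so in-cluster clients can reach the external MySQL through cluster DNS. A minimal sketch:
apiVersion: v1
kind: Service
metadata:
  name: opsdb-mysql
  namespace: istio-system
spec:
  ports:
  - port: 3306
    protocol: TCP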
6. Create an Ingress
- Create the Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: ingress-nginx
  annotations:
    # The exact meaning of this annotation matters; see (https://git.k8s.io/ingress-gce/examples/PREREQUISITES.md#ingress-class)
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myapp.test.com
    http:
      paths:
      - path:
        backend:
          serviceName: myapp
          servicePort: 80
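This Ingress assumes a Service named myapp exists in the same namespace; a minimal sketch (the selector label is an assumption):
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: ingress-nginx
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80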
- Create the ingress-nginx deployment
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
- Create the ingress-nginx Service to expose NodePorts
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 30080
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
    nodePort: 30443
  - name: proxied-tcp-9000
    port: 9000
    targetPort: 9000
    protocol: TCP
    nodePort: 30900
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
- Verify
Add a local hosts entry resolving myapp.test.com to a node IP, then open:
http://myapp.test.com:30080/
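Alternatively, skip the hosts entry and send the Host header explicitly (using the master IP from above; the NodePort is open on every node):
curl -H 'Host: myapp.test.com' http://10.115.40.126:30080/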
7. Install the private registry Harbor
- Offline install
# Installation guide: https://github.com/goharbor/harbor/blob/master/docs/installation_guide.md
- Push an image to Harbor
docker pull nginx:latest
docker tag nginx:latest harbor.xxx.xxx/ops/nginx:latest
docker push harbor.xxx.xxx/ops/nginx:latest
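If the push is rejected with an authentication error, log in to the registry first:
docker login harbor.xxx.xxx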
- A sample Dockerfile build
cat Dockerfile
FROM python:3.7.3
# Copy requirements first so the pip layer is cached across code changes
ADD requirements.txt requirements.txt
RUN pip install -r requirements.txt
ADD opscd opscd
ADD manage.py manage.py
CMD python3 manage.py runserver 0.0.0.0:8001
EXPOSE 8001
# Build
docker build . -t 127.0.0.1/ops/opscd:ae87986
# Push
docker push 127.0.0.1/ops/opscd:ae87986
# Run the container
docker run 127.0.0.1/ops/opscd:ae87986
# Enter a running container
docker exec -it 0383eb8b45d6 /bin/sh
8. Install Rancher
- Install
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
-v /host/certs:/container/certs \
-e SSL_CERT_DIR="/container/certs" \
rancher/rancher:latest
- Setup
After logging in, follow the guided steps one by one to import the existing cluster.