Docker makes single-host container virtualization far easier to manage; it sits between the operating system layer and the application layer.
-
Compared with traditional virtualization (KVM, Xen):
Docker is more flexible at implementing application-level functionality, and it uses resources more efficiently.
-
Compared with a bare application:
Docker binds the application to its operating system (the image), which lowers the cost of deployment and maintenance.
Sitting at this level, Docker brings a qualitative improvement to single-host deployments. But for cross-host, large-scale setups where service quality must be guaranteed, Docker by itself falls short, and traditional ops-automation tools feel out of place whether they are deployed inside Docker or used to manage it.
Kubernetes manages Docker clusters at scale: distributed, and with high availability guaranteed.
1: Understanding Kubernetes
Concept:
Kubernetes can be understood as an ops-automation tool at the container level. The OS-level (Linux, Windows) automation tools that came before it, such as Puppet, SaltStack, and Chef, make sure that code, configuration files, and processes are in the correct state; at heart they maintain state. Kubernetes also maintains state, just at the container level. At that level, however, Kubernetes must do more than maintain state: it also has to solve cross-host communication between Docker containers.
Related concepts
1: pod
A pod is a set of containers; each pod holds one or more of them. For ease of management, the containers in one pod usually run the same business service.
Containers in the same pod share the same system stack (network, storage).
A pod always runs on a single machine; a minimal pod definition is sketched below.
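As a sketch, here is a minimal pod definition in the v1beta3 API used throughout this article (the name and image are placeholders):
<pre>
apiVersion: v1beta3
kind: Pod
metadata:
  name: demo
  labels:
    name: demo
spec:
  containers:
  - name: web
    image: nginx # any image with a long-running process will do
    ports:
    - containerPort: 80
</pre>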
2: Replication controller
Since the name is a mouthful, it is abbreviated rc below (Kubernetes knows the name is long too, and accepts rc as well).
An rc manages pods: it ensures that a given number of pods is running in the cluster at all times, automatically killing extras and starting replacements.
An rc creates pods from a predefined pod template; once created, running pod instances do not change when the template changes.
An rc is tied to its pods through a SELECTOR (a label query).
When the pod count defined in the rc changes, the rc automatically brings the number of running pods in line with the definition.
-
rc also has one rather magical mechanism:
- rolling updates: if a service currently runs 5 pods and the service itself needs updating, the whole rc can be updated by replacing the pods one by one (a command sketch follows this list)
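A sketch of that replica reconciliation in practice, using the rc built in section 3 below (on kubectl of this era the verb was resize; later releases renamed it scale):
<pre>
# ask the rc to maintain 3 replicas; kubernetes kills or creates pods to match
kubectl resize rc wechatv4 --replicas=3
# newer kubectl: kubectl scale rc wechatv4 --replicas=3
</pre>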
3: service
A service is the interface that actually serves traffic: it exposes what the pods provide to the outside world. Each service may be backed by one or more pods.
4: label
- Labels are tags: Kubernetes attaches many of them (K/V pairs) to pods, services, and rcs. Labels live in etcd (a distributed, high-performance, persistent store). With etcd, Kubernetes solves in one stroke what traditional stacks need both inter-service messaging and a database for; label queries look like the sketch below.
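Two hedged examples of working with labels (the pod name is hypothetical):
<pre>
# list the pods carrying a given label -- the same query an rc's selector performs
kubectl get pods -l name=wechatv4
# attach an extra label to a live object
kubectl label pod wechatv4-abc12 tier=backend
</pre>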
Architecture
The architecture splits broadly into control nodes and compute nodes: control nodes give the orders, compute nodes do the work.
First, let's read the architecture off the diagram itself:
- 1: The nodes (compute nodes) are what actually serve; a node's services exit through the proxy and then the firewall
- 2: Control nodes and compute nodes talk over a REST API
- 3: A user's commands are authorized and then sent into the system through the server-side API
- 4: The main processes on a compute node are kubelet and proxy
- 5: Control nodes handle scheduling and state maintenance
2: Deploying Kubernetes
Host environment
- 192.168.56.110
  - etcd
  - kubernetes master
- 192.168.56.111
  - etcd
  - kubernetes node
- 192.168.56.112
  - kubernetes node
Operating system: CentOS 7
110 and 111 run etcd; 110 is the Kubernetes control node, 111 and 112 are the compute nodes.
Environment preparation:
- Install the EPEL repository:
<pre>
yum install epel-release
</pre>
- Disable the firewall:
<pre>
systemctl stop firewalld
systemctl disable firewalld
</pre>
1: etcd
etcd is a distributed, high-performance, highly available key-value store, developed and maintained by CoreOS and inspired by ZooKeeper and Doozer. It is written in Go and uses the Raft consensus algorithm for log replication to guarantee strong consistency.
Simple: a curl-able user API (HTTP+JSON); see the example below this list
Secure: optional SSL client-certificate authentication
Fast: a single instance handles 1000 writes per second
Reliable: uses Raft to guarantee consistency
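A quick sketch of that HTTP+JSON API against the cluster built below (the key name is arbitrary):
<pre>
# write a key through the HTTP+JSON API (etcd v2)
curl -L http://192.168.56.110:4001/v2/keys/message -XPUT -d value="hello"
# read it back
curl -L http://192.168.56.110:4001/v2/keys/message
</pre>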
1: Install the package:
<pre>
yum install etcd -y
</pre>
-
2: Edit the configuration: /etc/etcd/etcd.conf
<pre>
# [member]
ETCD_NAME=192.168.56.110 #member name; must match its entry in ETCD_INITIAL_CLUSTER below
ETCD_DATA_DIR="/var/lib/etcd/default.etcd" #data directory
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.56.110:2380" #cluster sync (peer) address and port
ETCD_LISTEN_CLIENT_URLS="http://192.168.56.110:4001" #client communication address and port
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.56.110:2380" #initial peer advertisement URL
ETCD_INITIAL_CLUSTER="192.168.56.110=http://192.168.56.110:2380,192.168.56.111=http://192.168.56.111:2380" #cluster members, format: $node_name=$peer_url, separated by ","
ETCD_INITIAL_CLUSTER_STATE="new" #initial state; becomes "existing" after initialization
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" #cluster token (name)
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.56.110:4001" #client advertisement URL
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#
#[security]
#ETCD_CA_FILE=""
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_PEER_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
</pre>
Except for ETCD_INITIAL_CLUSTER, which is identical on every node, all IPs in the configuration are the local machine's IP.
etcd's configuration file does not actually support trailing comments, alas, so in practice delete everything after # on each line (the sed one-liner shown in the kubernetes section below works here too).
3: Start the service
<pre>
systemctl enable etcd
systemctl start etcd
</pre>
-
4: Verification
<pre>
# etcdctl member list
dial tcp 127.0.0.1:2379: connection refused
# etcdctl talks to 127.0.0.1:2379 by default, but we configured 192.168.56.110:4001
# etcdctl -C 192.168.56.110:4001 member list
no endpoints available
# if the problem persists, check whether the service is actually running
# netstat -lnp | grep etcd
tcp 0 0 192.168.56.110:4001 0.0.0.0:* LISTEN 18869/etcd
tcp 0 0 192.168.56.110:2380 0.0.0.0:* LISTEN 18869/etcd
# then check that the port is reachable from the peer
# telnet 192.168.56.111 4001
Trying 192.168.56.111...
Connected to 192.168.56.111.
Escape character is '^]'.
^C
# etcdctl -C 192.168.56.110:4001 member list
10f1c239a15ba875: name=192.168.56.110 peerURLs=http://192.168.56.110:2380 clientURLs=http://192.168.56.110:4001
f7132cc88f7a39fa: name=192.168.56.111 peerURLs=http://192.168.56.111:2380 clientURLs=http://192.168.56.111:4001
</pre>
5: Preparation
<pre>
#etcdctl -C 192.168.56.110:4001 mk /coreos.com/network/config '{"Network":"10.0.0.0/16"}'
{"Network":"10.0.0.0/16"}
# etcdctl -C 192.168.56.110:4001 get /coreos.com/network/config
{"Network":"10.0.0.0/16"} </pre>
該配置后面的kubenetes會(huì)用到
2: kubernetes
-
1: Control node installation
1: Package installation
<pre>
yum -y install kubernetes
</pre>
-
2: Configuration file: /etc/kubernetes/apiserver
<pre>
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
KUBELET_PORT="--kubelet_port=10250"
# Comma separated list of nodes in the etcd cluster
#KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"
KUBE_ETCD_SERVERS="--etcd_servers=http://192.168.56.110:4001,http://192.168.56.111:4001"
# changed to point at the etcd cluster we configured
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--portal_net=192.168.56.150/28"
# externally reachable range; kubernetes exposes services through this network
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
</pre>
The kubernetes config files do not support trailing comments either, so in production strip the explanation after each line; a sed sketch follows.
-
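A hypothetical one-liner for stripping those inline comments (it also blanks the full-line # comments, so keep the .bak backup it writes and review the result):
<pre>
# delete from "#" to end of line, then drop the now-empty lines
sed -i.bak -e 's/[[:space:]]*#.*$//' -e '/^$/d' /etc/kubernetes/apiserver
</pre>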
3: Start the services
The apiserver's systemd unit as shipped is broken; replace /usr/lib/systemd/system/kube-apiserver.service with the following:
<pre>
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
PermissionsStartOnly=true
ExecStartPre=-/usr/bin/mkdir /var/run/kubernetes
ExecStartPre=-/usr/bin/chown -R kube:kube /var/run/kubernetes/
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=kube
ExecStart=/usr/bin/kube-apiserver \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBE_ETCD_SERVERS \
    $KUBE_API_ADDRESS \
    $KUBE_API_PORT \
    $KUBELET_PORT \
    $KUBE_ALLOW_PRIV \
    $KUBE_SERVICE_ADDRESSES \
    $KUBE_ADMISSION_CONTROL \
    $KUBE_API_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
</pre>
Start the services
<pre>
systemctl enable kube-apiserver kube-controller-manager kube-scheduler
systemctl restart kube-apiserver kube-controller-manager kube-scheduler
</pre>
-
4: Verification
<pre>
# ps aux | grep kube
kube 20505 5.4 1.6 45812 30808 ? Ssl 22:05 0:07 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://192.168.56.110:2380,http://192.168.56.110:2380 --address=0.0.0.0 --allow_privileged=false --portal_net=192.168.56.0/24 --admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota
kube 20522 1.8 0.6 24036 12064 ? Ssl 22:05 0:02 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --machines=127.0.0.1 --master=http://127.0.0.1:8080
kube 20539 1.3 0.4 17420 8760 ? Ssl 22:05 0:01 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://127.0.0.1:8080
# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
</pre>
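As an extra sanity check, the apiserver also answers plain HTTP on the insecure 8080 port configured above (a sketch):
<pre>
# ask the apiserver which API versions it serves
curl http://192.168.56.110:8080/api
</pre>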
-
2: Compute node installation
1: Package installation
<pre>
yum -y install kubernetes docker flannel bridge-utils net-tools
</pre>
-
2: Configuration files
- /etc/kubernetes/config
<pre>
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.56.110:8080" # change this IP to the control node's IP
</pre>
- /etc/kubernetes/kubelet
<pre>
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.56.111" # this node's address
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname_override=192.168.56.111" # this node's address
# location of the api-server
KUBELET_API_SERVER="--api_servers=http://192.168.56.110:8080" # control node address
# Add your own!
KUBELET_ARGS="--pod-infra-container-image=docker.io/kubernetes/pause:latest"
# kubelet depends on the pause image to start pods; by default it pulls it from Google's registry, which is unreachable in some network environments, so we point it at the Docker Hub copy instead
# pre-pull it with: docker pull docker.io/kubernetes/pause
</pre>
- /etc/sysconfig/flanneld
<pre>
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD="http://192.168.56.110:4001,http://192.168.56.111:4001" # set to the etcd service addresses
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
</pre>
-
3: Service adjustments
The default kubernetes service units are broken and need some adjustment.
cat /usr/lib/systemd/system/kubelet.service
<pre>
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBELET_API_SERVER \
    $KUBELET_ADDRESS \
    $KUBELET_PORT \
    $KUBELET_HOSTNAME \
    $KUBE_ALLOW_PRIV \
    $KUBELET_ARGS
LimitNOFILE=65535
LimitNPROC=10240
Restart=on-failure

[Install]
WantedBy=multi-user.target
</pre>
Adjust the docker network (tear down the default docker0 bridge so docker can come back up on the flannel-assigned subnet):
<pre>
systemctl start docker
systemctl stop docker
ifconfig docker0 down
brctl delbr docker0
</pre>
Start the services
<pre>
systemctl enable kube-proxy kubelet flanneld docker
systemctl restart kube-proxy kubelet flanneld docker
</pre>
Verification
<pre>
# kubectl get nodes
NAME LABELS STATUS
192.168.56.111 kubernetes.io/hostname=192.168.56.111 Ready
192.168.56.112 kubernetes.io/hostname=192.168.56.112 Ready
</pre>
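It is also worth confirming that flannel actually leased a subnet from the 10.0.0.0/16 range stored in etcd earlier (file locations can vary across flannel versions; a sketch):
<pre>
# the subnet flannel handed to this host
cat /run/flannel/subnet.env
# docker0 should now sit inside that subnet
ip addr show docker0
</pre>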
3: Using Kubernetes
3.1 Basic usage
Managing kubernetes really means managing pods, rcs, and services. On the command line it is best to drive kubernetes from definition files: it is easier to manage and more disciplined.
<pre>
kubectl create -h
Create a resource by filename or stdin.
JSON and YAML formats are accepted.
Usage:
kubectl create -f FILENAME [flags]
Examples:
// Create a pod using the data in pod.json.
$ kubectl create -f pod.json
// Create a pod based on the JSON passed into stdin.
$ cat pod.json | kubectl create -f -
</pre>
-
Format conventions:
<pre>
apiVersion: v1beta3 # API version; must appear in the output of kubectl api-versions
kind: ReplicationController # Pod, ReplicationController, or Service
metadata: # metadata, mainly name and labels
  name: test
spec: # the spec; its fields depend on the kind
***
</pre>
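To check which values are legal for apiVersion on a given cluster (output varies by release):
<pre>
# list the API versions this apiserver serves
kubectl api-versions
</pre>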
kubernetes accepts YAML or JSON input; JSON is handier when driving the API programmatically, while YAML is friendlier to humans, so YAML is used below. A typical service architecture looks roughly like this:
<pre>
     +-----------+
     |           |
     |   logic   |        # logic (business) service
     |           |
     +---+--+----+
         |  |
    +----+  +----+
    |            |
    |            |
+---v------+ +---v-----+
|          | |         |
|    DB    | |  redis  |  # backing services it calls
|          | |         |
+----------+ +---------+
</pre>
Approach: each pod provides one complete set of the services.
-
1: Prepare the images
- postgres: database image
- redis: cache image
- wechat: the WeChat service image
2: rc definition, wechat-rc.yaml:
<pre>
apiVersion: v1beta3
kind: ReplicationController
metadata:
  name: wechatv4
  labels:
    name: wechatv4
spec:
  replicas: 1
  selector:
    name: wechatv4
  template:
    metadata:
      labels:
        name: wechatv4
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
      - name: postgres
        image: opslib/wechat_db
        ports:
        - containerPort: 5432
      - name: wechat
        image: opslib/wechat1
        ports:
        - containerPort: 80
</pre>
Import the rc
<pre>
# kubectl create -f wechat-rc.yaml
replicationcontrollers/wechat
</pre>
Confirm
<img src="./getpods.png" width=800>
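If the screenshot is unavailable, an equivalent command-line check (output abbreviated) is:
<pre>
kubectl get rc wechatv4
kubectl get pods -l name=wechatv4
</pre>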
Note:
In docker, containers can be wired together with the link feature. kubernetes has no such mechanism, but since containers in one pod share the network and storage namespaces, the wechat image's configuration can simply use '127.0.0.1' as the IP for the database and redis, like this:
<pre>
sql_connection='postgresql://wechat:wechatpassword@127.0.0.1/wechat'
cached_backend='redis://127.0.0.1:6379/0'
</pre>
3: Service definition, wechat-service.yaml
<pre>
apiVersion: v1beta3
kind: Service
metadata:
  name: wechat
  labels:
    name: wechat
spec:
  ports:
  - port: 80
  selector:
    name: wechatv4
</pre>
Import
<pre>
# kubectl create -f wechat-service.yaml
services/wechat
</pre>
Inspect
<pre>
kubectl get service wechat
NAME LABELS SELECTOR IP(S) PORT(S)
wechat name=wechat name=wechatv4 192.168.56.156 80/TCP
</pre>
Confirm
<pre>
# curl -i http://192.168.56.156
HTTP/1.1 200 OK
Content-Length: 0
Access-Control-Allow-Headers: X-Auth-Token, Content-type
Server: TornadoServer/4.2
Etag: "da39a3ee5e6b4b0d3255bfef95601890afd80709"
Date: Mon, 06 Jul 2015 09:04:49 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE
Content-Type: application/json
</pre>
3.2 Service updates
With the basic deployment in place, kubernetes can roll out a new version with a rolling update, which amounts to a hot update of the service.
<pre>
kubectl rolling-update wechatv3 -f wechatv4.yaml
Creating wechatv4
At beginning of loop: wechatv3 replicas: 0, wechatv4 replicas: 1
Updating wechatv3 replicas: 0, wechatv4 replicas: 1
At end of loop: wechatv3 replicas: 0, wechatv4 replicas: 1
Update succeeded. Deleting wechatv3
wechatv4
</pre>
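For rolling-update to work, the file must define an rc whose name and selector differ from the running one (here wechatv4 replacing wechatv3). The pace of the pod-by-pod swap can also be tuned (a sketch; the flag exists on kubectl of this era, default around one minute):
<pre>
# replace the pods one at a time, pausing 30s between replacements
kubectl rolling-update wechatv3 -f wechatv4.yaml --update-period=30s
</pre>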
3.3 Application management
Sometimes one service needs several instances that run the same code but start with different configurations.
Three needs come up most often:
- 1: different containers get different resource allowances
- 2: different containers mount different directories
- 3: different containers run different startup commands
All three can be set per container in the definition file.
<pre>
apiVersion: v1beta3
kind: ReplicationController
metadata:
  name: new
  labels:
    name: new
spec:
  replicas: 1
  selector:
    name: new
  template:
    metadata:
      labels:
        name: new
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
      - name: postgres
        image: opslib/wechat_db
        ports:
        - containerPort: 5432
      - name: wechat
        image: opslib/wechat1
        command: # the container's start command, defined here instead of in the image
        - '/bin/bash'
        - '-c'
        # bash -c takes a single command string, so the flag is folded into it
        - '/usr/bin/wechat_api --config=/etc/wechat/wechat.conf'
        resources: # constrain the container's resources
          requests: # resources requested for scheduling (the field name is requests)
            cpu: "0.5"
            memory: "512Mi"
          limits: # the most the container may use
            cpu: "1"
            memory: "1024Mi"
        ports:
        - containerPort: 80
        volumeMounts: # mount points inside the container
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {} # the original omitted the volume source; emptyDir is the simplest assumption
</pre>
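A usage sketch, assuming the definition above is saved as new-rc.yaml (the file name is an assumption):
<pre>
# kubectl create -f new-rc.yaml
# kubectl describe pods -l name=new   # shows the per-container command, limits, and mounts
</pre>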
References:
- Introduction to the Kubernetes system architecture: http://www.infoq.com/cn/articles/Kubernetes-system-architecture-introduction
- etcd: a key-value store for service discovery: http://www.infoq.com/cn/news/2014/07/etcd-cluster-discovery
- Kubernetes deployment: http://blog.opskumu.com/k8s-cluster-centos7.html