What you learn on paper always feels shallow; true understanding only comes from doing it yourself. The previous articles covered setting up K8S, its components and basic concepts, networking, and storage. This one is hands-on: deploying a spring-cloud + nacos + MySQL stack on K8S.
1. Create the namespace
Previous articles used the default namespace for simplicity. Everything in this article is deployed into the dev namespace, so first create it; see Appendix 1 for namespace.yaml.
# 01 Create the namespace
# kubectl apply -f namespace.yaml
namespace/dev created
# 02 List existing namespaces
# kubectl get ns
NAME STATUS AGE
default Active 53d
dev Active 5s
ingress-nginx Active 17d
kube-node-lease Active 53d
kube-public Active 53d
kube-system Active 53d
2. Deploy nfs-client-provisioner
First add an nfs-client-provisioner ServiceAccount in the dev namespace, otherwise later steps fail with error looking up service account dev/nfs-client-provisioner: serviceaccount "nfs-client-provisioner" not found.
With the ServiceAccount in place, create the nfs-client-provisioner Deployment and the StorageClass in dev. See Appendices 2, 3, and 4 for rbac.yaml, deployment.yaml, and class.yaml.
# 01 Replace the default namespace with dev, then create the ServiceAccount in dev
# sed -i'' "s/namespace:.*/namespace: dev/g" rbac.yaml
# kubectl apply -f rbac.yaml
# kubectl get ServiceAccount -n dev
NAME SECRETS AGE
default 1 3d19h
nfs-client-provisioner 1 47h
# 02 Create nfs-client-provisioner
# kubectl apply -f deployment.yaml
# kubectl get deployment -n dev
NAME READY UP-TO-DATE AVAILABLE AGE
nfs-client-provisioner 1/1 1 1 2d2h
# 03 Create the StorageClass
# kubectl apply -f class.yaml
# kubectl get StorageClass -n dev
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 3d4h
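Before moving on, dynamic provisioning can optionally be verified with a hypothetical throwaway PVC (the name test-claim is made up for this check and is not part of the original steps):
# 04 optional: verify dynamic provisioning with a throwaway PVC, then clean it up
# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: dev
spec:
  storageClassName: nfs-client
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Mi
EOF
# kubectl get pvc test-claim -n dev
# kubectl delete pvc test-claim -n dev
If the PVC reaches Bound and a dev-test-claim directory appears under /nfs/data (per the pathPattern in class.yaml), the provisioner is working.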
3. Deploy the MySQL service
3.1 Create the Secret
A Secret is an object for storing sensitive data such as passwords or keys; using a Secret means confidential data does not need to be embedded in application code.
Here it stores the MySQL password, which is consumed later as an environment variable. See Appendix 5 for secret.yaml.
# echo -n "123456" | base64
MTIzNDU2
# kubectl apply -f secret.yaml
secret/secret created
# kubectl get secret -n dev
NAME TYPE DATA AGE
default-token-xxk4j kubernetes.io/service-account-token 3 5h9m
secret Opaque
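To double-check what was stored, the value can be read back and decoded (an extra verification step, not in the original flow):
# kubectl get secret secret -n dev -o jsonpath='{.data.password}' | base64 -d
123456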
3.2 Deploy the MySQL service
See Appendix 6 for mysql.yaml. After deploying, run a quick check against the mysql pod to confirm MySQL is up (a sketch follows the output below).
# 01 Deploy the MySQL service
# kubectl apply -f mysql.yaml
service/mysql created
persistentvolumeclaim/mysql-pv-claim created
deployment.apps/mysql created
# 02 Check the created pv/pvc, the mounted directory, the deployment, and the service
# kubectl get pv,pvc -n dev
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-4432ca6f-b912-4aff-827b-a4573d11d2e3 2Gi RWO Delete Bound dev/mysql-pv-claim nfs-client 8s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/mysql-pv-claim Bound pvc-4432ca6f-b912-4aff-827b-a4573d11d2e3 2Gi RWO nfs-client 8s
# ls /nfs/data/dev-mysql-pv-claim/
auto.cnf ib_buffer_pool ibdata1 ib_logfile0 ib_logfile1 ibtmp1 mysql performance_schema sys
# kubectl get deployment -n dev
NAME READY UP-TO-DATE AVAILABLE AGE
mysql 1/1 1 1 47s
# kubectl get svc -n dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql ClusterIP None <none> 3306/TCP 55s
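A minimal connectivity check, assuming the root password 123456 from the Secret above (kubectl exec on deploy/mysql picks a pod of the Deployment):
# kubectl exec deploy/mysql -n dev -- mysql -uroot -p123456 -e "select version();"
The output should show a 5.7.x server version; a connection or authentication error means the deployment needs another look.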
4. Deploy nacos
nacos is deployed in cluster mode with MySQL as the configuration store. nacos also supports an embedded database, but MySQL makes it easier to inspect the data. The nacos image version is v2.0.3 and MySQL is 5.7.
The official deployment guide, Kubernetes Nacos, is very terse. If you follow it step by step everything may go smoothly, but the moment you try to change anything it blows up. I had to dig through documentation and source code, and stepped into plenty of pits along the way. Near the end I happened to notice that the project now recommends nacos-operator, but by then I was too worn out to try it.
The main differences between this article and the official docs:
- The dev namespace is used instead of the default one, which requires quite a few changes;
- The peer-finder scaling plugin is not used:
- peer-finder itself is an experimental project and upstream is looking for a replacement; see peer-finder for details;
- if clusterIP is not set to None, the plugin resolves the Service's cluster IP instead of the IPs of the pods behind it; if you do not want clusterIP: None, the only option at present is to drop the peer-finder plugin;
- Without peer-finder, NACOS_SERVERS must be set manually; because the server list is hard-coded, scaling in or out is problematic;
- The MySQL image is the official mysql:5.7 rather than nacos/mysql.
4.1 Configure MySQL
Create the nacos database and user, grant privileges, and initialize the nacos config tables. For the table definitions see: https://github.com/alibaba/nacos/blob/develop/distribution/conf/nacos-mysql.sql
# 01 Exec into the mysql pod
# kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
mysql-d45f868dd-qb5m2 1/1 Running 11 (45h ago) 3d
# kubectl exec -it mysql-d45f868dd-qb5m2 -n dev -- /bin/bash
# 02 Log in as root, create the nacos database and user, and grant privileges
# mysql -u root -p
mysql> create database nacos;
Query OK, 1 row affected (0.01 sec)
mysql> create user 'nacos'@'%' identified by 'nacos';
Query OK, 0 rows affected (0.00 sec)
mysql> grant all on nacos.* TO 'nacos'@'%';
# 03 Log in as nacos and initialize the config tables, see: https://github.com/alibaba/nacos/blob/develop/distribution/conf/nacos-mysql.sql
# mysql -u nacos -p
mysql> use nacos;
4.2 Deploy nacos
See Appendix 7 for nacos-pvc-nfs.yaml; you can diff it against the official nacos-pvc-nfs.yaml to see the changes.
StatefulSet
Note in particular that nacos is deployed as a StatefulSet. StatefulSets are meant for stateful applications and have the following characteristics:
1. Each pod has its own storage: volumeClaimTemplates creates a dedicated PVC per pod, so every pod keeps its own state.
2. Pod names are stable.
3. The Service has no ClusterIP (a headless Service), so there is no load balancing; DNS resolves to the individual pods, which is why the pod names must be stable. On top of the headless Service, the StatefulSet gives every pod replica its own DNS name: pod-name.headless-service-name.namespace.svc.cluster.local (see the DNS check sketched below).
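Once the StatefulSet below is running, the per-pod DNS names can be verified from a throwaway busybox pod (an optional check, not part of the original steps; busybox:1.28 is used because nslookup in newer busybox images is unreliable):
# kubectl run dns-test -n dev --rm -it --restart=Never --image=busybox:1.28 -- nslookup nacos-0.nacos-headless.dev.svc.cluster.local
The lookup should return the pod IP of nacos-0 (10.244.80.243 in the output further down).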
ConfigMap
A ConfigMap stores configuration data, separating configuration from the image so the containerized application stays portable (a check that the values actually reach the pods is sketched after the deployment output below).
# kubectl apply -f nacos-pvc-nfs.yaml
# kubectl get pv,pvc -n dev
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-4432ca6f-b912-4aff-827b-a4573d11d2e3 2Gi RWO Delete Terminating dev/mysql-pv-claim nfs-client 4d6h
persistentvolume/pvc-baffdbd7-2fcd-42eb-9f35-c9c9d6e46ae7 10Gi RWX Delete Bound dev/data-nacos-1 nfs-client 3d4h
persistentvolume/pvc-e632f052-7637-401b-b464-4bb80217413e 10Gi RWX Delete Bound dev/data-nacos-0 nfs-client 3d4h
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/data-nacos-0 Bound pvc-e632f052-7637-401b-b464-4bb80217413e 10Gi RWX nfs-client 3d4h
persistentvolumeclaim/data-nacos-1 Bound pvc-baffdbd7-2fcd-42eb-9f35-c9c9d6e46ae7 10Gi RWX nfs-client 3d4h
persistentvolumeclaim/mysql-pv-claim Bound pvc-4432ca6f-b912-4aff-827b-a4573d11d2e3 2Gi RWO nfs-client 4d6h
# kubectl get pods -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-d45f868dd-qb5m2 1/1 Running 11 (3d3h ago) 4d6h 10.244.190.110 w1 <none> <none>
nacos-0 1/1 Running 0 32m 10.244.80.243 w2 <none> <none>
nacos-1 1/1 Running 0 32m 10.244.190.112 w1 <none> <none>
nfs-client-provisioner-dd7474448-r4ckf 1/1 Running 2 (3d3h ago) 3d5h 10.244.190.113 w1 <none> <none>
# kubectl get StatefulSet -n dev
NAME READY AGE
nacos 2/2 30m
# kubectl get svc -n dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql ClusterIP None <none> 3306/TCP 4d6h
nacos-headless ClusterIP None <none> 8848/TCP,9848/TCP,9849/TCP,7848/TCP 30m
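To confirm that the MySQL settings from the nacos-cm ConfigMap actually reached the containers, the environment of one nacos pod can be inspected (an optional check, not in the original flow):
# kubectl exec nacos-0 -n dev -- env | grep MYSQL_SERVICE
MYSQL_SERVICE_HOST=mysql.dev.svc.cluster.local
MYSQL_SERVICE_DB_NAME=nacos
MYSQL_SERVICE_PORT=3306
MYSQL_SERVICE_USER=nacos
MYSQL_SERVICE_PASSWORD=nacos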
4.3 Configure nginx-ingress for external access
For details on nginx-ingress see [K8S Series 5] Ingress and Ingress Controller; ingress.yaml is in Appendix 8. To reach the cluster via the domain foo.mydomain.com, add the following to /etc/hosts outside the cluster:
192.168.0.61 foo.mydomain.com
Open http://foo.mydomain.com:30434/nacos to view the console; ingress-nginx is deployed as a NodePort service and exposes port 30434. The console shows that the cluster currently has two nodes:
nacos-0.nacos-headless.dev.svc.cluster.local
nacos-1.nacos-headless.dev.svc.cluster.local
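A quick reachability check from outside the cluster, assuming the /etc/hosts entry above is in place:
# curl -I http://foo.mydomain.com:30434/nacos/
An HTTP 200 (or a redirect to the console page) means the Ingress is routing to the nacos-headless service; the exact headers will vary.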
4.4 Problems you may run into
no such host
2022/05/21 11:06:36 lookup nacos-headless on 10.96.0.10:53: no such host
This is caused by clusterIP not being set to None.
UnknownHostException: jmenv.tbsite.net
Caused by: com.alibaba.nacos.api.exception.NacosException: java.net.UnknownHostException: jmenv.tbsite.net
This happens when peer-finder is removed but NACOS_SERVERS is not specified.
Looking at /home/nacos/bin/docker-startup.sh, the snippet below runs plugin.sh when PLUGINS_DIR exists (i.e. the peer-finder plugin is in use); otherwise it writes NACOS_SERVERS into cluster.conf.
if [[ ! -d "${PLUGINS_DIR}" ]]; then
echo "" >"$CLUSTER_CONF"
for server in ${NACOS_SERVERS}; do
echo "$server" >>"$CLUSTER_CONF"
done
else
bash $PLUGINS_DIR/plugin.sh
sleep 30
fi
After deployment, /home/nacos/conf/cluster.conf contains exactly the value of NACOS_SERVERS:
# kubectl exec -it nacos-0 -n dev -- cat /home/nacos/conf/cluster.conf
#2022-05-21T15:35:45.067
nacos-0.nacos-headless.dev.svc.cluster.local:8848
nacos-1.nacos-headless.dev.svc.cluster.local:8848
5. Deploy the Java applications
The Java application consists of two modules, spring-cloud-provider-example and spring-cloud-consumer-example:
- spring-cloud-provider-example accesses MySQL through MyBatis-Plus, registers with nacos, and exposes an HTTP service;
- spring-cloud-consumer-example calls spring-cloud-provider-example remotely and exposes its own HTTP service;
- for a spring-cloud + nacos example, see nacos-spring-cloud-discovery-example;
- for a MyBatis-Plus example, see its Quick Start guide.
5.1 Core code
The application.yaml and NacosProviderApplication.java of spring-cloud-provider-example are shown below; for spring-cloud-consumer-example see Appendices 9 and 10.
application.yaml mainly contains the nacos and MySQL configuration:
- the nacos address is nacos-0.nacos-headless.dev.svc.cluster.local:8848,nacos-1.nacos-headless.dev.svc.cluster.local:8848;
- the MySQL address is mysql.dev.svc.cluster.local:3306;
- the database password is injected through the ${MYSQL_PASSWORD} environment variable.
# application.yaml
server:
port: 8070
spring:
application:
name: spring-cloud-provider
cloud:
nacos:
discovery:
server-addr: nacos-0.nacos-headless.dev.svc.cluster.local:8848,nacos-1.nacos-headless.dev.svc.cluster.local:8848
datasource:
driver-class-name: com.mysql.cj.jdbc.Driver
url: jdbc:mysql://mysql.dev.svc.cluster.local:3306/demo?serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=utf-8&zeroDateTimeBehavior=convertToNull&useSSL=false&allowPublicKeyRetrieval=true
username: root
password: ${MYSQL_PASSWORD}
main:
allow-bean-definition-overriding: true
// NacosProviderApplication.java
@SpringBootApplication
@EnableDiscoveryClient
@MapperScan("com.liuil.k8s.example.spring.cloud")
public class NacosProviderApplication {
public static void main(String[] args) {
SpringApplication.run(NacosProviderApplication.class, args);
}
private static ObjectMapper objectMapper = new ObjectMapper();
@Autowired
private UserMapper userMapper;
@RestController
public class UserController {
@RequestMapping(value = "/user/{id}", method = RequestMethod.GET)
public String find(@PathVariable int id) {
return findById(id);
}
private String findById(int id) {
User user = userMapper.selectById(id);
return serialize(user);
}
private <T> String serialize(T t) {
try {
return objectMapper.writeValueAsString(t);
} catch (JsonProcessingException e) {
return null;
}
}
}
}
5.2 Initialize MySQL
# 01 Create the demo database
create database demo;
use demo;
# 02 Create the table
CREATE TABLE user
(
id BIGINT(20) NOT NULL COMMENT 'primary key',
name VARCHAR(30) NULL DEFAULT NULL COMMENT 'name',
age INT(11) NULL DEFAULT NULL COMMENT 'age',
email VARCHAR(50) NULL DEFAULT NULL COMMENT 'email',
PRIMARY KEY (id)
);
# 03 Insert demo data
INSERT INTO user (id, name, age, email) VALUES
(1, 'Jone', 18, 'test1@baomidou.com'),
(2, 'Jack', 20, 'test2@baomidou.com'),
(3, 'Tom', 28, 'test3@baomidou.com'),
(4, 'Sandy', 21, 'test4@baomidou.com'),
(5, 'Billie', 24, 'test5@baomidou.com');
5.3 Build the images
After building, the images could be pushed to a public registry such as Alibaba Cloud or a self-hosted private registry, but since this is only a demo it is enough to build them on the work01 and work02 machines.
Copy spring-cloud-provider-example-1.0.0.jar and spring-cloud-consumer-example-1.0.0.jar to the 192.168.0.61 and 192.168.0.62 servers, place them in the same directory as the Dockerfiles, and build. See Appendices 11 and 12 for ProviderDockerfile and ConsumerDockerfile.
# 01 Copy spring-cloud-consumer-example-1.0.0.jar to w1; the other copies are similar, with minor changes
scp -P 22231 spring-cloud-consumer-example/target/spring-cloud-consumer-example-1.0.0.jar root@192.168.0.126:/root/spring-mysql-nacos/spring-cloud-consumer-example-1.0.0.jar
# 02 Build the images
docker build -f ProviderDockerfile -t spring-cloud-provider:v0.0.1 .
docker build -f ConsumerDockerfile -t spring-cloud-consumer:v0.0.1 .
# 03 List the images
# docker images |grep spring-cloud
spring-cloud-provider v0.0.1 96e357196894 About an hour ago 127MB
spring-cloud-consumer v0.0.1 1cae051e2efd About an hour ago 121MB
5.4 Deploy
See Appendices 13 and 14 for spring-cloud-provider.yaml and spring-cloud-consumer.yaml.
# 01 Deploy spring-cloud-provider and spring-cloud-consumer
# kubectl apply -f spring-cloud-provider.yaml
# kubectl apply -f spring-cloud-consumer.yaml
# 02 Check the pods and services
# kubectl get pods -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-d45f868dd-qb5m2 1/1 Running 11 (3d4h ago) 4d7h 10.244.190.110 w1 <none> <none>
nacos-0 1/1 Running 0 82m 10.244.80.243 w2 <none> <none>
nacos-1 1/1 Running 0 82m 10.244.190.112 w1 <none> <none>
nfs-client-provisioner-dd7474448-r4ckf 1/1 Running 2 (3d4h ago) 3d5h 10.244.190.113 w1 <none> <none>
spring-cloud-consumer-78ddf98844-j42g9 1/1 Running 0 40m 10.244.80.246 w2 <none> <none>
spring-cloud-provider-9576d8464-hkfpp 1/1 Running 0 29m 10.244.80.247 w2 <none> <none>
# kubectl get svc -n dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql ClusterIP None <none> 3306/TCP 4d6h
nacos-headless ClusterIP None <none> 8848/TCP,9848/TCP,9849/TCP,7848/TCP 52m
spring-cloud-consumer ClusterIP 10.107.212.144 <none> 8080/TCP 10m
spring-cloud-provider ClusterIP 10.108.18.158 <none> 8070/TCP 7s
# 03 Test whether spring-cloud-provider and spring-cloud-consumer work
# 10.108.18.158 is the spring-cloud-provider service IP; 10.107.212.144 is the spring-cloud-consumer service IP
# curl 10.108.18.158:8070/user/1
{"id":1,"name":"Jone","age":18,"email":"test1@baomidou.com"}
# curl 10.107.212.144:8080/consumer/user/1
{"id":1,"name":"Jone","age":18,"email":"test1@baomidou.com"}
Log in to the nacos console and check the service list: both spring-cloud-provider and spring-cloud-consumer have registered successfully (a command-line check is sketched below).
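The registrations can also be checked from the command line through nacos's v1 open API (path and response shape taken from the nacos Open API docs; adjust if your version differs):
# curl "http://foo.mydomain.com:30434/nacos/v1/ns/service/list?pageNo=1&pageSize=10"
# expected output, roughly: {"count":2,"doms":["spring-cloud-provider","spring-cloud-consumer"]}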
5.5 Configure Ingress
Configure Ingress so the consumer can be reached from outside the cluster; add the following to rules (see Appendix 8 for the complete file):
- path: /consumer
pathType: Prefix
backend:
service:
name: spring-cloud-consumer
port:
number: 8080
Finally, visiting foo.mydomain.com:30434/consumer/user/1 returns the expected user record.
At this point the spring-cloud + nacos + MySQL stack is fully deployed on K8S.
References
1. Example: Deploying WordPress and MySQL with Persistent Volumes
2. Kubernetes Nacos
Appendices
1. namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: dev
labels:
name: dev
2. rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: dev
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: dev
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: dev
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: dev
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: dev
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
3. deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
# deploy into the dev namespace
namespace: dev
spec:
replicas: 1
selector:
matchLabels:
app: nfs-client-provisioner
strategy:
type: Recreate
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: k8s-sigs.io/nfs-subdir-external-provisioner
# replace with the actual NFS server address
- name: NFS_SERVER
value: 192.168.0.51
# replace with the actual NFS export path
- name: NFS_PATH
value: /nfs/data
volumes:
- name: nfs-client-root
nfs:
server: 192.168.0.51
path: /nfs/data
4. class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-client
namespace: dev
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match the deployment's env PROVISIONER_NAME
parameters:
# template for the provisioned directory path
pathPattern: ${.PVC.namespace}-${.PVC.name}
archiveOnDelete: "false"
5. secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: secret
namespace: dev
data:
password: MTIzNDU2
6. mysql.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql
namespace: dev
labels:
app: mysql
spec:
ports:
- port: 3306
selector:
app: mysql
tier: mysql
clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
namespace: dev
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
namespace: dev
labels:
app: mysql
spec:
selector:
matchLabels:
app: mysql
tier: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
tier: mysql
spec:
containers:
- image: mysql:5.7
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: secret
key: password
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
7. nacos-pvc-nfs.yaml
---
apiVersion: v1
kind: Service
metadata:
name: nacos-headless
namespace: dev
labels:
app: nacos
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
ports:
- port: 8848
name: server
targetPort: 8848
- port: 9848
name: client-rpc
targetPort: 9848
- port: 9849
name: raft-rpc
targetPort: 9849
## election port kept for 1.4.x compatibility
- port: 7848
name: old-raft-rpc
targetPort: 7848
clusterIP: None
selector:
app: nacos
---
apiVersion: v1
kind: ConfigMap
metadata:
name: nacos-cm
namespace: dev
data:
mysql.service.host: "mysql.dev.svc.cluster.local"
mysql.db.name: "nacos"
mysql.port: "3306"
mysql.user: "nacos"
mysql.password: "nacos"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: nacos
namespace: dev
spec:
serviceName: nacos-headless
replicas: 2
template:
metadata:
labels:
app: nacos
annotations:
pod.alpha.kubernetes.io/initialized: "true"
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- nacos
topologyKey: "kubernetes.io/hostname"
serviceAccountName: nfs-client-provisioner
#initContainers:
# - name: peer-finder-plugin-install
# image: nacos/nacos-peer-finder-plugin:1.1
# imagePullPolicy: Always
# volumeMounts:
# - mountPath: /home/nacos/plugins/peer-finder
# name: data
# subPath: peer-finder
containers:
- name: nacos
imagePullPolicy: Always
image: nacos/nacos-server:v2.0.3
resources:
requests:
memory: "2Gi"
cpu: "500m"
ports:
- containerPort: 8848
name: client-port
- containerPort: 9848
name: client-rpc
- containerPort: 9849
name: raft-rpc
- containerPort: 7848
name: old-raft-rpc
env:
- name: NACOS_REPLICAS
value: "2"
- name: SERVICE_NAME
value: "nacos-headless"
- name: DOMAIN_NAME
value: "cluster.local"
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: MYSQL_SERVICE_HOST
valueFrom:
configMapKeyRef:
name: nacos-cm
key: mysql.service.host
- name: MYSQL_SERVICE_DB_NAME
valueFrom:
configMapKeyRef:
name: nacos-cm
key: mysql.db.name
- name: MYSQL_SERVICE_PORT
valueFrom:
configMapKeyRef:
name: nacos-cm
key: mysql.port
- name: MYSQL_SERVICE_USER
valueFrom:
configMapKeyRef:
name: nacos-cm
key: mysql.user
- name: MYSQL_SERVICE_PASSWORD
valueFrom:
configMapKeyRef:
name: nacos-cm
key: mysql.password
#- name: SPRING_DATASOURCE_PLATFORM
# value: "mysql"
#- name: MYSQL_SERVICE_DB_PARAM
# value: "characterEncoding=utf8&connectTimeout=10000&socketTimeout=30000&autoReconnect=true&useSSL=false&serverTimezone=Asia/Shanghai&allowPublicKeyRetrieval=true"
- name: NACOS_SERVER_PORT
value: "8848"
- name: NACOS_APPLICATION_PORT
value: "8848"
- name: PREFER_HOST_MODE
value: "hostname"
- name: NACOS_SERVERS
value: "nacos-0.nacos-headless.dev.svc.cluster.local:8848 nacos-1.nacos-headless.dev.svc.cluster.local:8848"
volumeMounts:
#- name: data
# mountPath: /home/nacos/plugins/peer-finder
# subPath: peer-finder
- name: data
mountPath: /home/nacos/data
subPath: data
- name: data
mountPath: /home/nacos/logs
subPath: logs
volumeClaimTemplates:
- metadata:
name: data
annotations:
volume.beta.kubernetes.io/storage-class: "nfs-client"
spec:
accessModes: [ "ReadWriteMany" ]
resources:
requests:
storage: 10Gi
selector:
matchLabels:
app: nacos
8. ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
namespace: dev
spec:
ingressClassName: nginx
rules:
- host: foo.mydomain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nacos-headless
port:
number: 8848
- path: /consumer
pathType: Prefix
backend:
service:
name: spring-cloud-consumer
port:
number: 8080
9. spring-cloud-consumer-example application.yaml
server:
port: 8080
spring:
application:
name: spring-cloud-consumer
cloud:
nacos:
discovery:
server-addr: nacos-headless.dev.svc.cluster.local:8848
10. spring-cloud-consumer-example NacosConsumerApplication.java
@SpringBootApplication
@EnableDiscoveryClient
public class NacosConsumerApplication {
@LoadBalanced
@Bean
public RestTemplate restTemplate() {
return new RestTemplate();
}
public static void main(String[] args) {
SpringApplication.run(NacosConsumerApplication.class, args);
}
@RestController
public class TestController {
private final RestTemplate restTemplate;
@Autowired
public TestController(RestTemplate restTemplate) {this.restTemplate = restTemplate;}
@RequestMapping(value = "consumer/user/{id}", method = RequestMethod.GET)
public String echo(@PathVariable int id) {
return restTemplate.getForObject("http://spring-cloud-provider/user/" + id, String.class);
}
}
}
11. ProviderDockerfile
FROM openjdk:8-jre-alpine
COPY spring-cloud-provider-example-1.0.0.jar /spring-cloud-provider.jar
ENTRYPOINT ["java","-jar","/spring-cloud-provider.jar"]
12. ConsumerDockerfile
FROM openjdk:8-jre-alpine
COPY spring-cloud-consumer-example-1.0.0.jar /spring-cloud-consumer.jar
ENTRYPOINT ["java","-jar","/spring-cloud-consumer.jar"]
13. spring-cloud-provider.yaml
# deploy the pod with a Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: spring-cloud-provider
namespace: dev
spec:
selector:
matchLabels:
app: spring-cloud-provider
replicas: 1
template:
metadata:
labels:
app: spring-cloud-provider
spec:
containers:
- name: spring-cloud-provider
image: spring-cloud-provider:v0.0.1
ports:
- containerPort: 8070
env:
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: secret
key: password
---
# Service for the pod
apiVersion: v1
kind: Service
metadata:
name: spring-cloud-provider
namespace: dev
spec:
ports:
- port: 8070
protocol: TCP
targetPort: 8070
selector:
app: spring-cloud-provider
14. spring-cloud-consumer.yaml
# deploy the pod with a Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: spring-cloud-consumer
namespace: dev
spec:
selector:
matchLabels:
app: spring-cloud-consumer
replicas: 1
template:
metadata:
labels:
app: spring-cloud-consumer
spec:
containers:
- name: spring-cloud-consumer
image: spring-cloud-consumer:v0.0.1
ports:
- containerPort: 8080
---
# Service for the pod
apiVersion: v1
kind: Service
metadata:
name: spring-cloud-consumer
namespace: dev
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
app: spring-cloud-consumer