In past projects, message queues were usually implemented by integrating an open-source message middleware package. There are many middleware implementations to choose from; mainstream options include ActiveMQ, RocketMQ, RabbitMQ, and Kafka, each with its own strengths and weaknesses.
When designing the framework, we want the same approach we used for SMS sending and distributed storage: abstract a unified messaging interface that hides the underlying implementation. Business code then talks to this one interface, and switching to a different message middleware for a given business need is only a configuration change. Spring Cloud Stream already provides exactly this capability, so below we integrate it into the framework and test the messaging features.
The spring-cloud-stream website currently lists support for the following message middleware; we will use RabbitMQ and Apache Kafka for our integration tests:
- RabbitMQ
- Apache Kafka
- Kafka Streams
- Amazon Kinesis
- Google PubSub (partner maintained)
- Solace PubSub+ (partner maintained)
- Azure Event Hubs (partner maintained)
- AWS SQS (partner maintained)
- AWS SNS (partner maintained)
- Apache RocketMQ (partner maintained)
一腔彰、集成RabbitMQ并測試消息收發(fā)
RabbitMQ is implemented in Erlang, so a native installation requires the Erlang runtime and its dependencies. For a quick test we install a single-node RabbitMQ with Docker instead.
1. Pull the RabbitMQ Docker image; tags with the management suffix include the web management console.
docker pull rabbitmq:3.9.13-management
2霞篡、創(chuàng)建和啟動RabbitMQ容器
docker run -d \
-e RABBITMQ_DEFAULT_USER=admin \
-e RABBITMQ_DEFAULT_PASS=123456 \
--name rabbitmq \
-p 15672:15672 \
-p 5672:5672 \
-v `pwd`/bigdata:/var/lib/rabbitmq \
rabbitmq:3.9.13-management
3. Verify that RabbitMQ started
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ff1922cc6b73 rabbitmq:3.9.13-management "docker-entrypoint.s…" About a minute ago Up About a minute 4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, :::5672->5672/tcp, 15671/tcp, 15691-15692/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp, :::15672->15672/tcp rabbitmq
4端逼、訪問管理控制臺http://172.16.20.225:15672 朗兵,輸入設(shè)置的用戶名密碼 admin/123456登錄。如果管理臺不能訪問顶滩,可以嘗試使用一下命令啟動:
docker exec -it rabbitmq rabbitmq-plugins enable rabbitmq_management
5余掖、Nacos添加配置,我們以操作日志和API日志為示例礁鲁,說明自定義輸入和輸出通道進(jìn)行消息收發(fā)盐欺,operation-log為操作日志赁豆,api-log為API日志。注意冗美,官網(wǎng)有文檔說明:使用multiple RabbitMQ binders 時需要排除RabbitAutoConfiguration魔种,實(shí)際應(yīng)用過程中,如果不排除粉洼,也不直接配置RabbitMQ的連接节预,那么RabbitMQ健康檢查會默認(rèn)去連接127.0.0.1:5672,導(dǎo)致后臺一直報錯属韧。
spring:
  autoconfigure:
    # When using multiple RabbitMQ binders, RabbitAutoConfiguration must be excluded
    exclude:
      - org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration
  cloud:
    stream:
      binders:
        defaultRabbit:
          type: rabbit
          environment: # RabbitMQ connection settings
            spring:
              rabbitmq:
                host: 172.16.20.225
                username: admin
                password: 123456
                virtual-host: /
      bindings:
        output_operation_log:
          destination: operation-log # exchange name; the exchange type defaults to topic
          content-type: application/json
          binder: defaultRabbit
        output_api_log:
          destination: api-log # exchange name; the exchange type defaults to topic
          content-type: application/json
          binder: defaultRabbit
        input_operation_log:
          destination: operation-log
          content-type: application/json
          binder: defaultRabbit
          group: ${spring.application.name}
          consumer:
            concurrency: 2 # initial/minimum number of consumers, default 1
        input_api_log:
          destination: api-log
          content-type: application/json
          binder: defaultRabbit
          group: ${spring.application.name}
          consumer:
            concurrency: 2 # initial/minimum number of consumers, default 1
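The `group` property in the input bindings above is what turns multiple instances into competing consumers. The sketch below illustrates the semantics with invented in-memory classes (`Destination`, `subscribe`, `publish` are not Spring Cloud Stream's API): every group sees each message exactly once, while consumers inside the same group split the messages between them.

```java
import java.util.*;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative model of consumer-group delivery: broadcast across groups,
// round-robin within a group.
public class GroupSketch {
    static class Destination {
        // group name -> list of consumer inboxes in that group
        private final Map<String, List<List<String>>> groups = new LinkedHashMap<>();
        private final Map<String, AtomicInteger> rr = new HashMap<>();

        List<String> subscribe(String group) {
            List<String> inbox = new ArrayList<>();
            groups.computeIfAbsent(group, g -> new ArrayList<>()).add(inbox);
            rr.putIfAbsent(group, new AtomicInteger());
            return inbox;
        }

        void publish(String msg) {
            // each group receives the message once, delivered to exactly
            // one consumer within the group (round-robin)
            groups.forEach((g, consumers) -> {
                int i = rr.get(g).getAndIncrement() % consumers.size();
                consumers.get(i).add(msg);
            });
        }
    }

    public static void main(String[] args) {
        Destination operationLog = new Destination();
        List<String> svcA1 = operationLog.subscribe("service-a"); // concurrency: 2
        List<String> svcA2 = operationLog.subscribe("service-a");
        List<String> svcB = operationLog.subscribe("service-b");
        operationLog.publish("m1");
        operationLog.publish("m2");
        // service-a's two consumers split m1/m2; service-b receives both
        System.out.println(svcA1 + " " + svcA2 + " " + svcB);
    }
}
```

This is why `group: ${spring.application.name}` matters: without a group, every started instance gets its own anonymous subscription and processes every message.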
6挫剑、在gitegg-service-bigdata中添加spring-cloud-starter-stream-rabbit依賴去扣,這里注意,只需要在具體使用消息中間件的微服務(wù)上引入樊破,不需要統(tǒng)一引入,并不是每個微服務(wù)都會用到消息中間件唆铐,況且可能不同的微服務(wù)使用不同的消息中間件哲戚。
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
</dependency>
7、自定義日志輸出通道LogSink.java
/**
 * @author GitEgg
 */
public interface LogSink {

    // These values must match the input_* binding names configured above
    String INPUT_OPERATION_LOG = "input_operation_log";

    String INPUT_API_LOG = "input_api_log";

    /**
     * Custom input channel for operation logs
     * @return SubscribableChannel
     */
    @Input(INPUT_OPERATION_LOG)
    SubscribableChannel inputOperationLog();

    /**
     * Custom input channel for API logs
     * @return SubscribableChannel
     */
    @Input(INPUT_API_LOG)
    SubscribableChannel inputApiLog();
}
8艾岂、自定義日志輸入通道LogSource.java
/**
 * Custom Stream output channels
 * @author GitEgg
 */
public interface LogSource {

    // These values must match the output_* binding names configured above
    String OUTPUT_OPERATION_LOG = "output_operation_log";

    String OUTPUT_API_LOG = "output_api_log";

    /**
     * Custom output channel for operation logs
     * @return MessageChannel
     */
    @Output(OUTPUT_OPERATION_LOG)
    MessageChannel outputOperationLog();

    /**
     * Custom output channel for API logs
     * @return MessageChannel
     */
    @Output(OUTPUT_API_LOG)
    MessageChannel outputApiLog();
}
9顺少、實(shí)現(xiàn)日志推送接口的調(diào)用, @Scheduled(fixedRate = 3000)是為了測試推送消息王浴,每隔3秒執(zhí)行一次定時任務(wù)脆炎,注意:要使定時任務(wù)執(zhí)行,還需要在Application啟動類添加@EnableScheduling注解氓辣。
ILogSendService.java
/**
 * @author GitEgg
 */
public interface ILogSendService {

    /**
     * Send an operation log message
     */
    void sendOperationLog();

    /**
     * Send an API log message
     */
    void sendApiLog();
}
LogSendImpl.java
/**
 * @author GitEgg
 */
@EnableBinding(value = { LogSource.class })
@Slf4j
@Component
@RequiredArgsConstructor(onConstructor_ = @Autowired)
public class LogSendImpl implements ILogSendService {

    private final LogSource logSource;

    @Scheduled(fixedRate = 3000)
    @Override
    public void sendOperationLog() {
        log.info("sending operation log ------- start ------");
        logSource.outputOperationLog()
                .send(MessageBuilder.withPayload(UUID.randomUUID().toString()).build());
        log.info("sending operation log ------- end ------");
    }

    @Scheduled(fixedRate = 3000)
    @Override
    public void sendApiLog() {
        log.info("sending API log ------- start ------");
        logSource.outputApiLog()
                .send(MessageBuilder.withPayload(UUID.randomUUID().toString()).build());
        log.info("sending API log ------- end ------");
    }
}
10秒裕、實(shí)現(xiàn)日志消息接收接口
ILogReceiveService.java
/**
 * @author GitEgg
 */
public interface ILogReceiveService {

    /**
     * Receive an operation log message
     * @param msg the message
     */
    <T> void receiveOperationLog(GenericMessage<T> msg);

    /**
     * Receive an API log message
     * @param msg the message
     */
    <T> void receiveApiLog(GenericMessage<T> msg);
}
LogReceiveImpl.java
/**
 * @author GitEgg
 */
@Slf4j
@Component
@EnableBinding(value = { LogSink.class })
public class LogReceiveImpl implements ILogReceiveService {

    @StreamListener(LogSink.INPUT_OPERATION_LOG)
    @Override
    public synchronized <T> void receiveOperationLog(GenericMessage<T> msg) {
        log.info("received operation log: " + msg.getPayload());
    }

    @StreamListener(LogSink.INPUT_API_LOG)
    @Override
    public synchronized <T> void receiveApiLog(GenericMessage<T> msg) {
        log.info("received API log: " + msg.getPayload());
    }
}
11. Start the microservice; the logs show that messages are being pushed and received.
二几蜻、集成Kafka測試消息收發(fā)并測試消息中間件切換
One advantage of Spring Cloud Stream is that switching message middleware requires no code changes. Let's test that: add a Kafka binder to the Spring Cloud Stream configuration in Nacos alongside RabbitMQ, keep the API logs on RabbitMQ, move the operation logs to Kafka, and check that both run at the same time. The configuration test comes first for easy comparison; the Kafka cluster setup is described afterwards.
1蹬敲、Nacos添加Kafka配置暇昂,并且將operation_log的binder改為Kafka
spring:
  autoconfigure:
    # When using multiple RabbitMQ binders, RabbitAutoConfiguration must be excluded
    exclude:
      - org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration
  cloud:
    stream:
      binders:
        defaultRabbit:
          type: rabbit
          environment: # RabbitMQ connection settings
            spring:
              rabbitmq:
                host: 172.16.20.225
                username: admin
                password: 123456
                virtual-host: /
        kafka:
          type: kafka
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: 172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092
                      zkNodes: 172.16.20.220:2181,172.16.20.221:2181,172.16.20.222:2181
                      # create topics automatically
                      auto-create-topics: true
      bindings:
        output_operation_log:
          destination: operation-log # topic name on the Kafka binder
          content-type: application/json
          binder: kafka
        output_api_log:
          destination: api-log # exchange name; the exchange type defaults to topic
          content-type: application/json
          binder: defaultRabbit
        input_operation_log:
          destination: operation-log
          content-type: application/json
          binder: kafka
          group: ${spring.application.name}
          consumer:
            concurrency: 2 # initial/minimum number of consumers, default 1
        input_api_log:
          destination: api-log
          content-type: application/json
          binder: defaultRabbit
          group: ${spring.application.name}
          consumer:
            concurrency: 2 # initial/minimum number of consumers, default 1
2话浇、登錄Kafka服務(wù)器脏毯,切換到Kafka的bin目錄下啟動一個消費(fèi)operation-log主題的消費(fèi)者
./kafka-console-consumer.sh --bootstrap-server 172.16.20.221:9092 --topic operation-log
3. Start the microservice and verify that both the RabbitMQ and Kafka pipelines push and receive logs correctly
(Screenshot: the microservice logs show messages being pushed and received normally.)
(Screenshot: the Kafka server console consumer shows the operation-log messages arriving.)
三食店、Kafka集群搭建
1、環(huán)境準(zhǔn)備:
??首先準(zhǔn)備好三臺CentOS系統(tǒng)的主機(jī)赏寇,設(shè)置ip為:172.16.20.220吉嫩、172.16.20.221、172.16.20.222嗅定。
??Kafka會使用大量文件和網(wǎng)絡(luò)socket自娩,Linux默認(rèn)配置的File descriptors(文件描述符)不能夠滿足Kafka高吞吐量的要求,所以這里需要調(diào)整(更多性能優(yōu)化渠退,請查看Kafka官方文檔):
vi /etc/security/limits.conf
# 在最后加入忙迁,修改完成后,重啟系統(tǒng)生效碎乃。
* soft nofile 131072
* hard nofile 131072
??新建kafka的日志目錄和zookeeper數(shù)據(jù)目錄姊扔,因?yàn)檫@兩項(xiàng)默認(rèn)放在tmp目錄,而tmp目錄中內(nèi)容會隨重啟而丟失,所以我們自定義以下目錄:
mkdir /data/zookeeper
mkdir /data/zookeeper/data
mkdir /data/zookeeper/logs
mkdir /data/kafka
mkdir /data/kafka/data
mkdir /data/kafka/logs
2梅誓、zookeeper.properties配置
vi /usr/local/kafka/config/zookeeper.properties
Modify as follows:
# change to the custom zookeeper data directory
dataDir=/data/zookeeper/data
# change to the custom zookeeper log directory
dataLogDir=/data/zookeeper/logs
# port
clientPort=2181
# comment out
#maxClientCnxns=0
# cluster connection parameters, add the following
# zk's basic time unit, in milliseconds
tickTime=2000
# Leader-Follower initial connection time limit: tickTime*10
initLimit=10
# Leader-Follower sync time limit: tickTime*5
syncLimit=5
# broker service addresses; the entry for the local machine must use 0.0.0.0
server.1=0.0.0.0:2888:3888
server.2=172.16.20.221:2888:3888
server.3=172.16.20.222:2888:3888
3嵌言、在各臺服務(wù)器的zookeeper數(shù)據(jù)目錄/data/zookeeper/data添加myid文件,寫入服務(wù)broker.id屬性值
在data文件夾中新建myid文件及穗,myid文件的內(nèi)容為1(一句話創(chuàng)建:echo 1 > myid)
cd /data/zookeeper/data
vi myid
#添加內(nèi)容:1 其他兩臺主機(jī)分別配置 2和3
1
4摧茴、kafka配置,進(jìn)入config目錄下拥坛,修改server.properties文件
vi /usr/local/kafka/config/server.properties
# 每臺服務(wù)器的broker.id都不能相同
broker.id=1
# 是否可以刪除topic
delete.topic.enable=true
# topic 在當(dāng)前broker上的分片個數(shù)蓬蝶,與broker保持一致
num.partitions=3
# 每個主機(jī)地址不一樣:
listeners=PLAINTEXT://172.16.20.220:9092
advertised.listeners=PLAINTEXT://172.16.20.220:9092
# 具體一些參數(shù)
log.dirs=/data/kafka/kafka-logs
# 設(shè)置zookeeper集群地址與端口如下:
zookeeper.connect=172.16.20.220:2181,172.16.20.221:2181,172.16.20.222:2181
5. Start Kafka
Start ZooKeeper before Kafka; when shutting down, stop Kafka first, then ZooKeeper.
- ZooKeeper start command
./zookeeper-server-start.sh ../config/zookeeper.properties &
后臺運(yùn)行啟動命令:
nohup ./zookeeper-server-start.sh ../config/zookeeper.properties >/data/zookeeper/logs/zookeeper.log 2>1 &
或者
./zookeeper-server-start.sh -daemon ../config/zookeeper.properties &
查看集群狀態(tài):
./zookeeper-server-start.sh status ../config/zookeeper.properties
- Kafka start command
./kafka-server-start.sh ../config/server.properties &
后臺運(yùn)行啟動命令:
nohup bin/kafka-server-start.sh ../config/server.properties >/data/kafka/logs/kafka.log 2>1 &
或者
./kafka-server-start.sh -daemon ../config/server.properties &
- 創(chuàng)建topic,最新版本已經(jīng)不需要使用zookeeper參數(shù)創(chuàng)建。
./kafka-topics.sh --create --replication-factor 2 --partitions 1 --topic test --bootstrap-server 172.16.20.220:9092
參數(shù)解釋:
復(fù)制兩份
--replication-factor 2
創(chuàng)建1個分區(qū)
--partitions 1
topic 名稱
--topic test
- 查看已經(jīng)存在的topic(三臺設(shè)備都執(zhí)行時可以看到)
./kafka-topics.sh --list --bootstrap-server 172.16.20.220:9092
- 啟動生產(chǎn)者:
./kafka-console-producer.sh --broker-list 172.16.20.220:9092 --topic test
- 啟動消費(fèi)者:
./kafka-console-consumer.sh --bootstrap-server 172.16.20.221:9092 --topic test
./kafka-console-consumer.sh --bootstrap-server 172.16.20.222:9092 --topic test
添加參數(shù) --from-beginning 從開始位置消費(fèi)禾锤,不是從最新消息
./kafka-console-consumer.sh --bootstrap-server 172.16.20.221 --topic test --from-beginning
- 測試:在生產(chǎn)者輸入test空镜,可以在消費(fèi)者的兩臺服務(wù)器上看到同樣的字符test滥搭,說明Kafka服務(wù)器集群已搭建成功奶躯。
四树灶、完整的Nacos配置
spring:
  jackson:
    time-zone: Asia/Shanghai
    date-format: yyyy-MM-dd HH:mm:ss
  servlet:
    multipart:
      max-file-size: 2048MB
      max-request-size: 2048MB
  security:
    oauth2:
      resourceserver:
        jwt:
          jwk-set-uri: 'http://127.0.0.1/gitegg-oauth/oauth/public_key'
  autoconfigure:
    # exclude the default dynamic-datasource auto-configuration
    exclude:
      - com.alibaba.druid.spring.boot.autoconfigure.DruidDataSourceAutoConfigure
      - org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration
  datasource:
    druid:
      stat-view-servlet:
        enabled: true
        loginUsername: admin
        loginPassword: 123456
    dynamic:
      # default datasource (or datasource group); the default value is master
      primary: master
      # strict mode, default false. When enabled, an exception is thrown if no matching datasource is found; otherwise the default datasource is used.
      strict: false
      # enable the Seata proxy; when enabled every datasource is proxied by default, individual datasources can opt out
      seata: false
      # supports XA and AT modes, default AT
      seata-mode: AT
      druid:
        initialSize: 1
        minIdle: 3
        maxActive: 20
        # maximum wait time for acquiring a connection
        maxWait: 60000
        # interval between checks for idle connections to close, in milliseconds
        timeBetweenEvictionRunsMillis: 60000
        # minimum time a connection stays in the pool, in milliseconds
        minEvictableIdleTimeMillis: 30000
        validationQuery: select 'x'
        testWhileIdle: true
        testOnBorrow: false
        testOnReturn: false
        # enable PSCache and set its size per connection
        poolPreparedStatements: true
        maxPoolPreparedStatementPerConnectionSize: 20
        # monitoring filters; without them the monitoring UI cannot collect SQL stats, 'wall' is the SQL firewall
        filters: config,stat,slf4j
        # enable mergeSql via connectProperties; record slow SQL
        connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000;
        # merge monitoring data from multiple DruidDataSources
        useGlobalDataSourceStat: true
      datasource:
        master:
          url: jdbc:mysql://127.0.0.188/gitegg_cloud?zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&serverTimezone=Asia/Shanghai
          username: root
          password: root
  cloud:
    sentinel:
      filter:
        enabled: true
      transport:
        port: 8719
        dashboard: 127.0.0.188:8086
      eager: true
      datasource:
        ds2:
          nacos:
            data-type: json
            server-addr: 127.0.0.188:8848
            dataId: ${spring.application.name}-sentinel
            groupId: DEFAULT_GROUP
            rule-type: flow
    gateway:
      discovery:
        locator:
          enabled: true
      routes:
        - id: gitegg-oauth
          uri: lb://gitegg-oauth
          predicates:
            - Path=/gitegg-oauth/**
          filters:
            - StripPrefix=1
        - id: gitegg-service-system
          uri: lb://gitegg-service-system
          predicates:
            - Path=/gitegg-service-system/**
          filters:
            - StripPrefix=1
        - id: gitegg-service-extension
          uri: lb://gitegg-service-extension
          predicates:
            - Path=/gitegg-service-extension/**
          filters:
            - StripPrefix=1
        - id: gitegg-service-base
          uri: lb://gitegg-service-base
          predicates:
            - Path=/gitegg-service-base/**
          filters:
            - StripPrefix=1
        - id: gitegg-code-generator
          uri: lb://gitegg-code-generator
          predicates:
            - Path=/gitegg-code-generator/**
          filters:
            - StripPrefix=1
      plugin:
        config:
          # enable the gateway logging plugin
          enable: true
          # requestLog==true && responseLog==false: only request parameters are logged; responseLog==true: both request and response are logged
          # log request parameters; nothing is logged when requestLog==false
          requestLog: true
          # in production, prefer logging only requests: response payloads are large and usually not meaningful
          # log response parameters
          responseLog: true
          # all: everything; configure: intersection of serviceId and pathList; serviceId: only the serviceIdList entries; pathList: only the pathList entries
          logType: all
          serviceIdList:
            - "gitegg-oauth"
            - "gitegg-service-system"
          pathList:
            - "/gitegg-oauth/oauth/token"
            - "/gitegg-oauth/oauth/user/info"
    stream:
      binders:
        defaultRabbit:
          type: rabbit
          environment: # RabbitMQ connection settings
            spring:
              rabbitmq:
                host: 127.0.0.225
                username: admin
                password: 123456
                virtual-host: /
        kafka:
          type: kafka
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: 127.0.0.220:9092,127.0.0.221:9092,127.0.0.222:9092
                      zkNodes: 127.0.0.220:2181,127.0.0.221:2181,127.0.0.222:2181
                      # create topics automatically
                      auto-create-topics: true
      bindings:
        output_operation_log:
          destination: operation-log # topic name on the Kafka binder
          content-type: application/json
          binder: kafka
        output_api_log:
          destination: api-log # exchange name; the exchange type defaults to topic
          content-type: application/json
          binder: defaultRabbit
        input_operation_log:
          destination: operation-log
          content-type: application/json
          binder: kafka
          group: ${spring.application.name}
          consumer:
            concurrency: 2 # initial/minimum number of consumers, default 1
        input_api_log:
          destination: api-log
          content-type: application/json
          binder: defaultRabbit
          group: ${spring.application.name}
          consumer:
            concurrency: 2 # initial/minimum number of consumers, default 1
  redis:
    database: 1
    host: 127.0.0.188
    port: 6312
    password: 123456
    ssl: false
    timeout: 2000
  redisson:
    config: |
      singleServerConfig:
        idleConnectionTimeout: 10000
        connectTimeout: 10000
        timeout: 3000
        retryAttempts: 3
        retryInterval: 1500
        password: 123456
        subscriptionsPerConnection: 5
        clientName: null
        address: "redis://127.0.0.188:6312"
        subscriptionConnectionMinimumIdleSize: 1
        subscriptionConnectionPoolSize: 50
        connectionMinimumIdleSize: 32
        connectionPoolSize: 64
        database: 0
        dnsMonitoringInterval: 5000
      threads: 0
      nettyThreads: 0
      codec: !<org.redisson.codec.JsonJacksonCodec> {}
      "transportMode": "NIO"
#業(yè)務(wù)系統(tǒng)相關(guān)初始化參數(shù)
system:
#登錄密碼默認(rèn)最大嘗試次數(shù)
maxTryTimes: 5
#不需要驗(yàn)證碼登錄的最大次數(shù)
maxNonCaptchaTimes: 2
#注冊用戶默認(rèn)密碼
defaultPwd: 12345678
#注冊用戶默認(rèn)角色I(xiàn)D
defaultRoleId: 4
#注冊用戶默認(rèn)組織機(jī)構(gòu)ID
defaultOrgId: 79
#不需要數(shù)據(jù)權(quán)限過濾的角色key
noDataFilterRole: DATA_NO_FILTER
#AccessToken過期時間(秒)默認(rèn)為2小時
accessTokenExpiration: 60
#RefreshToken過期時間(秒)默認(rèn)為24小時
refreshTokenExpiration: 300
logging:
  config: http://${spring.cloud.nacos.discovery.server-addr}/nacos/v1/cs/configs?dataId=log4j2.xml&group=${spring.nacos.config.group}
  file:
    # log path, includes spring.application.name; on Linux: /var/log/${spring.application.name}
    path: D:\\log4j2_nacos\\${spring.application.name}
feign:
  hystrix:
    enabled: false
  compression:
    # GZIP-compress responses
    response:
      enabled: true
    # GZIP-compress requests
    request:
      enabled: true
      # mime types eligible for compression
      mime-types: text/xml,application/xml,application/json
      # minimum payload size for compression, default 2048
      min-request-size: 2048
  client:
    config:
      default:
        connectTimeout: 8000
        readTimeout: 8000
        loggerLevel: FULL
# Ribbon configuration
ribbon:
  # connection timeout
  ConnectTimeout: 50000
  # request processing/response timeout
  ReadTimeout: 50000
  # Retry all requests. Dangerous unless the operations are idempotent, so keep this false.
  OkToRetryOnAllOperations: false
  # number of retries against other instances
  MaxAutoRetriesNextServer: 5
  # number of retries against the current instance
  MaxAutoRetries: 5
  # load-balancing strategy
  NFLoadBalancerRuleClassName: com.alibaba.cloud.nacos.ribbon.NacosRule
# Sentinel endpoint configuration
management:
  endpoints:
    web:
      exposure:
        include: '*'
mybatis-plus:
  mapper-locations: classpath*:/com/gitegg/*/*/mapper/*Mapper.xml
  typeAliasesPackage: com.gitegg.*.*.entity
  global-config:
    # primary key type: 0 database auto-increment, 1 user input, 2 globally unique numeric ID, 3 globally unique UUID
    id-type: 2
    # field strategy: 0 no check, 1 non-NULL check, 2 non-empty check
    field-strategy: 2
    # camel-case/underscore conversion
    db-column-underline: true
    # refresh mappers, handy for debugging
    refresh-mapper: true
    # uppercase-with-underscore column mode
    #capital-mode: true
    # logical delete configuration
    logic-delete-value: 1
    logic-not-delete-value: 0
  configuration:
    map-underscore-to-camel-case: true
    cache-enabled: false
    log-impl: org.apache.ibatis.logging.stdout.StdOutImpl
# multi-tenancy configuration
tenant:
  # whether tenant mode is enabled
  enable: true
  # tables excluded from multi-tenancy
  exclusionTable:
    - "t_sys_district"
    - "t_sys_tenant"
    - "t_sys_role"
    - "t_sys_resource"
    - "t_sys_role_resource"
    - "oauth_client_details"
  # tenant column name
  column: tenant_id
# data permissions
data-permission:
  # annotation mode, disabled by default since it affects performance
  annotation-enable: true
seata:
  enabled: false
  application-id: ${spring.application.name}
  tx-service-group: gitegg_seata_tx_group
  # must be false
  enable-auto-data-source-proxy: false
  service:
    vgroup-mapping:
      # the key must match the gitegg_seata_tx_group value above
      gitegg_seata_tx_group: default
  config:
    type: nacos
    nacos:
      namespace:
      serverAddr: 127.0.0.188:8848
      group: SEATA_GROUP
      userName: "nacos"
      password: "nacos"
  registry:
    type: nacos
    nacos:
      # application name of the Seata server (TC) in Nacos
      application: seata-server
      server-addr: 127.0.0.188:8848
      namespace:
      userName: "nacos"
      password: "nacos"
#驗(yàn)證碼配置
captcha:
#驗(yàn)證碼的類型 sliding: 滑動驗(yàn)證碼 image: 圖片驗(yàn)證碼
type: sliding
aj:
captcha:
#緩存local/redis...
cache-type: redis
#local緩存的閾值,達(dá)到這個值,清除緩存
#cache-number=1000
#local定時清除過期緩存(單位秒),設(shè)置為0代表不執(zhí)行
#timing-clear=180
#驗(yàn)證碼類型default兩種都實(shí)例化利凑。
type: default
#漢字統(tǒng)一使用Unicode,保證程序通過@value讀取到是中文浆劲,在線轉(zhuǎn)換 https://tool.chinaz.com/tools/unicode.aspx 中文轉(zhuǎn)Unicode
#右下角水印文字(我的水印)
water-mark: GitEgg
#右下角水印字體(宋體)
water-font: 宋體
#點(diǎn)選文字驗(yàn)證碼的文字字體(宋體)
font-type: 宋體
#校驗(yàn)滑動拼圖允許誤差偏移量(默認(rèn)5像素)
slip-offset: 5
#aes加密坐標(biāo)開啟或者禁用(true|false)
aes-status: true
#滑動干擾項(xiàng)(0/1/2) 1.2.2版本新增
interference-options: 2
# 接口請求次數(shù)一分鐘限制是否開啟 true|false
req-frequency-limit-enable: true
# 驗(yàn)證失敗5次,get接口鎖定
req-get-lock-limit: 5
# 驗(yàn)證失敗后哀澈,鎖定時間間隔,s
req-get-lock-seconds: 360
# get接口一分鐘內(nèi)請求數(shù)限制
req-get-minute-limit: 30
# check接口一分鐘內(nèi)請求數(shù)限制
req-check-minute-limit: 60
# verify接口一分鐘內(nèi)請求數(shù)限制
req-verify-minute-limit: 60
# common SMS configuration
sms:
  # regular expression for phone numbers; empty disables validation
  reg:
  # load balancer type; options: Random, RoundRobin, WeightRandom, WeightRoundRobin
  load-balancer-type: Random
  web:
    # enable the web endpoints
    enable: true
    # access path prefix
    base-path: /commons/sms
  verification-code:
    # verification code length
    code-length: 6
    # if true, delete the code after a failed verification
    delete-by-verify-fail: false
    # if true, delete the code after a successful verification
    delete-by-verify-succeed: true
    # retry interval in seconds
    retry-interval-time: 60
    # code validity in seconds
    expiration-time: 180
    # identification code length
    identification-code-length: 3
    # whether to use the identification code
    use-identification-code: false
  redis:
    # key prefix used when the verification-code service stores codes in redis
    key-prefix: VerificationCode
# 網(wǎng)關(guān)放行設(shè)置 1现柠、whiteUrls不需要鑒權(quán)的公共url院领,白名單,配置白名單路徑 2够吩、authUrls需要鑒權(quán)的公共url
oauth-list:
staticFiles:
- "/doc.html"
- "/webjars/**"
- "/favicon.ico"
- "/swagger-resources/**"
whiteUrls:
- "/*/v2/api-docs"
- "/gitegg-oauth/login/phone"
- "/gitegg-oauth/login/qr"
- "/gitegg-oauth/oauth/token"
- "/gitegg-oauth/oauth/public_key"
- "/gitegg-oauth/oauth/captcha/type"
- "/gitegg-oauth/oauth/captcha"
- "/gitegg-oauth/oauth/captcha/check"
- "/gitegg-oauth/oauth/captcha/image"
- "/gitegg-oauth/oauth/sms/captcha/send"
- "/gitegg-service-base/dict/list/{dictCode}"
authUrls:
- "/gitegg-oauth/oauth/logout"
- "/gitegg-oauth/oauth/user/info"
- "/gitegg-service-extension/extension/upload/file"
- "/gitegg-service-extension/extension/dfs/query/default"
GitEgg-Cloud is an enterprise-grade microservice application development framework built on Spring Cloud. Source code:
Gitee: https://gitee.com/wmz1930/GitEgg
GitHub: https://github.com/wmz1930/GitEgg
If you find it interesting, a Star is appreciated.