Environment
Choose CentOS 7 as the operating system when purchasing (any 7.x release works). If you pick the wrong image, you can change it later in the Alibaba Cloud console under More > Disks and Images > Change Operating System.
Buy an ECS instance on Alibaba Cloud. Network setup after purchase:
Public IP: 8.134.80.143, private IP: 172.30.40.95
Configure the Alibaba Cloud port rules:
Open three ports:
50070: HDFS web UI
8088: YARN web UI
60010: HBase web UI
Console path: Security Groups > Configure Rules
Click Add Rule manually and open ports 8088, 50070 and 60010.
Installation
There is quite a lot to install and configure. The order is: install ZooKeeper, then Hadoop, then HBase, then connect from Spring Boot.
Install ZooKeeper. Do not pick the latest ZooKeeper release; pick one that is compatible with both Hadoop and HBase. 3.4.9 works here.
Install Java
yum -y install java-1.8.0-openjdk
Configure the Java environment variables
Append the following lines to /etc/profile, then reload it:
export JAVA_HOME=/usr/lib/jvm/jre
export JRE_HOME=/usr/lib/jvm/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
source /etc/profile
Download ZooKeeper
wget http://archive.apache.org/dist/zookeeper/zookeeper-3.4.9/zookeeper-3.4.9.tar.gz
Unpack it:
tar -xzvf zookeeper-3.4.9.tar.gz
Configure environment variables
vim /etc/profile
Add:
export ZOOKEEPER_HOME=/root/zookeeper-3.4.9
export PATH=$ZOOKEEPER_HOME/bin:$PATH
Reload the environment:
source /etc/profile
Copy the sample configuration:
cp /root/zookeeper-3.4.9/conf/zoo_sample.cfg /root/zookeeper-3.4.9/conf/zoo.cfg
Create the working directories:
mkdir -p /root/zookeeper-3.4.9/run/data
mkdir -p /root/zookeeper-3.4.9/run/log
Edit the configuration file
vim /root/zookeeper-3.4.9/conf/zoo.cfg
Change these two settings (add them if missing):
dataDir=/root/zookeeper-3.4.9/run/data
dataLogDir=/root/zookeeper-3.4.9/run/log
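For reference, after these two changes a complete minimal zoo.cfg might look like the fragment below; the remaining keys are the zoo_sample.cfg defaults.

```properties
# basic time unit in milliseconds
tickTime=2000
# ticks the followers may take to connect/sync with the leader
initLimit=10
syncLimit=5
# port clients (and later HBase) connect to
clientPort=2181
# snapshot and transaction-log directories created above
dataDir=/root/zookeeper-3.4.9/run/data
dataLogDir=/root/zookeeper-3.4.9/run/log
```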
Start ZooKeeper
zkServer.sh start
ZooKeeper installation is complete.
Install Hadoop
Hadoop includes HDFS (distributed storage), YARN (resource scheduling) and MapReduce (computation).
Hadoop and HBase versions must be chosen together; Hadoop 3.1.4 and HBase 2.3.3 are compatible.
Download Hadoop
wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-3.1.4/hadoop-3.1.4.tar.gz
Unpack:
tar -zxvf hadoop-3.1.4.tar.gz
Configure environment variables
vim /etc/profile
Add two lines:
export HADOOP_HOME=/root/hadoop-3.1.4
export PATH=${HADOOP_HOME}/bin:$PATH
Reload the environment:
source /etc/profile
Edit the Hadoop configuration
vim /root/hadoop-3.1.4/etc/hadoop/hadoop-env.sh
Set JAVA_HOME:
export JAVA_HOME=/usr/lib/jvm/jre
Create the data directories:
mkdir /root/hadoop-3.1.4/run
mkdir /root/hadoop-3.1.4/run/hadoop
Edit the hosts file
vi /etc/hosts
Add one line (172.30.40.95 is the server's private IP):
172.30.40.95 hadoop1
Edit core-site.xml
vim /root/hadoop-3.1.4/etc/hadoop/core-site.xml
Set the HDFS defaults (add any property that is missing):
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop1:8020</value>
</property>
<property>
<!-- directory where Hadoop stores temporary files -->
<name>hadoop.tmp.dir</name>
<value>/root/hadoop-3.1.4/run/hadoop</value>
</property>
<property>
<name>hadoop.native.lib</name>
<value>false</value>
</property>
</configuration>
Edit hdfs-site.xml
vim /root/hadoop-3.1.4/etc/hadoop/hdfs-site.xml
Add the replication setting; 1 is enough for a single node (172.30.40.95 is the server's private IP):
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>172.30.40.95:50070</value>
</property>
</configuration>
Edit mapred-site.xml
vim /root/hadoop-3.1.4/etc/hadoop/mapred-site.xml
Content:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
Edit yarn-site.xml
vim /root/hadoop-3.1.4/etc/hadoop/yarn-site.xml
Content:
<configuration>
<property>
<!-- auxiliary service run by the NodeManager; must be mapreduce_shuffle for MapReduce jobs to run on YARN -->
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
Set up passwordless SSH to the local host
Run as root, in the /root directory:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
Format HDFS
/root/hadoop-3.1.4/bin/hdfs namenode -format
Edit the HDFS start script:
vim /root/hadoop-3.1.4/sbin/start-dfs.sh
Add four lines at the top:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
Edit the HDFS stop script:
vim /root/hadoop-3.1.4/sbin/stop-dfs.sh
Add the same four lines at the top:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
Edit the YARN start script:
vim /root/hadoop-3.1.4/sbin/start-yarn.sh
Add three lines at the top:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
Edit the YARN stop script:
vim /root/hadoop-3.1.4/sbin/stop-yarn.sh
Add the same three lines at the top:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
Start HDFS
export JAVA_HOME=/usr/lib/jvm/jre
(to stop: /root/hadoop-3.1.4/sbin/stop-dfs.sh)
/root/hadoop-3.1.4/sbin/start-dfs.sh
Verify in a browser
URL: http://8.134.80.143:50070/ (with the hdfs-site.xml above, port 50070 serves the secondary NameNode page; note that in Hadoop 3 the NameNode's own web UI defaults to port 9870)
Start YARN
(to stop: /root/hadoop-3.1.4/sbin/stop-yarn.sh)
/root/hadoop-3.1.4/sbin/start-yarn.sh
Verify in a browser
URL: http://8.134.80.143:8088/
Hadoop installation is complete.
Appendix: Hadoop and HBase version compatibility
Install HBase
HBase depends on ZooKeeper at runtime, which was installed earlier.
Download HBase
wget http://mirror.bit.edu.cn/apache/hbase/2.3.3/hbase-2.3.3-bin.tar.gz
Unpack:
tar -zxvf hbase-2.3.3-bin.tar.gz
Configure environment variables
vim /etc/profile
Add:
export HBASE_HOME=/root/hbase-2.3.3
export PATH=$HBASE_HOME/bin:$PATH
Reload the environment:
source /etc/profile
Edit the HBase configuration
vim /root/hbase-2.3.3/conf/hbase-env.sh
Make two changes: set JAVA_HOME and disable the ZooKeeper instance bundled with HBase:
export JAVA_HOME=/usr/lib/jvm/jre
export HBASE_MANAGES_ZK=false
vim /root/hbase-2.3.3/conf/hbase-site.xml
Set its content to:
<configuration>
<!-- run HBase in distributed mode -->
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<!-- HBase storage path on HDFS; the hbase directory does not need to be created in advance, it is created automatically -->
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop1:8020/hbase</value>
</property>
<!-- ZooKeeper data directory -->
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/root/zookeeper-3.4.9/run/data</value>
</property>
<!-- HBase web UI port -->
<property>
<name>hbase.master.info.port</name>
<value>60010</value>
</property>
<!-- external ZooKeeper quorum -->
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop1:2181</value>
</property>
</configuration>
Edit the regionservers file
vim /root/hbase-2.3.3/conf/regionservers
Replace its content with: hadoop1
Start HBase
(to stop: /root/hbase-2.3.3/bin/stop-hbase.sh)
/root/hbase-2.3.3/bin/start-hbase.sh
Test HBase
Open in a browser: http://8.134.80.143:60010/
HBase installation is complete.
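Beyond the web UI, you can also sanity-check HBase from its interactive shell (run `hbase shell` on the server). The table and values below are illustrative only:

```
create 'smoke_test', 'info'
put 'smoke_test', 'row1', 'info:msg', 'hello'
scan 'smoke_test'
get 'smoke_test', 'row1'
disable 'smoke_test'
drop 'smoke_test'
```

If `scan` returns the row you just put, the master, region server and ZooKeeper are all wired up correctly.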
Spring Boot integration
Dependencies
Add to the pom file:
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>1.18.10</version>
</dependency>
<!--hbase -->
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>2.3.3</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-server</artifactId>
<version>2.3.3</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-common</artifactId>
<version>2.3.3</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-mapreduce</artifactId>
<version>2.3.3</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-annotations -->
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-annotations</artifactId>
<version>2.3.3</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.16</version>
</dependency>
Configure the HBase connection parameters in application.yaml
Content (8.134.80.143 is the public IP of the ECS instance):
hbase:
master: 8.134.80.143:60010
zookeeper:
quorum: 8.134.80.143
property:
clientPort: 2181
defaults:
for:
version:
skip: true
Create the HBase configuration, service and controller classes
Connection configuration class, HbaseConfiguration:
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;
import java.io.IOException;
import java.util.function.Supplier;
@Configuration
public class HbaseConfiguration {
@Value("${hbase.defaults.for.version.skip}")
private String skip;
@Value("${hbase.zookeeper.property.clientPort}")
private String clientPort;
@Value("${hbase.zookeeper.quorum}")
private String quorum;
@Value("${hbase.master}")
private String master;
@Bean
public org.apache.hadoop.conf.Configuration config() {
//Startup prints java.io.FileNotFoundException: java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset.
//This is harmless: on Windows the client looks for a local Hadoop installation (winutils.exe); a Linux server never takes that code path, so the warning can be ignored.
org.apache.hadoop.conf.Configuration config = HBaseConfiguration.create();
config.set("hbase.defaults.for.version.skip", skip);
config.set("hbase.zookeeper.property.clientPort", clientPort);
config.set("hbase.zookeeper.quorum", quorum);
config.set("hbase.master", master);
return config;
}
@Bean
public Supplier<Connection> hbaseConnSupplier() {
return () -> {
try {
return connection();
} catch (IOException e) {
throw new RuntimeException(e);
}
};
}
@Bean
@Scope(value = "prototype")
public Connection connection() throws IOException {
return ConnectionFactory.createConnection(config());
}
}
Business service class, HbaseService:
import lombok.extern.slf4j.Slf4j;
import org.apache.hadoop.hbase.*;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.*;
@Slf4j
@Service
public class HbaseService {
@Autowired
private Connection hbaseConnection;
private String tableName = "tb_order";
/**
* Create the table
*/
public String createTable() throws IOException {
//get an Admin object, used to create and delete tables
Admin admin = hbaseConnection.getAdmin();
//build the table descriptor
HTableDescriptor hTableDescriptor = new HTableDescriptor(TableName.valueOf(tableName));
//column families are declared with HColumnDescriptor objects
HColumnDescriptor order = new HColumnDescriptor("order");
HColumnDescriptor orther = new HColumnDescriptor("orther");
//add the column families to the table descriptor
hTableDescriptor.addFamily(order);
hTableDescriptor.addFamily(orther);
//create the table through the admin
admin.createTable(hTableDescriptor);
//close the admin
admin.close();
//hbaseConnection.close();
return tableName;
}
/**
* Create an order
*/
public String createOrder(String code,String address,String state,String createName) throws IOException {
long r = (long) ((Math.random() * 9 + 1) * 100000000);
String rowKey =new SimpleDateFormat("yyyyMMddHHmmssSSS").format(new Date(System.currentTimeMillis()))+r;
String orderId =new SimpleDateFormat("yyyyMMddHHmmssSSS").format(new Date(System.currentTimeMillis()));
try (Table table = hbaseConnection.getTable(TableName.valueOf(tableName))) {//get the table
Put put = new Put(Bytes.toBytes(rowKey));
put.addColumn(Bytes.toBytes("order"), Bytes.toBytes("id"), Bytes.toBytes(orderId));
put.addColumn(Bytes.toBytes("order"), Bytes.toBytes("code"), Bytes.toBytes(code));
put.addColumn(Bytes.toBytes("order"), Bytes.toBytes("address"), Bytes.toBytes(address));
put.addColumn(Bytes.toBytes("order"), Bytes.toBytes("state"), Bytes.toBytes(state));
put.addColumn(Bytes.toBytes("orther"), Bytes.toBytes("create_name"), Bytes.toBytes(createName));
//insert the row
table.put(put);
}
//hbaseConnection.close();
return rowKey;
}
/**
* Look up a row by rowkey
*/
public Map findDataByRowKey(String rowKey) throws IOException {
Map rtn = new HashMap();
//build the Get for the rowkey
Get get = new Get(Bytes.toBytes(rowKey));
//the Result holds every returned cell
try (Table table = hbaseConnection.getTable(TableName.valueOf(tableName))) {
get.addColumn("order".getBytes(),"id".getBytes());
get.addColumn("order".getBytes(),"code".getBytes());
get.addColumn("order".getBytes(),"address".getBytes());
get.addColumn("order".getBytes(),"state".getBytes());
get.addColumn("orther".getBytes(),"create_name".getBytes());
Result result = table.get(get);
Map<byte[], byte[]> familyMap = result.getFamilyMap(Bytes.toBytes("order"));
String id = Bytes.toString(familyMap.get(Bytes.toBytes("id")));
String code = Bytes.toString(familyMap.get(Bytes.toBytes("code")));
String address = Bytes.toString(familyMap.get(Bytes.toBytes("address")));
String state = Bytes.toString(familyMap.get(Bytes.toBytes("state")));
Map<byte[], byte[]> ortherfamilyMap = result.getFamilyMap(Bytes.toBytes("orther"));
String createName = Bytes.toString(ortherfamilyMap.get(Bytes.toBytes("create_name")));
rtn.put("id",id);
rtn.put("code",code);
rtn.put("address",address);
rtn.put("state",state);
rtn.put("createName",createName);
}
return rtn;
}
/** Query rows whose rowkeys fall within today's time range **/
public List getRangeRowKey() throws IOException {
ArrayList rtnList = new ArrayList();
//build a Scan over today's rowkey range; without start/stop rows this would be a full table scan
String beginRow =new SimpleDateFormat("yyyyMMdd").format(new Date(System.currentTimeMillis()))+"000000000"+"00000000";
String endRow =new SimpleDateFormat("yyyyMMdd").format(new Date(System.currentTimeMillis()))+"999999999"+"00000000";
Scan scan = new Scan(beginRow.getBytes(),endRow.getBytes());
try (Table table = hbaseConnection.getTable(TableName.valueOf(tableName))) {
//the scanner yields one Result per matching row
ResultScanner scanner = table.getScanner(scan);
//iterate over the results, one row at a time
for (Result result : scanner) {
Map<byte[], byte[]> familyMap = result.getFamilyMap(Bytes.toBytes("order"));
String id = Bytes.toString(familyMap.get(Bytes.toBytes("id")));
String code = Bytes.toString(familyMap.get(Bytes.toBytes("code")));
String address = Bytes.toString(familyMap.get(Bytes.toBytes("address")));
String state = Bytes.toString(familyMap.get(Bytes.toBytes("state")));
Map<byte[], byte[]> ortherfamilyMap = result.getFamilyMap(Bytes.toBytes("orther"));
String createName = Bytes.toString(ortherfamilyMap.get(Bytes.toBytes("create_name")));
Map rtn = new HashMap();
rtn.put("id",id);
rtn.put("code",code);
rtn.put("address",address);
rtn.put("state",state);
rtn.put("createName",createName);
rtnList.add(rtn);
}
}
return rtnList;
}
}
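The rowkey design above is what makes getRangeRowKey work: each rowkey is a 17-character millisecond timestamp (yyyyMMddHHmmssSSS) plus a 9-digit random suffix, and HBase sorts rowkeys lexicographically, so one day's rows all fall between a zero-padded and a nine-padded day prefix. A dependency-free sketch of that scheme (class and method names here are illustrative, not part of the project):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class RowKeySketch {
    // rowkey = 17-char timestamp + 9-digit random suffix (26 chars total)
    static String newRowKey() {
        long r = (long) ((Math.random() * 9 + 1) * 100000000); // 100000000..999999999
        String ts = new SimpleDateFormat("yyyyMMddHHmmssSSS").format(new Date());
        return ts + r;
    }

    // scan lower bound for "today": day prefix padded with zeros
    static String beginRow() {
        return new SimpleDateFormat("yyyyMMdd").format(new Date()) + "000000000" + "00000000";
    }

    // scan upper bound for "today": day prefix padded with nines
    static String endRow() {
        return new SimpleDateFormat("yyyyMMdd").format(new Date()) + "999999999" + "00000000";
    }

    public static void main(String[] args) {
        String rk = newRowKey();
        // a key generated today sorts strictly inside [beginRow, endRow)
        System.out.println(beginRow().compareTo(rk) < 0 && rk.compareTo(endRow()) < 0); // prints true
    }
}
```

Because the timestamp comes first, rows from the same time period are stored near each other; the random suffix only prevents collisions when two orders land on the same millisecond.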
HBase controller test class, HbaseController:
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import java.io.IOException;
import java.util.List;
import java.util.Map;
@Slf4j
@RestController
public class HbaseController {
@Autowired
private HbaseService hbaseService;
@RequestMapping("hbaseCreate")
public String create(){
String rtnName = "";
try {
rtnName = hbaseService.createTable();
} catch (IOException e) {
e.printStackTrace();
}
return "hbase table created: "+rtnName;
}
@RequestMapping("hbaseCreateOrder")
public String createOrder(String code,String address,String state,String createName){
String rtnName = "";
try {
rtnName = hbaseService.createOrder(code,address,state,createName);
} catch (IOException e) {
e.printStackTrace();
}
return "hbase order created: "+rtnName;
}
@RequestMapping("hbaseGet")
public Map get(String key){
Map info = null;
try {
info = hbaseService.findDataByRowKey(key);
} catch (IOException e) {
e.printStackTrace();
}
return info;
}
@RequestMapping("hbaseGetRang")
public List getRang(){
List info = null;
try {
info = hbaseService.getRangeRowKey();
} catch (IOException e) {
e.printStackTrace();
}
return info;
}
}
Testing
Before testing locally, map the Alibaba Cloud server's hostname to its public IP in the local hosts file.
On Windows the hosts file is at C:\Windows\System32\drivers\etc\hosts. Add:
8.134.80.143 iZ7xvd5tarkby9hshhbm37Z
Note: iZ7xvd5tarkby9hshhbm37Z is the ECS hostname (the auto-generated name was never changed here, hence the length). You can look it up on the server with:
hostname
Start the project
Connection logs are printed at startup; a failed connection shows an exception.
Note:
On Windows, IDEA prints "java.io.FileNotFoundException: java.io.FileNotFoundException: HADOOP_HOME" because no local Hadoop client is configured. None is needed: the code is only looking for winutils.exe, the message is just a warning, and a Linux server never takes that code path, so it can be ignored.
Run the tests
Create the order table
Open in a browser: http://localhost:8080/hbaseCreate
Create an order
Open in a browser: http://localhost:8080/hbaseCreateOrder?code=202101142025&address=cs0103&state=1&createName=georgekaren3
Query an order by rowkey
Open in a browser: http://localhost:8080/hbaseGet?key=20210114203036869321791402
Query orders in a time range
Open in a browser: http://localhost:8080/hbaseGetRang
All tests pass; deployment and integration are complete.