Hive Deployment
https://cwiki.apache.org/confluence/display/Hive/Home#Home-UserDocumentation
https://cwiki.apache.org/confluence/display/Hive/GettingStarted
Apache Hive
Apache Hive™ is data warehouse software that facilitates reading, writing, and managing large datasets residing in distributed storage, and provides SQL-based querying.
Built on top of Apache Hadoop™, Hive provides the following features:
- Tools to enable easy access to data via SQL, thus supporting data warehousing tasks such as ETL, reporting, and data analysis.
- A mechanism to impose structure on a variety of data formats.
- Direct access to files stored in Apache HDFS™ or in other data storage systems such as Apache HBase™.
- Query execution via Apache Tez™, Apache Spark™, or MapReduce.
- The HPL-SQL procedural language.
- Sub-second query retrieval via Hive LLAP, Apache YARN, and Apache Slider.
Hive provides standard SQL functionality, including many of the later SQL:2003, SQL:2011, and SQL:2016 features for analytics.
Hive's SQL can also be extended with user code via user-defined functions (UDFs), user-defined aggregates (UDAFs), and user-defined table functions (UDTFs).
There is no single "Hive format" in which data must be stored. Hive supports comma- and tab-separated (CSV/TSV) text files, Apache Parquet™, Apache ORC™, and other formats. Users can extend Hive to support additional formats; see File Formats and Hive SerDe in the Developer Guide.
Hive is not designed for online transaction processing (OLTP) workloads. It is best suited for traditional data warehousing tasks (offline analytics).
Hive is designed to maximize scalability (scale out by dynamically adding machines to the Hadoop cluster), performance, extensibility, fault tolerance, and loose coupling with its input formats.
Components of Hive include HCatalog and WebHCat.
- HCatalog is a table and storage management layer for Hadoop that enables users with different data processing tools (such as Pig and MapReduce) to more easily read and write data on the grid.
- WebHCat provides a service for running Hadoop MapReduce (or YARN), Pig, and Hive jobs. It also exposes Hive metadata operations through an HTTP (REST-style) interface.
Installation and Configuration
Requirements
- Java 1.7
注意: Hive versions 1.2 onward require Java 1.7 or newer. Hive versions 0.14 to 1.1 also work with Java 1.6. Users are strongly advised to start moving to Java 1.8 (see HIVE-8607; releases are built with Java 7 and tested with Java 8).
- Hadoop 2.x (preferred), 1.x (not supported by Hive 2.0.0 and later).
Hive versions up to 0.13 also supported Hadoop 0.20.x and 0.23.x.
- Hive is commonly used in production on Linux and Windows environments. Mac is a commonly used development environment. This document applies to Linux and Mac; the configuration steps on Windows are slightly different.
Installing Hive from a Stable Release
- Unpack the release
[root@hadoop opt]# mv ~/apache-hive-3.1.2-bin.tar.gz /opt
[root@hadoop opt]# tar xzvf apache-hive-3.1.2-bin.tar.gz
[root@hadoop opt]# useradd hive
[root@hadoop opt]# chown -R hive:hive apache-hive-3.1.2-bin
- Set the environment variable HIVE_HOME to point to the installation directory
- Add $HIVE_HOME/bin to PATH
JAVA_HOME=/usr/local/jdk/jdk1.8.0_202
HADOOP_HOME=/opt/hadoop-3.1.4
HIVE_HOME=/opt/apache-hive-3.1.2-bin
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME HADOOP_HOME HIVE_HOME
export PATH
export CLASSPATH
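These exports are typically appended to a login profile (e.g. the hive user's ~/.bash_profile; the exact file is an assumption) so they take effect on the next login. A minimal sketch of how the PATH composes, using the install paths from this guide:

```shell
# Same install paths as above; the bin/ subdirectories are where the
# java/hadoop/hive executables actually live.
JAVA_HOME=/usr/local/jdk/jdk1.8.0_202
HADOOP_HOME=/opt/hadoop-3.1.4
HIVE_HOME=/opt/apache-hive-3.1.2-bin
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin
# Print the last PATH entry, which should be Hive's bin directory:
echo "$PATH" | tr ':' '\n' | tail -n 1
```

After sourcing the profile, `hive --version` is a quick way to confirm the command resolves.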
Running Hive
Hive uses Hadoop, so:
- you must have Hadoop on your path, or
- export HADOOP_HOME=<hadoop-install-dir>
In addition, you must use the following HDFS commands to create the /tmp
and /user/hive/warehouse
(aka hive.metastore.warehouse.dir) directories and set them chmod g+w before you can create a table in Hive.
[hadoop@hadoop hadoop-3.1.4]$ $HADOOP_HOME/bin/hadoop fs -mkdir /tmp
[hadoop@hadoop hadoop-3.1.4]$ $HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/
[hadoop@hadoop hadoop-3.1.4]$ $HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse
[hadoop@hadoop hadoop-3.1.4]$ $HADOOP_HOME/bin/hadoop fs -chmod 777 /tmp
[hadoop@hadoop hadoop-3.1.4]$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse
[hadoop@hadoop hadoop-3.1.4]$ bin/hdfs dfs -ls /
Found 2 items
drwxrwxrwx - hadoop supergroup 0 2020-10-22 18:41 /tmp
drwxr-xr-x - hadoop supergroup 0 2020-10-22 18:42 /user
[hadoop@hadoop hadoop-3.1.4]$ bin/hdfs dfs -ls /user
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2020-10-22 18:39 /user/hadoop
drwxr-xr-x - hadoop supergroup 0 2020-10-22 18:42 /user/hive
[hadoop@hadoop hadoop-3.1.4]$ bin/hdfs dfs -chown -R hive /user/hive
[hadoop@hadoop hadoop-3.1.4]$ bin/hdfs dfs -ls /user
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2020-10-22 18:39 /user/hadoop
drwxr-xr-x - hive supergroup 0 2020-10-22 18:42 /user/hive
[hadoop@hadoop hadoop-3.1.4]$ bin/hdfs dfs -ls /user/hive
Found 1 items
drwxrwxr-x - hive supergroup 0 2020-10-22 18:42 /user/hive/warehouse
Test uploading a file as the hive user:
[hive@hadoop ~]$ ls
test
[hive@hadoop ~]$ $HADOOP_HOME/bin/hdfs dfs -put test
[hive@hadoop ~]$ $HADOOP_HOME/bin/hdfs dfs -ls /user/hive
Found 2 items
-rw-r--r-- 1 hive supergroup 0 2020-10-22 19:06 /user/hive/test
drwxrwxr-x - hive supergroup 0 2020-10-22 18:42 /user/hive/warehouse
[hive@hadoop ~]$ $HADOOP_HOME/bin/hdfs dfs -put test /user/hive/warehouse
[hive@hadoop ~]$ $HADOOP_HOME/bin/hdfs dfs -ls /user/hive/warehouse
Found 1 items
-rw-r--r-- 1 hive supergroup 0 2020-10-22 19:07 /user/hive/warehouse/test
[hive@hadoop ~]$ $HADOOP_HOME/bin/hdfs dfs -rm /user/hive/warehouse/test
Deleted /user/hive/warehouse/test
[hive@hadoop ~]$ $HADOOP_HOME/bin/hdfs dfs -rm /user/hive/test
Deleted /user/hive/test
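The warehouse path created above is the stock default for hive.metastore.warehouse.dir; it can be relocated through hive-site.xml. A minimal fragment (the value shown is just the default, not something this deployment changed):

```xml
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>Default HDFS location for managed table data</description>
</property>
```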
Set HIVE_HOME:
$ export HIVE_HOME=<hive-install-dir>
Remote Metastore Server
https://cwiki.apache.org/confluence/display/Hive/AdminManual+Metastore+Administration
In a remote metastore setup, all Hive clients connect to the metastore server, which in turn queries the datastore (for example MySQL) for metadata. The metastore server and clients communicate using the Thrift protocol. Starting with Hive 0.5.0, you can start a Thrift server with the following command:
[hive@hadoop apache-hive-3.1.2-bin]$ bin/hive --service metastore &
For Hive releases earlier than 0.5.0, run the Thrift server by executing Java directly:
$JAVA_HOME/bin/java -Xmx1024m -Dlog4j.configuration=file://$HIVE_HOME/conf/hms-log4j.properties -Djava.library.path=$HADOOP_HOME/lib/native/Linux-amd64-64/ -cp $CLASSPATH org.apache.hadoop.hive.metastore.HiveMetaStore
If you execute Java directly, then JAVA_HOME, HIVE_HOME, and HADOOP_HOME must be set correctly, and CLASSPATH should contain the Hadoop, Hive (lib and auxlib), and Java jars.
Running Hive CLI
[hive@hadoop apache-hive-3.1.2-bin]$ bin/hive
which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/local/jdk/jdk1.8.0_202/bin:/opt/hadoop-3.1.4:/opt/apache-hive-3.1.2-bin:/home/hive/.local/bin:/home/hive/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-3.1.4/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = b31ef87c-a414-4971-93e9-03cab446385d
Logging initialized using configuration in jar:file:/opt/apache-hive-3.1.2-bin/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true
Hive Session ID = b8104850-ccab-46fa-8359-508a61d6a172
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> show tables;
OK
Time taken: 1.097 seconds
hive> quit;
Running HiveServer2 and Beeline
Starting from Hive 2.1, we need to run the schematool command below as an initialization step; for example, "derby" can be used as the db type.
For details on the initialization process, see Hive Schema Tool.
https://docs.cloudera.com/documentation/enterprise/5-6-x/topics/cdh_ig_hive_metastore_configure.html
https://cwiki.apache.org/confluence/display/Hive/AdminManual+Metastore+Administration
$ $HIVE_HOME/bin/schematool -dbType <db type> -initSchema
Preparation for configuring a remote metastore
Install MySQL 5.6, then create the metastore database and the hive user:
mysql> CREATE DATABASE metastore;
mysql> CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
mysql> REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'hive'@'%';
mysql> GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'%';
mysql> FLUSH PRIVILEGES;
mysql> quit;
Install mysql-connector-java:
[root@hadoop conf]# yum install mysql-connector-java.noarch
and copy the driver into Hive's lib directory:
[root@hadoop conf]# cp /usr/share/java/mysql-connector-java.jar /opt/apache-hive-3.1.2-bin/lib/
Copy $HADOOP_HOME/share/hadoop/common/lib/guava-27.0-jre.jar
into $HIVE_HOME/lib/, replacing the bundled guava-19.0.jar (the Guava version conflict between Hive 3.1.2 and Hadoop 3.x otherwise makes schematool fail).
Create hive-site.xml to point the metastore at MySQL, as follows (adapted from the Cloudera configuration):
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Connection -->
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://10.0.31.65/metastore</value>
<description>the URL of the MySQL database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.cj.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
</property>
<!-- datanucleus -->
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoStartMechanism</name>
<value>SchemaTable</value>
</property>
<!-- metastore -->
<property>
<name>hive.metastore.uris</name>
<value>thrift://10.0.31.65:9083</value>
<description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>true</value>
</property>
</configuration>
Run the initialization, specifying dbType mysql:
[hive@hadoop apache-hive-3.1.2-bin]$ $HIVE_HOME/bin/schematool -dbType mysql -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-3.1.4/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:mysql://10.0.31.65/metastore
Metastore Connection Driver : com.mysql.cj.jdbc.Driver
Metastore connection User: hive
Starting metastore schema initialization to 3.1.0
Initialization script hive-schema-3.1.0.mysql.sql
Initialization script completed
schemaTool completed
[hive@hadoop apache-hive-3.1.2-bin]$
HiveServer2
HiveServer2 (HS2) is a server interface that enables remote clients to execute queries against Hive and retrieve the results.
HiveServer2 has its own CLI called Beeline. HiveCLI is now deprecated in favor of Beeline, as it lacks the multi-user, security, and other capabilities of HiveServer2.
To run HiveServer2 and Beeline from a shell:
$ $HIVE_HOME/bin/hiveserver2
$ $HIVE_HOME/bin/beeline -u jdbc:hive2://$HS2_HOST:$HS2_PORT
Beeline is started with the JDBC URL of HiveServer2, which depends on the address and port where HiveServer2 was started. By default, this is localhost:10000, so the address looks like jdbc:hive2://localhost:10000.
Or to start Beeline and HiveServer2 in the same process for testing purposes, for a similar user experience to HiveCLI:
$ $HIVE_HOME/bin/beeline -u jdbc:hive2://
Example:
[hive@hadoop apache-hive-3.1.2-bin]$ bin/hiveserver2 &
[hive@hadoop apache-hive-3.1.2-bin]$ bin/beeline -u jdbc:hive2://
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-3.1.4/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://
Hive Session ID = d9889ac7-8b44-417c-a415-a0c35e5de9aa
20/10/28 00:11:48 [main]: WARN session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
Connected to: Apache Hive (version 3.1.2)
Driver: Hive JDBC (version 3.1.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.2 by Apache Hive
0: jdbc:hive2://> show tables;
OK
+-----------+
| tab_name |
+-----------+
+-----------+
No rows selected (2.033 seconds)
0: jdbc:hive2://> !quit
Closing: 0: jdbc:hive2://
Advanced HiveServer2 configuration
Configuration file: hive-site.xml
hive --service hiveserver2 --hiveconf hive.server2.thrift.port=10000 --hiveconf hive.root.logger=INFO,console
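Rather than passing --hiveconf flags on every start, the same setting can live in hive-site.xml. A minimal fragment (10000 is simply the stock default port, not a value specific to this deployment):

```xml
<property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
</property>
```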
HiveServer2 clients
beeline -u jdbc:hive2://192.168.0.51:10000/training_db -n username -p password -e "select current_date()"
beeline -u jdbc:hive2://192.168.0.51:10000/training_db -n impadmin -p impetus --silent=true --outputformat=csv2 -e "select * from stud"
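The JDBC URL these clients pass with -u is just host + port + optional database. A sketch of its composition; the host/port/database values below are stock defaults chosen for illustration, not read from any running cluster:

```shell
# Compose a HiveServer2 JDBC URL; 10000 is the default hive.server2.thrift.port.
HS2_HOST=localhost
HS2_PORT=10000
DB=default
JDBC_URL="jdbc:hive2://${HS2_HOST}:${HS2_PORT}/${DB}"
echo "$JDBC_URL"
# Usage (requires a running HiveServer2):
#   beeline -u "$JDBC_URL" -n <user> -p <password>
```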
Running HCatalog
If you install Hive from the binary tarball, the hcat
command is already available in the hcatalog/bin
directory. For most purposes hcat
is interchangeable with the hive
command, the exceptions being hcat -g
and hcat -p
. Note that the hcat
command uses the -p
flag for permissions, while hive
uses it to specify a port.
The HCatalog server is the same as the Hive metastore, so starting the Hive metastore is sufficient.
With Hive 0.11.0 and later, run the HCatalog server with:
$ $HIVE_HOME/hcatalog/sbin/hcat_server.sh
Hive 0.11.0版本以上彰导,運(yùn)行HCatalog 命令行工具
$ $HIVE_HOME/hcatalog/bin/hcat
For more information, see HCatalog Installation from Tarball and HCatalog CLI in the HCatalog manual.
Example:
[hive@hadoop apache-hive-3.1.2-bin]$ $HIVE_HOME/hcatalog/bin/hcat -e 'show tables;'
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop-3.1.4/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2020-10-28 00:28:33,218 INFO conf.HiveConf: Found configuration file file:/opt/apache-hive-3.1.2-bin/conf/hive-site.xml
Hive Session ID = 1dcb43c9-2c02-4401-8be4-4bc3760d0de6
2020-10-28 00:28:36,048 INFO SessionState: Hive Session ID = 1dcb43c9-2c02-4401-8be4-4bc3760d0de6
2020-10-28 00:28:37,482 INFO session.SessionState: Created HDFS directory: /tmp/hive/hive/1dcb43c9-2c02-4401-8be4-4bc3760d0de6
2020-10-28 00:28:37,504 INFO session.SessionState: Created local directory: /tmp/hive/1dcb43c9-2c02-4401-8be4-4bc3760d0de6
2020-10-28 00:28:37,508 INFO session.SessionState: Created HDFS directory: /tmp/hive/hive/1dcb43c9-2c02-4401-8be4-4bc3760d0de6/_tmp_space.db
2020-10-28 00:28:37,614 INFO ql.Driver: Compiling command(queryId=hive_20201028002837_d4f757e8-ea5c-458f-bacf-afeba2b149a0): show tables
2020-10-28 00:28:40,761 INFO metastore.HiveMetaStoreClient: Trying to connect to metastore with URI thrift://hadoop:9083
2020-10-28 00:28:40,796 INFO metastore.HiveMetaStoreClient: Opened a connection to metastore, current connections: 1
2020-10-28 00:28:40,820 INFO metastore.HiveMetaStoreClient: Connected to metastore.
2020-10-28 00:28:40,820 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=hive (auth:SIMPLE) retries=1 delay=1 lifetime=0
2020-10-28 00:28:41,147 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
2020-10-28 00:28:41,209 INFO ql.Driver: Semantic Analysis Completed (retrial = false)
2020-10-28 00:28:41,317 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string, comment:from deserializer)], properties:null)
2020-10-28 00:28:41,453 INFO exec.ListSinkOperator: Initializing operator LIST_SINK[0]
2020-10-28 00:28:41,467 INFO ql.Driver: Completed compiling command(queryId=hive_20201028002837_d4f757e8-ea5c-458f-bacf-afeba2b149a0); Time taken: 3.896 seconds
2020-10-28 00:28:41,467 INFO reexec.ReExecDriver: Execution #1 of query
2020-10-28 00:28:41,468 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
2020-10-28 00:28:41,468 INFO ql.Driver: Executing command(queryId=hive_20201028002837_d4f757e8-ea5c-458f-bacf-afeba2b149a0): show tables
2020-10-28 00:28:41,485 INFO ql.Driver: Starting task [Stage-0:DDL] in serial mode
2020-10-28 00:28:41,511 INFO ql.Driver: Completed executing command(queryId=hive_20201028002837_d4f757e8-ea5c-458f-bacf-afeba2b149a0); Time taken: 0.043 seconds
OK
2020-10-28 00:28:41,512 INFO ql.Driver: OK
2020-10-28 00:28:41,512 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
2020-10-28 00:28:41,523 INFO exec.ListSinkOperator: RECORDS_OUT_INTERMEDIATE:0, RECORDS_OUT_OPERATOR_LIST_SINK_0:0,
Time taken: 4.004 seconds
2020-10-28 00:28:41,542 INFO session.SessionState: Deleted directory: /tmp/hive/hive/1dcb43c9-2c02-4401-8be4-4bc3760d0de6 on fs with scheme hdfs
2020-10-28 00:28:41,551 INFO session.SessionState: Deleted directory: /tmp/hive/1dcb43c9-2c02-4401-8be4-4bc3760d0de6 on fs with scheme file
2020-10-28 00:28:41,558 INFO metastore.HiveMetaStoreClient: Closed a connection to metastore, current connections: 0
Running WebHCat (Templeton)
With Hive 0.11.0 and later, run the WebHCat server with:
$ $HIVE_HOME/hcatalog/sbin/webhcat_server.sh
For more information, see WebHCat Installation in the WebHCat manual.
[hive@hadoop apache-hive-3.1.2-bin]$ $HIVE_HOME/hcatalog/sbin/webhcat_server.sh
Lenght of string is non zero
usage: /opt/apache-hive-3.1.2-bin/hcatalog/sbin/webhcat_server.sh [start|startDebug|stop|foreground]
start Start the Webhcat Server
startDebug Start the Webhcat Server listening for debugger on port 5005
stop Stop the Webhcat Server
foreground Run the Webhcat Server in the foreground
[hive@hadoop apache-hive-3.1.2-bin]$
[hive@hadoop apache-hive-3.1.2-bin]$ $HIVE_HOME/hcatalog/sbin/webhcat_server.sh start
Lenght of string is non zero
webhcat: starting ...
webhcat: /opt/hadoop-3.1.4/bin/hadoop jar /opt/apache-hive-3.1.2-bin/hcatalog/sbin/../share/webhcat/svr/lib/hive-webhcat-3.1.2.jar org.apache.hive.hcatalog.templeton.Main
webhcat: starting ... started.
webhcat: done