Hadoop Single-Node Deployment (3): Hive

Hive Deployment

https://cwiki.apache.org/confluence/display/Hive/Home#Home-UserDocumentation
https://cwiki.apache.org/confluence/display/Hive/GettingStarted

Apache Hive

The Apache Hive? data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage, and provides SQL syntax for querying them.

Built on top of Apache Hadoop?, Hive provides the following features:

  • Tools to enable easy access to data via SQL, supporting data warehousing tasks such as ETL, reporting, and data analysis.
  • A mechanism to impose structure on a variety of data formats.
  • Direct access to files stored in Apache HDFS? or in other data storage systems such as Apache HBase?.
  • Query execution via Apache Tez?, Apache Spark?, or MapReduce.
  • The HPL-SQL procedural language.
  • Sub-second query retrieval via Hive LLAP, Apache YARN, and Apache Slider.

Hive provides standard SQL functionality, including many of the later SQL:2003, SQL:2011, and SQL:2016 features for analytics.
Hive's SQL can also be extended with user code via user defined functions (UDFs), user defined aggregates (UDAFs), and user defined table functions (UDTFs).

There is no single "Hive format" in which data must be stored. Hive supports comma- and tab-separated (CSV/TSV) text files, Apache Parquet?, Apache ORC?, and other formats. Users can extend Hive with support for additional formats; see File Formats and Hive SerDe in the Developer Guide.
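For instance, a tab-separated text file maps directly onto a Hive table. A minimal sketch (the file name, columns, and table name here are hypothetical, not from this deployment):

```shell
# Hypothetical sample data: one row per line, tab-separated (TSV).
printf 'id\tname\tdept\n' > /tmp/employees.tsv
printf '1\tAlice\tsales\n2\tBob\tops\n' >> /tmp/employees.tsv

# A Hive table over such data would be declared roughly like this
# (run inside the Hive CLI or Beeline; shown only as a comment sketch):
#   CREATE TABLE employees (id INT, name STRING, dept STRING)
#   ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

# TSV parses with any tab-aware tool; print the name column, skipping the header.
awk -F'\t' 'NR>1 {print $2}' /tmp/employees.tsv
```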

Hive is not designed for online transaction processing (OLTP) workloads. It is best used for traditional data warehousing tasks (offline analytics).

Hive is designed to maximize scalability (scale out by dynamically adding machines to the Hadoop cluster), performance, extensibility, fault tolerance, and loose coupling with its input formats.

Hive's components include HCatalog and WebHCat.

  • HCatalog is a table and storage management layer for Hadoop that enables users with different data processing tools (such as Pig and MapReduce) to more easily read and write data on the grid.
  • WebHCat provides a service to run Hadoop MapReduce (or YARN), Pig, and Hive jobs. It also exposes Hive metadata operations over an HTTP (REST-style) interface.

Installation and Configuration

Requirements

  • Java 1.7
    Note: Hive versions 1.2 onward require Java 1.7 or newer. Hive versions 0.14 to 1.1 also work with Java 1.6. Users are strongly advised to start moving to Java 1.8 (see HIVE-8607; builds are compiled with Java 7 and tested with Java 8).
  • Hadoop 2.x (preferred), 1.x (not supported by Hive 2.0.0 and later).
    Hive versions before 0.13 also supported Hadoop 0.20.x and 0.23.x.
  • Hive is commonly used in production on Linux and Windows environments. Mac is a common development environment. This document applies to Linux and Mac; the steps for Windows differ slightly.

Installing Hive from a Stable Release

  1. Unpack the tarball
[root@hadoop opt]# mv ~/apache-hive-3.1.2-bin.tar.gz /opt
[root@hadoop opt]# tar xzvf apache-hive-3.1.2-bin.tar.gz
[root@hadoop opt]# useradd hive
[root@hadoop opt]# chown -R hive:hive apache-hive-3.1.2-bin
  2. Set the environment variable HIVE_HOME to point to the installation directory
  3. Add $HIVE_HOME/bin to your PATH
JAVA_HOME=/usr/local/jdk/jdk1.8.0_202
HADOOP_HOME=/opt/hadoop-3.1.4
HIVE_HOME=/opt/apache-hive-3.1.2-bin
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME HADOOP_HOME HIVE_HOME
export PATH
export CLASSPATH

Running Hive

Hive uses Hadoop, so as configured above:

  • you must have hadoop on your PATH, or
  • export HADOOP_HOME=<hadoop-install-dir>

In addition, you must use the HDFS commands below to create /tmp and /user/hive/warehouse (a.k.a. hive.metastore.warehouse.dir) and set them chmod g+w before you can create a table in Hive.

[hadoop@hadoop hadoop-3.1.4]$ $HADOOP_HOME/bin/hadoop fs -mkdir       /tmp
[hadoop@hadoop hadoop-3.1.4]$ $HADOOP_HOME/bin/hadoop fs -mkdir       /user/hive/
[hadoop@hadoop hadoop-3.1.4]$ $HADOOP_HOME/bin/hadoop fs -mkdir       /user/hive/warehouse

[hadoop@hadoop hadoop-3.1.4]$ $HADOOP_HOME/bin/hadoop fs -chmod 777   /tmp
[hadoop@hadoop hadoop-3.1.4]$ $HADOOP_HOME/bin/hadoop fs -chmod g+w   /user/hive/warehouse

[hadoop@hadoop hadoop-3.1.4]$ bin/hdfs dfs -ls /
Found 2 items
drwxrwxrwx   - hadoop supergroup          0 2020-10-22 18:41 /tmp
drwxr-xr-x   - hadoop supergroup          0 2020-10-22 18:42 /user
[hadoop@hadoop hadoop-3.1.4]$ bin/hdfs dfs -ls /user
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2020-10-22 18:39 /user/hadoop
drwxr-xr-x   - hadoop supergroup          0 2020-10-22 18:42 /user/hive
[hadoop@hadoop hadoop-3.1.4]$ bin/hdfs dfs -chown -R hive /user/hive
[hadoop@hadoop hadoop-3.1.4]$ bin/hdfs dfs -ls /user
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2020-10-22 18:39 /user/hadoop
drwxr-xr-x   - hive   supergroup          0 2020-10-22 18:42 /user/hive
[hadoop@hadoop hadoop-3.1.4]$ bin/hdfs dfs -ls /user/hive
Found 1 items
drwxrwxr-x   - hive supergroup          0 2020-10-22 18:42 /user/hive/warehouse

Test uploading a file as the hive user:

[hive@hadoop ~]$ ls
test
[hive@hadoop ~]$ $HADOOP_HOME/bin/hdfs dfs -put test
[hive@hadoop ~]$ $HADOOP_HOME/bin/hdfs dfs -ls /user/hive
Found 2 items
-rw-r--r--   1 hive supergroup          0 2020-10-22 19:06 /user/hive/test
drwxrwxr-x   - hive supergroup          0 2020-10-22 18:42 /user/hive/warehouse

[hive@hadoop ~]$ $HADOOP_HOME/bin/hdfs dfs -put test /user/hive/warehouse
[hive@hadoop ~]$ $HADOOP_HOME/bin/hdfs dfs -ls /user/hive/warehouse
Found 1 items
-rw-r--r--   1 hive supergroup          0 2020-10-22 19:07 /user/hive/warehouse/test

[hive@hadoop ~]$ $HADOOP_HOME/bin/hdfs dfs -rm /user/hive/warehouse/test
Deleted /user/hive/warehouse/test
[hive@hadoop ~]$ $HADOOP_HOME/bin/hdfs dfs -rm /user/hive/test
Deleted /user/hive/test


Set HIVE_HOME:

 $ export HIVE_HOME=<hive-install-dir>

Remote Metastore Server

https://cwiki.apache.org/confluence/display/Hive/AdminManual+Metastore+Administration
In the remote metastore setup, all Hive clients connect to the metastore server, which in turn queries the datastore (for example MySQL) for metadata. The metastore server and clients communicate using the Thrift protocol. Starting with Hive 0.5.0, you can start a Thrift server with the following command:

[hive@hadoop apache-hive-3.1.2-bin]$ bin/hive --service metastore &

For versions of Hive earlier than 0.5.0, run the Thrift server by executing Java directly instead:

$JAVA_HOME/bin/java  -Xmx1024m -Dlog4j.configuration=file://$HIVE_HOME/conf/hms-log4j.properties -Djava.library.path=$HADOOP_HOME/lib/native/Linux-amd64-64/ -cp $CLASSPATH org.apache.hadoop.hive.metastore.HiveMetaStore

If you execute Java directly, then JAVA_HOME, HIVE_HOME, and HADOOP_HOME must be set correctly; the CLASSPATH should contain the Hadoop, Hive (lib and auxlib), and Java jars.
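As a sketch, such a classpath could be assembled as below. The function name and the jar directory layout are assumptions based on the installation paths used in this article; verify them against your own install:

```shell
# Assemble a metastore classpath from Hive and Hadoop jar directories.
# Arguments stand in for $HIVE_HOME and $HADOOP_HOME (assumed layout).
build_metastore_classpath() {
  hive_home="$1"; hadoop_home="$2"
  printf '%s' "$hive_home/lib/*:$hive_home/auxlib/*:$hadoop_home/share/hadoop/common/lib/*"
}

# Example with the paths used in this deployment:
build_metastore_classpath /opt/apache-hive-3.1.2-bin /opt/hadoop-3.1.4
```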

Running Hive CLI

Hive CLI documentation

[hive@hadoop apache-hive-3.1.2-bin]$ bin/hive
which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/local/jdk/jdk1.8.0_202/bin:/opt/hadoop-3.1.4:/opt/apache-hive-3.1.2-bin:/home/hive/.local/bin:/home/hive/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-3.1.4/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = b31ef87c-a414-4971-93e9-03cab446385d

Logging initialized using configuration in jar:file:/opt/apache-hive-3.1.2-bin/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true
Hive Session ID = b8104850-ccab-46fa-8359-508a61d6a172
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> show tables;
OK
Time taken: 1.097 seconds
hive> quit;


Running HiveServer2 and Beeline

Starting from Hive 2.1, the schematool command below must be run as an initialization step. For example, "derby" can be used as the db type.
For details on the initialization process, see Hive Schema Tool.
https://docs.cloudera.com/documentation/enterprise/5-6-x/topics/cdh_ig_hive_metastore_configure.html
https://cwiki.apache.org/confluence/display/Hive/AdminManual+Metastore+Administration

[Figures: hive_embeddedmetastore.jpg, hive_localmetastore.jpg, hive_remotemetastore.jpg, hive_metadatastore_backend.png — the embedded, local, and remote metastore deployment modes, plus the metadata store backend]

$ $HIVE_HOME/bin/schematool -dbType <db type> -initSchema

Preparing to configure the remote metastore
Install MySQL 5.6 and configure it:

mysql> CREATE DATABASE metastore;
mysql> CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
mysql> REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'hive'@'%';
mysql> GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'%';
mysql> FLUSH PRIVILEGES;
mysql> quit;

Install mysql-connector-java:

[root@hadoop conf]# yum install mysql-connector-java.noarch
and copy it into Hive's lib directory:
[root@hadoop conf]# cp /usr/share/java/mysql-connector-java.jar /opt/apache-hive-3.1.2-bin/lib/

Copy $HADOOP_HOME/share/hadoop/common/lib/guava-27.0-jre.jar to replace $HIVE_HOME/lib/guava-19.0.jar.
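The jar swap is just a remove-and-copy; sketched as a small function below. The jar names are taken from this particular installation — check the actual versions in your lib directories before deleting anything:

```shell
# Replace Hive's bundled guava jar with Hadoop's newer copy.
# Arguments stand in for $HIVE_HOME/lib and $HADOOP_HOME/share/hadoop/common/lib.
swap_guava() {
  hive_lib="$1"; hadoop_lib="$2"
  rm -f "$hive_lib/guava-19.0.jar"                 # drop the old guava
  cp "$hadoop_lib/guava-27.0-jre.jar" "$hive_lib/" # bring in Hadoop's version
}
```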

Create hive-site.xml so that the metastore connects to MySQL, as follows (adapted from the Cloudera configuration):

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>

<!-- Connection -->

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://10.0.31.65/metastore</value>
  <description>the URL of the MySQL database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.cj.jdbc.Driver</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
</property>


<!-- datanucleus -->

<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>false</value>
</property>

<property>
  <name>datanucleus.fixedDatastore</name>
  <value>true</value>
</property>

<property>
  <name>datanucleus.autoStartMechanism</name> 
  <value>SchemaTable</value>
</property> 

<!-- metastore -->

<property>
  <name>hive.metastore.uris</name>
  <value>thrift://10.0.31.65:9083</value>
  <description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>

<property>
<name>hive.metastore.schema.verification</name>
<value>true</value>
</property>

</configuration>

Run the initialization, specifying dbType mysql:

[hive@hadoop apache-hive-3.1.2-bin]$ $HIVE_HOME/bin/schematool -dbType mysql -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-3.1.4/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:    jdbc:mysql://10.0.31.65/metastore
Metastore Connection Driver :    com.mysql.cj.jdbc.Driver
Metastore connection User:   hive
Starting metastore schema initialization to 3.1.0
Initialization script hive-schema-3.1.0.mysql.sql


Initialization script completed
schemaTool completed
[hive@hadoop apache-hive-3.1.2-bin]$ 

HiveServer2
HiveServer2 (HS2) is a server interface that enables remote clients to execute queries against Hive and retrieve the results.

HiveServer2 has its own CLI called Beeline. HiveCLI is deprecated in favor of Beeline, as it lacks the multi-user, security, and other capabilities of HiveServer2.

To run HiveServer2 and Beeline from the shell:

 $ $HIVE_HOME/bin/hiveserver2

 $ $HIVE_HOME/bin/beeline -u jdbc:hive2://$HS2_HOST:$HS2_PORT

Beeline must be started with a JDBC URL for HiveServer2, which depends on the address and port where HiveServer2 was started. The default is localhost:10000, so the URL looks like jdbc:hive2://localhost:10000.
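The URL is just the host and port spliced into the jdbc:hive2:// scheme; for instance (the host and port values here are the defaults, not site-specific):

```shell
# Build the Beeline JDBC URL from the HiveServer2 host and port.
HS2_HOST=localhost   # default host (assumption; set to your HS2 machine)
HS2_PORT=10000       # default HiveServer2 Thrift port
JDBC_URL="jdbc:hive2://${HS2_HOST}:${HS2_PORT}"
echo "$JDBC_URL"     # prints jdbc:hive2://localhost:10000
```

Beeline would then be invoked as `beeline -u "$JDBC_URL"`.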

For testing purposes, Beeline and HiveServer2 can be launched in the same process, for a user experience similar to HiveCLI:

  $ $HIVE_HOME/bin/beeline -u jdbc:hive2://

Beeline – Command Line Shell

Example:

[hive@hadoop apache-hive-3.1.2-bin]$ bin/hiveserver2 &

[hive@hadoop apache-hive-3.1.2-bin]$ bin/beeline -u jdbc:hive2://
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-3.1.4/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://
Hive Session ID = d9889ac7-8b44-417c-a415-a0c35e5de9aa
20/10/28 00:11:48 [main]: WARN session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
Connected to: Apache Hive (version 3.1.2)
Driver: Hive JDBC (version 3.1.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.2 by Apache Hive
0: jdbc:hive2://> show tables;
OK
+-----------+
| tab_name  |
+-----------+
+-----------+
No rows selected (2.033 seconds)
0: jdbc:hive2://> !quit
Closing: 0: jdbc:hive2://

Advanced HiveServer2 configuration

Configuration file: hive-site.xml

hive --service hiveserver2 --hiveconf hive.server2.thrift.port=10000 --hiveconf hive.root.logger=INFO,console

HiveServer2 clients

beeline -u jdbc:hive2://192.168.0.51:10000/training_db -n username -p password -e "select current_date()"
beeline -u jdbc:hive2://192.168.0.51:10000/training_db -n impadmin -p impetus --silent=true --outputformat=csv2 -e "select * from stud"

Running HCatalog

If you installed Hive from a binary release, the hcat command is already available in the hcatalog/bin directory. Apart from hcat -g and hcat -p, most hcat commands can be replaced with the hive command. Note that hcat uses the -p flag to set permissions, whereas hive uses -p to specify a port.

The HCatalog server is the same as the Hive metastore, so starting the Hive metastore is sufficient.

HCatalog CLI documentation

Hive CLI documentation

HCatalog installation documentation

For Hive release 0.11.0 and later, run the HCatalog server with:

$ $HIVE_HOME/hcatalog/sbin/hcat_server.sh

For Hive release 0.11.0 and later, run the HCatalog command line interface with:

 $ $HIVE_HOME/hcatalog/bin/hcat

See HCatalog Installation from Tarball and HCatalog CLI in the HCatalog manual for more information.

Example:

[hive@hadoop apache-hive-3.1.2-bin]$ $HIVE_HOME/hcatalog/bin/hcat -e 'show tables;'
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop-3.1.4/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2020-10-28 00:28:33,218 INFO conf.HiveConf: Found configuration file file:/opt/apache-hive-3.1.2-bin/conf/hive-site.xml
Hive Session ID = 1dcb43c9-2c02-4401-8be4-4bc3760d0de6
2020-10-28 00:28:36,048 INFO SessionState: Hive Session ID = 1dcb43c9-2c02-4401-8be4-4bc3760d0de6
2020-10-28 00:28:37,482 INFO session.SessionState: Created HDFS directory: /tmp/hive/hive/1dcb43c9-2c02-4401-8be4-4bc3760d0de6
2020-10-28 00:28:37,504 INFO session.SessionState: Created local directory: /tmp/hive/1dcb43c9-2c02-4401-8be4-4bc3760d0de6
2020-10-28 00:28:37,508 INFO session.SessionState: Created HDFS directory: /tmp/hive/hive/1dcb43c9-2c02-4401-8be4-4bc3760d0de6/_tmp_space.db
2020-10-28 00:28:37,614 INFO ql.Driver: Compiling command(queryId=hive_20201028002837_d4f757e8-ea5c-458f-bacf-afeba2b149a0): show tables
2020-10-28 00:28:40,761 INFO metastore.HiveMetaStoreClient: Trying to connect to metastore with URI thrift://hadoop:9083
2020-10-28 00:28:40,796 INFO metastore.HiveMetaStoreClient: Opened a connection to metastore, current connections: 1
2020-10-28 00:28:40,820 INFO metastore.HiveMetaStoreClient: Connected to metastore.
2020-10-28 00:28:40,820 INFO metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=hive (auth:SIMPLE) retries=1 delay=1 lifetime=0
2020-10-28 00:28:41,147 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
2020-10-28 00:28:41,209 INFO ql.Driver: Semantic Analysis Completed (retrial = false)
2020-10-28 00:28:41,317 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string, comment:from deserializer)], properties:null)
2020-10-28 00:28:41,453 INFO exec.ListSinkOperator: Initializing operator LIST_SINK[0]
2020-10-28 00:28:41,467 INFO ql.Driver: Completed compiling command(queryId=hive_20201028002837_d4f757e8-ea5c-458f-bacf-afeba2b149a0); Time taken: 3.896 seconds
2020-10-28 00:28:41,467 INFO reexec.ReExecDriver: Execution #1 of query
2020-10-28 00:28:41,468 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
2020-10-28 00:28:41,468 INFO ql.Driver: Executing command(queryId=hive_20201028002837_d4f757e8-ea5c-458f-bacf-afeba2b149a0): show tables
2020-10-28 00:28:41,485 INFO ql.Driver: Starting task [Stage-0:DDL] in serial mode
2020-10-28 00:28:41,511 INFO ql.Driver: Completed executing command(queryId=hive_20201028002837_d4f757e8-ea5c-458f-bacf-afeba2b149a0); Time taken: 0.043 seconds
OK
2020-10-28 00:28:41,512 INFO ql.Driver: OK
2020-10-28 00:28:41,512 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
2020-10-28 00:28:41,523 INFO exec.ListSinkOperator: RECORDS_OUT_INTERMEDIATE:0, RECORDS_OUT_OPERATOR_LIST_SINK_0:0, 
Time taken: 4.004 seconds
2020-10-28 00:28:41,542 INFO session.SessionState: Deleted directory: /tmp/hive/hive/1dcb43c9-2c02-4401-8be4-4bc3760d0de6 on fs with scheme hdfs
2020-10-28 00:28:41,551 INFO session.SessionState: Deleted directory: /tmp/hive/1dcb43c9-2c02-4401-8be4-4bc3760d0de6 on fs with scheme file
2020-10-28 00:28:41,558 INFO metastore.HiveMetaStoreClient: Closed a connection to metastore, current connections: 0

Running WebHCat (Templeton)

For Hive release 0.11.0 and later, run the WebHCat server with:

 $ $HIVE_HOME/hcatalog/sbin/webhcat_server.sh

See WebHCat Installation in the WebHCat manual for more information.

[hive@hadoop apache-hive-3.1.2-bin]$ $HIVE_HOME/hcatalog/sbin/webhcat_server.sh
Lenght of string is non zero
usage: /opt/apache-hive-3.1.2-bin/hcatalog/sbin/webhcat_server.sh [start|startDebug|stop|foreground]
  start           Start the Webhcat Server
  startDebug      Start the Webhcat Server listening for debugger on port 5005
  stop            Stop the Webhcat Server
  foreground      Run the Webhcat Server in the foreground
[hive@hadoop apache-hive-3.1.2-bin]$ 
[hive@hadoop apache-hive-3.1.2-bin]$ $HIVE_HOME/hcatalog/sbin/webhcat_server.sh start
Lenght of string is non zero
webhcat: starting ...
webhcat: /opt/hadoop-3.1.4/bin/hadoop jar /opt/apache-hive-3.1.2-bin/hcatalog/sbin/../share/webhcat/svr/lib/hive-webhcat-3.1.2.jar org.apache.hive.hcatalog.templeton.Main  
webhcat: starting ... started.
webhcat: done