1 Hive Metastore
1.1 Related Concepts
Hive Metastore can be set up in three configurations:
Embedded Metastore Database (Derby): embedded mode
Local Metastore Server: local metastore
Remote Metastore Server: remote metastore
1.2 The Role of Metadata and the Metastore
Metadata is the descriptive information about the objects created with Hive, such as databases and tables.
Metadata is stored in a relational database, such as Derby or MySQL.
The metastore works as follows: clients connect to the metastore service, and the metastore in turn connects to the MySQL database to read and write metadata. With a metastore service in place, multiple clients can connect at the same time, and none of them needs the MySQL user name and password; they only need to reach the metastore service.
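For a remote metastore, for instance, a client's hive-site.xml only needs the following (a minimal sketch: metastore-host is a placeholder, and 9083 is the metastore service's default port):
<property>
    <name>hive.metastore.uris</name>
    <!-- metastore-host is a placeholder for your metastore machine; 9083 is the default port -->
    <value>thrift://metastore-host:9083</value>
</property>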
1.3 Differences Between the Three Configurations
Embedded mode stores metadata in an embedded Derby database and needs no separate metastore service. It is the default and is simple to configure, but only one client can connect at a time, so it is suited to experiments, not to production.
The local and remote metastores both store metadata in an external database; the databases currently supported are MySQL, Postgres, Oracle, and MS SQL Server. Here we use MySQL.
The difference between them: a local metastore needs no separate metastore service, because the service runs in the same process as Hive itself; a remote metastore runs the metastore service as a separate process, and every client is configured to connect to that service.
In production, the remote metastore configuration is recommended.
1.4 Configuration Properties
The relevant properties go in hivemetastore-site.xml or hive-site.xml:
hive.metastore.uris: Hive connects to one of these URIs to make metadata requests to a remote metastore (comma-separated list of URIs)
javax.jdo.option.ConnectionURL: JDBC connection string for the data store which contains metadata
javax.jdo.option.ConnectionDriverName: JDBC driver class name for the data store which contains metadata
hive.metastore.local: local or remote metastore (removed as of Hive 0.10: if hive.metastore.uris is empty, local mode is assumed; remote otherwise)
hive.metastore.warehouse.dir: URI of the default location for native tables
javax.jdo.option.ConnectionUserName: <user name>
javax.jdo.option.ConnectionPassword: <password>
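Putting these properties together, a minimal server-side hive-site.xml for a MySQL-backed metastore might look like the sketch below; mysql-host, the hive_meta database, and the credentials are placeholders to replace with your own.
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <!-- placeholder host and database name -->
    <value>jdbc:mysql://mysql-host:3306/hive_meta?createDatabaseIfNotExist=true</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive_password</value>
</property>
For a remote metastore, the service is then started as its own process:
$ $HIVE_HOME/bin/hive --service metastore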
DataNucleus auto start: configuring datanucleus.autoStartMechanism is highly recommended. See HIVE-4762 for more details.
<property>
    <name>datanucleus.autoStartMechanism</name>
    <value>SchemaTable</value>
</property>
2 Hive Data Storage
Hive's structure is much like that of a conventional database: it holds multiple databases, and each database holds multiple tables.
The storage location is configured by:
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
</property>
Hive stores its data under /user/hive/warehouse, which is an HDFS path; the underlying HDFS itself is configured in $HADOOP_HOME/etc/hadoop, in core-site.xml and hdfs-site.xml.
core-site.xml:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/leesf/program/hadoop/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/leesf/program/hadoop/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/leesf/program/hadoop/tmp/dfs/data</value>
</property>
<property>
  <name>dfs.http.address</name>
  <value>192.168.65.128:50070</value>
</property>
</configuration>
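Each database created in Hive (other than default) appears under the warehouse directory as a <dbname>.db directory, with one subdirectory per table. A quick check (the test.db entry is illustrative; the test database is created in the next section):
$ hadoop fs -ls /user/hive/warehouse
drwxr-xr-x   - hive supergroup          0 ... /user/hive/warehouse/test.db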
3 Hive Operations
3.1 List all databases: hive> show databases;
3.2 Use the default database: hive> use default;
3.3 Show information about a database: hive> describe database default;
3.4 Show the current database in the CLI prompt (see the example after this list): hive> set hive.cli.print.current.db=true;
3.5 Print column headers in query results: hive> set hive.cli.print.header=true;
3.6 Create a database: hive> create database test;
3.7 Switch the current database: hive> use test;
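For example, once the option in 3.4 is set, the CLI prompt shows the current database (an illustrative session):
hive> set hive.cli.print.current.db=true;
hive (default)> use test;
hive (test)>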
4 HiveServer2 and Beeline
There are two ways to connect to Hive: the Hive CLI, or HiveServer2 with Beeline.
$ $HIVE_HOME/bin/hive
or
$ $HIVE_HOME/bin/hiveserver2
$ $HIVE_HOME/bin/beeline -u jdbc:hive2://$HS2_HOST:$HS2_PORT
HiveCLI is now deprecated in favor of Beeline, as it lacks the multi-user, security, and other capabilities of HiveServer2.
If the following error appears when connecting with Beeline:
Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000/default: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate anonymous (state=08S01,code=0)
then add the following to Hadoop's core-site.xml and restart the cluster.
<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
</property>
The "root" in hadoop.proxyuser.root.hosts and hadoop.proxyuser.root.groups is the user name you connect with in Beeline; substitute your own user name there.
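On many Hadoop versions the proxyuser settings can also be reloaded without a full restart (an alternative worth checking against your Hadoop version):
$ hdfs dfsadmin -refreshSuperUserGroupsConfiguration
$ yarn rmadmin -refreshSuperUserGroupsConfiguration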
4.1 Connection URL for Remote or Embedded Mode
The JDBC connection URL format has the prefix jdbc:hive2:// and the Driver class is org.apache.hive.jdbc.HiveDriver. Note that this is different from the old HiveServer.
For a remote server, the URL format is jdbc:hive2://<host>:<port>/<db>;initFile=<file> (default port for HiveServer2 is 10000).
For an embedded server, the URL format is jdbc:hive2:///;initFile=<file> (no host or port).
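For example, to connect to a HiveServer2 on the local machine at the default port, passing the user name with -n (here root, matching the proxyuser configuration above):
$ $HIVE_HOME/bin/beeline -u jdbc:hive2://localhost:10000/default -n root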
4.2 Configuring a User Name and Password for HiveServer2
Configure the following in hive-site.xml:
<property>
    <name>hive.server2.authentication</name>
    <value>NONE</value>
    <description>
      Expects one of [nosasl, none, ldap, kerberos, pam, custom].
      Client authentication types.
        NONE: no authentication check
        LDAP: LDAP/AD based authentication
        KERBEROS: Kerberos/GSSAPI authentication
        CUSTOM: Custom authentication provider
                (Use with property hive.server2.custom.authentication.class)
        PAM: Pluggable authentication module
        NOSASL: Raw transport
    </description>
</property>
Set the value to CUSTOM, then write a Java authentication class, package it as a jar, place the jar in Hive's lib directory, and configure:
<property>
    <name>hive.server2.custom.authentication.class</name>
    <value>test.SampleAuthenticator</value>
    <description>
      Custom authentication class. Used when property
      'hive.server2.authentication' is set to 'CUSTOM'. Provided class
      must be a proper implementation of the interface
      org.apache.hive.service.auth.PasswdAuthenticationProvider. HiveServer2
      will call its Authenticate(user, passed) method to authenticate requests.
      The implementation may optionally implement Hadoop's
      org.apache.hadoop.conf.Configurable class to grab Hive's Configuration object.
    </description>
</property>
The Java authentication class is shown below:
package test;

import java.util.Hashtable;
import javax.security.sasl.AuthenticationException;
import org.apache.hive.service.auth.PasswdAuthenticationProvider;

/*
 javac -cp $HIVE_HOME/lib/hive-service-0.12.0-cdh5.0.0-beta-2.jar SampleAuthenticator.java -d .
 jar cf sampleauth.jar test
 cp sampleauth.jar $HIVE_HOME/lib/.
*/
public class SampleAuthenticator implements PasswdAuthenticationProvider {

  // Hard-coded credential store mapping user name to password.
  Hashtable<String, String> store = null;

  public SampleAuthenticator() {
    store = new Hashtable<String, String>();
    store.put("user1", "passwd1");
    store.put("user2", "passwd2");
  }

  @Override
  public void Authenticate(String user, String password)
      throws AuthenticationException {
    // Accept the connection only if the supplied password matches the stored one.
    String storedPasswd = store.get(user);
    if (storedPasswd != null && storedPasswd.equals(password))
      return;
    throw new AuthenticationException("SampleAuthenticator: Error validating user");
  }
}
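With hive.server2.authentication set to CUSTOM and sampleauth.jar in $HIVE_HOME/lib, a client must now present one of the stored credentials, for example with Beeline's -n and -p flags:
$ $HIVE_HOME/bin/beeline -u jdbc:hive2://localhost:10000/default -n user1 -p passwd1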