Homework for November 3, 2017
Hive, Day 3
[toc]
Review of Day 2
Where the Hive documentation lives:
https://cwiki.apache.org/confluence/display/Hive/Home
Hive SQL Language Manual: Commands, CLIs, Data Types,
DDL (create/drop/alter/truncate/show/describe), Statistics (analyze), Indexes, Archiving,
DML (load/insert/update/delete/merge, import/export, explain plan),
Queries (select), Operators and UDFs, Locks, Authorization
File Formats and Compression: RCFile, Avro, ORC, Parquet; Compression, LZO
Procedural Language: Hive HPL/SQL
Hive Configuration Properties
Hive Clients
Hive Client (JDBC, ODBC, Thrift)
HiveServer2: Overview, HiveServer2 Client and Beeline, Hive Metrics
DDL
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL
database
- create
- drop
- alter
- use
Table
Create
CREATE [TEMPORARY] [EXTERNAL] TABLE
Create Table As Select (CTAS) / Create Table Like
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
LIKE existing_table_or_view_name
[LOCATION hdfs_path];
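A minimal sketch of both shortcut forms, assuming a hypothetical existing table src(key INT, value STRING):

CREATE TABLE src_copy AS SELECT key, value FROM src;   -- CTAS: schema and data come from the query
CREATE TABLE src_empty LIKE src;                       -- LIKE: copies the schema only, no data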
Three kinds of tables
Temporary table: TEMPORARY
Its lifetime matches the Hive session: when the Hive client closes or exits, the table is dropped with it.
A temporary table takes precedence over other tables: if it shares a name with another table, we operate on the temporary one.
Only after we DROP the temporary table (or ALTER its name away) can we reach the other table again.
External table: EXTERNAL
Hive manages only the metadata: dropping the table deletes just the metadata; the data on HDFS is not deleted.
You need to specify a LOCATION.
Internal (managed) table: no modifier
Fully managed: both the metadata and the data on HDFS are deleted when you drop the table.
Take special care: don't delete data unless you really mean to.
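A minimal sketch of the three kinds side by side (table names and the HDFS path are made up for illustration):

CREATE TEMPORARY TABLE tmp_users (id INT, name STRING);   -- disappears when the session ends
CREATE EXTERNAL TABLE ext_users (id INT, name STRING)
LOCATION '/data/ext_users';                               -- DROP removes only the metadata
CREATE TABLE managed_users (id INT, name STRING);         -- DROP removes metadata and HDFS data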
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name -- (Note: TEMPORARY available in Hive 0.14.0 and later)
[(col_name data_type [COMMENT col_comment], ... [constraint_specification])]
[COMMENT table_comment]
[PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
[CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
[SKEWED BY (col_name, col_name, ...) -- (Note: Available in Hive 0.10.0 and later)]
ON ((col_value, col_value, ...), (col_value, col_value, ...), ...)
[STORED AS DIRECTORIES]
[
[ROW FORMAT row_format]
[STORED AS file_format]
| STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)] -- (Note: Available in Hive 0.6.0 and later)
]
[LOCATION hdfs_path]
[TBLPROPERTIES (property_name=property_value, ...)] -- (Note: Available in Hive 0.6.0 and later)
[AS select_statement]; -- (Note: Available in Hive 0.5.0 and later; not supported for external tables)
ROW FORMAT
How the raw data is parsed, field by field, when it is loaded into our Hive table.
Loading data into the table does not change the raw data itself.
PARTITIONED BY
Partitions our data.
STORED AS
The file format the data is stored in.
LOCATION
The HDFS directory where the data is kept.
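A sketch that pulls these four clauses together (hypothetical table and path):

CREATE TABLE page_views (ip STRING, url STRING)
PARTITIONED BY (dt STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/user/hive/warehouse/page_views';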
Drop
Truncate
DML
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML
LOAD
LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)]
LOCAL
Combined with INPATH, LOCAL decides whether the data is read from HDFS or from the client's local filesystem.
Loading data really just moves a data file into the table's directory under the Hive warehouse directory.
A file already on HDFS is simply moved there.
With LOCAL, the file is first uploaded to a temporary directory and then moved into place.
One point worth stressing, since the warehouse directory caused some confusion: it is configured in hive-site.xml:
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
For a file on the local Linux filesystem, add the LOCAL keyword.
For a file on HDFS, just write the filepath directly.
OVERWRITE
Whether to overwrite the existing data.
Without OVERWRITE, the new file is copied into the Hive data directory alongside the old one, so a name clash leaves a duplicate like xxx_copy.
PARTITION
Which partition to load into, e.g. PARTITION (gender='male', age='35')
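Two hedged examples (file paths and tables are hypothetical; users_part is assumed to be partitioned by gender and age):

LOAD DATA LOCAL INPATH '/root/users.txt' OVERWRITE INTO TABLE users;   -- file on the client machine
LOAD DATA INPATH '/data/users_male.txt' INTO TABLE users_part PARTITION (gender='male', age='35');   -- file already on HDFS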
INSERT
into Hive tables from queries
Standard syntax:
INSERT OVERWRITE TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1 FROM from_statement;
INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1 FROM from_statement;
Hive extension (multiple inserts):
FROM from_statement
INSERT OVERWRITE TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1
[INSERT OVERWRITE TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2]
[INSERT INTO TABLE tablename2 [PARTITION ...] select_statement2] ...;
FROM from_statement
INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1
[INSERT INTO TABLE tablename2 [PARTITION ...] select_statement2]
[INSERT OVERWRITE TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2] ...;
Hive extension (dynamic partition inserts):
INSERT OVERWRITE TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...) select_statement FROM from_statement;
INSERT INTO TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...) select_statement FROM from_statement;
Example
FROM page_view_stg pvs
INSERT OVERWRITE TABLE page_view PARTITION(dt='2008-06-08', country)
SELECT pvs.viewTime, pvs.userid, pvs.page_url, pvs.referrer_url, null, null, pvs.ip, pvs.cnt
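In the example above, dt is a static partition and country is dynamic. Fully dynamic inserts usually need two settings (defaults vary by Hive version, so treat this as a sketch):

SET hive.exec.dynamic.partition=true;            -- enable dynamic partitioning
SET hive.exec.dynamic.partition.mode=nonstrict;  -- allow every partition column to be dynamic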
into Hive tables from SQL
Standard Syntax:
INSERT INTO TABLE tablename [PARTITION (partcol1[=val1], partcol2[=val2] ...)] VALUES values_row [, values_row ...]
Where values_row is:
( value [, value ...] )
where a value is either null or any valid SQL literal
Examples
CREATE TABLE students (name VARCHAR(64), age INT, gpa DECIMAL(3, 2))
CLUSTERED BY (age) INTO 2 BUCKETS STORED AS ORC;
INSERT INTO TABLE students
VALUES ('fred flintstone', 35, 1.28), ('barney rubble', 32, 2.32);
CREATE TABLE pageviews (userid VARCHAR(64), link STRING, came_from STRING)
PARTITIONED BY (datestamp STRING) CLUSTERED BY (userid) INTO 256 BUCKETS STORED AS ORC;
INSERT INTO TABLE pageviews PARTITION (datestamp = '2014-09-23')
VALUES ('jsmith', 'mail.com', 'sports.com'), ('jdoe', 'mail.com', null);
INSERT INTO TABLE pageviews PARTITION (datestamp)
VALUES ('tjohnson', 'sports.com', 'finance.com', '2014-09-23'), ('tlee', 'finance.com', null, '2014-09-21');
Today's topics
HiveServer2 Client and Beeline
HiveServer2
Beeline
Operators and UDFs
Operators
UDFs: User-Defined Functions
A map of all the Hive knowledge points
HiveServer2 Client and Beeline
Beeline needs a running HiveServer2, so start HS2 first (then connect with, e.g., beeline -u jdbc:hive2://localhost:10000).
Dependencies of HS2 (HiveServer2)
What must be running before HS2 starts:
Metastore
The metastore service needs to be started first:
hive --service metastore &
The metastore can be configured as embedded (in the same process as HS2) or as a remote server (which is a Thrift-based service as well). HS2 talks to the metastore for the metadata required for query compilation.
Hadoop cluster
start-all.sh
HS2 prepares physical execution plans for various execution engines (MapReduce/Tez/Spark) and submits jobs to the Hadoop cluster for execution.
These can be configured in hive-site.xml:
hive.server2.thrift.min.worker.threads – Minimum number of worker threads, default 5.
hive.server2.thrift.max.worker.threads – Maximum number of worker threads, default 500.
hive.server2.thrift.port – TCP port number to listen on, default 10000.
hive.server2.thrift.bind.host – TCP interface to bind to.
Two ways to start it (How to Start):
$HIVE_HOME/bin/hiveserver2
$HIVE_HOME/bin/hive --service hiveserver2
Write a JDBC program that connects to Hive and works with its tables.
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-JDBC
On that page, search (Ctrl + F) for:
JDBC Client Sample Code
Create a Java project
Create the package
Create the class
Paste in the sample code
Run As
Problems you will hit:
No JDBC driver: add the jars under Hive's lib directory to the build path
Also add Hadoop to the build path: gather all the jars from the lib folders under hadoop's share directory
The JDBC URL in the sample is wrong for our setup; fix it
The username has to be changed
Find a suitable data format
https://baike.baidu.com/item/ASCII
The default field delimiter is SOH (Start of Heading): ASCII code 1, binary 0000 0001, hex 0x01.
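Leaving out ROW FORMAT is equivalent to delimiting fields by \001; spelling it out explicitly (hypothetical table):

CREATE TABLE demo_soh (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001';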
A shell script that builds a data file with the default delimiter and runs the JDBC client:
#!/bin/bash
# Adjust these two paths to your installation
HADOOP_HOME=/your/path/to/hadoop
HIVE_HOME=/your/path/to/hive
PATH=$HADOOP_HOME/bin:$PATH   # so the `hadoop classpath` call below resolves
# Two rows whose fields are separated by \x01 (SOH), Hive's default delimiter
echo -e '1\x01foo' > /tmp/a.txt
echo -e '2\x01bar' >> /tmp/a.txt
# Classpath: current dir, Hive conf, the Hadoop classpath, and every jar in Hive's lib
CLASSPATH=.:$HIVE_HOME/conf:$(hadoop classpath)
for i in ${HIVE_HOME}/lib/*.jar ; do
    CLASSPATH=$CLASSPATH:$i
done
java -cp $CLASSPATH HiveJdbcClient
Operators and UDFs
The differences between UDF, UDAF, and UDTF
(the kind of small question that reveals how deep your Hive knowledge goes)
User-Defined Functions (UDFs)
one input row, one output value
e.g. mask()
Aggregate Functions (UDAF)
many input rows, one output value
Table-Generating Functions (UDTF)
one input row, many output rows; for more complex types
https://www.iteblog.com/archives/2258.html
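Built-in functions illustrate each kind; a sketch assuming a hypothetical table t(word STRING, nums ARRAY<INT>):

SELECT upper(word) FROM t;                   -- UDF: one row in, one value out
SELECT word, count(*) FROM t GROUP BY word;  -- UDAF: many rows in, one value out per group
SELECT explode(nums) FROM t;                 -- UDTF: one row in, many rows out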
TODO
Where the example lives:
https://cwiki.apache.org/confluence/display/Hive/HivePlugins
Paste the program into Eclipse and clear all the little red error markers.
Implement the mask business logic with substring.
Step 1: First, you need to create a new class that extends UDF, with one or more methods named evaluate.
package com.youxiaoxueyuan.udf;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public final class Mask extends UDF {
    // Keep the first and last characters and mask everything in between.
    public Text evaluate(final Text s) {
        if (s == null) {
            return null;
        }
        String str = s.toString();
        if (str.length() < 2) {
            return s; // too short to mask meaningfully
        }
        return new Text(str.substring(0, 1) + "*****" + str.substring(str.length() - 1));
    }
}
Step 2: After compiling your code to a jar, add it to the Hive classpath (see the wiki's section on deploying jars).
ADD { FILE[S] | JAR[S] | ARCHIVE[S] } <filepath1> [<filepath2>]*
ADD JAR /root/Mask.jar;
Step 3: Once Hive is started up with your jars in the classpath, register your function, as described under Create Function:
create temporary function mask as 'com.youxiaoxueyuan.udf.Mask';
Step 4: Now you can start using it:
select my_lower(title), sum(freq) from titles group by my_lower(title);   -- the wiki's my_lower example
select key, mask(value), value from testhivedrivertable;                  -- using our mask UDF
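CREATE TEMPORARY FUNCTION only lasts for the session. Hive 0.13.0 and later can also register a permanent function from a jar on HDFS; a sketch with a hypothetical HDFS path:

CREATE FUNCTION mask AS 'com.youxiaoxueyuan.udf.Mask'
USING JAR 'hdfs:///user/root/Mask.jar';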
ROW FORMAT with a regular expression
Purpose and significance
Where data comes from
Two sources:
- Collected by ourselves: we can collect it in exactly the format we require
- Obtained from others (other departments in the company, bought from outside, sensor feeds, crawled from the web)
ETL (Extract, Transform, Load): cleaning the data
(Shuiruoqinghan's good habit: reviewing every term as it comes up)
https://baike.baidu.com/item/ETL/1251949
MapReduce does the cleaning:
take each record
split it on the delimiter
process each record, throwing away the parts we don't need
write the parts we do need to HDFS
then run data analysis, data mining, AI, or BI on the cleaned data
Regular-expression practice tools; the weblog example below is adapted from the Hive Getting Started guide:
https://cwiki.apache.org/confluence/display/Hive/GettingStarted
Finally, the table definition. Note that the last two regex groups capture the HTTP status code and the response size, so those columns are named status and size here:
CREATE TABLE logtbl1 (
host STRING,
identity STRING,
t_user STRING,
time STRING,
request STRING,
status STRING,
size STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
"input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) \\[(.*)\\] \"(.*)\" (-|[0-9]*) (-|[0-9]*)"
)
STORED AS TEXTFILE;
The SERDE keyword
takes 'org.apache.hadoop.hive.serde2.RegexSerDe' as its argument
RegEx = regular expression
https://baike.baidu.com/item/%E6%AD%A3%E5%88%99%E8%A1%A8%E8%BE%BE%E5%BC%8F
The source log file:
192.168.57.4 - - [29/Feb/2016:18:14:35 +0800] "GET /bg-upper.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:35 +0800] "GET /bg-nav.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:35 +0800] "GET /asf-logo.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:35 +0800] "GET /bg-button.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:35 +0800] "GET /bg-middle.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET / HTTP/1.1" 200 11217
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET / HTTP/1.1" 200 11217
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET /tomcat.css HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET /tomcat.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET /asf-logo.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET /bg-middle.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET /bg-button.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET /bg-nav.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET /bg-upper.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET / HTTP/1.1" 200 11217
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET /tomcat.css HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET /tomcat.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET / HTTP/1.1" 200 11217
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET /tomcat.css HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET /tomcat.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET /bg-button.png HTTP/1.1" 304 -
192.168.57.4 - - [29/Feb/2016:18:14:36 +0800] "GET /bg-upper.png HTTP/1.1" 304 -
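To try the table end to end, a hedged sketch (the local log path is hypothetical; status is the column defined above):

LOAD DATA LOCAL INPATH '/root/access.log' INTO TABLE logtbl1;
SELECT status, count(*) AS cnt FROM logtbl1 GROUP BY status;   -- requests per HTTP status code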
Day 1: the basics plus setting up the environment
VMware 12.5.2 build-4638234
Students who can't get it working can download hh.rar from the course materials,
open it with VMware 12.5.2, and it is ready to use once booted.
Startup steps
start-all.sh
service mysqld restart
hive --service metastore &
hive --service hiveserver2 &    (equivalent to running: hiveserver2)
hive
There is an installation video that walks through solving the various problems you may hit.
The warts-and-all environment installation video is in this directory.
No written summary here: the Day 1 recap is at the start of the Day 2 video,
and the Day 2 recap is at the start of the Day 3 video.
Today's summary
[TOC]