Hive bucketing, put simply, means splitting a table (or a partition — that is, a directory on HDFS, with the actual data stored as files under that directory) into several files. For example, suppose the table buck (a directory holding a file such as sz.data) originally contains 1,000,000 rows. When working with large datasets, it is very convenient during query development and debugging to be able to test-run a query on just a small subset of the data, so we can store the table as, say, 4 files.
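This is exactly the scenario tablesample is designed for: once the table is bucketed, a query can be test-run against a single bucket instead of the whole dataset. A minimal sketch, assuming the 4-bucket table buck that is built below:

select * from buck tablesample(bucket 1 out of 4 on id);

On a table already bucketed by id this reads only the first bucket's file (roughly a quarter of the data) rather than scanning everything.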
The following records the whole procedure from start to finish, including the problems that came up.
Connect, create the database myhive2, and switch to it:
[root@mini1 ~]# cd apps/hive/bin
[root@mini1 bin]# ./beeline
Beeline version 1.2.1 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000
Connecting to jdbc:hive2://localhost:10000
Enter username for jdbc:hive2://localhost:10000: root
Enter password for jdbc:hive2://localhost:10000: ******
Connected to: Apache Hive (version 1.2.1)
Driver: Hive JDBC (version 1.2.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000> show databases;
+----------------+--+
| database_name |
+----------------+--+
| default |
| myhive |
+----------------+--+
2 rows selected (1.795 seconds)
0: jdbc:hive2://localhost:10000> create database myhive2;
No rows affected (0.525 seconds)
0: jdbc:hive2://localhost:10000> use myhive2;
No rows affected (0.204 seconds)
Create a bucketed table, load data into it, and look at the table contents:
0: jdbc:hive2://localhost:10000> create table buck(id string,name string)
0: jdbc:hive2://localhost:10000> clustered by (id) sorted by (id) into 4 buckets
0: jdbc:hive2://localhost:10000> row format delimited fields terminated by ',';
No rows affected (0.34 seconds)
0: jdbc:hive2://localhost:10000> desc buck;
+-----------+------------+----------+--+
| col_name | data_type | comment |
+-----------+------------+----------+--+
| id | string | |
| name | string | |
+-----------+------------+----------+--+
2 rows selected (0.55 seconds)
0: jdbc:hive2://localhost:10000> load data local inpath '/root/sz.data' into table buck;
INFO : Loading data to table myhive2.buck from file:/root/sz.data
INFO : Table myhive2.buck stats: [numFiles=1, totalSize=91]
No rows affected (1.411 seconds)
0: jdbc:hive2://localhost:10000> select * from buck;
+----------+------------+--+
| buck.id | buck.name |
+----------+------------+--+
| 1 | zhangsan |
| 2 | lisi |
| 3 | wangwu |
| 4 | furong |
| 5 | fengjie |
| 6 | aaa |
| 7 | bbb |
| 8 | ccc |
| 9 | ddd |
| 10 | eee |
| 11 | fff |
| 12 | ggg |
+----------+------------+--+
If the table were really bucketed, there should be 4 files under the buck directory. Checking in the HDFS web UI...
However, there are not — it is still just the single file we loaded.
This is because bucketing is not something Hive or Hadoop does for us automatically by splitting up the files; the data has to be written into the table already bucketed.
We need to enable bucketing and set the number of reduce tasks to match the number of buckets:
0: jdbc:hive2://localhost:10000> set hive.enforce.bucketing = true;
No rows affected (0.063 seconds)
0: jdbc:hive2://localhost:10000> set hive.enforce.bucketing ;
+------------------------------+--+
| set |
+------------------------------+--+
| hive.enforce.bucketing=true |
+------------------------------+--+
1 row selected (0.067 seconds)
0: jdbc:hive2://localhost:10000> set mapreduce.job.reduces=4;
So we create another table, tp, and put its data into buck (select it out, then insert it in), specifying bucketing on the way in. The data then gets split into four buckets, each sorted internally, and buck ends up bucketed (the bucketing happens as the data is inserted — it is not us splitting the files by hand).
Next: truncate buck, create the table tp, and insert the rows selected from tp into buck.
0: jdbc:hive2://localhost:10000> truncate table buck;
No rows affected (0.316 seconds)
0: jdbc:hive2://localhost:10000> create table tp(id string,name string)
0: jdbc:hive2://localhost:10000> row format delimited fields terminated by ',';
No rows affected (0.112 seconds)
0: jdbc:hive2://localhost:10000> load data local inpath '/root/sz.data' into table tp;
INFO : Loading data to table myhive2.tp from file:/root/sz.data
INFO : Table myhive2.tp stats: [numFiles=1, totalSize=91]
No rows affected (0.419 seconds)
0: jdbc:hive2://localhost:10000> show tables;
+-----------+--+
| tab_name |
+-----------+--+
| buck |
| tp |
+-----------+--+
2 rows selected (0.128 seconds)
0: jdbc:hive2://localhost:10000> select * from tp;
+--------+-----------+--+
| tp.id | tp.name |
+--------+-----------+--+
| 1 | zhangsan |
| 2 | lisi |
| 3 | wangwu |
| 4 | furong |
| 5 | fengjie |
| 6 | aaa |
| 7 | bbb |
| 8 | ccc |
| 9 | ddd |
| 10 | eee |
| 11 | fff |
| 12 | ggg |
+--------+-----------+--+
12 rows selected (0.243 seconds)
0: jdbc:hive2://localhost:10000> insert into buck
0: jdbc:hive2://localhost:10000> select id,name from tp distribute by (id) sort by (id);
INFO : Number of reduce tasks determined at compile time: 4
INFO : In order to change the average load for a reducer (in bytes):
INFO : set hive.exec.reducers.bytes.per.reducer=<number>
INFO : In order to limit the maximum number of reducers:
INFO : set hive.exec.reducers.max=<number>
INFO : In order to set a constant number of reducers:
INFO : set mapreduce.job.reduces=<number>
INFO : number of splits:1
INFO : Submitting tokens for job: job_1508216103995_0028
INFO : The url to track the job: http://mini1:8088/proxy/application_1508216103995_0028/
INFO : Starting Job = job_1508216103995_0028, Tracking URL = http://mini1:8088/proxy/application_1508216103995_0028/
INFO : Kill Command = /root/apps/hadoop-2.6.4/bin/hadoop job -kill job_1508216103995_0028
INFO : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 4
INFO : 2017-10-19 03:57:23,631 Stage-1 map = 0%, reduce = 0%
INFO : 2017-10-19 03:57:29,349 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.18 sec
INFO : 2017-10-19 03:57:40,096 Stage-1 map = 100%, reduce = 25%, Cumulative CPU 2.55 sec
INFO : 2017-10-19 03:57:41,152 Stage-1 map = 100%, reduce = 75%, Cumulative CPU 5.29 sec
INFO : 2017-10-19 03:57:42,375 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 6.61 sec
INFO : MapReduce Total cumulative CPU time: 6 seconds 610 msec
INFO : Ended Job = job_1508216103995_0028
INFO : Loading data to table myhive2.buck from hdfs://192.168.25.127:9000/user/hive/warehouse/myhive2.db/buck/.hive-staging_hive_2017-10-19_03-57-14_624_1985499545258899177-1/-ext-10000
INFO : Table myhive2.buck stats: [numFiles=4, numRows=12, totalSize=91, rawDataSize=79]
No rows affected (29.238 seconds)
0: jdbc:hive2://localhost:10000> select * from buck;
+----------+------------+--+
| buck.id | buck.name |
+----------+------------+--+
| 11 | fff |
| 4 | furong |
| 8 | ccc |
| 1 | zhangsan |
| 12 | ggg |
| 5 | fengjie |
| 9 | ddd |
| 2 | lisi |
| 6 | aaa |
| 10 | eee |
| 3 | wangwu |
| 7 | bbb |
+----------+------------+--+
At this point it should be clear that the table has been bucketed; otherwise the ids would have come out in order from 1 to 12. Within each of the 4 buckets the rows were sorted independently, rather than globally the way order by would sort them. Check the web UI again.
It shows the data has indeed been split into 4 buckets. Now let's look at their contents from the client (Beeline can run HDFS commands directly):
0: jdbc:hive2://localhost:10000> dfs -ls /user/hive/warehouse/myhive2.db/buck;
+-----------------------------------------------------------------------------------------------------------+--+
| DFS Output |
+-----------------------------------------------------------------------------------------------------------+--+
| Found 4 items |
| -rwxr-xr-x 2 root supergroup 22 2017-10-19 03:57 /user/hive/warehouse/myhive2.db/buck/000000_0 |
| -rwxr-xr-x 2 root supergroup 34 2017-10-19 03:57 /user/hive/warehouse/myhive2.db/buck/000001_0 |
| -rwxr-xr-x 2 root supergroup 13 2017-10-19 03:57 /user/hive/warehouse/myhive2.db/buck/000002_0 |
| -rwxr-xr-x 2 root supergroup 22 2017-10-19 03:57 /user/hive/warehouse/myhive2.db/buck/000003_0 |
+-----------------------------------------------------------------------------------------------------------+--+
5 rows selected (0.028 seconds)
0: jdbc:hive2://localhost:10000> dfs -cat /user/hive/warehouse/myhive2.db/buck/000000_0;
+-------------+--+
| DFS Output |
+-------------+--+
| 11,fff |
| 4,furong |
| 8,ccc |
+-------------+--+
3 rows selected (0.02 seconds)
0: jdbc:hive2://localhost:10000> dfs -cat /user/hive/warehouse/myhive2.db/buck/000001_0;
+-------------+--+
| DFS Output |
+-------------+--+
| 1,zhangsan |
| 12,ggg |
| 5,fengjie |
| 9,ddd |
+-------------+--+
4 rows selected (0.08 seconds)
0: jdbc:hive2://localhost:10000> dfs -cat /user/hive/warehouse/myhive2.db/buck/000002_0;
+-------------+--+
| DFS Output |
+-------------+--+
| 2,lisi |
| 6,aaa |
+-------------+--+
2 rows selected (0.088 seconds)
0: jdbc:hive2://localhost:10000> dfs -cat /user/hive/warehouse/myhive2.db/buck/000003_0;
+-------------+--+
| DFS Output |
+-------------+--+
| 10,eee |
| 3,wangwu |
| 7,bbb |
+-------------+--+
3 rows selected (0.062 seconds)
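Which file a given row lands in is determined by the hash of its id modulo the number of buckets. As a quick sanity check, the expected bucket number can be computed in HiveQL itself (a sketch, assuming the hash() UDF matches the hash Hive uses for bucketing, which holds for primitive types here; pmod keeps the result non-negative):

select id, pmod(hash(id), 4) as expected_bucket from tp;

The results line up with the files above: for example, id 11 hashes into bucket 0 (file 000000_0) and id 1 into bucket 1 (file 000001_0).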
Note: in the statement select id,name from tp distribute by (id) sort by (id), distribute by (id) determines which bucket each row goes to (hash-partitioning on id), and sort by (id) sorts within each bucket, ascending by default. When both clauses use the same column, cluster by (id) can be used instead; that is, the statement can be written as
insert into buck select id,name from tp cluster by (id);
with exactly the same effect.
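One caveat worth noting: cluster by does not accept a sort direction, so if the rows inside each bucket should come out in descending order, the two separate clauses are still required, along the lines of:

insert into buck select id,name from tp distribute by (id) sort by (id desc);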
What bucketing is for
Consider the following statement.
select a.id,a.name,b.addr from a join b on a.id = b.id;
If tables a and b are both already bucketed, and both are bucketed on the id column, then this join no longer needs to compare every row of a against every row of b (a full Cartesian product); only corresponding buckets need to be joined with each other.
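For Hive to actually exploit the buckets during the join, the bucket map join optimization has to be switched on. A sketch of the usual settings (the two tables' bucket counts must be equal or multiples of each other, and the sort-merge variant additionally requires both tables to be sorted by the join key):

set hive.optimize.bucketmapjoin = true;
-- sort-merge bucket join, if both tables are also sorted by id:
set hive.optimize.bucketmapjoin.sortedmerge = true;
set hive.input.format = org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
select a.id,a.name,b.addr from a join b on a.id = b.id;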