Background:
At peak, the DMP project writes 20-30 MB of data per second, and the peak can last for about 2 hours. Because of Mongo's performance, queries are very slow, so we are considering replacing DMP's Mongo warehouse with Hive.
Monday: set up and debug Hive in the production environment.
Pitfall: to integrate with Mongo, Hive needs a few extra jars, which have to be placed under $hive/lib and $hadoop/share/hadoop/yarn/lib:
mongo-hadoop-core
mongo-hadoop-hive
mongo-java-driver
json-serde
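The four jars above can be dropped in with a short loop (a sketch only: the exact jar filenames and versions depend on the builds you downloaded, and $hive/$hadoop are assumed to point at the install roots, as in the note above):

```shell
# Copy the Mongo/JSON integration jars into both Hive's lib dir
# and YARN's lib dir (both locations are required).
for jar in mongo-hadoop-core-*.jar mongo-hadoop-hive-*.jar \
           mongo-java-driver-*.jar json-serde-*.jar; do
  cp "$jar" "$hive/lib/"
  cp "$jar" "$hadoop/share/hadoop/yarn/lib/"
done
```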
Tuesday: export the full result set from DMP's Mongo database.
Pitfalls:
1) The ops team's export of the Mongo result collections was extremely slow, estimated at 2-3 days.
Instead of exporting from the Mongo result database, we changed the DMP Spark job itself: the step that used to write to Mongo now writes to HDFS. We then reran the DMP job; because intermediate data from earlier runs already existed, it could be read and reprocessed directly, which took about 1.5 hours.
2) Converting the RDD to JSON.
We used the Jackson library:
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.4.4</version>
</dependency>
Note that ObjectMapper lives in jackson-databind (which pulls in jackson-core transitively); depending on jackson-core alone is not enough.
//Put each (K, V) pair into a java.util.HashMap; if a value is an array, convert it to a java.util.ArrayList so Jackson can serialize it
val mapper = new ObjectMapper()
val maps = new util.HashMap[String, java.lang.Object]()
maps.put("uuid", s._1)
//Iterate over the RDD element's values and load each (K, V) pair into the map (via maps.put)
s._2.foreach(v => {
  //Custom helper that puts the (K, V) pairs into the map
  FormatUser(maps, titlesets, v)
})
val jstring = mapper.writeValueAsString(maps)
//Printing jstring directly gives the JSON string
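As a cross-check of the structure being serialized, the same map-with-array-values shape can be sketched with Python's stdlib json module (field names here are illustrative; the project itself does this with Jackson in Scala, as above):

```python
import json

# One user's profile flattened into a dict; array-valued fields
# (e.g. province) stay as lists, everything else is a scalar.
record = {"uuid": "u-001", "isreg": 1, "province": ["zhejiang", "jiangsu"]}

# One JSON object per record, i.e. one line of the eventual HDFS file.
jstring = json.dumps(record, sort_keys=True)
print(jstring)
# -> {"isreg": 1, "province": ["zhejiang", "jiangsu"], "uuid": "u-001"}
```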
3) Import into Hive: create an external table.
The Mongo data is not very well normalized; there are 149 key fields.
Hive table DDL:
create external table if not exists user_profile_dmp_all(
uuid STRING,
isreg INT,
isalive INT,
ispaid INT,
isintent INT,
province ARRAY<STRING>,
city ARRAY<STRING>,
online_m INT,
online_pc INT,
online_o INT,
os_win INT,
os_linux INT,
os_mac INT,
os_ios INT,
os_android INT,
os_o INT,
activity INT,
xf_last_time INT,
xf_ut_news INT,
xf_ut_house INT,
xf_ut_regv INT,
xf_ut_paid INT,
xf_ut_act INT,
xf_ubt_91 INT,
xf_ubt_yche INT,
xf_ubt_im INT,
xf_ubt_400 INT,
xf_ubt_ejq INT,
xf_ubt_kft INT,
xf_hp_a INT,
xf_hp_b INT,
xf_hp_c INT,
xf_hp_d INT,
xf_hp_e INT,
xf_hp_f INT,
xf_hp_g INT,
xf_hp_h INT,
xf_hp_i INT,
xf_province ARRAY<STRING>,
xf_city ARRAY<STRING>,
xf_district ARRAY<STRING>,
xf_bt_1 INT,
xf_bt_2 INT,
xf_bt_3 INT,
xf_bt_4 INT,
xf_bt_5 INT,
xf_bt_6 INT,
xf_bt_7 INT,
xf_bt_8 INT,
xf_bt_9 INT,
xf_bt_10 INT,
xf_bt_11 INT,
xf_bt_12 INT,
xf_op_1 INT,
xf_op_2 INT,
xf_op_3 INT,
xf_op_4 INT,
xf_op_5 INT,
xf_ht_1 INT,
xf_ht_2 INT,
xf_ht_3 INT,
xf_ht_4 INT,
xf_ht_5 INT,
xf_ht_6 INT,
xf_ht_7 INT,
xf_ht_8 INT,
xf_ht_9 INT,
xf_ht_10 INT,
xf_ht_11 INT,
xf_ht_12 INT,
xf_fitment_1 INT,
xf_fitment_2 INT,
xf_fitment_3 INT,
xf_fitment_4 INT,
xf_fitment_5 INT,
xf_dt_1 INT,
xf_dt_2 INT,
xf_dt_3 INT,
xf_dt_4 INT,
e_last_time INT,
e_ut_news INT,
e_ut_house INT,
e_ut_reg INT,
e_ut_paid INT,
e_ut_act INT,
e_ubt_im INT,
e_ubt_400 INT,
e_ubt_kft INT,
e_tt_lease INT,
e_tt_sale INT,
e_hp_a INT,
e_hp_b INT,
e_hp_c INT,
e_hp_d INT,
e_hp_e INT,
e_hp_f INT,
e_hp_g INT,
e_hp_h INT,
e_area_a INT,
e_area_b INT,
e_area_c INT,
e_area_d INT,
e_area_e INT,
e_area_f INT,
e_area_g INT,
e_area_h INT,
e_province ARRAY<STRING>,
e_city ARRAY<STRING>,
e_district ARRAY<STRING>,
e_room_0 INT,
e_room_1 INT,
e_room_2 INT,
e_room_3 INT,
e_room_4 INT,
e_room_5 INT,
e_room_6 INT,
e_hall_1 INT,
e_hall_2 INT,
e_hall_3 INT,
e_hall_4 INT,
e_balcony_1 INT,
e_balcony_2 INT,
e_balcony_3 INT,
e_toilet_1 INT,
e_toilet_2 INT,
e_toilet_3 INT,
e_toilet_4 INT,
e_propertype_1 INT,
e_propertype_2 INT,
e_propertype_3 INT,
e_propertype_4 INT,
e_propertype_5 INT,
e_propertype_6 INT,
e_propertype_7 INT,
e_propertype_8 INT,
e_propertype_9 INT,
e_fitment_1 INT,
e_fitment_2 INT,
e_fitment_3 INT,
e_deliverdate_1 INT,
e_deliverdate_2 INT,
e_deliverdate_3 INT,
e_deliverdate_4 INT,
e_deliverdate_5 INT,
ju_last_time INT,
ju_ut_reg INT,
ju_ut_act INT,
ju_ut_paid INT,
ju_ubt_order INT
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
STORED AS TEXTFILE
location '/WareHouse/HiveSource/DMP/user_profile/';
Be sure to create an EXTERNAL table, so that the underlying HDFS data is not deleted if the table is dropped by accident.
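Since the table reads raw text files through the JsonSerDe, each line under /WareHouse/HiveSource/DMP/user_profile/ must be a single JSON object whose keys match the column names. A made-up record could look like this (to my understanding, keys absent from a line are read back as NULL):

```json
{"uuid": "u-001", "isreg": 1, "isalive": 1, "province": ["zhejiang"], "xf_city": ["hangzhou"], "os_android": 1}
```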
4) Test query performance.
Queries are issued from the JS front end.
Hive statement:
select count(1) from user_profile_dmp_all where (online_m = 1 or online_pc = 1 or online_o = 1) and (os_win = 1 or os_linux = 1 or os_mac = 1 or os_ios = 1 or os_android = 1);