As the title says, I recently had a requirement to take data that had previously been exported from MySQL to Hive and move it back the other way, from Hive into MySQL. The concrete use case: the production MySQL database is backed up to Hive, and we wanted to load production data from Hive into the test environment.
I won't introduce Sqoop itself here; straight to the command:
sqoop export \
--driver com.mysql.jdbc.Driver \
--connect jdbc:mysql://10.2.123.115:3306/local4gome?characterEncoding=UTF-8 \
--table t_replenishment \
--username root \
--password 'esns' \
--fields-terminated-by '\0002' \
--export-dir /stage/supplycenter/gyl/t_replenishment_cb_biz/20190516
There is nothing else worth explaining; Google anything you don't understand. What I mainly want to share is a pitfall:
ERROR tool.ExportTool: Error during export:
Export job failed!
at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:439)
at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
The log output on the command line gives no clue about the actual cause. At this point, the first step is to open the Hadoop job UI and look at the detailed task logs:
Caused by: java.lang.RuntimeException: Can't parse input data: '1??201903?2??R17?相機??00383?尼康???xietianxu?2019-03-21 16:53:43???liangchen?2019-03-21 17:23:08????1??SY00, QD00??????1?COMPLETED?SUB_ORG?3C事業(yè)部?3C事業(yè)部?'
at t_buying_process.__loadFromFields(t_buying_process.java:2030)
at t_buying_process.parse(t_buying_process.java:1668)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:89)
... 10 more
Caused by: java.lang.NumberFormatException: For input string: "1??201903?2??R17?相機??00383?尼康???xietianxu?2019-03-21 16:53:43???liangchen?2019-03-21 17:23:08????1??SY00, QD00??????1?COMPLETED?SUB_ORG?3C事業(yè)部?3C事業(yè)部?"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.valueOf(Integer.java:766)
at t_buying_process.__loadFromFields(t_buying_process.java:1721)
The java.lang.NumberFormatException: For input string line shows that the field splitting went wrong: many fields were treated as one, so parsing the first int-typed id column failed.
At this point we need to determine what the per-line field delimiter actually is in the Hive files stored on HDFS.
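Before resorting to trial and error, a quick sanity check can be run on a sample line. A minimal Python sketch (the sample row and the candidate list are my own illustration, not part of any Sqoop or Hive tooling) that counts the common Hive control-character delimiters and picks the most frequent one:

```python
def guess_delimiter(line, candidates=("\x01", "\x02", "\x03", "\t")):
    """Return the candidate delimiter that occurs most often in the line.

    \x01 (^A) is Hive's default field delimiter; \x02 (^B) and \x03 (^C)
    are the defaults for collection items and map keys.
    """
    counts = {c: line.count(c) for c in candidates}
    return max(counts, key=counts.get)


# A shortened sample row in the style of the ^B-delimited data in this post.
sample = "225483485\x02\x02GMZB\x02R23\x0200112\x020\x021"
print(repr(guess_delimiter(sample)))  # -> '\x02'
```

Running this on the first line pulled off HDFS would have pointed at ^B immediately.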
At first I used a wrong method and got the wrong delimiter: ^I (looking it up in the ASCII table, that is decimal 9, octal 011):
hive -e "select * from stage.t_buying_process_cb_biz limit 10" >> res
So in the command I used
--fields-terminated-by '\0011' # octal 011, decimal 9, i.e. ^I
Unsurprisingly, that failed. I also tried the tab delimiter:
--fields-terminated-by '\t'
最后悠瞬,我把hdfs上的hive文件下載了下來,最終發(fā)現(xiàn)分隔符是^B
(ASCII碼是8涯捻,八進制002)
hadoop fs -get /stage/supplycenter/supply_cp/t_buying_process_cb_biz/20190516 .
So I changed the delimiter specified in the script to ^B, and it finally succeeded! The takeaway: always download the file and verify the real field delimiter!
--fields-terminated-by '\0002' \
哈哈凌外,發(fā)現(xiàn)也可以以在線的方式查看hive文件分隔符
# 因為我這里hive存儲使用了bz2的方式壓縮,所以使用管道bzip2 -d進行了解壓縮
hadoop fs -cat /stage/supplycenter/gyl/t_replenishment_cb_biz/20190519/dbc__13b3ae19_5ee4_4e1e_8038_b4f186292d68.bz2 | bzip2 -d | head -n 10 | cat -A
The output shows the delimiter is indeed ^B:
225483485^B^BGMZB^BR23^B00112^B000000000100019581^B0^B1^B^B0.000^B^B0^B0.000^B^B^B0.000^B0.000^B0.000^B8^B4^B0.000^B0^B0^B2019-05-09 11:17:43^B1^B0001^B$
225483486^B^BGMZB^BR23^B00112^B000000000100019581^B0^B2^B0.000^B0.000^B^B0^B0.000^B^B^B0.000^B0.000^B0.000^B8^B4^B0.000^B0^B0^B2019-05-09 11:17:43^B1^B0002^B$
225483487^B^BGMZB^BR23^B00112^B000000000100019581^B0^B2^B^B0.000^B^B0^B0.000^B^B^B0.000^B0.000^B0.000^B8^B4^B0.000^B0^B1^B2019-05-09 11:17:43^B1^B0002^B$
225483488^B^BGMZB^BR23^B00112^B000000000100019581^B1^B1^B^B0.000^B^B0^B0.000^B^B^B1.011^B0.000^B0.000^B8^B4^B0.000^B0^B1^B2019-05-09 11:17:43^B1^B0001^B$
225483489^B^BGMZB^BR23^B00112^B000000000100019581^B1^B2^B^B0.000^B^B0^B0.000^B^B^B4.187^B0.000^B0.000^B8^B4^B0.000^B0^B0^B2019-05-09 11:17:43^B1^B0001^B$
225483490^B^BGMZB^BR23^B00112^B000000000100019581^B1^B2^B^B0.000^B^B0^B0.000^B^B^B0.000^B0.000^B0.000^B8^B4^B0.000^B0^B1^B2019-05-09 11:17:43^B1^B0001^B$
225483491^B^BGMZB^BR23^B00112^B000000000100019581^B1^B2^B0.000^B0.000^B^B0^B2.167^B6.000^B3^B0.293^B18.000^B10.000^B8^B4^B0.000^B0^B0^B2019-05-09 11:17:43^B1^B0002^B$
225483492^B^BGMZB^BR19^B00080^B000000000100025235^B0^B1^B^B0.000^B^B0^B0.000^B^B^B0.000^B0.000^B0.000^B8^B1^B0.000^B0^B0^B2019-05-09 11:17:43^B1^B0001^B$
225483493^B^BGMZB^BR19^B00080^B000000000100025235^B0^B1^B^B0.000^B^B0^B0.000^B^B^B0.000^B0.000^B0.000^B8^B1^B0.000^B0^B1^B2019-05-09 11:17:43^B1^B0002^B$
225483494^B^BGMZB^BR19^B00080^B000000000100025235^B0^B2^B^B0.000^B^B0^B0.000^B^B^B0.000^B0.000^B0.000^B8^B1^B0.000^B0^B0^B2019-05-09 11:17:43^B1^B0001^B$
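As a cross-check, splitting one of the rows above on \x02 (an illustrative Python snippet, not part of the Sqoop workflow) recovers the individual columns, including the empty ones between consecutive delimiters:

```python
# The first few fields of the first ^B-delimited row shown above.
row = "225483485\x02\x02GMZB\x02R23\x0200112\x02000000000100019581\x020\x021"

# str.split keeps empty fields, matching how Sqoop sees consecutive delimiters.
fields = row.split("\x02")
print(fields[:4])  # -> ['225483485', '', 'GMZB', 'R23']
```

The second field is an empty string, which is exactly the case the --input-null-string note below is about.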
Addendum: unfortunately, sqoop export does not support the --query or --where parameters; import does.
The following parameter converts the empty string '' to NULL when writing to the database:
--input-null-string ''
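Conceptually, during export Sqoop compares each parsed string field against the --input-null-string value and writes SQL NULL on a match. A rough Python sketch of that substitution (my own illustration of the behavior, not Sqoop's actual code):

```python
def apply_input_null_string(fields, null_string=""):
    """Mimic --input-null-string: map fields equal to null_string to None (SQL NULL)."""
    return [None if f == null_string else f for f in fields]


print(apply_input_null_string(["225483485", "", "GMZB"]))
# -> ['225483485', None, 'GMZB']
```

For non-string columns, Sqoop provides the analogous --input-null-non-string option.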