Author: Syn良子. Source: http://www.cnblogs.com/cssdongl. Please credit the source when reposting.
I finally found time to put together a summary of pipelines for moving data from Kafka to HDFS, listed below.
1> Kafka -> Flume -> Hadoop HDFS
A common, configuration-driven approach; watch out for issues such as the HDFS small-file problem and its performance impact (see the config sketch below).
GitHub: https://github.com/apache/flume
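As an illustration, here is a minimal sketch of a Flume agent configuration (Flume 1.7+ property names) wiring a Kafka source to an HDFS sink through a memory channel. The agent/component names, broker, topic, and path values are all placeholders I've assumed; the generous roll settings show one way to keep the sink from producing many small files.

# minimal sketch; a1/r1/c1/k1 and all host/topic/path values are assumed
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.kafka.bootstrap.servers = broker1:9092
a1.sources.r1.kafka.topics = events
a1.sources.r1.channels = c1

a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000

a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = hdfs://namenode:8020/data/events/%Y%m%d
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.fileType = DataStream
# roll by time/size rather than event count, to avoid many small files
a1.sinks.k1.hdfs.rollInterval = 300
a1.sinks.k1.hdfs.rollSize = 134217728
a1.sinks.k1.hdfs.rollCount = 0

Something like flume-ng agent --name a1 --conf-file kafka-hdfs.properties would then start the agent.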
2> Kafka -> Kafka Hadoop Loader -> Hadoop HDFS
Kafka Hadoop Loader creates one split, and hence one task, per partition of a Kafka topic in order to load data into HDFS incrementally; the last consumed offset of each partition is recorded in ZooKeeper. Simple and easy to use.
GitHub: https://github.com/michal-harish/kafka-hadoop-loader
3> Kafka -> KaBoom -> Hadoop HDFS
KaBoom consumes Kafka topic partition data with the help of Krackle (an open-source Kafka client that greatly reduces object creation and improves application performance) and then writes it to HDFS. It uses Curator and ZooKeeper for its distributed service, and it can flexibly write different topics into different HDFS directories.
GitHub: https://github.com/blackberry/KaBoom
4> Kafka -> kafka-connect-hdfs -> Hadoop HDFS
Confluent's Kafka Connect aims to simplify building large-scale real-time data pipelines by standardizing how data moves into and out of Kafka. With Kafka Connect you can read from and write to external systems, and manage and scale data flows, all without writing new code (a minimal connector config is sketched below).
GitHub: https://github.com/confluentinc/kafka-connect-hdfs
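For reference, a minimal HDFS sink connector properties file in the spirit of Confluent's quickstart; the topic name, HDFS URL, and flush size are placeholder values.

name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=test_hdfs
hdfs.url=hdfs://localhost:9000
# number of records to accumulate before committing a file to HDFS
flush.size=3

Loaded in standalone mode (for example, connect-standalone worker.properties hdfs-sink.properties), the connector writes each topic partition to files in HDFS and commits offsets together with the data, which is how it is designed to achieve exactly-once delivery.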
5> Kafka -> Gobblin -> Hadoop HDFS
Gobblin is a data-ingestion framework open-sourced by LinkedIn. It supports many kinds of data sources, running concurrent tasks that extract, transform, and clean the data before loading it into the target store. It can run both standalone and as Hadoop MapReduce jobs, works out of the box, and is easy to extend and customize (a sample job config is sketched below).
GitHub: https://github.com/linkedin/gobblin
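As a hedged sketch, here is a trimmed Gobblin job file for Kafka-to-HDFS ingestion along the lines of the Kafka-HDFS case study in the Gobblin docs; the job name, broker, and topic values are assumptions, and exact key names may differ between Gobblin versions.

# assumed names; key set follows the Gobblin Kafka quickstart
job.name=KafkaToHdfsQuickStart
job.group=Kafka
job.lock.enabled=false

kafka.brokers=localhost:9092
topic.whitelist=events
# on first run, start from the earliest available offset
bootstrap.with.offset=earliest

source.class=gobblin.source.extractor.extract.kafka.KafkaSimpleSource
extract.namespace=gobblin.extract.kafka

writer.builder.class=gobblin.writer.SimpleDataWriterBuilder
writer.file.path.type=tablename
writer.destination.type=HDFS
writer.output.format=txt

data.publisher.type=gobblin.publisher.BaseDataPublisher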
Additional resources:
1. HiveKa: Apache Hive's storage handler that adds support in Apache Hive to query data from Apache Kafka
https://github.com/HiveKa/HiveKa
2. Confluent Platform - HDFS Connector
http://kaimingwan.com/post/kafka/kafkachi-jiu-hua-shu-ju-dao-hdfsde-fang-fa
http://docs.confluent.io/2.0.0/connect/connect-hdfs/docs/index.html
3. Camus or Gobblin
http://www.aboutyun.com/thread-20701-1-1.html
References:
https://www.confluent.io/blog/how-to-build-a-scalable-etl-pipeline-with-kafka-connect
http://gobblin.readthedocs.io/en/latest/Getting-Started/
http://gobblin.readthedocs.io/en/latest/case-studies/Kafka-HDFS-Ingestion/
https://github.com/confluentinc/kafka-connect-blog
http://docs.confluent.io/3.1.1/connect/connect-hdfs/docs/index.html