A Roundup of Kafka-to-HDFS Data Pipelines
Author: Syn良子. Source: http://www.cnblogs.com/cssdongl. Please credit the source when reposting.
I finally found some time to put together a summary of pipelines for getting data from Kafka into HDFS, listed below.
1> Kafka -> Flume -> Hadoop Hdfs
The usual go-to solution, driven entirely by configuration; watch out for HDFS small-file and performance issues (see the config sketch below).
GitHub地址: https://github.com/apache/flume
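For reference, a minimal Flume agent configuration for this pipeline might look like the sketch below. It assumes Flume 1.7+ (which ships the Kafka source); the broker address, topic, and HDFS path are placeholder values. The roll settings are one common way to fight the small-file problem, rolling files by size and time instead of by event count:

    # Sketch: agent "a1" with a Kafka source, memory channel, and HDFS sink
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1

    # Kafka source: broker list and topic are placeholders
    a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
    a1.sources.r1.kafka.bootstrap.servers = localhost:9092
    a1.sources.r1.kafka.topics = events
    a1.sources.r1.channels = c1

    a1.channels.c1.type = memory

    # HDFS sink: roll by size (128 MB) or time (5 min), never by count,
    # so the sink does not litter HDFS with small files
    a1.sinks.k1.type = hdfs
    a1.sinks.k1.channel = c1
    a1.sinks.k1.hdfs.path = /data/events/%Y-%m-%d
    a1.sinks.k1.hdfs.fileType = DataStream
    a1.sinks.k1.hdfs.useLocalTimeStamp = true
    a1.sinks.k1.hdfs.rollSize = 134217728
    a1.sinks.k1.hdfs.rollInterval = 300
    a1.sinks.k1.hdfs.rollCount = 0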
2> Kafka -> Kafka Hadoop Loader -> Hadoop Hdfs
Kafka Hadoop Loader loads data from Kafka into HDFS incrementally by creating one input split, and hence one task, per partition of a Kafka topic; the last consumed offset of each partition is tracked in ZooKeeper. Simple and easy to use.
GitHub地址: https://github.com/michal-harish/kafka-hadoop-loader
3> Kafka -> KaBoom -> Hadoop Hdfs
KaBoom consumes Kafka topic partitions via Krackle (an open-source Kafka client that greatly reduces object creation and improves application performance) and writes the data to HDFS. It uses Curator and ZooKeeper to coordinate as a distributed service, and can flexibly route different topics to different HDFS directories.
GitHub地址: https://github.com/blackberry/KaBoom
4> Kafka -> Kafka-connect-hdfs -> Hadoop Hdfs
Confluent's Kafka Connect aims to simplify building large-scale real-time data pipelines by standardizing how data is moved into and out of Kafka. With Kafka Connect you can read from and write to external systems, and manage and scale the data flows, without writing new code (see the sink config sketch below).
GitHub地址: https://github.com/confluentinc/kafka-connect-hdfs
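As a sketch of how little code is involved: a standalone HDFS sink connector is described by a short properties file (key names follow the connector's quickstart docs; the topic, namenode URL, and flush size here are placeholders):

    # Sketch: Kafka Connect HDFS sink, standalone mode; values are placeholders
    name=hdfs-sink
    connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
    tasks.max=1
    topics=events
    hdfs.url=hdfs://localhost:9000
    # number of records to accumulate before committing a file to HDFS
    flush.size=1000

It is then launched with the stock worker script, e.g. connect-standalone connect-standalone.properties hdfs-sink.properties. The connector tracks its own offsets in HDFS alongside the data, which is the basis of its exactly-once delivery guarantee.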
5> Kafka -> Gobblin -> Hadoop Hdfs
Gobblin is a data-ingestion framework open-sourced by LinkedIn. It supports many kinds of data sources, extracting, transforming, and cleansing data with concurrent tasks before loading it into the target store. It can run either standalone or as Hadoop MapReduce jobs, works out of the box, and lends itself well to extension and customization (see the job config sketch below).
GitHub地址: https://github.com/linkedin/gobblin
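For the Kafka case, a Gobblin job is likewise driven by a properties file. The sketch below follows the Kafka-HDFS ingestion example from the Gobblin docs listed under References; it pulls all topics from one broker and publishes them to HDFS as text files (broker address and job names are placeholders):

    # Sketch: Gobblin Kafka-to-HDFS ingestion job; values are placeholders
    job.name=KafkaToHdfsQuickStart
    job.group=KafkaExamples
    job.lock.enabled=false

    # Source: pull all topics from this broker; consumed offsets are
    # tracked in Gobblin's own state store between runs
    kafka.brokers=localhost:9092
    source.class=gobblin.source.extractor.extract.kafka.KafkaSimpleSource
    extract.namespace=gobblin.extract.kafka

    # Writer: one output path per topic, written as plain text
    writer.builder.class=gobblin.writer.SimpleDataWriterBuilder
    writer.file.path.type=tablename
    writer.destination.type=HDFS
    writer.output.format=txt

    data.publisher.type=gobblin.publisher.BaseDataPublisher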
References:
https://www.confluent.io/blog/how-to-build-a-scalable-etl-pipeline-with-kafka-connect
http://gobblin.readthedocs.io/en/latest/Getting-Started/
http://gobblin.readthedocs.io/en/latest/case-studies/Kafka-HDFS-Ingestion/
https://github.com/confluentinc/kafka-connect-blog
http://docs.confluent.io/3.1.1/connect/connect-hdfs/docs/index.html