1. Connecting Flume to HDFS
- Open the Flume configuration
- Configure flume.conf
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# sources
a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 41414
# sinks
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://slave1/flume/events/%y-%m-%d/%H%M/%S
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.batchSize = 10
a1.sinks.k1.hdfs.fileType = DataStream
# channels
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
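With the round settings above, the event timestamp is rounded down to the nearest 10 minutes before the escape sequences in hdfs.path are expanded, so %S always resolves to 00. For example (date and time illustrative), an event arriving at 14:37:25 on 2018-06-01 lands under:
hdfs://slave1/flume/events/18-06-01/1430/00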
- Test the connection with telnet
telnet slave1 41414
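With the agent started, a sample exchange might look like this (message text illustrative); the netcat source replies OK for each line it accepts:
Connected to slave1.
Escape character is '^]'.
hello flume
OK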
- Check the agent logs to locate the file on HDFS
- View the file contents; the test succeeds
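The same check can be done from the command line (paths follow the hdfs.path pattern configured above):
hdfs dfs -ls -R /flume/events
hdfs dfs -cat /flume/events/*/*/*/events-*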
2. Connecting Flume on Windows to Hive
- Configure the Hive-side agent (avro source, hive sink)
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# source
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 43434
# sink
a1.sinks.k1.type = hive
a1.sinks.k1.hive.metastore = thrift://192.168.18.33:9083
a1.sinks.k1.hive.database = bd14
a1.sinks.k1.hive.table = flume_log
a1.sinks.k1.useLocalTimeStamp = true
a1.sinks.k1.serializer = DELIMITED
a1.sinks.k1.serializer.delimiter = "\t"
a1.sinks.k1.serializer.serdeSeparator = '\t'
a1.sinks.k1.serializer.fieldnames = id,time,context
a1.sinks.k1.hive.txnsPerBatchAsk = 5
# channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
- Configure Flume on Windows
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# source
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = F:\\test
a1.sources.r1.fileHeader = true
# sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = 192.168.18.34
a1.sinks.k1.port = 43434
# channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
- Create the log table in Hive
The Flume documentation requires the Hive table to be bucketed and stored as ORC; in testing, if the ORC format is not declared, Hive receives no data.
create table flume_log(
id int
,time string
,context string
)
clustered by (id) into 3 buckets
stored as orc;
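Note: the Hive streaming API used by the Hive sink generally also requires the table to be transactional. If data still fails to arrive, a variant of the DDL with ACID enabled is worth trying (a sketch, assuming Hive transactions are configured on the metastore):
create table flume_log(
id int
,time string
,context string
)
clustered by (id) into 3 buckets
stored as orc
tblproperties('transactional'='true');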
- Create a log file in the monitored directory F:\test
Start Flume from the Flume bin directory on Windows:
flume-ng.cmd agent -conf-file ../conf/windows.conf -name a1 -property flume.root.logger=INFO,console
- Find a log file on Windows and drag it into F:\test
- After Flume finishes reading a file, it renames it with a .COMPLETED suffix
- Check the Hive table
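A quick check from the Hive shell (illustrative):
select * from flume_log limit 10;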
The test succeeds. The original plan was to query the Hive table through Impala, but Impala does not support ORC-format Hive tables, and the Flume Hive sink requires ORC for transport, so Impala had to be set aside; a follow-up will be added once the problem is solved.
三萍诱、遇到問題
- Flume cannot connect to HDFS
Solution: change a1.sinks.k1.hdfs.path = hdfs://slave1:9000/flume/events/%y-%m-%d/%H%M/%S
to a1.sinks.k1.hdfs.path = hdfs://slave1/flume/events/%y-%m-%d/%H%M/%S
Reason: with Flume on CDH, the path only needs the host address; the port does not need to be configured.
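The NameNode address and port come from the cluster-side fs.defaultFS setting, so only the host (or nameservice) part is needed in the sink path; the value in effect can be checked with:
hdfs getconf -confKey fs.defaultFS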
- Garbled content in HDFS files
Solution: add to the Flume configuration
a1.sinks.k1.hdfs.fileType = DataStream
Reason: hdfs.fileType defaults to SequenceFile, a binary container format, so the written files are not plain text.
- AvroRuntimeException: Excessively large list allocation request detected: 825373449 items!
Solution: increase the Flume agent's Java heap size
Reason: Flume ran out of memory
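One common way to raise the heap is via JAVA_OPTS in flume-env.sh (values illustrative; on CDH this can also be set through the Flume service configuration):
export JAVA_OPTS="-Xms512m -Xmx1024m"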
- NoClassDefFoundError: org/apache/hive/hcatalog/streaming/RecordWriter
Solution:
Find the directory containing the Hive jars
Find the directory containing the Flume jars
cp /opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/jars/hive-* /opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/flume-ng/lib/
Reason: Flume is missing the Hive jars, which need to be copied over from the CDH parcel.
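To confirm the copy worked, list the Hive jars now present under the Flume lib directory:
ls /opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/flume-ng/lib/ | grep hive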
- EventDeliveryException: java.lang.NullPointerException: Expected timestamp in the Flume event headers, but it was null
Reason: the timestamp option was configured incorrectly
Solution: configure the sink in the Flume conf file (for the Hive sink the property is useLocalTimeStamp, without the hive. prefix, matching the sink configuration above)
a1.sinks.k1.useLocalTimeStamp = true
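An alternative (not used here) is to attach a timestamp interceptor to the source so every event carries a timestamp header; a minimal sketch:
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = timestamp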
References:
https://blog.csdn.net/lifuxiangcaohui/article/details/49949865
https://blog.csdn.net/panguoyuan/article/details/39555239
http://miximixi.me/index.php/archives/961