1. Background
- Collect every log file under the data directory into HDFS via Flume
- One HDFS directory per five minutes, with a new file rolled every minute
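For reference, with the configuration in section 3 the resulting HDFS layout should look roughly like this (illustrative date and timestamps; actual names depend on when the agent runs):

/offline/20230601/0900/bd.1685581200000.log
/offline/20230601/0900/bd.1685581260000.log
/offline/20230601/0905/bd.1685581500000.log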
2. Technology selection
Flume offers three sources that can monitor files or directories: exec, spooldir, and taildir.
- exec: tails a single file with a command such as tail -f and streams new log lines to the sink in real time.
- spooldir: watches a directory and ships any new file dropped into it to the sink; a fully ingested file can be deleted immediately or marked as completed. Good for ingesting whole new files, but not for tailing files that are still being appended to.
- taildir: monitors a batch of files in real time and records the latest consumed position of each file, so restarting the agent does not cause duplicate consumption.
So this setup uses taildir source - file channel - HDFS sink.
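For reference, taildir keeps those offsets in the JSON file configured as positionFile below; each entry records a file's inode, the byte position already consumed, and the file path (values here are illustrative):

[{"inode":2496275,"pos":12345,"file":"/home/hadoop/data/bd/bd.log"}]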
3. Configure the agent
vi taildir-file-hdfs.conf
# agent component names
a1.sources=r1
a1.sinks=k1
a1.channels=c1
# source configuration
# source type
a1.sources.r1.type = TAILDIR
# where taildir persists its read offsets
a1.sources.r1.positionFile = /home/hadoop/data/bd/taildir_position.json
# file group to monitor (regex over file names)
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /home/hadoop/data/bd/.*log
a1.sources.r1.fileHeader = true
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = timestamp
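# the timestamp interceptor puts an event-time header on each event so the
# %Y%m%d/%H%M escapes in the sink path can be resolved;
# hdfs.useLocalTimeStamp below would achieve the same with the sink host's clock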
# sink configuration
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://hadoop001:9000/offline/%Y%m%d/%H%M
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.filePrefix = bd
a1.sinks.k1.hdfs.fileSuffix = .log
a1.sinks.k1.hdfs.rollSize = 67108864
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.rollInterval = 60
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 5
a1.sinks.k1.hdfs.roundUnit = minute
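# round/roundValue/roundUnit bucket events into a new %H%M directory every
# 5 minutes; rollInterval/rollSize/rollCount control file rotation within a
# directory: a new file every 60 s (rollCount = 0 disables count-based rolling,
# and 67108864 bytes = 64 MB acts as a size safety limit)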
a1.sinks.k1.hdfs.minBlockReplicas = 1
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.fileType = DataStream
# channel configuration
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /home/hadoop/data/checkpoint
a1.channels.c1.dataDirs = /home/hadoop/data
a1.channels.c1.capacity = 10000000
a1.channels.c1.transactionCapacity = 5000
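# the file channel persists buffered events to dataDirs and its bookkeeping to
# checkpointDir, so events queued in the channel survive an agent restart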
# wire the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
4. Start Flume
./flume-ng agent \
--name a1 \
--conf $FLUME_HOME/conf \
--conf-file /home/hadoop/script/flume/taildir-file-hdfs.conf \
-Dflume.root.logger=INFO,console
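Once the agent is up, a quick way to confirm that one-minute files are landing in the five-minute directories (the shell substitutes today's date):

hdfs dfs -ls -R /offline/$(date +%Y%m%d)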
5. Simulate business data
- Write a shell script
vi 1.sh
#!/bin/bash
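# append the sample logs 1.log, 2.log and /home/hadoop/data/bd.log to the
# files bd.log, bd1.log and bd2.log under the watched directory, so the
# taildir source sees appends on several files at once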
cat /home/hadoop/data/bd/1.log >> /home/hadoop/data/bd/bd.log
cat /home/hadoop/data/bd/2.log >> /home/hadoop/data/bd/bd.log
cat /home/hadoop/data/bd.log >> /home/hadoop/data/bd/bd.log
cat /home/hadoop/data/bd.log >> /home/hadoop/data/bd/bd1.log
cat /home/hadoop/data/bd/1.log >> /home/hadoop/data/bd/bd1.log
cat /home/hadoop/data/bd/2.log >> /home/hadoop/data/bd/bd1.log
cat /home/hadoop/data/bd/1.log >> /home/hadoop/data/bd/bd2.log
cat /home/hadoop/data/bd/2.log >> /home/hadoop/data/bd/bd2.log
- Edit the crontab so that 1.sh runs every minute
[hadoop@hadoop001 data]$ chmod +x 1.sh
[hadoop@hadoop001 data]$ crontab -e
* * * * * sh /home/hadoop/data/1.sh
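After a couple of runs, the position file shows taildir's per-file offsets advancing, which is also why an agent restart does not re-consume old data:

cat /home/hadoop/data/bd/taildir_position.json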