Deploying the Flume Component
This article uses Hadoop 3.2.2 and Flume 1.9.0 as examples.
Unless otherwise noted, run the commands below on all nodes.
I. System Resources and Component Plan
Role | Hostname | CPU/RAM | NIC | Disk | IP Address | OS |
---|---|---|---|---|---|---|
NameNode | namenode | 2C/4G | ens33 | 128G | 192.168.0.11 | CentOS7 |
Secondary NameNode | secondarynamenode | 2C/4G | ens33 | 128G | 192.168.0.12 | CentOS7 |
ResourceManager | resourcemanager | 2C/4G | ens33 | 128G | 192.168.0.13 | CentOS7 |
Worker1 | worker1 | 2C/4G | ens33 | 128G | 192.168.0.21 | CentOS7 |
Worker2 | worker2 | 2C/4G | ens33 | 128G | 192.168.0.22 | CentOS7 |
Worker3 | worker3 | 2C/4G | ens33 | 128G | 192.168.0.23 | CentOS7 |
The Flume component is deployed on the Worker nodes.
II. Setting Up the Hadoop Cluster
Building the fully distributed Hadoop cluster is out of scope here; see:
https://blog.csdn.net/mengshicheng1992/article/details/116757775
III. Deploying the Flume Component
1. Install the Flume component
Download the Flume archive:
Reference: http://flume.apache.org/download.html
Extract the Flume archive on the Worker nodes (the data-collection nodes):
tar -xf /root/apache-flume-1.9.0-bin.tar.gz -C /usr/local/
Add the Flume bin directory to PATH for the current shell:
export PATH=$PATH:/usr/local/apache-flume-1.9.0-bin/bin/
To make the change persistent, append the same export line to /etc/profile and reload it:
export PATH=$PATH:/usr/local/apache-flume-1.9.0-bin/bin/
source /etc/profile
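A quick sanity check that the binary is now on PATH (assuming the install path above):

```shell
# Should print the Flume version banner, e.g. "Flume 1.9.0".
flume-ng version
```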
2. Configure Flume to collect a directory into HDFS
Create the flume-env.sh file on the Worker nodes (the data-collection nodes):
cat > /usr/local/apache-flume-1.9.0-bin/conf/flume-env.sh << EOF
export JAVA_HOME=/usr/local/jdk1.8.0_291/
EOF
Swap in Hadoop's guava jar on the Worker nodes (the data-collection nodes): Flume 1.9.0 bundles guava-11.0.2, which conflicts with the guava-27.0 shipped with Hadoop 3.2.2 and causes a NoSuchMethodError when writing to HDFS:
cp /usr/local/hadoop-3.2.2/share/hadoop/common/lib/guava-27.0-jre.jar /usr/local/apache-flume-1.9.0-bin/lib/
rm -f /usr/local/apache-flume-1.9.0-bin/lib/guava-11.0.2.jar
Create the Flume configuration file on the Worker nodes (the data-collection nodes):
cat > /usr/local/apache-flume-1.9.0-bin/conf/dir-hdfs.conf << EOF
ag1.sources = source1
ag1.sinks = sink1
ag1.channels = channel1
ag1.sources.source1.type = spooldir
ag1.sources.source1.spoolDir = /file
ag1.sources.source1.fileHeader = true
ag1.sources.source1.deserializer.maxLineLength = 5120
ag1.sinks.sink1.type = hdfs
ag1.sinks.sink1.hdfs.path = hdfs://namenode:9000/file/%y-%m-%d/%H-%M
ag1.sinks.sink1.hdfs.filePrefix = file
ag1.sinks.sink1.hdfs.fileSuffix = .file
ag1.sinks.sink1.hdfs.batchSize = 100
ag1.sinks.sink1.hdfs.fileType = DataStream
ag1.sinks.sink1.hdfs.writeFormat = Text
ag1.sinks.sink1.hdfs.rollSize = 512000
ag1.sinks.sink1.hdfs.rollCount = 1000000
ag1.sinks.sink1.hdfs.rollInterval = 60
ag1.sinks.sink1.hdfs.round = true
ag1.sinks.sink1.hdfs.roundValue = 10
ag1.sinks.sink1.hdfs.roundUnit = minute
ag1.sinks.sink1.hdfs.useLocalTimeStamp = true
ag1.channels.channel1.type = memory
ag1.channels.channel1.capacity = 500000
ag1.channels.channel1.transactionCapacity = 600
ag1.sources.source1.channels = channel1
ag1.sinks.sink1.channel = channel1
EOF
The scheme and authority of hdfs.path must match the fs.defaultFS setting in core-site.xml (here hdfs://namenode:9000).
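The %y-%m-%d/%H-%M escapes in hdfs.path are expanded per event by the HDFS sink; with useLocalTimeStamp = true they come from the local clock, and the round* settings above additionally round the timestamp down to the nearest 10 minutes. Since the escapes match the format letters of date(1), the directory layout can be previewed locally:

```shell
# Preview the directory name the HDFS sink derives from
# hdfs.path = hdfs://namenode:9000/file/%y-%m-%d/%H-%M
# (before the 10-minute rounding configured above).
date '+%y-%m-%d/%H-%M'
```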
Copy the Hadoop configuration files into the Flume conf directory on the Worker nodes (the data-collection nodes):
cp /usr/local/hadoop-3.2.2/etc/hadoop/core-site.xml /usr/local/apache-flume-1.9.0-bin/conf/
cp /usr/local/hadoop-3.2.2/etc/hadoop/hdfs-site.xml /usr/local/apache-flume-1.9.0-bin/conf/
Create the spool directory first (the spooldir source aborts at startup if it is missing):
mkdir -p /file
Run Flume in the background on the Worker nodes (the data-collection nodes); the agent log goes to nohup.out in the current directory:
nohup flume-ng agent -c /usr/local/apache-flume-1.9.0-bin/conf/ -f /usr/local/apache-flume-1.9.0-bin/conf/dir-hdfs.conf -n ag1 -Dflume.root.logger=INFO,console &
3. Demonstrate directory collection into HDFS
Create a file named file1 under /file on the Worker nodes (the data-collection nodes):
mkdir -p /file
touch /file/file1
Verify the collection: Flume ingests the file, renames it with the default .COMPLETED suffix, and writes its contents under /file/<yy-mm-dd>/<HH-MM> in HDFS.
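A minimal end-to-end check, as a sketch (assumes the agent from step 2 is running and HDFS is reachable):

```shell
echo "hello flume" > /file/file2   # drop a non-empty file into the spool directory
sleep 10                           # give the agent time to ingest it
ls /file                           # ingested files are renamed with a .COMPLETED suffix
hdfs dfs -ls -R /file              # collected data lands under /file/<yy-mm-dd>/<HH-MM>
```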
4. Configure Flume to collect a file into HDFS
Install and start a web service on the Worker nodes (the data-collection nodes):
(omitted)
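The choice of web service is left open above; one assumed option that matches the /var/log/httpd/access_log path tailed in the next step is Apache httpd on CentOS 7:

```shell
# Assumption: Apache httpd, whose access log on CentOS 7 is
# /var/log/httpd/access_log -- the file tailed by the exec source below.
yum install -y httpd
systemctl start httpd
curl -s http://localhost/ > /dev/null   # generate one access_log entry
tail -n 1 /var/log/httpd/access_log
```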
Create the flume-env.sh file on the Worker nodes (the data-collection nodes), as in step 2 (skip if already done):
cat > /usr/local/apache-flume-1.9.0-bin/conf/flume-env.sh << EOF
export JAVA_HOME=/usr/local/jdk1.8.0_291/
EOF
Swap in Hadoop's guava jar on the Worker nodes (the data-collection nodes), as in step 2 (skip if already done):
cp /usr/local/hadoop-3.2.2/share/hadoop/common/lib/guava-27.0-jre.jar /usr/local/apache-flume-1.9.0-bin/lib/
rm -f /usr/local/apache-flume-1.9.0-bin/lib/guava-11.0.2.jar
Create the Flume configuration file on the Worker nodes (the data-collection nodes):
cat > /usr/local/apache-flume-1.9.0-bin/conf/tail-hdfs.conf << EOF
ag2.sources = source2
ag2.sinks = sink2
ag2.channels = channel2
ag2.sources.source2.type = exec
ag2.sources.source2.command = tail -F /var/log/httpd/access_log
ag2.sinks.sink2.type = hdfs
ag2.sinks.sink2.hdfs.path = hdfs://namenode:9000/access_log/%y-%m-%d/%H-%M
ag2.sinks.sink2.hdfs.filePrefix = log
ag2.sinks.sink2.hdfs.fileSuffix = .log
ag2.sinks.sink2.hdfs.batchSize = 100
ag2.sinks.sink2.hdfs.fileType = DataStream
ag2.sinks.sink2.hdfs.writeFormat = Text
ag2.sinks.sink2.hdfs.rollSize = 512000
ag2.sinks.sink2.hdfs.rollCount = 1000000
ag2.sinks.sink2.hdfs.rollInterval = 60
ag2.sinks.sink2.hdfs.round = true
ag2.sinks.sink2.hdfs.roundValue = 10
ag2.sinks.sink2.hdfs.roundUnit = minute
ag2.sinks.sink2.hdfs.useLocalTimeStamp = true
ag2.channels.channel2.type = memory
ag2.channels.channel2.capacity = 500000
ag2.channels.channel2.transactionCapacity = 600
ag2.sources.source2.channels = channel2
ag2.sinks.sink2.channel = channel2
EOF
The scheme and authority of hdfs.path must match the fs.defaultFS setting in core-site.xml (here hdfs://namenode:9000).
Copy the Hadoop configuration files into the Flume conf directory on the Worker nodes (the data-collection nodes), as in step 2 (skip if already done):
cp /usr/local/hadoop-3.2.2/etc/hadoop/core-site.xml /usr/local/apache-flume-1.9.0-bin/conf/
cp /usr/local/hadoop-3.2.2/etc/hadoop/hdfs-site.xml /usr/local/apache-flume-1.9.0-bin/conf/
Run Flume in the background on the Worker nodes (the data-collection nodes); the agent log goes to nohup.out in the current directory:
nohup flume-ng agent -c /usr/local/apache-flume-1.9.0-bin/conf/ -f /usr/local/apache-flume-1.9.0-bin/conf/tail-hdfs.conf -n ag2 -Dflume.root.logger=INFO,console &
5. Demonstrate file collection into HDFS
Visit the web service on the Worker node so new entries are appended to the access log, which Flume tails into HDFS.
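For example, as a sketch (worker1 is a stand-in for whichever Worker node runs agent ag2):

```shell
# Generate a few requests, then confirm the tailed lines reached HDFS.
for i in 1 2 3 4 5; do
  curl -s http://worker1/ > /dev/null   # each request appends to access_log
done
hdfs dfs -ls -R /access_log             # files appear under /access_log/<yy-mm-dd>/<HH-MM>
```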