Installing Flume and Testing Failover

[b]1. Goal[/b]
Configure Flume to monitor a local directory for changes and upload the changed files to HDFS.
[b]2. Cluster Layout (Flume must be installed on all 3 machines)[/b]
[img]http://dl2.iteye.com/upload/attachment/0118/1297/28d83bb4-6f97-36c7-8f54-8b948969bfe4.jpg[/img]
[img]http://dl2.iteye.com/upload/attachment/0118/1299/3b4fea49-a0a0-335f-b225-0f8e3038bc37.png[/img]
[b]3. Software Preparation[/b]
Download the package from http://flume.apache.org/download.html, choosing the latest release at the time of writing: apache-flume-1.6.0-bin.tar.gz.
Upload it to the /usr/local/flume directory on the virtual machine, creating the directory first if it does not exist.
Extract it by running: root@master1:/usr/local/flume# tar -zxvf apache-flume-1.6.0-bin.tar.gz
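If /usr/local/flume does not exist yet, a minimal sequence would be (run as root; adjust the path if your layout differs):

mkdir -p /usr/local/flume
cd /usr/local/flume
# upload or copy apache-flume-1.6.0-bin.tar.gz into this directory, then:
tar -zxvf apache-flume-1.6.0-bin.tar.gz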

[b]4. Configure Environment Variables (identical on all 3 machines)[/b]
Edit the .bashrc file and add the following:
export FLUME_HOME=/usr/local/flume/apache-flume-1.6.0-bin 
export FLUME_CONF_DIR=${FLUME_HOME}/conf
export PATH=.:${JAVA_HOME}/bin:${SCALA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${SPARK_HOME}/bin:${ZOOKEEPER_HOME}/bin:${HIVE_HOME}/bin:${FLUME_HOME}/bin:$PATH

Apply the changes by running: root@master1:/usr/local/flume# source ~/.bashrc
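To confirm the environment is set up, you can run flume-ng version from any directory; it should report the Flume version rather than "command not found" (output abbreviated):

root@master1:~# flume-ng version
Flume 1.6.0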
[b]5. Per-Node Configuration[/b]
On the master node, create conf/flume-client.properties under $FLUME_HOME (the source side that collects the logs):

#agent1 name
agent1.channels = c1
agent1.sources = r1
agent1.sinks = k1 k2

#set group
agent1.sinkgroups = g1

#set channel
agent1.channels.c1.type = memory
agent1.channels.c1.capacity = 1000
agent1.channels.c1.transactionCapacity = 100

agent1.sources.r1.channels = c1
agent1.sources.r1.type = spooldir
agent1.sources.r1.spoolDir = /usr/local/flume/tmp/TestDir

agent1.sources.r1.interceptors = i1 i2
agent1.sources.r1.interceptors.i1.type = static
agent1.sources.r1.interceptors.i1.key = Type
agent1.sources.r1.interceptors.i1.value = LOGIN
agent1.sources.r1.interceptors.i2.type = timestamp

# set sink1
agent1.sinks.k1.channel = c1
agent1.sinks.k1.type = avro
agent1.sinks.k1.hostname = worker1
agent1.sinks.k1.port = 52020

# set sink2
agent1.sinks.k2.channel = c1
agent1.sinks.k2.type = avro
agent1.sinks.k2.hostname = worker2
agent1.sinks.k2.port = 52020

#set sink group
agent1.sinkgroups.g1.sinks = k1 k2

#set failover
agent1.sinkgroups.g1.processor.type = failover
agent1.sinkgroups.g1.processor.priority.k1 = 10
agent1.sinkgroups.g1.processor.priority.k2 = 1
agent1.sinkgroups.g1.processor.maxpenalty = 10000
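A note on these settings: with processor.type = failover, only the highest-priority live sink (k1, priority 10) receives events; k2 takes over only when k1 fails, and maxpenalty caps the failed sink's back-off at 10000 ms. If you wanted to spread events across both collectors instead of failing over, the sink group also supports a load-balancing processor; a sketch, reusing the same k1/k2 sinks:

agent1.sinkgroups.g1.processor.type = load_balance
agent1.sinkgroups.g1.processor.backoff = true
agent1.sinkgroups.g1.processor.selector = round_robin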


On worker1 and worker2, create conf/flume-server.properties (note the file name; it is referenced by the start commands below).
On worker1:
 
#set Agent name
a1.sources = r1
a1.channels = c1
a1.sinks = k1

#set channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# avro source: listen for events forwarded from the agent on master1
a1.sources.r1.type = avro
a1.sources.r1.bind = worker1
a1.sources.r1.port = 52020
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = static
a1.sources.r1.interceptors.i1.key = Collector
a1.sources.r1.interceptors.i1.value = worker1
a1.sources.r1.channels = c1

#set sink to hdfs
a1.sinks.k1.type=hdfs
a1.sinks.k1.hdfs.path=/library/flume
a1.sinks.k1.hdfs.fileType=DataStream
a1.sinks.k1.hdfs.writeFormat=TEXT
a1.sinks.k1.hdfs.rollInterval=1
a1.sinks.k1.channel=c1
a1.sinks.k1.hdfs.filePrefix=%Y-%m-%d


worker2上:

#set Agent name
a1.sources = r1
a1.channels = c1
a1.sinks = k1

#set channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# avro source: listen for events forwarded from the agent on master1
a1.sources.r1.type = avro
a1.sources.r1.bind = worker2
a1.sources.r1.port = 52020
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = static
a1.sources.r1.interceptors.i1.key = Collector
a1.sources.r1.interceptors.i1.value = worker2
a1.sources.r1.channels = c1

#set sink to hdfs
a1.sinks.k1.type=hdfs
a1.sinks.k1.hdfs.path=/library/flume
a1.sinks.k1.hdfs.fileType=DataStream
a1.sinks.k1.hdfs.writeFormat=TEXT
a1.sinks.k1.hdfs.rollInterval=1
a1.sinks.k1.channel=c1
a1.sinks.k1.hdfs.filePrefix=%Y-%m-%d
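Two details worth noting in these sink settings: hdfs.filePrefix uses the %Y-%m-%d escape, which Flume resolves from the event's timestamp header (that is why the timestamp interceptor is added on the client side), and hdfs.rollInterval=1 rolls a new HDFS file every second, which is why each test below produces its own output file. If you preferred date-partitioned directories instead, a variant might look like this (hdfs.useLocalTimeStamp makes the sink use the local clock instead of the header when resolving the escapes):

a1.sinks.k1.hdfs.path = /library/flume/%Y-%m-%d
a1.sinks.k1.hdfs.filePrefix = events
a1.sinks.k1.hdfs.useLocalTimeStamp = true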


[b]6. Start the Flume Cluster (the Hadoop cluster must be running first)[/b]
a) First start the collector side, i.e. the agents configured on worker1 and worker2:
root@worker1:/usr/local/flume/apache-flume-1.6.0-bin/conf# flume-ng agent -n a1 -c conf -f flume-server.properties -Dflume.root.logger=DEBUG,console  
root@worker2:/usr/local/flume/apache-flume-1.6.0-bin/conf# flume-ng agent -n a1 -c conf -f flume-server.properties -Dflume.root.logger=DEBUG,console


b) Then start the agent side, i.e. the configuration on master1:
root@master1:/usr/local/flume/apache-flume-1.6.0-bin/conf# flume-ng agent -n agent1 -c conf -f flume-client.properties -Dflume.root.logger=DEBUG,console
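For this walkthrough the agents run in the foreground with console logging, which makes the behavior easy to observe. For anything longer-lived you would typically background the process and log to a file instead, e.g. (the log path is just an example):

root@master1:/usr/local/flume/apache-flume-1.6.0-bin/conf# nohup flume-ng agent -n agent1 -c conf -f flume-client.properties > /tmp/flume-client.log 2>&1 &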


Note: because agent1.sources.r1.spoolDir = /usr/local/flume/tmp/TestDir is configured on master1, this directory must exist before you start the agent on master1, otherwise the agent fails with an error. On worker1 and worker2 we configured a1.sinks.k1.hdfs.path=/library/flume, but that path does not need to be created in advance: when files change under /usr/local/flume/tmp/TestDir, Flume creates /library/flume on HDFS automatically.
After the cluster is up, check the processes on each node; the Flume process is named "Application":

[img]http://dl2.iteye.com/upload/attachment/0118/1301/ce145b8c-423e-3e38-aee2-36df6a6ca74a.png[/img]

[b]7. Testing Data Transfer[/b]
Generate a new file under the monitored path on the master1 node (simulating a web server producing a new log file).
Initially, TestDir contains no files, apart from a hidden .flumespool directory (names starting with a dot are hidden):
root@master1:/usr/local/flume/tmp/TestDir# ll 
total 12
drwxr-xr-x 3 root root 4096 Jun 16 21:16 ./
drwxr-xr-x 3 root root 4096 Jun 16 21:16 ../
drwxr-xr-x 2 root root 4096 Jun 16 21:16 .flumespool/
root@master1:/usr/local/flume/tmp/TestDir#

And the configured path on HDFS contains no files yet either.

Create a test.log file under /usr/local/flume/tmp with the following content:
This is a test file ! 
This is a test file !
This is a test file !
Spark Hadoop JAVA Scala
SPark Spark
Hadoop

Then copy test.log into the monitored directory:
root@master1:/usr/local/flume/tmp# cp test.log ./TestDir/ 

The master1 console shows the following log output:
[img]http://dl2.iteye.com/upload/attachment/0118/1303/4db8e72a-8890-3584-a2cf-df848db65a6c.png[/img]
Meanwhile, the worker1 console prints:
[img]http://dl2.iteye.com/upload/attachment/0118/1305/749df0d8-1c5d-320e-abab-1a20da97ae53.png[/img]
Finally, inspect the content of the file generated on HDFS:
[img]http://dl2.iteye.com/upload/attachment/0118/1307/5e7cf526-844d-323d-afc8-56709e39a6b5.png[/img]
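Besides the WebUI, you can inspect the result from the shell as well (the file name is the %Y-%m-%d prefix plus a generated numeric suffix):

root@master1:~# hdfs dfs -ls /library/flume
root@master1:~# hdfs dfs -cat /library/flume/2016-06-16.*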

[b]Test uploading two files to TestDir at the same time.[/b]
Create test_1.log with the following content:
Hadoop Hadoop Hadoop
Java Java Java
Hive Hive Hive
Then create test_2.log with the following content:
Scala Scala Scala
Spark Spark Spark
Now copy both files into /usr/local/flume/tmp/TestDir at the same time:
root@master1:/usr/local/flume/tmp# cp test_* ./TestDir/
Check the master1 console:
[img]http://dl2.iteye.com/upload/attachment/0118/1310/3dd411d0-7933-3ac6-99ac-4ff621fa2e14.png[/img]
Check the worker1 console:
[img]http://dl2.iteye.com/upload/attachment/0118/1312/a3456ca2-e0ba-3381-8103-c033b0b47dda.png[/img]
As we can see, the two source files were merged into a single file on HDFS.
Now look at the file names on the source side:
[img]http://dl2.iteye.com/upload/attachment/0118/1314/132efc93-077f-3abb-95ff-ae6c3282479b.png[/img]
Each file name has had the .COMPLETED suffix appended.
Note: if a test.log file already exists in /usr/local/flume/tmp/TestDir and you copy another test.log into TestDir, Flume will fail with the following error:

16/06/16 22:17:57 INFO avro.ReliableSpoolingFileEventReader: Last read took us just up to a file boundary. Rolling to the next file, if there is one.
16/06/16 22:17:57 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /usr/local/flume/tmp/TestDir/test.log to /usr/local/flume/tmp/TestDir/test.log.COMPLETED
16/06/16 22:17:57 ERROR source.SpoolDirectorySource: FATAL: Spool Directory source r1: { spoolDir: /usr/local/flume/tmp/TestDir }: Uncaught exception in SpoolDirectorySource thread. Restart or reconfigure Flume to continue processing.
java.lang.IllegalStateException: File name has been re-used with different files. Spooling assumptions violated for /usr/local/flume/tmp/TestDir/test.log.COMPLETED
at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:378)
at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.retireCurrentFile(ReliableSpoolingFileEventReader.java:330)
at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:259)
at org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:228)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


At this point you must delete test.log from the TestDir directory and restart the agent on master1. Flume will, however, still have uploaded the file you copied into TestDir this time to HDFS. To verify:
root@master1:/usr/local/flume/tmp# vim test.log
Edit the file, appending Hive as a final line, then:
root@master1:/usr/local/flume/tmp# cp test.log ./TestDir/
Check the WebUI:
[img]http://dl2.iteye.com/upload/attachment/0118/1316/7f08a2cc-06b5-3170-a76d-2f6676355116.png[/img]


root@master1:/usr/local/flume/tmp# hdfs dfs -cat /library/flume/2016-06-16.1466088454203
16/06/16 22:47:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
This is a test file !
This is a test file !
This is a test file !
Spark Hadoop JAVA Scala
SPark Spark
Hadoop
Hive
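One way to avoid the file-name-reuse error entirely is to not leave completed files in the spool directory: the spooling directory source supports deletePolicy = immediate, which deletes each file after ingestion instead of renaming it to *.COMPLETED. A sketch for the client config (the trade-off is losing the on-disk record of what was ingested):

agent1.sources.r1.deletePolicy = immediate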

[b]8. Testing Failover[/b]
Now let's test the high availability (failover) of the Flume NG cluster. The scenario: we upload a file on the Agent1 node. Since Collector1 is configured with a higher priority than Collector2, Collector1 collects the logs and uploads them to the storage system first. We then kill Collector1, at which point Collector2 takes over the log collection and upload work. After that, we manually restore the Flume service on the Collector1 node, upload another file on Agent1, and observe that Collector1 resumes collection at its higher priority:
On worker1:
root@worker1:~# jps 
23970 Application
23826 NodeManager
24677 Jps
23690 DataNode
root@worker1:~# kill -9 23970
root@worker1:~# jps
24688 Jps
23826 NodeManager
23690 DataNode

The worker1 console also shows that Collector1 has been killed.
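Incidentally, since the Flume JVM appears in jps as "Application", you can also kill it by name; a sketch, assuming only one Flume agent runs on the node:

root@worker1:~# kill -9 $(jps | grep Application | awk '{print $1}')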
Copy one more log file into the TestDir directory (do not reuse a file name already copied into TestDir).
On master1:
root@master1:/usr/local/flume/tmp# vim test_3.log  
root@master1:/usr/local/flume/tmp# cat test_3.log
Test Failover
Test Failover
root@master1:/usr/local/flume/tmp# cp test_3.log ./TestDir/


The master1 console shows that the file change was detected and the data was uploaded to HDFS; after that, sink k1 reports an error and keeps printing:

Attempting to create Avro Rpc client.
16/06/16 22:41:11 INFO avro.ReliableSpoolingFileEventReader: Last read took us just up to a file boundary. Rolling to the next file, if there is one.
16/06/16 22:41:11 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /usr/local/flume/tmp/TestDir/test_3.log to /usr/local/flume/tmp/TestDir/test_3.log.COMPLETED
16/06/16 22:41:19 WARN sink.FailoverSinkProcessor: Sink k1 failed and has been sent to failover list
org.apache.flume.EventDeliveryException: Failed to send events
at org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:392)
at org.apache.flume.sink.FailoverSinkProcessor.process(FailoverSinkProcessor.java:182)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.flume.EventDeliveryException: NettyAvroRpcClient { host: worker1, port: 52020 }: Failed to send batch
at org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:315)
at org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:376)
... 3 more
Caused by: org.apache.flume.EventDeliveryException: NettyAvroRpcClient { host: worker1, port: 52020 }: RPC request exception
at org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:365)
at org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:303)
... 4 more
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Error connecting to worker1/192.168.112.131:52020
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:357)
... 5 more
Caused by: java.io.IOException: Error connecting to worker1/192.168.112.131:52020
at org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:261)
at org.apache.avro.ipc.NettyTransceiver.getRemoteName(NettyTransceiver.java:386)
at org.apache.avro.ipc.Requestor.writeHandshake(Requestor.java:202)
at org.apache.avro.ipc.Requestor.access$300(Requestor.java:52)
at org.apache.avro.ipc.Requestor$Request.getBytes(Requestor.java:478)
at org.apache.avro.ipc.Requestor.request(Requestor.java:147)
at org.apache.avro.ipc.Requestor.request(Requestor.java:129)
at org.apache.avro.ipc.specific.SpecificRequestor.invoke(SpecificRequestor.java:84)
at com.sun.proxy.$Proxy5.appendBatch(Unknown Source)
at org.apache.flume.api.NettyAvroRpcClient$2.call(NettyAvroRpcClient.java:348)
at org.apache.flume.api.NettyAvroRpcClient$2.call(NettyAvroRpcClient.java:344)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
Caused by: java.net.ConnectException: 拒绝连接 (Connection refused)
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:496)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:452)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:365)
... 3 more
16/06/16 22:41:22 INFO sink.AbstractRpcSink: Rpc sink k1: Building RpcClient with hostname: worker1, port: 52020
16/06/16 22:41:22 INFO sink.AvroSink: Attempting to create Avro Rpc client.
16/06/16 22:41:22 WARN api.NettyAvroRpcClient: Using default maxIOWorkers
16/06/16 22:41:26 INFO sink.AbstractRpcSink: Rpc sink k1: Building RpcClient with hostname: worker1, port: 52020
16/06/16 22:41:26 INFO sink.AvroSink: Attempting to create Avro Rpc client.
16/06/16 22:41:26 WARN api.NettyAvroRpcClient: Using default maxIOWorkers
16/06/16 22:41:37 INFO sink.AbstractRpcSink: Rpc sink k1: Building RpcClient with hostname: worker1, port: 52020


Console log output on worker2:
 
16/06/16 22:41:22 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
16/06/16 22:41:23 INFO hdfs.BucketWriter: Creating /library/flume/2016-06-16.1466088082911.tmp
16/06/16 22:41:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/06/16 22:41:26 INFO hdfs.BucketWriter: Closing /library/flume/2016-06-16.1466088082911.tmp
16/06/16 22:41:26 INFO hdfs.BucketWriter: Renaming /library/flume/2016-06-16.1466088082911.tmp to /library/flume/2016-06-16.1466088082911
16/06/16 22:41:26 INFO hdfs.HDFSEventSink: Writer callback called.


Check the WebUI:
[img]http://dl2.iteye.com/upload/attachment/0118/1318/ae9b1be3-160c-3c26-bbfa-6a452d7d7978.png[/img]

Check the content:
root@master1:/usr/local/flume/tmp# hdfs dfs -cat /library/flume/2016-06-16.1466088082911  
16/06/16 22:51:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Test Failover
Test Failover


Restart the Collector1 service by running the following on worker1:
root@worker1:/usr/local/flume/apache-flume-1.6.0-bin/conf# flume-ng agent -n a1 -c conf -f flume-server.properties -Dflume.root.logger=DEBUG,console  


Repeat the process above: generate a new file on the agent (master1), and Collector1 (worker1) takes over the service again.
On master1:
root@master1:/usr/local/flume/tmp# vim test_4.log  
root@master1:/usr/local/flume/tmp# cat test_4.log
Test Failover is good !!!
root@master1:/usr/local/flume/tmp# cp test_4.log TestDir/


master1 console:
16/06/16 22:57:00 INFO avro.ReliableSpoolingFileEventReader: Last read took us just up to a file boundary. Rolling to the next file, if there is one.  
16/06/16 22:57:00 INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /usr/local/flume/tmp/TestDir/test_4.log to /usr/local/flume/tmp/TestDir/test_4.log.COMPLETED


Logs on worker1:
16/06/16 22:57:03 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false  
16/06/16 22:57:03 INFO hdfs.BucketWriter: Creating /library/flume/2016-06-16.1466089023374.tmp
16/06/16 22:57:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/06/16 22:57:06 INFO hdfs.BucketWriter: Closing /library/flume/2016-06-16.1466089023374.tmp
16/06/16 22:57:06 INFO hdfs.BucketWriter: Renaming /library/flume/2016-06-16.1466089023374.tmp to /library/flume/2016-06-16.1466089023374
16/06/16 22:57:06 INFO hdfs.HDFSEventSink: Writer callback called.



The test succeeded: Flume does indeed perform the failover!

Source (Sina Weibo): [url]http://weibo.com.ilovepains/[/url]