Installation problems encountered while following the "get to know Flume quickly: install and use Flume 1.5 to transfer data (logs) to Hadoop 2.2" guide

1. log4j warnings on startup, then nothing happens

log4j:WARN No appenders could be found for logger (org.apache.flume.lifecycle.LifecycleSupervisor).

log4j:WARN Please initialize the log4j system properly.

log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Because I had set the environment variables, flume-ng itself ran fine from any path. I eventually found http://stackoverflow.com/questions/12280403/flume-running-failed-in-linux, whose symptoms match mine exactly. I had passed -c conf, though, so my guess is that Flume simply could not find the conf directory referenced by -c; running the command from inside the Flume installation directory made the problem go away.
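A minimal sketch of what I mean, assuming Flume is installed under /usr/local/apache-flume-1.5.0-bin and the agent is defined in conf/flume-hdfs.conf (both paths and the agent name are placeholders, not from the original post). A relative -c conf is resolved against the current working directory, so either run from the Flume home or pass absolute paths:

# shows the log4j "No appenders" warnings when run from some other directory,
# because ./conf does not exist relative to the current working directory
flume-ng agent -n agent1 -c conf -f conf/flume-hdfs.conf

# works from any directory: point -c and -f at absolute paths
flume-ng agent -n agent1 \
  -c /usr/local/apache-flume-1.5.0-bin/conf \
  -f /usr/local/apache-flume-1.5.0-bin/conf/flume-hdfs.conf \
  -Dflume.root.logger=INFO,console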



2. [ERROR - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:207)] Failed to start agent because dependencies were not found in classpath. Error follows.
java.lang.NoClassDefFoundError: org/apache/hadoop/io/SequenceFile$CompressionType
  at org.apache.flume.sink.hdfs.HDFSEventSink.configure(HDFSEventSink.java:214)
    at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
    at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.loadSinks(PropertiesFileConfigurationProvider.java:373)
    at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.load(PropertiesFileConfigurationProvider.java:223)
    at org.apache.flume.conf.file.AbstractFileConfigurationProvider.doLoad(AbstractFileConfigurationProvider.java:123)
    at org.apache.flume.conf.file.AbstractFileConfigurationProvider.access$300(AbstractFileConfigurationProvider.java:38)
    at org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:202)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:679)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.io.SequenceFile$CompressionType
    at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
  ... 15 more

I searched online and found this article: http://cache.baiducontent.com/c?m=9f65cb4a8c8507ed4fece76310528c3e4a1fc2307c8c894f68d4e419ce3b4655023ba3ed2876435f8d922a7001de0f01fdf04733715060f18cc8f91988ecce6e38885664274dd50653840eafba10728377cc0cbef348bcedb12593df97849f0344ca225927c6e78b2b54498d33a6033192fdc55f152e41e4be7124bd0a3173882230a1478e&p=9772c54ad0c913e70be296385a00&newp=8f3d8416d9c159b10cbd9b78074492695803ed603cd6d50d6180&user=baidu&fm=sc&query=Failed+to+start+agent+because+dependencies+were+not+found+in+classpath%2E&qid=&p1=2

It says some jar files are missing,

but not how to add them. Scrolling up on that page, though, the instructions are there:

cd ~
# download Hadoop 1.0.4 and copy the jars Flume's HDFS sink depends on into Flume's lib/
wget http://mirror.symnds.com/software/Apache/hadoop/common/hadoop-1.0.4/hadoop-1.0.4-bin.tar.gz
tar xvzf hadoop-1.0.4-bin.tar.gz
rm hadoop-1.0.4-bin.tar.gz
cp ~/hadoop-1.0.4/hadoop-core-1.0.4.jar ~/apache-flume-1.3.1-bin/lib/
cp ~/hadoop-1.0.4/lib/commons-configuration-1.6.jar ~/apache-flume-1.3.1-bin/lib/
cp ~/hadoop-1.0.4/lib/commons-httpclient-3.0.1.jar ~/apache-flume-1.3.1-bin/lib/
cp ~/hadoop-1.0.4/lib/jets3t-0.6.1.jar ~/apache-flume-1.3.1-bin/lib/
cp ~/hadoop-1.0.4/lib/commons-codec-1.4.jar ~/apache-flume-1.3.1-bin/lib/


Now it should be clear what to do. My Hadoop is 2.3.0, so I took all the jars sitting directly inside each folder under %HADOOP_HOME%/share/hadoop/ (in the Hadoop installation directory), plus the jars under common/lib; the other folders' second-level directories are not needed. This part may not be entirely clear, so feel free to leave a comment. I did not delete jars one by one to find out exactly which ones are required; if you know, please leave me a comment, thanks.
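A rough sketch of that copy step for Hadoop 2.x, assuming HADOOP_HOME and FLUME_HOME are set; the exact jar set is an assumption, since as noted above I have not narrowed it down to the minimum:

# jars directly under each module folder of the Hadoop 2.x layout
cp $HADOOP_HOME/share/hadoop/common/*.jar    $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/hdfs/*.jar      $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/mapreduce/*.jar $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/yarn/*.jar      $FLUME_HOME/lib/
# plus the third-party dependency jars under common/lib
cp $HADOOP_HOME/share/hadoop/common/lib/*.jar $FLUME_HOME/lib/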




3. Unable to deliver event: in_use.lock (Permission denied)

2014-06-05 15:29:26,005 INFO  [lifecycleSupervisor-1-1] instrumentation.MonitoredCounterGroup (MonitoredCounterGroup.java:start(95)) - Component type: SINK, name: sink1 started
2014-06-05 15:29:26,006 ERROR [SinkRunner-PollingRunner-DefaultSinkProcessor] flume.SinkRunner (SinkRunner.java:run(160)) - Unable to deliver event. Exception follows.
java.lang.IllegalStateException: Channel closed [channel=channel1]. Due to java.io.FileNotFoundException: /home/hadoop/aboutyun_tmp123/in_use.lock (Permission denied)

A permissions problem; the details depend on your setup. In my case these directories had been created as root, so changing their owner and group fixed it.
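A sketch of the fix for my case, assuming the file channel directory is /home/hadoop/aboutyun_tmp123 (the path from the error above) and the agent runs as the hadoop user; the user and group names are assumptions:

# hand the file-channel directory back to the user that runs the Flume agent
chown -R hadoop:hadoop /home/hadoop/aboutyun_tmp123
# confirm the ownership so in_use.lock can be created by that user
ls -l /home/hadoop/aboutyun_tmp123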


To sum up: many sites repost the "get to know Flume quickly: install and use Flume 1.5 to transfer data (logs) to Hadoop 2.2" guide, but I never got it to work as written (I don't know how others fared), so it seems a lot of these problems simply aren't covered there. I hope this post helps.
