(2) Getting Started with Flume: HelloWorld, and a Summary of Problems Encountered Writing to HDFS

(1) HelloWorld 

Starting an agent

An agent is started using a shell script called flume-ng which is located in the bin directory of the Flume distribution. You need to specify the agent name, the config directory, and the config file on the command line:

$ bin/flume-ng agent -n $agent_name -c conf -f conf/flume-conf.properties.template

Now the agent will start running the sources and sinks configured in the given properties file.

A simple example

Here, we give an example configuration file describing a single-node Flume deployment. This configuration lets a user generate events, which are subsequently logged to the console.

# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

This configuration defines a single agent named a1. a1 has a source that listens for data on port 44444, a channel that buffers event data in memory, and a sink that logs event data to the console. The configuration file names the various components, then describes their types and configuration parameters. A given configuration file might define several named agents; when a given Flume process is launched a flag is passed telling it which named agent to manifest.
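For example, a hypothetical second agent could coexist in the same file and would be selected at launch with --name a2 (everything below, including port 44445, is an illustrative assumption, not part of the original example):

# a second, independent agent in the same configuration file
a2.sources = r2
a2.channels = c2
a2.sinks = k2
a2.sources.r2.type = netcat
a2.sources.r2.bind = localhost
a2.sources.r2.port = 44445
a2.sinks.k2.type = logger
a2.channels.c2.type = memory
a2.sources.r2.channels = c2
a2.sinks.k2.channel = c2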

Given this configuration file, we can start Flume as follows:

$ bin/flume-ng agent --conf conf --conf-file example.conf --name a1 -Dflume.root.logger=INFO,console

Note that in a full deployment we would typically include one more option: --conf=<conf-dir>. The <conf-dir> directory would include a shell script flume-env.sh and potentially a log4j properties file. In this example, we pass a Java option to force Flume to log to the console and we go without a custom environment script.
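For reference, a minimal conf/flume-env.sh might look like the sketch below; the JDK path and heap sizes are illustrative assumptions, not values from this post:

# conf/flume-env.sh - sourced by bin/flume-ng at startup
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0        # assumed JDK location
export JAVA_OPTS="-Xms100m -Xmx512m"          # assumed JVM heap settings for the agent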

From a separate terminal, we can then telnet to port 44444 and send Flume an event:

$ telnet localhost 44444
Trying 127.0.0.1...
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
Hello world! <ENTER>
OK

The original Flume terminal will output the event in a log message.

12/06/19 15:32:19 INFO source.NetcatSource: Source starting
12/06/19 15:32:19 INFO source.NetcatSource: Created serverSocket:sun.nio.ch.ServerSocketChannelImpl[/127.0.0.1:44444]
12/06/19 15:32:34 INFO sink.LoggerSink: Event: { headers:{} body: 48 65 6C 6C 6F 20 77 6F 72 6C 64 21 0D          Hello world!. }

Congratulations - you’ve successfully configured and deployed a Flume agent! Subsequent sections cover agent configuration in much more detail.

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

(2) Single-Node Flume Writing Directly into HDFS

Reference article: http://my.oschina.net/leejun2005/blog/288136#OSC_h2_10. I followed the exercise in section 3.2 (Flume version 1.5.2).

Problems encountered:


2014-12-15 17:22:28,418 (conf-file-poller-0) [ERROR - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:145)] Failed to start agent because dependencies were not found in classpath. Error follows.
java.lang.NoClassDefFoundError: org/apache/hadoop/io/SequenceFile$CompressionType
at org.apache.flume.sink.hdfs.HDFSEventSink.configure(HDFSEventSink.java:251)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:418)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:103)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.io.SequenceFile$CompressionType
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 12 more

In plain terms: some class could not be found on the classpath, which means a JAR is missing. I dug around online for half a day and read the official documentation as well.

Writing from single-node Flume into HDFS uses the HDFS Sink. The official documentation describes it as follows:

This sink writes events into the Hadoop Distributed File System (HDFS). It currently supports creating text and sequence files, with compression in both file types. The files can be rolled (close current file and create a new one) periodically based on the elapsed time, size of data, or number of events. It also buckets/partitions data by attributes like timestamp or the machine where the event originated. The HDFS directory path may contain formatting escape sequences that will be replaced by the HDFS sink to generate a directory/file name to store the events. Using this sink requires Hadoop to be installed so that Flume can use the Hadoop JARs to communicate with the HDFS cluster. Note that a version of Hadoop that supports the sync() call is required.

The gist: using this sink requires Hadoop JARs on the classpath, but the documentation does not say exactly which ones.
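For context, a typical HDFS sink block looks like the sketch below; the NameNode address and path are assumed placeholders, and the %Y-%m-%d escapes are the formatting sequences the documentation mentions:

# HDFS sink sketch - hdfs://namenode:8020 and the path are assumptions
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://namenode:8020/flume/events/%Y-%m-%d
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.rollInterval = 300
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.channel = c1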

(1) org/apache/hadoop/io/SequenceFile$CompressionType is in hadoop-2.3.0-cdh5.1.0\share\hadoop\common\hadoop-common-2.3.0-cdh5.1.0.jar, so I copied that JAR into Flume's lib directory. On restart, the same error appeared.

(2) Checked the environment (the server has two Flume installations: A has FLUME_HOME configured; B, the one I was practicing with, does not), so I went into conf and edited flume-env.sh:

JAVA_HOME=/home/xxx/jdk/jdk1.7.0_55   (already present)
FLUME_HOME=/home/xxx/pjmflume/flume   (added here)

(3) Restarted and got a new error: Caused by: java.lang.ClassNotFoundException: org.apache.commons.configuration.Configuration

Missing the commons-configuration-1.6.jar package. Added it.

(4) Restarted and got another error: Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.util.PlatformName

Missing hadoop-auth-2.3.0-cdh5.1.0.jar. Added it.

(5) Then this error:

2014-12-15 17:42:34,356 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:463)] HDFS IO error
java.io.IOException: No FileSystem for scheme: hdfs

Missing hadoop-hdfs-2.3.0-cdh5.1.0.jar. Added it.

----------------------------------------------------------------------------

In theory, Flume's HDFS sink just needs the Hadoop JARs listed above. If you would rather not chase down the errors one by one, you can simply copy all the JARs under Hadoop's share directory into Flume's lib directory, as sketched below.
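A hedged sketch of that shortcut, reusing install paths in the style of this post (adjust to your layout; note that blindly copying everything can also pull in conflicting JAR versions):

# copy every JAR under Hadoop's share directory into Flume's lib (paths assumed)
$ find /home/xxx/hadoop-2.3.0-cdh5.1.0/share/hadoop -name "*.jar" \
    -exec cp {} /home/xxx/pjmflume/flume/lib/ \;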

----------------------------------------------------------------------------

With the JARs sorted out, yet another error appeared:
org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:428)] HDFSWriter is already closed
java.net.NoRouteToHostException: No Route to Host from  gz-asp-nginx-10802106/127.0.0.1 to 10.111.15.4:9111 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see:  http://wiki.apache.org/hadoop/NoRouteToHost

Caused by: java.net.NoRouteToHostException: No route to host

This is a communication error: my guess was that this machine and the HDFS cluster are not on the same network.

Telnet to the IP and port to check:

telnet 10.211.151.4 9111
Trying 10.211.151.4...
telnet: connect to address 10.211.151.4: No route to host
telnet: Unable to connect to remote host: No route to host

In the end I switched to a server that could reach HDFS and repeated the exercise; it worked.
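Once on a host that can reach the cluster, a quick way to confirm the sink is writing is to list the target directory (the path here is an assumed example; FlumeData is the sink's default file prefix):

$ hdfs dfs -ls /flume/events/2014-12-15
# files appear as FlumeData.<timestamp>.tmp while open, and lose the .tmp suffix once rolled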
