(Unresolved) Flume watches a directory, grabs file contents, and pushes them to Kafka; errors occur

Flume is watching a directory, grabbing the contents of new files, and pushing them to Kafka, but it reports an error:

/export/datas/destFile/220104_YT1013_8c5f13f33c299316c6720cc51f94f7a0_2016101912_318.txt
2019-08-06 23:04:31,434 (pool-3-thread-1) [ERROR - org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:280)] FATAL: Spool Directory source r1: { spoolDir: /export/datas/destFile }: Uncaught exception in SpoolDirectorySource thread. Restart or reconfigure Flume to continue processing.
java.nio.charset.MalformedInputException: Input length = 1
at java.nio.charset.CoderResult.throwException(CoderResult.java:281)
at org.apache.flume.serialization.ResettableFileInputStream.readChar(ResettableFileInputStream.java:283)
at org.apache.flume.serialization.LineDeserializer.readLine(LineDeserializer.java:132)
at org.apache.flume.serialization.LineDeserializer.readEvent(LineDeserializer.java:70)
at org.apache.flume.serialization.LineDeserializer.readEvents(LineDeserializer.java:89)
at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readDeserializerEvents(ReliableSpoolingFileEventReader.java:343)
at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:331)
at org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:250)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

 

After a file is processed normally, it should be deleted, and the log should show this message instead: 2019-08-06 23:04:31,412 (pool-3-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.deleteCurrentFile(ReliableSpoolingFileEventReader.java:492)] Preparing to delete file

 

Suspected cause: the file's character-set encoding is UTF-8, and the charset needs to be set manually.

Attempted fix: set inputCharset to UTF-8 in the agent's configuration file:

a1.sources.r1.inputCharset = UTF-8

Still didn't work; unresolved.
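Worth noting: besides inputCharset, the Spooling Directory Source also has a decodeErrorPolicy property (FAIL by default), which controls what happens when a byte sequence cannot be decoded in the configured charset. MalformedInputException usually means the file is not actually valid UTF-8, so setting REPLACE or IGNORE is a common workaround. A sketch, reusing the agent and source names from above:

```
# Sketch: spooling directory source that tolerates undecodable bytes.
# decodeErrorPolicy = REPLACE substitutes U+FFFD for bad input
# instead of failing with MalformedInputException.
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /export/datas/destFile
a1.sources.r1.inputCharset = UTF-8
a1.sources.r1.decodeErrorPolicy = REPLACE
```

This masks rather than fixes the encoding problem: replaced characters are lost, so the real fix is to ensure upstream files are written in the charset Flume expects.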

 

After what appears to have been a fix to a bug in the program, this error stopped occurring, but a new problem appeared:

2019-08-07 01:41:45,377 (pool-3-thread-1) [ERROR - org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:280)] FATAL: Spool Directory source r1: { spoolDir: /export/datas/destFile }: Uncaught exception in SpoolDirectorySource thread. Restart or reconfigure Flume to continue processing.
java.lang.IllegalStateException: File has been modified since being read: /export/datas/destFile/220605_YT1013_ba701bb6bf74afb2f14a92de320bc023_2016101808_113.txt
at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.retireCurrentFile(ReliableSpoolingFileEventReader.java:406)
at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:326)
at org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:250)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
^C2019-08-07 01:42:11,310 (agent-shutdown-hook) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.stop(LifecycleSupervisor.java:78)] Stopping lifecycle supervisor 10
2019-08-07 01:42:11,320 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:149)] Component type: CHANNEL, name: c1 stopped
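The "File has been modified since being read" error occurs when a file inside spoolDir is still being written (or is rewritten) while Flume is reading it; the Spooling Directory Source requires files to be immutable once they land there. The usual remedy is to stage files elsewhere on the same filesystem and move them in, since a same-filesystem mv is an atomic rename. A minimal sketch with placeholder paths (temp directories stand in for /export/datas/destFile and a staging area):

```shell
#!/bin/sh
# Sketch: hand files to the spooling directory atomically.
spool_dir=$(mktemp -d)   # stands in for /export/datas/destFile
stage_dir=$(mktemp -d)   # staging area on the same filesystem

# 1) Write the file completely in the staging directory.
printf 'record1\nrecord2\n' > "$stage_dir/data.txt"

# 2) Move it into the spool directory; on the same filesystem
#    mv is a rename(2), so Flume never observes a partial file.
mv "$stage_dir/data.txt" "$spool_dir/data.txt"

ls "$spool_dir"
```

The same reasoning explains why piping output directly into spoolDir (e.g. with a redirect or `tail -F >` into that directory) triggers this error: the file keeps changing after Flume has started reading it.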

 

Reposted from: https://www.cnblogs.com/mediocreWorld/p/11312529.html

To use Flume to collect file data and send it to Kafka, the following steps are needed:

1. Install and configure Flume and Kafka.

2. Configure Flume's Source (where data comes from) and Sink (where data goes). For example, an Exec Source can watch a file while a Kafka Sink sends the data to Kafka. In the Flume configuration file:

```
# Source configuration
agent.sources = mysource
agent.sources.mysource.type = exec
agent.sources.mysource.command = tail -F /path/to/myfile

# Sink configuration
agent.sinks = mysink
agent.sinks.mysink.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.mysink.kafka.topic = mytopic
agent.sinks.mysink.kafka.bootstrap.servers = localhost:9092
agent.sinks.mysink.kafka.flumeBatchSize = 20
agent.sinks.mysink.kafka.producer.acks = 1

# Channel configuration
agent.channels = mychannel
agent.channels.mychannel.type = memory
agent.channels.mychannel.capacity = 1000
agent.channels.mychannel.transactionCapacity = 100

# Bind the Source and Sink to the Channel
agent.sources.mysource.channels = mychannel
agent.sinks.mysink.channel = mychannel
```

3. Start the Flume agent:

```
$ bin/flume-ng agent --conf conf --conf-file example.conf --name agent -Dflume.root.logger=INFO,console
```

Here `--conf` points at the Flume configuration directory, `--conf-file` at the configuration file itself, `--name` gives the agent's name, and `-Dflume.root.logger` sets the log level and output destination.

4. Monitor the messages arriving in Kafka, using the command-line tools or any Kafka client. For example, to watch the `mytopic` topic:

```
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytopic --from-beginning
```

With this in place, Flume collects the file data and delivers it to Kafka.
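For the directory-watching use case this post is actually about, the Exec Source above can be swapped for a Spooling Directory Source. A sketch reusing the same agent, channel, and sink names (deletePolicy = immediate matches the delete-after-processing behaviour mentioned in the log message earlier):

```
# Sketch: replace the exec source with a spooling directory source.
# Files dropped into spoolDir must be complete and immutable.
agent.sources.mysource.type = spooldir
agent.sources.mysource.spoolDir = /export/datas/destFile
agent.sources.mysource.inputCharset = UTF-8
agent.sources.mysource.deletePolicy = immediate
agent.sources.mysource.channels = mychannel
```

Unlike `tail -F`, the spooldir source is reliable across restarts, but it imposes the constraints seen in the errors above: files must be fully written before they appear in spoolDir and must not change afterwards.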