A collection of nginx and Flume errors (too many bugs)

[root@bigdatahadoop sbin]# ./nginx -t -c /usr/tengine-2.1.0/conf/nginx.conf
nginx: [emerg] "upstream" directive is not allowed here in /usr/tengine-2.1.0/conf/nginx.conf:47
configuration file /usr/tengine-2.1.0/conf/nginx.conf test failed


Cause: an extra "}" in the config file. The premature closing brace ends the http block early, so the upstream directive that follows lands at the top level, where nginx does not allow it (hence the error at nginx.conf:47).
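For reference, a minimal sketch of the correct structure (the upstream name and addresses here are made up): upstream must sit directly inside the http block, as a sibling of server.

http {
    upstream backend {
        server 192.168.184.188:8080;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }
}
# exactly one closing brace for http; an extra "}" before this point would
# end the http block early and push upstream to the top level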


16/06/26 14:06:01 WARN node.AbstractConfigurationProvider: No configuration found for this host:clin1

Guess: Java environment variables (this may not actually be the cause).
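A more common cause of this warning is that the agent name passed with -n / --name does not match the prefix used in the configuration file. A minimal sketch, assuming the agent is started with -n clin1:

# every property key must be prefixed with the agent name given to -n
clin1.sources = r1
clin1.channels = c1
clin1.sinks = k1
# if the file instead says a1.sources = ..., Flume finds nothing for clin1
# and logs "No configuration found for this host:clin1"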


org.apache.commons.cli.ParseException: The specified configuration file does not exist: /usr/apache-flume-1.6.0-bin/bin/ile


Mistake 1: spaces around the = sign in the .conf file, supposedly not allowed (this may not actually be an error).

Mistake 2: wrong startup command.

Wrong: bin/flume-ng agent -n clei -c conf -file /usr/apache-flume-1.6.0-bin/conf/test2 -Dflume.root.logger=INFO,console

(there is no -file option; it gets read as -f with the value "ile", so Flume looks for a config file named "ile" under bin/, which matches the ParseException above)

Correct: flume-ng agent --conf conf --conf-file test3 --name a1 -Dflume.root.logger=INFO,console



16/06/26 18:08:45 ERROR source.SpoolDirectorySource: FATAL: Spool Directory source r1: { spoolDir: /opt/sqooldir }: Uncaught exception in SpoolDirectorySource thread. Restart or reconfigure Flume to continue processing.
java.nio.charset.MalformedInputException: Input length = 1
    at java.nio.charset.CoderResult.throwException(CoderResult.java:277)
    at org.apache.flume.serialization.ResettableFileInputStream.readChar(ResettableFileInputStream.java:195)
    at org.apache.flume.serialization.LineDeserializer.readLine(LineDeserializer.java:133)
    at org.apache.flume.serialization.LineDeserializer.readEvent(LineDeserializer.java:71)
    at org.apache.flume.serialization.LineDeserializer.readEvents(LineDeserializer.java:90)
    at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:252)
    at org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:228)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Cause: the spooling directory source cannot ingest compressed files, or files with problematic names; the LineDeserializer decodes each file as text in the configured charset (UTF-8 by default), so binary input raises MalformedInputException. Video files would presumably fare even worse.
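If occasional bad bytes are expected, the spooling directory source can be told how to handle them instead of dying. A minimal sketch (agent and source names a1/r1 assumed):

a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /opt/sqooldir
a1.sources.r1.inputCharset = UTF-8
# FAIL (the default) throws MalformedInputException; REPLACE substitutes
# U+FFFD for undecodable bytes; IGNORE drops them
a1.sources.r1.decodeErrorPolicy = REPLACE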



16/06/26 18:18:59 INFO ipc.NettyServer: [id: 0x6fef6466, /192.168.184.188:40594 => /192.168.184.188:44444] CONNECTED: /192.168.184.188:40594
16/06/26 18:19:05 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
16/06/26 18:19:08 INFO hdfs.BucketWriter: Creating hdfs://bigdatastorm:8020/flume/data/FlumeData.1466936345775.tmp
16/06/26 18:19:18 WARN hdfs.HDFSEventSink: HDFS IO error
java.io.IOException: Callable timed out after 10000 ms on file: hdfs://bigdatastorm:8020/flume/data/FlumeData.1466936345775.tmp
    at org.apache.flume.sink.hdfs.BucketWriter.callWithTimeout(BucketWriter.java:693)
    at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:235)
    at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:514)
    at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:418)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException
    at java.util.concurrent.FutureTask.get(FutureTask.java:201)
    at org.apache.flume.sink.hdfs.BucketWriter.callWithTimeout(BucketWriter.java:686)
    ... 6 more
16/06/26 18:19:24 INFO hdfs.BucketWriter: Creating hdfs://bigdatastorm:8020/flume/data/FlumeData.1466936345776.tmp
16/06/26 18:19:38 INFO hdfs.BucketWriter: Closing idle bucketWriter hdfs://bigdatastorm:8020/flume/data/FlumeData.1466936345776.tmp at 1466936378715
16/06/26 18:19:38 INFO hdfs.BucketWriter: Closing hdfs://bigdatastorm:8020/flume/data/FlumeData.1466936345776.tmp
16/06/26 18:19:39 INFO hdfs.BucketWriter: Renaming hdfs://bigdatastorm:8020/flume/data/FlumeData.1466936345776.tmp to hdfs://bigdatastorm:8020/flume/data/FlumeData.1466936345776
16/06/26 18:19:39 INFO hdfs.HDFSEventSink: Writer callback called.
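The "Callable timed out after 10000 ms" above is the HDFS sink's call timeout, hdfs.callTimeout, whose default is 10000 ms. If HDFS is slow rather than down, raising it may help; a minimal sketch (agent and sink names a1/k1 assumed):

a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://bigdatastorm:8020/flume/data
# HDFS open/append/close calls that take longer than this are abandoned
a1.sinks.k1.hdfs.callTimeout = 60000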




A baffling error:


16/06/26 22:55:52 INFO hdfs.HDFSEventSink: Writer callback called.
16/06/26 22:55:52 INFO hdfs.HDFSEventSink: Bucket was closed while trying to append, reinitializing bucket and writing event.
16/06/26 22:55:52 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
16/06/26 22:55:53 INFO hdfs.BucketWriter: Creating hdfs://mycluster/flume/data/16-06-26/FlumeData.1466952952985.tmp
16/06/26 22:55:56 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
    at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1526)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1328)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526)
16/06/26 22:55:56 INFO hdfs.DFSClient: Abandoning BP-731760634-192.168.184.188-1463927117534:blk_1073742940_2145
16/06/26 22:55:56 INFO hdfs.DFSClient: Excluding datanode 192.168.184.188:50010
16/06/26 22:56:01 INFO hdfs.BucketWriter: Closing idle bucketWriter hdfs://mycluster/flume/data/16-06-26/FlumeData.1466952952985.tmp at 1466952961311
16/06/26 22:56:01 INFO hdfs.BucketWriter: Closing hdfs://mycluster/flume/data/16-06-26/FlumeData.1466952952985.tmp
16/06/26 22:56:01 INFO hdfs.BucketWriter: Renaming hdfs://mycluster/flume/data/16-06-26/FlumeData.1466952952985.tmp to hdfs://mycluster/flume/data/16-06-26/FlumeData.1466952952985
16/06/26 22:56:01 INFO hdfs.HDFSEventSink: Writer callback called.
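The "Connection refused" while creating the block output stream, followed by "Excluding datanode 192.168.184.188:50010", suggests the DataNode on that host was down or not listening on its data transfer port. A quick check on that node (a sketch):

jps                          # the DataNode process should appear in the list
netstat -ntpl | grep 50010   # something should be listening on port 50010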





16/06/26 23:12:10 ERROR source.AvroSource: Avro source r1: Unable to process event batch. Exception follows.
org.apache.flume.ChannelException: Unable to put batch on required channel: org.apache.flume.channel.MemoryChannel{name: c1}
    at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:200)
    at org.apache.flume.source.AvroSource.appendBatch(AvroSource.java:386)
    at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.avro.ipc.specific.SpecificResponder.respond(SpecificResponder.java:91)
    at org.apache.avro.ipc.Responder.respond(Responder.java:151)
    at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.messageReceived(NettyServer.java:188)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:173)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:786)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:458)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:439)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:553)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:84)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:471)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:332)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.flume.ChannelFullException: Space for commit to queue couldn't be acquired. Sinks are likely not keeping up with sources, or the buffer size is too tight
    at org.apache.flume.channel.MemoryChannel$MemoryTransaction.doCommit(MemoryChannel.java:130)
    at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151)
    at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:192)
    ... 29 more
16/06/26 23:12:10 INFO ipc.NettyServer: [id: 0x83b4ecf4, /192.168.184.188:43082 :> /192.168.184.188:44444] DISCONNECTED
16/06/26 23:12:10 INFO ipc.NettyServer: [id: 0x83b4ecf4, /192.168.184.188:43082 :> /192.168.184.188:44444] UNBOUND
16/06/26 23:12:10 INFO ipc.NettyServer: [id: 0x83b4ecf4, /192.168.184.188:43082 :> /192.168.184.188:44444] CLOSED
16/06/26 23:12:10 INFO ipc.NettyServer: Connection to /192.168.184.188:43082 disconnected.
16/06/26 23:12:12 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
16/06/26 23:12:12 INFO hdfs.BucketWriter: Creating hdfs://mycluster/flume/data/16-06-26/FlumeData.1466953932153.tmp
16/06/26 23:12:15 INFO ipc.NettyServer: [id: 0xcab3190c, /192.168.184.188:43085 => /192.168.184.188:44444] OPEN
16/06/26 23:12:15 INFO ipc.NettyServer: [id: 0xcab3190c, /192.168.184.188:43085 => /192.168.184.188:44444] BOUND: /192.168.184.188:44444
16/06/26 23:12:15 INFO ipc.NettyServer: [id: 0xcab3190c, /192.168.184.188:43085 => /192.168.184.188:44444] CONNECTED: /192.168.184.188:43085
16/06/26 23:12:38 INFO hdfs.BucketWriter: Closing hdfs://mycluster/flume/data/16-06-26/FlumeData.1466953932153.tmp
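As the "Caused by" explains, the MemoryChannel filled up because the sink could not drain events as fast as the source produced them. Besides speeding up the sink (see the HDFS timeout issue above), the channel buffer can be enlarged; a minimal sketch (agent and channel names a1/c1 assumed):

a1.channels.c1.type = memory
# capacity: max events buffered in the channel (default 100)
a1.channels.c1.capacity = 100000
# transactionCapacity: max events per put/take transaction (default 100)
a1.channels.c1.transactionCapacity = 1000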




