Kafka startup error on Windows: "The process cannot access the file because it is being used by another process"

For the past couple of days I have been running kafka_2.11-1.0.0 with zookeeper-3.4.11 on Windows 10.

Starting Kafka fails with:

ERROR Error while deleting the clean shutdown file in dir E:\kafka_2.11-1.0.0\tmp\kafka-logs (kafka.server.LogDirFailureChannel)

java.nio.file.FileSystemException: E:\kafka_2.11-1.0.0\tmp\kafka-logs\__consumer_offsets-9\00000000000000000000.timeindex: The process cannot access the file because it is being used by another process.

After some digging, it turns out this is a known Kafka bug on Windows: the broker holds open handles to log segment files, so they cannot be deleted or renamed in place. There is no real fix; the workaround is to stop the broker, manually delete the log files under \kafka-logs, and then restart Kafka.
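The directory in the error is whatever `log.dirs` points to in `config/server.properties`; in this setup that is the path below (yours may differ), so that is the directory to clear out before restarting:

```properties
# config/server.properties -- the broker's log directory (adjust to your install)
log.dirs=E:/kafka_2.11-1.0.0/tmp/kafka-logs
```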

See the official JIRA issue: https://issues.apache.org/jira/browse/KAFKA-1194
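Deleting the log directory by hand before every restart gets tedious, so a small helper can do it. This is only a sketch of that manual workaround, not anything Kafka ships: the class name `CleanKafkaLogs` and the hard-coded path are my own, and the broker must be fully stopped first, otherwise the deletes hit the same file-locking error.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.stream.Stream;

public class CleanKafkaLogs {
    // The broker's log.dirs location; adjust to your own setup.
    static final Path LOG_DIR = Paths.get("E:\\kafka_2.11-1.0.0\\tmp\\kafka-logs");

    public static void main(String[] args) throws IOException {
        deleteRecursively(LOG_DIR);
        System.out.println("Deleted " + LOG_DIR);
    }

    // Deletes a directory tree bottom-up: sorting the walk in reverse
    // order guarantees children are removed before their parent dirs.
    static void deleteRecursively(Path root) throws IOException {
        if (!Files.exists(root)) {
            return; // nothing to do
        }
        try (Stream<Path> walk = Files.walk(root)) {
            walk.sorted(Comparator.reverseOrder())
                .forEach(p -> {
                    try {
                        Files.delete(p);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
        }
    }
}
```

Run it with the broker stopped, then start Kafka again with `bin\windows\kafka-server-start.bat config\server.properties`. Note this wipes all topic data in that directory.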

Full error log:

[2017-12-14 15:23:08,516] ERROR Error while deleting the clean shutdown file in dir E:\kafka_2.11-1.0.0\tmp\kafka-logs (kafka.server.LogDirFailureChannel)
java.nio.file.FileSystemException: E:\kafka_2.11-1.0.0\tmp\kafka-logs\__consumer_offsets-9\00000000000000000000.timeindex: The process cannot access the file because it is being used by another process.
	at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
	at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
	at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
	at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
	at sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
	at java.nio.file.Files.deleteIfExists(Files.java:1165)
	at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:335)
	at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:297)
	at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
	at kafka.log.Log.loadSegmentFiles(Log.scala:297)
	at kafka.log.Log.loadSegments(Log.scala:406)
	at kafka.log.Log.<init>(Log.scala:203)
	at kafka.log.Log$.apply(Log.scala:1734)
	at kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:221)
	at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$8$$anonfun$apply$16$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:292)
	at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[2017-12-14 15:23:08,536] INFO Logs loading complete in 2695 ms. (kafka.log.LogManager)
[2017-12-14 15:23:08,609] WARN Error processing kafka.log:type=LogManager,name=LogDirectoryOffline,logDirectory=E:\kafka_2.11-1.0.0\tmp\kafka-logs (com.yammer.metrics.reporting.JmxReporter)
javax.management.MalformedObjectNameException: Invalid character ':' in value part of property
	at javax.management.ObjectName.construct(ObjectName.java:618)
	at javax.management.ObjectName.<init>(ObjectName.java:1382)
	at com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
	at com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
	at com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
	at com.yammer.metrics.core.MetricsRegistry.newGauge(MetricsRegistry.java:79)
	at kafka.metrics.KafkaMetricsGroup$class.newGauge(KafkaMetricsGroup.scala:74)
	at kafka.log.LogManager.newGauge(LogManager.scala:50)
	at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:117)
	at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:116)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at kafka.log.LogManager.<init>(LogManager.scala:116)
	at kafka.log.LogManager$.apply(LogManager.scala:799)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:222)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
	at kafka.Kafka$.main(Kafka.scala:92)
	at kafka.Kafka.main(Kafka.scala)
[2017-12-14 15:23:08,614] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2017-12-14 15:23:08,624] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2017-12-14 15:23:08,968] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2017-12-14 15:23:08,981] INFO [SocketServer brokerId=0] Started 1 acceptor threads (kafka.network.SocketServer)
[2017-12-14 15:23:09,019] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-12-14 15:23:09,019] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-12-14 15:23:09,027] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-12-14 15:23:09,095] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2017-12-14 15:23:09,103] INFO [ReplicaManager broker=0] Stopping serving replicas in dir E:\kafka_2.11-1.0.0\tmp\kafka-logs (kafka.server.ReplicaManager)
[2017-12-14 15:23:09,113] INFO [ReplicaManager broker=0] Partitions are offline due to failure on log directory E:\kafka_2.11-1.0.0\tmp\kafka-logs (kafka.server.ReplicaManager)
[2017-12-14 15:23:09,122] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions (kafka.server.ReplicaFetcherManager)
[2017-12-14 15:23:09,137] INFO [ReplicaManager broker=0] Broker 0 stopped fetcher for partitions because they are in the failed log dir E:\kafka_2.11-1.0.0\tmp\kafka-logs (kafka.server.ReplicaManager)
[2017-12-14 15:23:09,142] INFO Stopping serving logs in dir E:\kafka_2.11-1.0.0\tmp\kafka-logs (kafka.log.LogManager)
[2017-12-14 15:23:09,152] FATAL Shutdown broker because all log dirs in E:\kafka_2.11-1.0.0\tmp\kafka-logs have failed (kafka.log.LogManager)
