Why a JVM stops serving requests some time after startup

In production you sometimes run into this situation: a program works fine all the way from development through testing, but once it is deployed, it stops serving requests after running for a while, even though its port is still open and reachable.

I recently hit this problem while working on a distributed service. The service framework is Dubbo, the services are deployed in a Spring container, and the transport layer defaults to Netty for message exchange. Everything was fine from development through testing, but after running in production for a while the service stopped responding: the registry still showed it as online, yet consumers kept getting timeout exceptions.

With no better option, I went onto the production box to investigate, and the thread dump turned up the following:

"New I/O server worker #1-2" #1414 daemon prio=5 os_prio=0 tid=0x00007f334c03d000 nid=0x3749 waiting on condition [0x00007f3324b3f000]

   java.lang.Thread.State: WAITING (parking)

at sun.misc.Unsafe.park(Native Method)

- parking to wait for  <0x00000000b24f8350> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)

at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)

at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)

at java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:353)

at ch.qos.logback.core.AsyncAppenderBase.put(AsyncAppenderBase.java:156)

at ch.qos.logback.core.AsyncAppenderBase.append(AsyncAppenderBase.java:147)

at ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:88)

at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)

at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:273)

at ch.qos.logback.classic.Logger.callAppenders(Logger.java:260)

at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:442)

at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:396)

at ch.qos.logback.classic.Logger.debug(Logger.java:503)

at com.alibaba.dubbo.common.logger.slf4j.Slf4jLogger.debug(Slf4jLogger.java:30)

at com.alibaba.dubbo.common.logger.support.FailsafeLogger.debug(FailsafeLogger.java:79)

at com.alibaba.dubbo.remoting.exchange.support.header.HeartbeatHandler.received(HeartbeatHandler.java:82)

at com.alibaba.dubbo.remoting.transport.MultiMessageHandler.received(MultiMessageHandler.java:28)

at com.alibaba.dubbo.remoting.transport.AbstractPeer.received(AbstractPeer.java:123)

at com.alibaba.dubbo.remoting.transport.netty.NettyHandler.messageReceived(NettyHandler.java:91)

at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:100)

at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)

at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:783)

at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:302)

at com.alibaba.dubbo.remoting.transport.netty.NettyCodecAdapter$InternalDecoder.messageReceived(NettyCodecAdapter.java:148)

at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80)

at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)

at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)

at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:274)

at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:261)

at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:349)

at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:280)

at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:200)

at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

at java.lang.Thread.run(Thread.java:745)


"DubboServerHandler-121.196.245.7:20880-thread-200" #243 daemon prio=5 os_prio=0 tid=0x00007f333c117800 nid=0x1bca waiting on condition [0x00007f3324d81000]

   java.lang.Thread.State: WAITING (parking)

at sun.misc.Unsafe.park(Native Method)

- parking to wait for  <0x00000000b24213f8> (a java.util.concurrent.SynchronousQueue$TransferStack)

at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)

at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)

at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)

at java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924)

at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

at java.lang.Thread.run(Thread.java:745)


"DubboServerHandler-121.196.245.7:20880-thread-199" #242 daemon prio=5 os_prio=0 tid=0x00007f333c115800 nid=0x1bc9 waiting on condition [0x00007f3324dc2000]

   java.lang.Thread.State: WAITING (parking)

at sun.misc.Unsafe.park(Native Method)

- parking to wait for  <0x00000000b24213f8> (a java.util.concurrent.SynchronousQueue$TransferStack)

at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)

at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)

at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)

at java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924)

at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

at java.lang.Thread.run(Thread.java:745)


"DubboServerHandler-121.196.245.7:20880-thread-198" #241 daemon prio=5 os_prio=0 tid=0x00007f333c114000 nid=0x1bc8 waiting on condition [0x00007f3324e03000]

   java.lang.Thread.State: WAITING (parking)

at sun.misc.Unsafe.park(Native Method)

- parking to wait for  <0x00000000b24213f8> (a java.util.concurrent.SynchronousQueue$TransferStack)

at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)

at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458)

at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)

at java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924)

at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

at java.lang.Thread.run(Thread.java:745)

Looking at the thread dump, more than 200 threads were parked waiting on the same object <0x00000000b24213f8>, and there were also threads waiting inside logback, such as the Netty I/O worker blocked in AsyncAppenderBase.put (which ends in ArrayBlockingQueue.put). All of our logs are shipped to Kafka for storage and querying, and log delivery is asynchronous, so the suspicion was that the Kafka pipeline was broken: the async appender's bounded queue filled up, and every thread that subsequently tried to log blocked. We went to production right away to verify this and found that the Kafka broker version did not match the client version; after upgrading the Kafka broker, the problem no longer appeared.
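To make the mechanism concrete, here is a minimal, self-contained sketch (not the project's code; the class and names are invented for illustration) of the same failure mode: a bounded queue used for an asynchronous hand-off whose consumer has stalled. Run it and take a thread dump, and you will see the producer parked in ArrayBlockingQueue.put, just like the Netty worker above.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/**
 * Minimal sketch of the failure mode seen in the dump: an async hand-off
 * backed by a bounded queue whose consumer has stalled. Once the queue is
 * full, every producer that calls put() parks indefinitely.
 */
public class BlockedAsyncHandoff {

    public static void main(String[] args) throws InterruptedException {
        // Tiny capacity so the effect shows up quickly; logback's AsyncAppender
        // uses a bounded queue in the same way (its default queueSize is 256).
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(4);

        // Consumer thread simulating a broken downstream (e.g. an unreachable
        // Kafka broker): it takes one element and then never drains again.
        Thread consumer = new Thread(() -> {
            try {
                queue.take();
                Thread.sleep(Long.MAX_VALUE);   // downstream never recovers
            } catch (InterruptedException ignored) {
            }
        }, "stalled-consumer");
        consumer.setDaemon(true);
        consumer.start();

        // "Business" thread: the equivalent of the Netty worker calling logger.debug().
        for (int i = 0; ; i++) {
            System.out.println("producing event " + i);
            queue.put("event-" + i);            // blocks forever once the queue is full
        }
    }
}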

Summary: asynchronous operations do improve a program's responsiveness, but before going live you must confirm that the asynchronous path actually works end to end, so that a backlog in the async pipeline cannot pile up and drag down normal business traffic. The asynchronous side should also have a strategy for clearing or shedding a backlog when one builds up.
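One way to apply that advice is to make the hand-off drop instead of block. The sketch below is a hypothetical pattern, not the code from this project: business threads use offer() on a bounded queue and count the loss when it is full, so a broken downstream costs you some events rather than your service.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical "drop instead of block" hand-off: if the bounded queue is full
 * because the downstream (Kafka, a remote log store, ...) is unhealthy, the
 * caller loses the event but keeps serving traffic.
 */
public class DroppingAsyncHandoff {

    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);
    private final AtomicLong dropped = new AtomicLong();

    /** Called from business threads; never blocks. */
    public void submit(String event) {
        if (!queue.offer(event)) {          // offer() returns false when the queue is full
            dropped.incrementAndGet();      // record the loss instead of parking the caller
        }
    }

    /** Drained by a single background worker that talks to the slow downstream. */
    public String nextEvent() throws InterruptedException {
        return queue.take();
    }

    public long droppedCount() {
        return dropped.get();
    }
}

For logback specifically, newer versions of AsyncAppender expose a neverBlock option (alongside queueSize and discardingThreshold) so that a full queue discards events instead of blocking the logging thread; check the version you actually ship before relying on it.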
