Locating problems with JDK jstack


The thread states you will run into most often are RUNNABLE, BLOCKED, WAITING, and TIMED_WAITING.

RUNNABLE

From the JVM's point of view, RUNNABLE means the thread is in the running state. A running thread normally consumes CPU, but not every RUNNABLE thread does. For example, when a thread is doing network IO it is effectively suspended, but because the blocking happens in native code the JVM is not aware of it; unlike an explicit call to Java's sleep() or wait(), the thread does not move to WAITING, and it only uses a little CPU once data actually arrives.

TIMED_WAITING/WAITING

Both states mean the thread is suspended and waiting to be woken up. If a timeout was set, the state is TIMED_WAITING; if no timeout was set, the state is WAITING, and the thread can only leave the wait via lock.notify(), lock.notifyAll(), or an interrupt signal. Under TIMED_WAITING/WAITING, also pay attention to these annotations in the dump:

  • waiting on condition: the thread is waiting for some condition to occur in order to be woken up;
  • on object monitor: the thread has executed obj.wait(), released the monitor, and entered the "Wait Set" queue;

BLOCKED

The thread is blocked, usually waiting to enter a critical section ("waiting for monitor entry"). This is the state that deserves the most attention.
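
To make the mapping from code to dump easier to see, here is a minimal, self-contained sketch (the class and thread names are invented for illustration) that parks one thread in each of WAITING, TIMED_WAITING, and BLOCKED:

public class ThreadStateDemo {
    private static final Object lock = new Object();

    public static void main(String[] args) throws Exception {
        // WAITING: Object.wait() with no timeout; jstack reports
        // "WAITING (on object monitor)" and "in Object.wait()" in the header.
        new Thread(() -> {
            synchronized (lock) {
                try { lock.wait(); } catch (InterruptedException ignored) { }
            }
        }, "waiting-demo").start();

        // TIMED_WAITING: a sleep with a timeout; jstack reports
        // "TIMED_WAITING (sleeping)" with "waiting on condition" in the header.
        new Thread(() -> sleepQuietly(600_000), "timed-waiting-demo").start();

        // BLOCKED: contend for a monitor another thread already holds; jstack
        // reports "BLOCKED (on object monitor)" and "waiting for monitor entry".
        Object monitor = new Object();
        new Thread(() -> { synchronized (monitor) { sleepQuietly(600_000); } }, "holder").start();
        Thread.sleep(100); // give the holder time to acquire the monitor first
        new Thread(() -> { synchronized (monitor) { } }, "blocked-demo").start();

        Thread.sleep(600_000); // keep the JVM alive so jstack can attach
    }

    private static void sleepQuietly(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException ignored) { }
    }
}

Run it, then run jstack against the process: the three demo threads should show exactly the headers named in the comments.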


Which thread states consume CPU?

Threads in TIMED_WAITING, WAITING, or BLOCKED do not consume CPU. Whether a RUNNABLE thread consumes CPU depends on what its code is actually doing:

  • Pure Java computation that is not suspended does consume CPU;
  • Network IO does not consume CPU while waiting for data (see the sketch below);
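
For the network-IO case, a minimal sketch (port 9090 is an arbitrary choice): the main thread blocks inside native accept() code, yet jstack reports it as RUNNABLE, and the process uses essentially no CPU until a client actually connects.

import java.net.ServerSocket;
import java.net.Socket;

public class RunnableIoDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9090)) {
            // Blocks in native code until a client connects; jstack still shows
            // this thread as RUNNABLE (e.g. at socketAccept(Native Method)),
            // but CPU usage stays near zero while it waits.
            Socket client = server.accept();
            System.out.println("accepted " + client.getRemoteSocketAddress());
        }
    }
}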


Using jstack to locate problems

A Tomcat application that will not start

Symptom: while setting up an environment, Tomcat would not start after the application was deployed. The logs showed no errors; it simply felt as if Tomcat was stuck somewhere during startup. We wanted to see what Tomcat was actually doing that kept startup from completing, and the call frames in the thread stacks can help with that. Running jstack against the Tomcat process produced the following:

"localhost-startStop-1" daemon prio=10 tid=0x0000000002da2000 nid=0x630b in Object.wait() [0x00007f1863538000]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x00000007d6924800> (a org.apache.curator.framework.state.ConnectionStateManager)
        at java.lang.Object.wait(Object.java:503)
        at org.apache.curator.framework.state.ConnectionStateManager.blockUntilConnected(ConnectionStateManager.java:215)
        - locked <0x00000007d6924800> (a org.apache.curator.framework.state.ConnectionStateManager)
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.blockUntilConnected(CuratorFrameworkImpl.java:223)
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.blockUntilConnected(CuratorFrameworkImpl.java:229)
        at com.netease.kaola.kschedule.client.curator.CuratorSupport.initZK(CuratorSupport.java:81)
        at com.netease.kaola.kschedule.client.curator.CuratorSupport.check(CuratorSupport.java:293)
        - locked <0x00000007d64af060> (a java.lang.Class for com.netease.kaola.kschedule.client.curator.CuratorSupport)
        at com.netease.kaola.kschedule.client.curator.CuratorSupport.checkExists(CuratorSupport.java:113)
        at com.netease.kaola.kschedule.client.job.JobManagerImpl.isSchedulerStop(JobManagerImpl.java:218)
        at com.netease.kaola.kschedule.client.job.JobManagerImpl.startScheduler(JobManagerImpl.java:134)
        at com.netease.kaola.kschedule.client.KScheduleClientFactory.init(KScheduleClientFactory.java:90)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        ...

Analysis: look at the thread state first. It is WAITING (on object monitor): the thread has called Object.wait() and is suspended, waiting to be woken up, and no timeout was set, so Tomcat will wait forever unless the thread is notified. But what is it waiting for? The call frames show "com.netease.kaola.kschedule.client.curator.CuratorSupport.initZK": kschedule initializes ZooKeeper at startup, and that is exactly where the application was stuck. Once the cause was clear the rest was easy: the kschedule configuration used a private IP for ZooKeeper that the application could not reach; after changing it to the machine-room IP the problem was solved.
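
As a hedged sketch of how such a hang can be avoided (the connection string, retry policy, and timeout below are placeholders, not the project's actual configuration), Curator also offers a timed overload of blockUntilConnected(), which turns an unreachable ZooKeeper into a fast, visible startup failure instead of an indefinite wait:

import java.util.concurrent.TimeUnit;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ZkStartupCheck {
    public static void main(String[] args) throws Exception {
        // "zk-host:2181" stands in for the real, reachable ZooKeeper address.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk-host:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // The no-arg blockUntilConnected() is what the stack above is sitting in:
        // it waits forever. The timed overload returns false on timeout, so a
        // misconfigured address surfaces as a startup error rather than a hang.
        if (!client.blockUntilConnected(30, TimeUnit.SECONDS)) {
            throw new IllegalStateException("could not connect to ZooKeeper within 30s");
        }
    }
}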


Slow responses caused by an exhausted database connection pool

Symptom: while testing a scenario, response times were very long and the logs again showed no errors. Following the call chain level by level showed that about 80% of the time was spent in DAO-layer methods, so the first suspicion was the SQL itself. A DBA colleague captured the SQL, but reported that it executed quickly and the execution plan looked fine. So where was the time going? With no obvious cause, we looked at the thread stacks to see what the system was doing beneath the DAO-layer methods. The jstack output looked like this:

"DubboServerHandler-10.165.184.51:20881-thread-200" daemon prio=10 tid=0x00007f2fd6208800 nid=0x504b waiting on condition [0x00007f2fc0280000]
   java.lang.Thread.State: TIMED_WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0x000000078172f2c0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
        at com.alibaba.druid.pool.DruidDataSource.pollLast(DruidDataSource.java:1487)
        at com.alibaba.druid.pool.DruidDataSource.getConnectionInternal(DruidDataSource.java:1086)
        at com.alibaba.druid.pool.DruidDataSource.getConnectionDirect(DruidDataSource.java:953)
        at com.alibaba.druid.filter.FilterChainImpl.dataSource_connect(FilterChainImpl.java:4544)
        at com.alibaba.druid.filter.logging.LogFilter.dataSource_getConnection(LogFilter.java:827)
        at com.alibaba.druid.filter.FilterChainImpl.dataSource_connect(FilterChainImpl.java:4540)
        at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:931)
        at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:923)
        at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:100)
        at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:111)
        at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:77)
        at org.mybatis.spring.transaction.SpringManagedTransaction.openConnection(SpringManagedTransaction.java:81)
        at org.mybatis.spring.transaction.SpringManagedTransaction.getConnection(SpringManagedTransaction.java:67)
        at org.apache.ibatis.executor.BaseExecutor.getConnection(BaseExecutor.java:279)
        at org.apache.ibatis.executor.SimpleExecutor.prepareStatement(SimpleExecutor.java:72)
        at org.apache.ibatis.executor.SimpleExecutor.doQuery(SimpleExecutor.java:59)
        ...

Analysis: check the thread state first. The dump contained a large number of Dubbo threads in TIMED_WAITING, and "waiting on condition" shows the system is waiting for some condition to occur; the thread is effectively sleeping and will usually be woken by a timeout. A few TIMED_WAITING threads are perfectly normal, for example while waiting on IO, but this many of them calls for an explanation. Looking at the stacks, every TIMED_WAITING thread was waiting for a connection from the Druid pool, which looks very much like the pool being exhausted. After increasing the number of connections in the database pool, TPS improved by a factor of three.
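
For reference, a minimal sketch of the Druid knobs involved (the JDBC URL, credentials, and numbers are illustrative, not the values used in this test): maxActive bounds the pool that the parked threads above were waiting on, and maxWait turns pool exhaustion into an explicit error instead of a silent slowdown.

import com.alibaba.druid.pool.DruidDataSource;

public class DruidPoolConfig {
    public static DruidDataSource buildDataSource() {
        DruidDataSource ds = new DruidDataSource();
        ds.setUrl("jdbc:mysql://db-host:3306/app");   // placeholder JDBC URL
        ds.setUsername("app");
        ds.setPassword("secret");

        // maxActive caps the number of pooled connections; when it is too low,
        // callers park in DruidDataSource.pollLast() exactly as in the dump above.
        ds.setInitialSize(10);
        ds.setMinIdle(10);
        ds.setMaxActive(100);

        // maxWait (ms) bounds how long getConnection() will park before failing.
        ds.setMaxWait(3000);
        return ds;
    }
}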


Slow responses caused by blocked threads

Symptom: again during load testing, response times got slower and showed severe spikes. Going through the usual suspects for bottlenecks turned up nothing obvious, so jstack came in handy once more. First, the thread stacks:

"DubboServerHandler-10.165.184.34:20880-thread-199" daemon prio=10 tid=0x00007f3cb4196000 nid=0x6048 waiting for monitor entry [0x00007f3cabb79000]
   java.lang.Thread.State: BLOCKED (on object monitor)
    at org.apache.log4j.Category.callAppenders(Category.java:204)
    - waiting to lock <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    at org.apache.log4j.Category.forcedLog(Category.java:391)
    at org.apache.log4j.Category.info(Category.java:666)
    at netease.qp.spellcheck.Facade.logInfo(Facade.java:26)
    at netease.qp.kaola.service.impl.SearchPhraseServiceImpl.getSearchGoodsResult(SearchPhraseServiceImpl.java:60)
    at com.alibaba.dubbo.common.bytecode.Wrapper1.invokeMethod(Wrapper1.java)
    at com.alibaba.dubbo.rpc.proxy.javassist.JavassistProxyFactory$1.doInvoke(JavassistProxyFactory.java:46)
    at com.alibaba.dubbo.rpc.proxy.AbstractProxyInvoker.invoke(AbstractProxyInvoker.java:72)
    at com.alibaba.dubbo.rpc.protocol.InvokerWrapper.invoke(InvokerWrapper.java:53)
    at com.alibaba.dubbo.rpc.protocol.dubbo.filter.TraceFilter.invoke(TraceFilter.java:78)
    at com.alibaba.dubbo.rpc.protocol.ProtocolFilterWrapper$1.invoke(ProtocolFilterWrapper.java:91)
    ...

Analysis: the thread is BLOCKED, the state that deserves the most attention: the thread is being blocked by something. Digging further, almost every Dubbo thread was blocked, all of them "waiting to lock <0x000000078c312718>". So what exactly is <0x000000078c312718>? Searching the dump for that address gives:

    Line 57:     - waiting to lock <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    Line 119:     - waiting to lock <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    Line 169:     - waiting to lock <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    Line 207:     - waiting to lock <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    Line 257:     - waiting to lock <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    Line 295:     - waiting to lock <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    Line 345:     - waiting to lock <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    Line 407:     - waiting to lock <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    Line 588:     - waiting to lock <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    Line 686:     - waiting to lock <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    Line 790:     - locked <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    Line 840:     - waiting to lock <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    Line 954:     - waiting to lock <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    Line 995:     - waiting to lock <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    Line 1105:     - waiting to lock <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    Line 1143:     - waiting to lock <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    Line 1197:     - waiting to lock <0x000000078c312718> (a org.apache.log4j.spi.RootLogger)
    ...

The search shows that the lock is held by log4j, which blocks every other thread that tries to write a log line through log4j. Googling similar reports shows it is a known log4j issue that can be avoided by upgrading log4j or by trimming the amount of logging. After the corresponding fix, performance improved dramatically.
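
The contention itself is easy to reproduce with a sketch like the following (the thread count and appender setup are arbitrary): many threads logging through the same log4j 1.x logger all serialize on the monitor taken inside Category.callAppenders, which is exactly the lock the dump points at.

import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Logger;

public class Log4jContentionDemo {
    private static final Logger LOG = Logger.getLogger(Log4jContentionDemo.class);

    public static void main(String[] args) {
        BasicConfigurator.configure(); // console appender on the root logger

        // 200 threads logging in a tight loop: jstack will show most of them
        // BLOCKED in Category.callAppenders, "waiting to lock" the RootLogger,
        // mirroring the grep output above.
        for (int i = 0; i < 200; i++) {
            new Thread(() -> {
                while (true) {
                    LOG.info("simulated request log line");
                }
            }, "logger-" + i).start();
        }
    }
}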
