A Detailed Analysis of a RAC Instance Eviction and Its Solution

1 Problem Description

A colleague's email requested urgent analysis of an instance eviction on a customer's extended (stretched) RAC cluster. We analysed the logs provided by the customer, together with the specifics of the environment and the detailed trace files that were available, and gave a conclusion as far as the evidence allowed. Because the core trace files from the time of the failure are missing, the conclusion below is a probable-cause assessment based on the existing logs and the technical characteristics of the environment itself.

2 Key Points in the Fault-Handling Timeline

Time              Key handling step
03-17 15:28:00    Received the email; began detailed analysis of the fault logs
03-17 16:00:00    Gave a preliminary conclusion based on the available logs
03-17 16:20:00    Replied by email with the technical details
03-17 17:00:00    Submitted the detailed technical analysis


3 Detailed Log Analysis

3.1 Node 1 Clusterware Alert Log at the Time of the Fault

2017-03-04 01:28:01.282:

[ctssd(265396)]CRS-2409:The clock on host ggudb-kl1p4-14 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.

2017-03-04 01:58:01.915:

[ctssd(265396)]CRS-2409:The clock on host ggudb-kl1p4-14 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.

2017-03-04 02:02:43.306:

[/u01/11.2.0/grid/bin/oraagent.bin(267603)]CRS-5011:Check of resource "oltp" failed: details at "(:CLSN00007:)" in "/u01/11.2.0/grid/log/ggudb-kl1p4-14/agent/crsd/oraagent_oracle/oraagent_oracle.log"

2017-03-04 02:03:17.323:

[/u01/11.2.0/grid/bin/oraagent.bin(267603)]CRS-5818:Aborted command 'check' for resource 'ora.oltp.mynode1.svc'. Details at (:CRSAGF00113:) {0:9:19} in /u01/11.2.0/grid/log/ggudb-kl1p4-14/agent/crsd/oraagent_oracle/oraagent_oracle.log.

2017-03-04 02:05:16.514:

[/u01/11.2.0/grid/bin/oraagent.bin(267603)]CRS-5818:Aborted command 'check' for resource 'ora.oltp.mynode1.svc'. Details at (:CRSAGF00113:) {0:9:19} in /u01/11.2.0/grid/log/ggudb-kl1p4-14/agent/crsd/oraagent_oracle/oraagent_oracle.log.

2017-03-04 02:28:02.558:

[ctssd(265396)]CRS-2409:The clock on host ggudb-kl1p4-14 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.

2017-03-04 02:58:05.193:

The node 1 clusterware alert log shows that at the time of the fault (2017-03-04 02:02:43) the CRS daemon received notification that the local database instance and its database service had crashed. From this we can conclude that during the fault window the clusterware stack itself did not crash; only the node 1 database instance was evicted.
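As a quick way to reproduce this check, the Grid Infrastructure alert log can be searched around the fault window. A minimal sketch, assuming the usual 11.2 log layout ($GRID_HOME/log/&lt;hostname&gt;/alert&lt;hostname&gt;.log) and the paths visible in the excerpt above:

GRID_HOME=/u01/11.2.0/grid
HOST=ggudb-kl1p4-14
# messages logged by the clusterware during the fault window (02:00-02:09)
grep -A1 '^2017-03-04 02:0' $GRID_HOME/log/$HOST/alert$HOST.log
# no CSSD node-eviction/reboot entries should appear if only the instance was evicted
grep -iE 'evict|reboot' $GRID_HOME/log/$HOST/alert$HOST.log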

3.2 Node 2 Clusterware Alert Log at the Time of the Fault

[crsd(243374)]CRS-2772:Server 'ggudb-kl1p4-14' has been assigned to pool 'ora.oltp_mynode1'.

2017-03-04 02:02:47.120:

[crsd(243374)]CRS-2765:Resource 'ora.oltp.db' has failed on server 'ggudb-kl1p4-14'.

2017-03-04 02:05:34.367:

[crsd(243374)]CRS-2765:Resource 'ora.oltp.mynode1.svc' has failed on server 'ggudb-kl1p4-14'.

2017-03-04 02:05:34.367:

[crsd(243374)]CRS-2771:Maximum restart attempts reached for resource 'ora.oltp.mynode1.svc'; will not restart.

2017-03-05 00:12:21.690:

At the same time, node 2 also reported that the database instance on node 1 had crashed.

3.3 Node 1 RDBMS Alert Log at the Time of the Fault

IPC Send timeout detected. Sender: ospid 267775 [oracle@ggudb-kl1p4-14 (PING)]

Receiver: inst 2 binc 429635545 ospid 244165

Sat Mar 04 02:02:34 2017

minact-scn master exiting with err:12751

Sat Mar 04 02:02:41 2017

IPC Send timeout detected. Sender: ospid 267880 [oracle@ggudb-kl1p4-14 (LMD0)]

Receiver: inst 2 binc 429635551 ospid 244173

IPC Send timeout to 2.0 inc 4 for msg type 65521 from opid 12

Sat Mar 04 02:02:42 2017

Suppressed nested communications reconfiguration: instance_number 2

Detected an inconsistent instance membership by instance 2

Sat Mar 04 02:02:42 2017

Received an instance abort message from instance 2

Received an instance abort message from instance 2

Please check instance 2 alert and LMON trace files for detail.

Please check instance 2 alert and LMON trace files for detail.

LMS0 (ospid: 267888): terminating the instance due to error 481

Sat Mar 04 02:02:42 2017

System state dump requested by (instance=1, osid=267888 (LMS0)), summary=[abnormal instance termination].

System State dumped to trace file /u01/app/oracle/diag/rdbms/oltp/oltp1/trace/oltp1_diag_267757_20170304020242.trc

Sat Mar 04 02:02:47 2017

ORA-1092 : opitsk aborting process

Sat Mar 04 02:02:47 2017

License high water mark = 438

Instance terminated by LMS0, pid = 267888

USER (ospid: 715453): terminating the instance

Instance terminated by USER, pid = 715453

Sun Mar 05 22:51:42 2017

When the fault occurred, node 1 received an instance-abort message from node 2 ("Received an instance abort message from instance 2"), and the node 1 instance was then terminated.
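For reference, the "error 481" in the LMS0 termination line above should correspond to ORA-00481 (LMON process terminated with error), which is consistent with an instance-membership reconfiguration; the exact message text can be checked on the server with the oerr utility:

oerr ora 481     # prints the message text, cause and action for ORA-00481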

3.4 Node 2 RDBMS Alert Log at the Time of the Fault

Sat Mar 04 02:00:33 2017

IPC Send timeout detected. Receiver ospid 244324 [

Sat Mar 04 02:00:33 2017

Errors in file /u01/app/oracle/diag/rdbms/oltp/oltp2/trace/oltp2_lck0_244324.trc:

Sat Mar 04 02:02:10 2017

########## Node 2 detects that it cannot communicate with the node 1 LMD process ###############

LMD0 (ospid: 244173) has detected no messaging activity from instance 1

LMD0 (ospid: 244173) issues an IMR to resolve the situation

Please check LMD0 trace file for more detail.

Sat Mar 04 02:02:10 2017

######## Communication with node 1 is reconfigured ################

Communications reconfiguration: instance_number 1

Sat Mar 04 02:02:41 2017

IPC Send timeout detected. Receiver ospid 244173 [

Sat Mar 04 02:02:41 2017

Errors in file /u01/app/oracle/diag/rdbms/oltp/oltp2/trace/oltp2_lmd0_244173.trc:

Sat Mar 04 02:02:42 2017

############## Eviction of node 1 is initiated ######################

Detected an inconsistent instance membership by instance 2

Evicting instance 1 from cluster

Waiting for instances to leave: 1

Dumping diagnostic data in directory=[cdmp_20170304020242], requested by (instance=1, osid=267888 (LMS0)), summary=[abnormal instance termination].

Reconfiguration started (old inc 4, new inc 8)

List of instances:

 2 (myinst: 2)

 Global Resource Directory frozen

The node 2 RDBMS alert log shows clearly that, because of an LMD communication timeout, node 2 initiated the eviction of node 1. In short, during the fault window the node 1 LMD process was for some reason unable to communicate with node 2, and to protect data consistency the cluster performed an eviction at the database (instance) level.

3.5 Node 1 Cluster Heartbeat (CSSD) Log

2017-03-04 02:02:37.347: [    CSSD][3417790208]clssnmSendingThread: sent 5 status msgs to all nodes

2017-03-04 02:02:42.349: [    CSSD][3417790208]clssnmSendingThread: sending status msg to all nodes

2017-03-04 02:02:42.349: [    CSSD][3417790208]clssnmSendingThread: sent 5 status msgs to all nodes

2017-03-04 02:02:42.694: [    CSSD][3646260992]clssgmDeadProc: proc 0x7f42d030a6a0

2017-03-04 02:02:42.694: [    CSSD][3646260992]clssgmDestroyProc: cleaning up proc(0x7f42d030a6a0) con(0x47f7) skgpid 267966 ospid 267966 with 0 clients, refcount 0

2017-03-04 02:02:42.694: [    CSSD][3646260992]clssgmDiscEndpcl: gipcDestroy 0x47f7

2017-03-04 02:02:42.703: [    CSSD][3646260992]clssgmDeadProc: proc 0x7f42d02f21f0

2017-03-04 02:02:42.703: [    CSSD][3646260992]clssgmFenceClient: fencing client(0x7f42d02e77f0), member(3) in group(GR+GCR1), death fence(1), SAGE fence(0)

2017-03-04 02:02:42.703: [    CSSD][3646260992]clssgmUnreferenceMember: global grock GR+GCR1 member 3 refcount is 1

2017-03-04 02:02:42.703: [    CSSD][3646260992]clssgmFenceProcessDeath: client (0x7f42d02e77f0) pid 267914 undead

2017-03-04 02:02:42.703: [    CSSD][3646260992]clssgmQueueFenceForCheck: (0x7f42d02f5730) Death check for object type 3, pid 267914

2017-03-04 02:02:42.703: [    CSSD][3646260992]clssgmDiscEndpcl: gipcDestroy 0x471d

2017-03-04 02:02:42.703: [    CSSD][2348807936]clssgmFenceCompletion: (0x7f42d02f5730) process death fence completed for process 267914, object type 3

2017-03-04 02:02:42.703: [    CSSD][3646260992]clssgmDestroyProc: cleaning up proc(0x7f42d02f21f0) con(0x46ee) skgpid 267914 ospid 267914 with 0 clients, refcount 0

2017-03-04 02:02:42.703: [    CSSD][3646260992]clssgmDiscEndpcl: gipcDestroy 0x46ee

2017-03-04 02:02:42.703: [    CSSD][2348807936]clssgmTermMember: Terminating member 3 (0x7f42d02f29e0) in grock GR+GCR1

2017-03-04 02:02:42.703: [    CSSD][2348807936]clssgmQueueGrockEvent: groupName(GR+GCR1) count(4) master(0) event(2), incarn 6, mbrc 4, to member 1, events 0x280, state 0x0

2017-03-04 02:02:42.703: [    CSSD][2348807936]clssgmUnreferenceMember: global grock GR+GCR1 member 3 refcount is 0

2017-03-04 02:02:42.703: [    CSSD][2348807936]clssgmAllocateRPCIndex: allocated rpc 121 (0x7f42d96d2f78)

2017-03-04 02:02:42.703: [    CSSD][2348807936]clssgmRPC: rpc 0x7f42d96d2f78 (RPC#121) tag(79002a) sent to node 2

2017-03-04 02:02:42.704: [    CSSD][3646260992]clssgmDeadProc: proc 0x7f42d039c680

2017-03-04 02:02:42.704: [    CSSD][3646260992]clssgmFenceClient: fencing client(0x7f42d0399e40), member(0) in group(DG_LOCAL_DATA), death fence(1), SAGE fence(0)

2017-03-04 02:02:42.704: [    CSSD][3646260992]clssgmUnreferenceMember: local grock DG_LOCAL_DATA member 0 refcount is 23

2017-03-04 02:02:42.704: [    CSSD][3646260992]clssgmFenceProcessDeath: client (0x7f42d0399e40) pid 267964 undead

2017-03-04 02:02:42.704: [    CSSD][3646260992]clssgmQueueFenceForCheck: (0x7f42d0394610) Death check for object type 3, pid 267964

2017-03-04 02:02:42.704: [    CSSD][3646260992]clssgmFenceClient: fencing client(0x7f42d0391770), member(0) in group(DG_LOCAL_REDO), death fence(1), SAGE fence(0)

2017-03-04 02:02:42.704: [    CSSD][3646260992]clssgmUnreferenceMember: local grock DG_LOCAL_REDO member 0 refcount is 2

2017-03-04 02:02:42.704: [    CSSD][3646260992]clssgmFenceProcessDeath: client (0x7f42d0391770) pid 267964 undead

We can see that the CSSD daemon had already detected that several of its client processes were dead. By checking the database alert log, these operating-system PIDs can be mapped back to the corresponding Oracle processes:
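This mapping can be reproduced from the node 1 instance alert log and trace directory, since background processes log their ospid and the trace file name embeds the process name. A minimal sketch, using the ADR path shown in section 3.3 and the PIDs fenced above (267888, 267914, 267964, 267966):

cd /u01/app/oracle/diag/rdbms/oltp/oltp1/trace
for pid in 267888 267914 267964 267966; do
    echo "== ospid $pid =="
    grep -m 3 "ospid: $pid" alert_oltp1.log      # e.g. "LMS0 (ospid: 267888): ..."
    ls -1 oltp1_*_${pid}*.trc 2>/dev/null        # trace file name is <sid>_<process>_<ospid>.trc
done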


From the log analysis above, at the time of the fault the LMSx-related processes hung for some reason, communication then failed, and node 2 therefore initiated an instance-level eviction of node 1.



4 Fault Summary

4.1 Conclusion

Combining the analysis in section 3, the available logs indicate that the hang/timeout of the LMS process is the primary cause of the instance eviction. The main inter-instance communication processes in RAC are LMON, LMD and LMSx. Normally, after a message is sent to another instance the sender expects the receiver to return an acknowledgement; if that acknowledgement does not arrive within the specified time (300 seconds by default), the sender assumes the message never reached the receiver and an "IPC Send timeout" is reported. The reasons such processes hang or time out essentially fall into three categories:

- Network problems causing packet loss or abnormal communication.
- Host resource pressure (CPU, memory, I/O, etc.) preventing these processes from being scheduled, or leaving them unresponsive.
- An Oracle bug.
As things stand, an Oracle bug is possible but relatively unlikely. The reason a bug cannot be ruled out is that no PSU at all has been installed in this Oracle home, as the OPatch output shows:

There are no Interim patches installed in this Oracle Home.

Rac system comprising of multiple nodes

  Local node = ggudb-kl1p4-14

  Remote node = xhmdb-kl1p4-24

--------------------------------------------------------------------------------

OPatch succeeded.

As for host resources, the system would not normally be under heavy load in this time window (around 02:00), unless overnight batch jobs, backups or other scheduled tasks were consuming large amounts of CPU, memory or I/O.
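Even without OSWatcher, if the standard sysstat collection (sa1/sa2 cron jobs) was enabled on the hosts, historical sar data can still show whether CPU, paging or disk I/O was under pressure around 02:00. A minimal sketch (the sa04 file name assumes the March 4 archive is still retained under /var/log/sa):

sar -u -f /var/log/sa/sa04 -s 01:50:00 -e 02:10:00    # CPU utilisation
sar -B -f /var/log/sa/sa04 -s 01:50:00 -e 02:10:00    # paging activity
sar -d -f /var/log/sa/sa04 -s 01:50:00 -e 02:10:00    # block-device I/O
crontab -l; ls /etc/cron.d                            # look for overnight batch/backup jobs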

Finally, the network: this is the most likely cause. During the fault window a large number of "IPC Send timeout detected" messages were logged, which to some extent shows that interconnect (heartbeat) latency was severe at the time of the fault.
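How concentrated the interconnect problem was can be gauged by counting the "IPC Send timeout" entries in each instance's alert log, and the private-interconnect interface counters are also worth checking. A short sketch (run each command on the corresponding node; the interface name is a placeholder for the actual private NIC):

grep -c 'IPC Send timeout' /u01/app/oracle/diag/rdbms/oltp/oltp1/trace/alert_oltp1.log   # on node 1
grep -c 'IPC Send timeout' /u01/app/oracle/diag/rdbms/oltp/oltp2/trace/alert_oltp2.log   # on node 2
netstat -su | grep -i error      # UDP errors (RAC interconnect traffic is UDP by default)
ifconfig <private_nic>           # look for RX/TX errors, drops, overruns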

Of these three possibilities, the bug angle is the easiest to confirm or rule out, whereas resource shortage and network latency require further OS-level data. Unfortunately, after checking with the customer we were told that OSWatcher was not deployed during the fault window, so the operating-system traces for that period are missing.

4.2 Solution

Based on the analysis in section 4.1, we recommend the following:

1. Enable OSWatcher. A brief note on the setup:

Set up OSWatcher to start automatically.

Deploy the following on both nodes.

Under /etc/rc.local on Linux, add the following line (adjust the script path to your environment):

##########################

/bin/sh /osw/autostartosw.sh

#######################################################################

Edit the startup script as follows (adjust the paths to your environment):

cd /osw/oswbb

/usr/bin/nohup sh startOSWbb.sh 10 48 gzip /osw/archive&
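The contents of the /osw/autostartosw.sh script referenced in rc.local are not shown above; a minimal sketch of what it could contain, assuming the same install and archive paths, with the startOSWbb.sh arguments annotated:

#!/bin/sh
# 10           - take an OS snapshot every 10 seconds
# 48           - keep 48 hours of archived data
# gzip         - compress completed archive files
# /osw/archive - archive destination directory
cd /osw/oswbb
/usr/bin/nohup sh startOSWbb.sh 10 48 gzip /osw/archive &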

Also enable private-network (interconnect) monitoring:

# vi private.net   (add the following lines; each ends with the private-network host alias)

traceroute -r -F hisrac01-priv

traceroute -r -F hisrac02-priv

# ll private.net

-rwx------ 1 root root 94 May 25 16:16 private.net   (make sure the file is executable)

# ./private.net   (run it manually once to verify)

traceroute to hisrac01-priv (10.10.10.31), 30 hops max, 60 byte packets

1 hisrac01-priv (10.10.10.31) 0.111 ms 0.021 ms 0.016 ms

traceroute to hisrac02-priv (10.10.10.32), 30 hops max, 60 byte packets

1 hisrac02-priv (10.10.10.32) 0.215 ms 0.200 ms 0.189 ms

Or start it manually: nohup sh startOSWbb.sh 10 48 gzip /osw/archive &
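After either start method, it is worth verifying that data is actually being collected (the directory names below are the usual OSWatcher archive subdirectories; oswprvtnet only appears once private.net is in place):

ps -ef | grep -i [O]SW       # OSWatcher / OSWatcherFM should be running
ls /osw/archive              # expect oswtop, oswvmstat, oswnetstat, oswprvtnet, ...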

2. Since this is an extended (stretched) RAC, create dedicated services so that each application connects only to its own database instance, reducing interconnect pressure. If a normal-redundancy storage configuration is used, it is also recommended that each node read from its own local copy of the storage, reducing the load on the inter-site dark fibre. A sketch of creating such a service follows.
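As an illustration of the per-application service idea, the following srvctl sketch creates a service that prefers one instance and fails over only when needed (the service name mynode1 and instance names oltp1/oltp2 are taken from the resource names seen in the logs; adjust to the real application names):

# service runs on oltp1 by default, oltp2 is only an available (failover) instance
srvctl add service -d oltp -s mynode1 -r oltp1 -a oltp2
srvctl start service -d oltp -s mynode1
srvctl status service -d oltp -s mynode1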

3. At present neither the database nor the Grid Infrastructure has any PSU installed, which should not be allowed in a production environment. We recommend patching to the latest PSU; see the sketch below.
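A hedged sketch of the patching step (the patch directory is a placeholder; always follow the specific PSU README and take full backups of both homes first):

# confirm the current state (section 4.1 already shows no interim patches installed)
$ORACLE_HOME/OPatch/opatch lsinventory
# for an 11.2 GI PSU, the README typically drives the install through "opatch auto",
# run as root, which patches both the GI home and the database home
$GRID_HOME/OPatch/opatch auto /tmp/<PSU_UNZIP_DIR>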



------------------------------------------------------------------------------------
<Copyright reserved. This article may be reproduced, but the source URL must be cited as a link; otherwise legal liability will be pursued.>
Original blog: http://blog.itpub.net/23732248/
Author: 应以峰 (frank-ying)
-------------------------------------------------------------------------------------
