Failure Analysis: First RAC Node Evicted from the Cluster

Any DBA who manages RAC has seen one node get rebooted by the other. There are many possible causes, but in most cases some resource on a node (CPU, memory, disk, network, etc.) runs into trouble and blocks inter-node communication; to protect data integrity and the RAC guarantees, the clusterware evicts the problem node from the cluster. Since I started maintaining RAC I have hit this many times: sometimes the OS gets rebooted, sometimes the database resource gets restarted, and the incidents happen at night, in the early morning, or in the middle of the day. Below is the analysis and resolution of the most recent incident.
The incident: at 21:24 on 2013-12-10, the first node of a two-node RAC cluster was rebooted.
1. Check the RAC alert log on node 1
2013-12-10 21:16:05.917
[/u01/11.2.0/grid/bin/oraagent.bin(4278)]CRS-5818:Aborted command 'check' for resource 'ora.LISTENER.lsnr'. Details at (:CRSAGF00113:) {1:43062:2} in /u01/11.2.0/grid/log/rac1/agent/crsd/oraagent_grid/oraagent_grid.log.
----Note the CRS-5818 error here. The clusterware checks the state of each resource roughly once a minute and raises an error when a check fails; the listener resources are checked first, which is why every one of our incidents starts with failed listener checks.
2013-12-10 21:16:05.491
[/u01/11.2.0/grid/bin/oraagent.bin(4278)]CRS-5818:Aborted command 'check' for resource 'ora.LISTENER_SCAN1.lsnr'. Details at (:CRSAGF00113:) {1:43062:2} in /u01/11.2.0/grid/log/rac1/agent/crsd/oraagent_grid/oraagent_grid.log.
2013-12-10 21:25:47.436
[ohasd(2989)]CRS-2112:The OLR service started on node rac1.--By this point the node had already rebooted and come back up.
2013-12-10 21:25:47.622
[ohasd(2989)]CRS-1301:Oracle High Availability Service started on node rac1.
2013-12-10 21:25:47.864
[ohasd(2989)]CRS-8017:location: /var/opt/oracle/lastgasp has 34 reboot advisory log files, 0 were announced and 0 errors occurred
2013-12-10 21:25:52.660
[/u01/11.2.0/grid/bin/oraagent.bin(3270)]CRS-5011:Check of resource "+ASM" failed: details at "(:CLSN00006:)" in "/u01/11.2.0/grid/log/rac1/agent/ohasd/oraagent_grid/oraagent_grid.log"
2013-12-10 21:26:06.605
[gpnpd(3622)]CRS-2328:GPNPD started on node rac1.
2013-12-10 21:26:09.620
[cssd(3640)]CRS-1713:CSSD daemon is started in clustered mode
2013-12-10 21:26:11.437
[ohasd(2989)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
2013-12-10 21:27:28.324
[cssd(3640)]CRS-1707:Lease acquisition for node rac1 number 1 completed
2013-12-10 21:27:29.718
[cssd(3640)]CRS-1605:CSSD voting file is online: /dev/rdisk/disk104; details in /u01/11.2.0/grid/log/rac1/cssd/ocssd.log.
2013-12-10 21:27:35.244
[cssd(3640)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 rac2 .
2013-12-10 21:27:37.720
[ctssd(3970)]CRS-2403:The Cluster Time Synchronization Service on host rac1 is in observer mode.
[ctssd(3970)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac2.
2013-12-10 21:27:38.091
[ctssd(3970)]CRS-2401:The Cluster Time Synchronization Service started on host rac1.
2013-12-10 21:27:48.864
[/u01/11.2.0/grid/bin/oraagent.bin(3613)]CRS-5011:Check of resource "+ASM" failed: details at "(:CLSN00006:)" in "/u01/11.2.0/grid/log/rac1/agent/ohasd/oraagent_grid/oraagent_grid.log"
2013-12-10 21:27:51.323
[ctssd(3970)]CRS-2409:The clock on host rac1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-12-10 21:27:52.351
[/u01/11.2.0/grid/bin/oraagent.bin(3613)]CRS-5011:Check of resource "+ASM" failed: details at "(:CLSN00006:)" in "/u01/11.2.0/grid/log/rac1/agent/ohasd/oraagent_grid/oraagent_grid.log"
2013-12-10 21:28:23.621
[crsd(4274)]CRS-1012:The OCR service started on node rac1.
2013-12-10 21:28:23.765
[evmd(3974)]CRS-1401:EVMD started on node rac1.
2013-12-10 21:28:26.161
[crsd(4274)]CRS-1201:CRSD started on node rac1.
2013-12-10 21:28:29.194
[/u01/11.2.0/grid/bin/oraagent.bin(4325)]CRS-5016:Process "/u01/11.2.0/grid/opmn/bin/onsctli" spawned by agent "/u01/11.2.0/grid/bin/oraagent.bin" for action "check" failed: details at "(:CLSN00010:)" in "/u01/11.2.0/grid/log/rac1/agent/crsd/oraagent_grid/oraagent_grid.log"
2013-12-10 21:28:31.115
[/u01/11.2.0/grid/bin/oraagent.bin(4325)]CRS-5016:Process "/u01/11.2.0/grid/bin/lsnrctl" spawned by agent "/u01/11.2.0/grid/bin/oraagent.bin" for action "check" failed: details at "(:CLSN00010:)" in "/u01/11.2.0/grid/log/rac1/agent/crsd/oraagent_grid/oraagent_grid.log"
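In a busy alert log these eviction-related messages are easy to miss among routine entries, so it helps to filter for the relevant CRS message codes programmatically. A minimal sketch (the code list and the embedded sample lines are illustrative, not an exhaustive set of eviction messages):

```python
import re

# CRS message codes that typically mark the start of an eviction incident
INTERESTING = ("CRS-5818", "CRS-8011", "CRS-8013",
               "CRS-1610", "CRS-1611", "CRS-1612", "CRS-1632")

def eviction_events(lines):
    """Yield (timestamp, message) pairs for eviction-related CRS messages.

    The alert log alternates a timestamp line with one or more message
    lines, so we remember the last timestamp seen.
    """
    ts = None
    ts_re = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+")
    for line in lines:
        if ts_re.match(line):
            ts = line.strip()
        elif any(code in line for code in INTERESTING):
            yield ts, line.strip()

sample = """2013-12-10 21:16:05.917
[/u01/11.2.0/grid/bin/oraagent.bin(4278)]CRS-5818:Aborted command 'check' for resource 'ora.LISTENER.lsnr'.
2013-12-10 21:25:47.436
[ohasd(2989)]CRS-2112:The OLR service started on node rac1.
""".splitlines()

for ts, msg in eviction_events(sample):
    print(ts, "|", msg)
```

Running the same filter over both nodes' alert logs gives a merged timeline of the incident without the routine startup noise.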
Now look at the second node's alert log:
2013-12-10 21:16:31.893
[ohasd(2898)]CRS-8011:reboot advisory message from host: rac1, component: cssagent, with time stamp: L-2013-12-10-21:16:31.851---here node 2 receives the advisory that node 1 is rebooting.
[ohasd(2898)]CRS-8013:reboot advisory message text: Rebooting after limit 28050 exceeded; disk timeout 28050, network timeout 27848, last heartbeat from CSSD at epoch seconds 1386681351.547, 40303 milliseconds ago based on invariant clock value of 460929194
2013-12-10 21:16:46.333
[cssd(3579)]CRS-1612:Network communication with node rac1 (1) missing for 50% of timeout interval.  Removal of this node from cluster in 14.615 seconds---do not mistake this for a network problem: node 1 was rebooting during this window, so of course it was unreachable over the network.
2013-12-10 21:16:54.413
[cssd(3579)]CRS-1611:Network communication with node rac1 (1) missing for 75% of timeout interval.  Removal of this node from cluster in 6.535 seconds
2013-12-10 21:16:58.453
[cssd(3579)]CRS-1610:Network communication with node rac1 (1) missing for 90% of timeout interval.  Removal of this node from cluster in 2.495 seconds
2013-12-10 21:17:00.955
[cssd(3579)]CRS-1632:Node rac1 is being removed from the cluster in cluster incarnation 209651233
2013-12-10 21:17:00.969
[cssd(3579)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac2 .
2013-12-10 21:17:01.688
[crsd(5250)]CRS-5504:Node down event reported for node 'rac1'.
2013-12-10 21:17:07.875
[crsd(5250)]CRS-2773:Server 'rac1' has been removed from pool 'Generic'.
2013-12-10 21:25:45.235
[/u01/11.2.0/grid/bin/oraagent.bin(10256)]CRS-5818:Aborted command 'check' for resource 'ora.LISTENER_SCAN3.lsnr'. Details at (:CRSAGF00113:) {2:31202:2} in /u01/11.2.0/grid/log/rac2/agent/crsd/oraagent_grid/oraagent_grid.log.
2013-12-10 21:25:43.653
[/u01/11.2.0/grid/bin/oraagent.bin(10256)]CRS-5818:Aborted command 'check' for resource 'ora.LISTENER.lsnr'. Details at (:CRSAGF00113:) {2:31202:2} in /u01/11.2.0/grid/log/rac2/agent/crsd/oraagent_grid/oraagent_grid.log.
2013-12-10 21:25:48.567
[/u01/11.2.0/grid/bin/oraagent.bin(10256)]CRS-5818:Aborted command 'check' for resource 'ora.LISTENER_SCAN1.lsnr'. Details at (:CRSAGF00113:) {2:31202:55563} in /u01/11.2.0/grid/log/rac2/agent/crsd/oraagent_grid/oraagent_grid.log.
2013-12-10 21:26:13.939
[/u01/11.2.0/grid/bin/oraagent.bin(10256)]CRS-5014:Agent "/u01/11.2.0/grid/bin/oraagent.bin" timed out starting process "/u01/11.2.0/grid/bin/lsnrctl" for action "check": details at "(:CLSN00009:)" in "/u01/11.2.0/grid/log/rac2/agent/crsd/oraagent_grid/oraagent_grid.log"
2013-12-10 21:26:13.939
[/u01/11.2.0/grid/bin/oraagent.bin(10256)]CRS-5014:Agent "/u01/11.2.0/grid/bin/oraagent.bin" timed out starting process "/u01/11.2.0/grid/bin/lsnrctl" for action "check": details at "(:CLSN00009:)" in "/u01/11.2.0/grid/log/rac2/agent/crsd/oraagent_grid/oraagent_grid.log"
2013-12-10 21:26:16.053
[/u01/11.2.0/grid/bin/oraagent.bin(10256)]CRS-5014:Agent "/u01/11.2.0/grid/bin/oraagent.bin" timed out starting process "/u01/11.2.0/grid/bin/lsnrctl" for action "check": details at "(:CLSN00009:)" in "/u01/11.2.0/grid/log/rac2/agent/crsd/oraagent_grid/oraagent_grid.log"
2013-12-10 21:26:23.751
[/u01/11.2.0/grid/bin/oraagent.bin(10256)]CRS-5818:Aborted command 'check' for resource 'ora.ons'. Details at (:CRSAGF00113:) {0:5:2} in /u01/11.2.0/grid/log/rac2/agent/crsd/oraagent_grid/oraagent_grid.log.
2013-12-10 21:26:24.082
[/u01/11.2.0/grid/bin/oraagent.bin(10256)]CRS-5014:Agent "/u01/11.2.0/grid/bin/oraagent.bin" timed out starting process "/u01/11.2.0/grid/opmn/bin/onsctli" for action "check": details at "(:CLSN00009:)" in "/u01/11.2.0/grid/log/rac2/agent/crsd/oraagent_grid/oraagent_grid.log"
2013-12-10 21:26:53.757
[/u01/11.2.0/grid/bin/oraagent.bin(10256)]CRS-5818:Aborted command 'check' for resource 'ora.LISTENER_SCAN2.lsnr'. Details at (:CRSAGF00113:) {2:31202:2} in /u01/11.2.0/grid/log/rac2/agent/crsd/oraagent_grid/oraagent_grid.log.
2013-12-10 21:26:53.772
[/u01/11.2.0/grid/bin/oraagent.bin(10256)]CRS-5014:Agent "/u01/11.2.0/grid/bin/oraagent.bin" timed out starting process "/u01/11.2.0/grid/bin/lsnrctl" for action "check": details at "(:CLSN00009:)" in "/u01/11.2.0/grid/log/rac2/agent/crsd/oraagent_grid/oraagent_grid.log"
2013-12-10 21:27:35.294
[cssd(3579)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 rac2 .---node 1 has finished rebooting and rejoined the cluster.
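The 50%/75%/90% countdown messages above are consistent with the default CSS misscount of 30 seconds: if the heartbeat has been missing for p% of the timeout interval and removal is due in r seconds, the full interval is roughly r / (1 - p/100). A small sketch that back-computes this from the CRS-1612 message (the regex is tailored to the message text shown above):

```python
import re

MISSING_RE = re.compile(
    r"missing for (\d+)% of timeout interval\.\s+"
    r"Removal of this node from cluster in ([\d.]+) seconds")

def implied_misscount(message):
    """Estimate the full CSS timeout interval from a CRS-1610/1611/1612 message.

    With p% of the interval already elapsed and r seconds remaining,
    the whole interval is approximately r / (1 - p/100).
    """
    m = MISSING_RE.search(message)
    pct, remaining = int(m.group(1)), float(m.group(2))
    return remaining / (1 - pct / 100)

msg = ("CRS-1612:Network communication with node rac1 (1) missing for 50% of "
       "timeout interval.  Removal of this node from cluster in 14.615 seconds")
print(round(implied_misscount(msg), 1))  # close to the default 30 s misscount
```

The estimate is rough because the reported percentage is quantized, but it confirms that nothing had been done to the misscount on this cluster; the eviction ran on default timings.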
2. Check the crsd and ocssd logs
At this point you can also examine the crsd, ocssd, and octssd logs to narrow down what caused node 1 to reboot. For example, node 1's ocssd log contained messages such as "Core file size limit extended". I will not list them all here.

3. Check the OSWatcher logs
Another important diagnostic aid is OSWatcher (OSW).
Because my OSW output is retained for only two days, the logs from the 10th are no longer available, but I remember examining the oswvmstat data from the node at the time: right before the reboot, free system memory had dropped to just 3 MB. Reviewing the earlier crashes showed the same pattern of critically low free memory.
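Scanning an oswvmstat archive for low-memory samples is easy to automate. A minimal sketch, assuming standard Linux vmstat layout (r b swpd free buff cache ..., with "free" as the 4th field in KB); the 100 MB threshold is an illustrative choice, not an Oracle-recommended value:

```python
LOW_FREE_KB = 100 * 1024  # flag samples with < 100 MB free (illustrative threshold)

def low_memory_samples(vmstat_lines):
    """Return (free_kb, line) pairs whose 'free' column is below the threshold.

    Header lines are skipped because their first field is not numeric.
    """
    flagged = []
    for line in vmstat_lines:
        fields = line.split()
        if len(fields) > 3 and fields[0].isdigit():
            free_kb = int(fields[3])
            if free_kb < LOW_FREE_KB:
                flagged.append((free_kb, line))
    return flagged

sample = [
    "procs -----------memory----------",
    " r  b   swpd   free   buff  cache",
    " 1  0  204800 3072  1024  51200",     # ~3 MB free, like the incident
    " 0  0  204800 8388608 4096 102400",   # plenty of free memory
]
for free_kb, line in low_memory_samples(sample):
    print(f"LOW: {free_kb} KB free -> {line.strip()}")
```

Run over the whole retention window, this quickly shows whether memory exhaustion lines up with each reboot timestamp.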
4. Check the system logs
Check the OS-level logs, such as dmesg output, /var/log/messages on Linux, or syslog.log on HP-UX, to determine whether the machine was rebooted for an OS-level reason. Other analysis tools, such as crashinfo on HP-UX, can also pinpoint why the system went down. In my case, everything pointed to the reboot being triggered by Oracle.
5. Check what the database was doing at the time
From oswtop and oswps you can identify which processes were consuming resources, and in this case specifically memory, before the reboot. I found many server processes in the run state, meaning the database was very busy, and disk utilization had reached 80%; from this it is reasonable to infer that SQL was running, quite possibly doing full table scans. If your AWR snapshot interval is short, you can see exactly what was executing at that point; mine was at the default of 1 hour, so I could not, and had to keep asking the developers instead. Since most developers tend to deflect responsibility, I could only take from their answers whatever was actually useful.
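To see which processes were eating the memory, the oswps snapshots can be aggregated by command. A sketch assuming `ps`-style lines with RSS (in KB) in the first column and the command in the second; the sample values are made up for illustration:

```python
from collections import defaultdict

def rss_by_command(ps_lines):
    """Aggregate resident set size (KB) per command, largest first.

    Assumes `ps -e -o rss,comm`-style layout: RSS first, command second.
    """
    totals = defaultdict(int)
    for line in ps_lines:
        fields = line.split(None, 1)
        if len(fields) == 2 and fields[0].isdigit():
            totals[fields[1].strip()] += int(fields[0])
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

sample = [
    "  RSS COMMAND",
    "2097152 oracle",   # a busy server process, ~2 GB resident
    "1048576 oracle",
    " 524288 ocssd.bin",
]
for comm, rss_kb in rss_by_command(sample):
    print(f"{comm}: {rss_kb // 1024} MB")
```

In this incident the top consumers were exactly the Oracle server processes running the big SQL, which matched the low-memory readings from oswvmstat.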
6. Diagnosis
Because the early incidents happened at irregular times, the root cause stayed elusive for a long while; by the time of the reboot at 21:24 on the 10th, I already basically knew it was a memory problem. After some back-and-forth I learned that at 20:50 the application connected to this database had been restarted to deploy a new release. At 9 pm there is very little business activity, since users work during the day, so could the application restart be kicking off automated tasks, with a burst of those tasks driving heavy disk access and exhausting memory? With that hypothesis, I spent an evening restarting the application and watching what the database actually executed. As expected, at least 8 large SQL statements ran simultaneously, statements that could just as well run once a night, or be converted into database jobs executed on the DB side. I had also been making a mistake all along: the server has 64 GB of physical memory, and to let Oracle use as much of it as possible I had set memory_target to 50 GB. In steady state the SGA sat around 38 GB and the PGA around 2 GB, but the 8 big sorting SQL statements drove PGA usage up sharply, which finally left physical memory critically low and produced the whole range of RAC resource symptoms. I say "range of symptoms" because a CRS-5818 error does not always end in an OS reboot; sometimes it shows up as an ASM resource reporting the wrong state, a database resource restart, or other strange behavior.
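The arithmetic behind the mistake is simple to lay out. A sketch of the memory budget before and after the change; the allowances for Grid Infrastructure daemons and the OS are illustrative assumptions, not Oracle-documented figures:

```python
def os_headroom_gb(physical_gb, memory_target_gb,
                   crs_reserve_gb=4, os_reserve_gb=2):
    """Rough headroom left outside the instance's memory_target.

    crs_reserve_gb and os_reserve_gb are illustrative allowances for the
    clusterware daemons and the OS itself; PGA spikes beyond the target
    eat into whatever this returns.
    """
    return physical_gb - memory_target_gb - crs_reserve_gb - os_reserve_gb

print(os_headroom_gb(64, 50))  # -> 8 GB of slack with the original setting
print(os_headroom_gb(64, 42))  # -> 16 GB of slack after the fix
```

With only a few GB of slack, a burst of large sorts pushing PGA usage up is enough to starve the clusterware, which is exactly the failure pattern seen here.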
7. Resolution
Reduce memory_target to 42 GB, and convert the automated tasks that fired at application restart into database-side jobs, guaranteed to run once a day rather than many times concurrently.
That resolved the problem; the system has now been running smoothly for 7 days.
8. A common misconception
I used to see memory utilization around 90% and swap usage around 60% and consider it fine, on the theory that steadily used memory is well-used memory and nothing needs changing. That is not true: on a RAC system it can easily cause cluster failures, because enough headroom must be reserved for the clusterware to communicate and do its work. If the database carries many connections, a memory_target of roughly 50% of physical memory is the safer choice.
Source: ITPUB blog, http://blog.itpub.net/24500180/viewspace-1063503/. Please credit the source when reposting.