While installing the Oracle 12.1.0.2 Grid Infrastructure (GRID/GI) cluster software on Linux, running root.sh on node 2 failed with the following on-screen error:
OLR initialization - successful
2015/12/15 13:16:55
CLSRSC-507: The root script cannot proceed on this node rac2 because either the first-node operations have not completed on node rac1 or there was an error in obtaining the status of the first-node operations.
The error above means node 2 could not confirm whether the first-node operations on node 1 had completed. How does root.sh determine the status of node 1? Check the log:
$GRID_HOME/cfgtoollogs/crsconfig/rootcrs_rac2_2015-12-18_09-41-53PM.log
2015-12-18 21:42:39: Trying to get the value of key: SYSTEM.rootcrs.checkpoints.firstnode in OCR.
2015-12-18 21:42:39: setting ORAASM_UPGRADE to 1
2015-12-18 21:42:39: Check the existence of key pair with key name: SYSTEM.rootcrs.checkpoints.firstnode in OCR.
2015-12-18 21:42:39: setting ORAASM_UPGRADE to 1
2015-12-18 21:42:39: Invoking "/u01/gridsoft/12.1.0/bin/cluutil -exec -keyexists -key checkpoints.firstnode"
2015-12-18 21:42:39: trace file=/u01/gridbase/crsdata/rac2/crsconfig/cluutil9.log
2015-12-18 21:42:39: Running as user grid: /u01/gridsoft/12.1.0/bin/cluutil -exec -keyexists -key checkpoints.firstnode
2015-12-18 21:42:39: s_run_as_user2: Running /bin/su grid -c ' echo CLSRSC_START; /u01/gridsoft/12.1.0/bin/cluutil -exec -keyexists -key checkpoints.firstnode '
2015-12-18 21:42:39: Removing file /tmp/filexr1WwO
2015-12-18 21:42:39: Successfully removed file: /tmp/filexr1WwO
2015-12-18 21:42:39:
pipe exit code: 256
2015-12-18 21:42:39:
/bin/su exited with rc=1
2015-12-18 21:42:39: oracle.ops.mgmt.rawdevice.OCRException: PROC-32: Cluster Ready Services on the local node is not running Messaging error [gipcretConnectionRefused] [29]
2015-12-18 21:42:39:
Cannot get OCR key with CLUUTIL, try using OCRDUMP.
2015-12-18 21:42:39:
Check OCR key using ocrdump
2015-12-18 21:42:54: ocrdump output: PROT-302: Failed to initialize ocrdump
2015-12-18 21:42:54:
The key pair with keyname: SYSTEM.rootcrs.checkpoints.firstnode does not exist in OCR.
The log above shows that node 2 first runs cluutil -exec -keyexists -key checkpoints.firstnode to check for the key SYSTEM.rootcrs.checkpoints.firstnode in OCR; when that fails, it falls back to OCRDUMP, but OCRDUMP fails as well.
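On a healthy cluster this lookup can be replayed by hand. A minimal sketch, run as the grid owner, assuming (as the rc=1 in the log above suggests) that cluutil reports key existence through its exit status:

# Hypothetical manual replay of the checkpoint lookup that root.sh performs
/u01/gridsoft/12.1.0/bin/cluutil -exec -keyexists -key checkpoints.firstnode \
  && echo "firstnode checkpoint exists" \
  || echo "checkpoint missing or OCR unreachable"

Next, let's analyze why the OCRDUMP fallback also failed. The relevant trace file is: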
$GRID_BASE/diag/crs/<node>/crs/trace/ocrdump_13146.trc
2015-12-18 21:42:48.098879 : OCRASM: ASM Error Stack : ORA-29701: unable to connect to Cluster Synchronization Service
2015-12-18 21:42:48.098885 : OCRASM: proprasmo: ASM instance is down. Proceed to open the file in dirty mode.
CLWAL: clsw_Initialize: Error [32] from procr_init_ext
CLWAL: clsw_Initialize: Error [PROCL-32: Oracle High Availability Services on the local node is not running Messaging error [gipcretConnectionRefused] [29]] from procr_init_ext
2015-12-18 21:42:48.101773 : GPNP: clsgpnpkww_initclswcx: [at clsgpnpkww.c:351] Result: (56) CLSGPNP_OCR_INIT. (:GPNP01201: )Failed to init CLSW-OLR context. CLSW Error (3): CLSW-3: Error in the cluster registry (OCR) layer. [32] [PROCL-32: Oracle High Availability Services on the local node is not running Messaging error [gipcretConnectionRefused] [29]]
2015-12-18 21:42:48.112746 : OCRASM: proprasmo: Error [13] in opening the GPNP profile. Try to get offline profile
2015-12-18 21:42:48.220769 : OCRRAW: kgfo_kge2slos error stack at kgfolclcpi1:
AMDU-00210: No disks found in diskgroup OCR_VOTING
The ORA-29701 (unable to connect to CSS) and PROCL-32 (OHASD not running) errors above are expected, because the clusterware stack on node 2 has not started yet; they can easily distract from the real problem. The key error is AMDU-00210: No disks found in diskgroup OCR_VOTING. In other words, node 2 cannot find the ASM disks, which is why OCRDUMP fails and why node 2 cannot confirm whether the installation on node 1 completed. Next, we run kfed to check whether the ASM disks are healthy:
On node 1, inspect disk /dev/raw/raw1:
$ /u01/gridsoft/12.1.0/bin/kfed read /dev/raw/raw1
kfbh.endian: 1 ; 0x000: 0x01
kfbh.hard: 130 ; 0x001: 0x82
kfbh.type: 1 ; 0x002: KFBTYP_DISKHEAD
<========= raw1's block type is KFBTYP_DISKHEAD: a healthy ASM disk header
kfbh.datfmt: 1 ; 0x003: 0x01
kfbh.block.blk: 0 ; 0x004: blk=0
kfbh.block.obj: 2147483648 ; 0x008: disk=0
kfbh.check: 420965027 ; 0x00c: 0x19176aa3
kfbh.fcn.base: 0 ; 0x010: 0x00000000
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
...
kfdhdb.vfstart: 128 ; 0x0ec: 0x00000080
<========= a non-zero vfstart indicates this disk holds a voting file
kfdhdb.vfend: 160 ; 0x0f0: 0x000000a0
<========= the vfend value likewise marks the voting-file extent on this disk
On node 2, inspect the same disk /dev/raw/raw1:
$ /u01/gridsoft/12.1.0/bin/kfed read /dev/raw/raw1
kfbh.endian: 0 ; 0x000: 0x00
kfbh.hard: 0 ; 0x001: 0x00
kfbh.type: 0 ; 0x002: KFBTYP_INVALID
<========= on node 2 the same raw1 reads back as KFBTYP_INVALID
kfbh.datfmt: 0 ; 0x003: 0x00
kfbh.block.blk: 0 ; 0x004: blk=0
kfbh.block.obj: 0 ; 0x008: file=0
kfbh.check: 0 ; 0x00c: 0x00000000
kfbh.fcn.base: 0 ; 0x010: 0x00000000
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
kfbh.spare1: 0 ; 0x018: 0x00000000
kfbh.spare2: 0 ; 0x01c: 0x00000000
000000000 00000000 00000000 00000000 00000000 [................]
Repeat 255 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]
On node 1, /dev/raw/raw1 shows block type KFBTYP_DISKHEAD and kfdhdb.vfstart is non-zero, so raw1 is a healthy ASM disk on node 1 and holds a voting file. Node 2, however, sees completely different contents on the same disk. For a properly configured shared device, raw1 should look identical from node 1 and node 2; since it does not in this case, the shared-disk configuration is incorrect.
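A quick way to confirm such a mismatch without kfed is to checksum the first megabyte of the device on each node; on genuinely shared storage the sums must be identical. A read-only sketch (device path taken from this case):

# Run on both nodes and compare the output; differing sums mean the two
# nodes are not actually reading the same storage
dd if=/dev/raw/raw1 bs=1M count=1 2>/dev/null | md5sum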
Meanwhile, running OCRDUMP manually on node 1 confirms that the key SYSTEM.rootcrs.checkpoints.firstnode exists and that its status is "SUCCESS":
su - root
ocrdump /tmp/ocrdump1.out
more /tmp/ocrdump1.out
[SYSTEM.rootcrs.checkpoints.firstnode]
ORATEXT : SUCCESS
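The full dump is large; to pull out just this key, something along these lines works:

# Show the checkpoint key and the line that follows it (its value)
grep -A 1 'SYSTEM.rootcrs.checkpoints.firstnode' /tmp/ocrdump1.out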
Finally, the problem was resolved by correcting the UDEV configuration file (/etc/udev/rules.d/99-oracle-asmdevices.rules).
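For reference, an entry in that rules file typically looks like the following. Everything here is illustrative: the match keys, the WWID placeholder, and the ownership must be adapted to the actual storage and OS release (and raw-device setups like this case bind the device differently):

# Illustrative udev rule: key the shared LUN off its SCSI WWID so it gets
# a stable name and grid-owned permissions identically on every node
KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL}=="<wwid-of-shared-lun>", \
  SYMLINK+="asm-ocrvote1", OWNER="grid", GROUP="asmadmin", MODE="0660"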
I am reposting this article because I hit the same symptom, although in my case the root cause was the shared storage itself.
First, I used kfed to read the same shared disk from both nodes and found the contents were inconsistent.
I then cleared the ASM metadata with dd and partitioned the shared disk with fdisk on one node, and found the other node could not see the new partition information.
That confirmed the shared storage was faulty. I removed the shared storage and added it back; after node A created a new partition and node B rescanned and saw it, the shared storage could be considered to be working properly.
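A sketch of that verification loop, assuming the shared LUN appears as /dev/sdb (device name illustrative; the dd step destroys data and is only appropriate on storage being rebuilt):

# Node A: wipe the stale ASM header, then create a test partition
dd if=/dev/zero of=/dev/sdb bs=1M count=10 oflag=direct
fdisk /dev/sdb        # interactively create a partition, e.g. /dev/sdb1

# Node B: reread the partition table; working shared storage shows sdb1 here too
partprobe /dev/sdb
ls -l /dev/sdb1 && echo "shared storage OK" || echo "partition not visible"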