[oracle@rac1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     293624
         Used space (kbytes)      :       3864
         Available space (kbytes) :     289760
         ID                       :  450284450
         Device/File Name         : /dev/raw/raw5
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw6
                                    Device/File integrity check succeeded

[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl query css votedisk
OCR initialization failed accessing OCR device: PROC-26: Error while accessing the physical storage

[oracle@rac1 ~]$ crs_start -all
rac1 : CRS-1019: Resource ora.rac2.ASM2.asm (application) cannot run on rac1
rac1 : CRS-1019: Resource ora.rac2.ASM2.asm (application) cannot run on rac1
CRS-0184: Cannot communicate with the CRS daemon.
4: Restoring the voting disks and the OCR from backup
[root@rac1 ~]# for i in {7..9}; do dd if=/home/oracle/votedisk.bak of=/dev/raw/raw$i; done
587744+0 records in
587744+0 records out
587744+0 records in
587744+0 records out
587744+0 records in
587744+0 records out
After the voting disks were restored, the CRS service still failed to start:

[oracle@rac1 ~]$ crs_start -all
CRS-0184: Cannot communicate with the CRS daemon.

Querying the voting disks at this point showed that the format was reported as invalid:

[oracle@rac1 ~]$ crsctl query css votedisk
OCR initialization failed with invalid format: PROC-22: The OCR backend has an invalid format

After restarting the CRS processes, everything worked again:

[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
[oracle@rac1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     293624
         Used space (kbytes)      :       3864
         Available space (kbytes) :     289760
         ID                       :  450284450
         Device/File Name         : /dev/raw/raw5
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw6
                                    Device/File integrity check succeeded
[oracle@rac1 ~]$ srvctl stop database -d racdb -o immediate
[oracle@rac1 ~]$ crs_stop -all
Attempting to stop `ora.rac1.gsd` on member `rac1`
Attempting to stop `ora.rac1.ons` on member `rac1`
Attempting to stop `ora.rac2.gsd` on member `rac2`
Attempting to stop `ora.rac2.ons` on member `rac2`
Stop of `ora.rac1.gsd` on member `rac1` succeeded.
Stop of `ora.rac2.gsd` on member `rac2` succeeded.
Stop of `ora.rac1.ons` on member `rac1` succeeded.
Stop of `ora.rac2.ons` on member `rac2` succeeded.
Attempting to stop `ora.rac1.LISTENER_RAC1.lsnr` on member `rac1`
Attempting to stop `ora.rac1.ASM1.asm` on member `rac1`
Attempting to stop `ora.rac2.LISTENER_RAC2.lsnr` on member `rac2`
Attempting to stop `ora.rac2.ASM2.asm` on member `rac2`
Stop of `ora.rac1.LISTENER_RAC1.lsnr` on member `rac1` succeeded.
Stop of `ora.rac2.LISTENER_RAC2.lsnr` on member `rac2` succeeded.
Attempting to stop `ora.rac1.vip` on member `rac1`
Attempting to stop `ora.rac2.vip` on member `rac2`
Stop of `ora.rac1.vip` on member `rac1` succeeded.
Stop of `ora.rac2.vip` on member `rac2` succeeded.
Stop of `ora.rac2.ASM2.asm` on member `rac2` succeeded.
Stop of `ora.rac1.ASM1.asm` on member `rac1` succeeded.
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

[root@rac2 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/install/rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
OCR initialization failed accessing OCR device: PROC-26: Error while accessing the physical storage
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
Cleaning up Network socket directories

[root@rac2 ~]# /u01/app/oracle/product/10.2.0/crs_1/install/rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
OCR initialization failed accessing OCR device: PROC-26: Error while accessing the physical storage
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
Cleaning up Network socket directories
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw7
Now formatting voting device: /dev/raw/raw8
Now formatting voting device: /dev/raw/raw9
Format of 3 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
CSS is inactive on these nodes.
        rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac2 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The interface "255.255.255.0/eth0" specified in the input parameters is invalid.
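The final `vipca(silent)` step aborting with an invalid-interface message is a common 10.2 symptom when the public network is not registered correctly in the cluster configuration. A commonly used workaround, sketched here only as an assumption (the `192.168.1.0` and `10.0.0.0` subnets below are placeholders that must be replaced with the actual public and interconnect networks of this cluster), is to register the interfaces with `oifcfg` and then run `vipca` manually as root on one node:

```shell
# Hedged sketch - run as root; subnet values are placeholders.
export ORA_CRS_HOME=/u01/app/oracle/product/10.2.0/crs_1

# Inspect the interfaces currently registered with the cluster.
$ORA_CRS_HOME/bin/oifcfg getif

# Register the public and cluster-interconnect networks (example subnets).
$ORA_CRS_HOME/bin/oifcfg setif -global eth0/192.168.1.0:public
$ORA_CRS_HOME/bin/oifcfg setif -global eth1/10.0.0.0:cluster_interconnect

# Re-run the VIP configuration assistant interactively.
$ORA_CRS_HOME/bin/vipca
```

These commands require a running clusterware stack, so they are shown only as a configuration sketch, not as verified output from this environment.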
[oracle@rac1 ~]$ srvctl status asm -n rac1
ASM instance +ASM1 is running on node rac1.
[oracle@rac1 ~]$ srvctl status asm -n rac2
ASM instance +ASM2 is running on node rac2.
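With ASM back on both nodes, the overall cluster state can be confirmed with the standard 10g status commands. This is a sketch only; it assumes the same `racdb` database name used earlier, and the exact output depends on which resources are registered:

```shell
# Cluster-wide health check (10g syntax; run from either node as oracle).
crs_stat -t                      # tabular state of every registered CRS resource
crsctl check crs                 # liveness of the CSS, CRS and EVM daemons
srvctl status database -d racdb  # instance status for the RAC database
```

These commands also require a live cluster, so no output is reproduced here; a healthy stack shows every resource ONLINE in `crs_stat -t` and all three daemons healthy in `crsctl check crs`.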