Ha, I just said the previous post would be my last one, but with some time on my hands I ran another test.
Environment:
OS: Red Hat Enterprise Linux 5.8
DB: Oracle RAC 10.2.0.5 on raw devices
We should get into the habit of backing up the OCR and voting disks regularly. That said, even without a backup they can be rebuilt, much like control files, although the process is fairly involved. The detailed steps follow.
First, back up the OCR and voting disks:
[root@rac1 ~]#
[root@rac1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 1048296
Used space (kbytes) : 3292
Available space (kbytes) : 1045004
ID : 1891074113
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File Name : /dev/raw/raw2
Device/File integrity check succeeded
Cluster registry integrity check succeeded
[root@rac1 ~]# crsctl qeury css votedisk
Unknown parameter: qeury
[root@rac1 ~]# crsctl query css votedisk
0. 0 /dev/raw/raw3
1. 0 /dev/raw/raw4
2. 0 /dev/raw/raw5
[root@rac1 ~]# dd if=/dev/raw/raw3 of=/opt/votedisk.bak
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 462.285 seconds, 2.3 MB/s
[root@rac1 ~]# ocrconfig -export /opt/ocr.bak
[root@rac1 ~]# cd /opt/
[root@rac1 opt]# ls
ocr.bak ORCLfmap votedisk.bak
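As an aside, a corrupted (rather than deliberately wiped) OCR or voting disk could simply be restored from these backups instead of rebuilt. A rough sketch, assuming the same devices and backup paths as above and with CRS stopped on all nodes:
[root@rac1 opt]# ocrconfig -import /opt/ocr.bak
[root@rac1 opt]# dd if=/opt/votedisk.bak of=/dev/raw/raw3
Oracle also keeps automatic OCR backups of its own; ocrconfig -showbackup lists them. Here, however, we rebuild everything from scratch.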
Delete the existing cluster configuration by running rootdelete.sh on every node:
[root@rac1 install]# ./rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources. This could take several minutes.
Error while stopping resources. Possible cause: CRSD is down.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
Cleaning up Network socket directories
[root@rac2 install]# ./rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources. This could take several minutes.
Error while stopping resources. Possible cause: CRSD is down.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
Cleaning up Network socket directories
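Before wiping the OCR devices it is worth double-checking that the CRS stack really is down on both nodes. These checks are not part of the original session; both should show no running daemons:
[root@rac1 install]# crsctl check crs
[root@rac1 install]# ps -ef | grep -E 'crsd|cssd|evmd' | grep -v grep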
Then run rootdeinstall.sh on the installing node only:
[root@rac1 install]# ./rootdeinstall.sh
Removing contents from OCR mirror device
2560+0 records in
2560+0 records out
10485760 bytes (10 MB) copied, 0.521651 seconds, 20.1 MB/s
Removing contents from OCR device
2560+0 records in
2560+0 records out
10485760 bytes (10 MB) copied, 0.496207 seconds, 21.1 MB/s
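rootdeinstall.sh only zeroes the OCR devices. If root.sh later complains about an existing configuration on the voting devices, they can be cleared by hand as well. This destructive step was not needed in this test and is shown only as a fallback; double-check the device names before running it:
[root@rac1 install]# dd if=/dev/zero of=/dev/raw/raw3 bs=1M count=100
[root@rac1 install]# dd if=/dev/zero of=/dev/raw/raw4 bs=1M count=100
[root@rac1 install]# dd if=/dev/zero of=/dev/raw/raw5 bs=1M count=100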
Re-run the root.sh script, starting with node 1:
[root@rac1 crs]# ./root.sh
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
CSS is inactive on these nodes.
rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac2 crs]# ./root.sh
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Invalid syntax used for option "nodevips". Check usage (vipca -help) for proper syntax.
Configure the network interfaces for the VIP:
[root@rac2 crs]# oifcfg getif
[root@rac2 crs]# oifcfg iflist
eth0 192.168.56.0
eth1 192.168.11.0
[root@rac2 crs]# oifcfg setif -global eth0/192.168.56.0:public
[root@rac2 crs]# oifcfg setif -global eth1/192.168.11.0:cluster_interconnect
[root@rac2 crs]# oifcfg getif
eth0 192.168.56.0 global public
eth1 192.168.11.0 global cluster_interconnect
Then run vipca to configure the VIPs.
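vipca is a graphical tool and must be run as root, so an X display has to be available. Roughly (the DISPLAY value is a placeholder for your own X server; vipca lives in the bin directory of the CRS home):
[root@rac1 crs]# export DISPLAY=<your-x-server>:0.0
[root@rac1 crs]# ./bin/vipca
In the GUI, pick the public interface (eth0 here) and enter the VIP hostname and address for each node.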
Check the result:
[root@rac1 crs]# crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
Configure the listeners. First move the old listener.ora aside on both nodes:
[root@rac1 admin]# mv listener.ora listener.ora.bak
[root@rac2 admin]# mv listener.ora listener.ora.bak
Then create the listeners with netca and check again:
[root@rac1 admin]# crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
Register the database with CRS:
[oracle@rac1 ~]$ srvctl add database -d zhdb -o $ORACLE_HOME
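srvctl add database only registers the database resource itself; the instances still have to be registered before CRS can manage and start them. The instance names zhdb1 and zhdb2 below are assumptions based on the database name, adjust them to the real instance names:
[oracle@rac1 ~]$ srvctl add instance -d zhdb -i zhdb1 -n rac1
[oracle@rac1 ~]$ srvctl add instance -d zhdb -i zhdb2 -n rac2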
Start the database:
srvctl start database -d zhdb
[oracle@rac1 dbs]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
ora.zhdb.db application ONLINE ONLINE rac1
ora....b2.inst application ONLINE ONLINE rac2
At this point the OCR and voting disks have been rebuilt successfully.
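As a final sanity check (not captured in the original output), the rebuilt OCR and voting disks can be verified the same way they were inspected at the start:
[root@rac1 ~]# ocrcheck
[root@rac1 ~]# crsctl query css votedisk
Both should again report a clean registry integrity check and the three voting devices.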