Removing and Adding a Node in a 10g RAC Cluster



Assume the RAC cluster has two nodes, rac1 and rac2. rac2 suffers a physical failure and its OS is reinstalled; the goal is to rejoin rac2 to the cluster.


1. On rac1, stop all of rac2's resources. Always run the stop commands manually, even for resources that already show OFFLINE. Note in the crs_stat output below that ora.rac2.vip is ONLINE on rac1: the VIP has failed over to the surviving node.

[oracle@rac1 ~]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.orcl.db    application    ONLINE    ONLINE    rac1        
ora....l1.inst application    ONLINE    ONLINE    rac1        
ora....l2.inst application    ONLINE    OFFLINE               
ora....SM1.asm application    ONLINE    ONLINE    rac1        
ora....C1.lsnr application    ONLINE    ONLINE    rac1        
ora.rac1.gsd   application    ONLINE    ONLINE    rac1        
ora.rac1.ons   application    ONLINE    ONLINE    rac1        
ora.rac1.vip   application    ONLINE    ONLINE    rac1        
ora....SM2.asm application    ONLINE    OFFLINE               
ora....C2.lsnr application    ONLINE    OFFLINE               
ora.rac2.gsd   application    ONLINE    OFFLINE               
ora.rac2.ons   application    ONLINE    OFFLINE               
ora.rac2.vip   application    ONLINE    ONLINE    rac1        


[oracle@rac1 ~]$ srvctl stop instance -d orcl -i orcl2
[oracle@rac1 ~]$ srvctl stop nodeapps -n rac2
[oracle@rac1 ~]$ srvctl stop listener -n rac2
[oracle@rac1 ~]$ srvctl stop asm -n rac2
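
Before moving on, it is worth confirming that everything belonging to rac2 is really down; a quick sanity check (a sketch -- the exact "not running" messages vary):

[oracle@rac1 ~]$ srvctl status nodeapps -n rac2
[oracle@rac1 ~]$ crs_stat -t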

2. Remove rac2's resources from the cluster  -- as the oracle user

The recommended way is the dbca GUI:  dbca  -->  instance management  -->  delete an instance

Using srvctl remove instance -d orcl -i orcl2 is not recommended here: srvctl only deletes the configuration entries in the OCR, but it does not drop instance 2's redo thread, redo log groups, or undo tablespace, all of which dbca cleans up automatically.
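
For reference, this is roughly the cleanup dbca automates; a sketch run from instance 1, assuming thread 2 owned redo groups 3 and 4 and undo tablespace undotbs2 (verify against v$log and the undo_tablespace parameter before running anything):

SQL> alter database disable thread 2;
SQL> alter database drop logfile group 3;
SQL> alter database drop logfile group 4;
SQL> drop tablespace undotbs2 including contents and datafiles;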

[oracle@rac1 ~]$ srvctl remove asm -n rac2

[root@rac1 ~]# cd /u01/app/oracle/product/10.2.0/crs_1/bin/
[root@rac1 bin]# ./crs_unregister ora.rac2.LISTENER_RAC2.lsnr


3. On rac1, manually remove rac2's remote ONS port entry from the OCR; the port number comes from ons.config:

[oracle@rac1 ~]$ cat $ORA_CRS_HOME/opmn/conf/ons.config
localport=6113 
remoteport=6200 
loglevel=3
useocr=on

[root@rac1 bin]# ./racgons remove_config rac2:6200
racgons: Existing key value on rac2 = 6200.
racgons: rac2:6200 removed from OCR.

[root@rac1 bin]#  ./srvctl remove nodeapps -n rac2
Please confirm that you intend to remove the node-level applications on node rac2 (y/[n]) y

4. On rac1, run the node-deletion script.

[oracle@rac1 orcl]$ olsnodes -p -i -n        -- look up the node number to delete; it is the second column of the output
rac1    1       rac1-priv       rac1-vip
rac2    2       rac2-priv       rac2-vip

[root@rac1 bin]# cd ../install
[root@rac1 install]# ./rootdeletenode.sh rac2,2       -- the 2 after the comma is the node number reported by olsnodes above
CRS-0210: Could not find resource 'ora.rac2.LISTENER_RAC2.lsnr'.
CRS-0210: Could not find resource 'ora.rac2.ons'.
CRS-0210: Could not find resource 'ora.rac2.vip'.
CRS-0210: Could not find resource 'ora.rac2.gsd'.
CRS-0210: Could not find resource ora.rac2.vip.
CRS nodeapps are deleted successfully
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 14 values from OCR.
Key SYSTEM.css.interfaces.noderac2 marked for deletion is not there. Ignoring.
Successfully deleted 5 keys from OCR.
Node deletion operation successful.
'rac2,2' deleted successfully

The CRS-0210 messages above are expected: those resources were already removed in the previous steps.

[oracle@rac1 ~]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.orcl.db    application    ONLINE    ONLINE    rac1        
ora....l1.inst application    ONLINE    ONLINE    rac1        
ora....SM1.asm application    ONLINE    ONLINE    rac1        
ora....C1.lsnr application    ONLINE    ONLINE    rac1        
ora.rac1.gsd   application    ONLINE    ONLINE    rac1        
ora.rac1.ons   application    ONLINE    ONLINE    rac1        
ora.rac1.vip   application    ONLINE    ONLINE    rac1    

5. Remove rac2 from the database-home and CRS-home inventories  -- as the oracle user

[oracle@rac1 ~]$ cd /u01/app/oracle/product/10.2.0/db_1/oui/bin/
[oracle@rac1 bin]$ ./runInstaller  -silent -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=rac1"
Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oracle/oraInventory
'UpdateNodeList' was successful.

[oracle@rac1 bin]$ cd /u01/app/oracle/product/10.2.0/crs_1/oui/bin/
[oracle@rac1 bin]$ ./runInstaller  -silent -updateNodeList ORACLE_HOME=$ORA_CRS_HOME "CLUSTER_NODES=rac1" CRS=TRUE
Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oracle/oraInventory
'UpdateNodeList' was successful.
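
To confirm rac2 is gone, you can inspect the node lists recorded in the central inventory (a quick check; inventory path taken from the output above):

[oracle@rac1 bin]$ grep -i "node name" /u01/app/oracle/oraInventory/ContentsXML/inventory.xml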

6. Run the CRS node-addition task (GUI). When the wizard finishes, it prompts for the root scripts shown below: orainstRoot.sh on rac2, rootaddnode.sh on rac1, and root.sh on rac2.

[oracle@rac1 ~]$ $ORA_CRS_HOME/oui/bin/addNode.sh
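
If no GUI is available, the same step can be driven silently; a sketch using the OUI session variables for CRS node addition (node names as in this setup):

[oracle@rac1 ~]$ $ORA_CRS_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={rac2}" \
    "CLUSTER_NEW_PRIVATE_NODE_NAMES={rac2-priv}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}"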

[root@rac2 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh 
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete

[root@rac1 ~]# cd /u01/app/oracle/product/10.2.0/crs_1/install
[root@rac1 install]# sh -x rootaddnode.sh        -- run with -x so the script's actions are traced below
+ ORA_CRS_HOME=/u01/app/oracle/product/10.2.0/crs_1
+ export ORA_CRS_HOME
+ CH=/u01/app/oracle/product/10.2.0/crs_1
+ ORACLE_HOME=/u01/app/oracle/product/10.2.0/crs_1
+ export ORACLE_HOME
+ ORACLE_OWNER=oracle
+ CRS_NEW_HOST_NAME_LIST=rac2,2
+ CRS_NEW_NODE_NAME_LIST=rac2,2
+ CRS_NEW_PRIVATE_NAME_LIST=rac2-priv,2
+ CRS_NEW_NODEVIPS=rac2-vip
+ UNAME=/bin/uname
++ /bin/uname
+ PLATFORM=Linux
+ '[' -z '' ']'
+ AWK=/bin/awk
+ '[' -z '' ']'
+ SED=/bin/sed
+ '[' -z '' ']'
+ ECHO=/bin/echo
+ '[' -z '' ']'
+ CUT=/usr/bin/cut
+ '[' -z '' ']'
+ ID=/usr/bin/id
+ '[' -z '' ']'
+ SU=/bin/su
+ '[' -z '' ']'
+ GREP=/bin/grep
+ '[' -z '' ']'
+ CAT=/bin/cat
++ /usr/bin/id
++ /bin/awk '-F(' '{print $2}'
++ /bin/awk '-F)' '{print $1}'
+ RUID=root
+ '[' root '!=' root ']'
+ case $PLATFORM in
+ LD_LIBRARY_PATH=/u01/app/oracle/product/10.2.0/crs_1/lib
+ OCRCONFIG=/etc/oracle/ocr.loc
+ SU='/bin/su -l'
+ export LD_LIBRARY_PATH
+ CLSCFG=/u01/app/oracle/product/10.2.0/crs_1/bin/clscfg
+ /u01/app/oracle/product/10.2.0/crs_1/bin/clscfg -add -nn rac2,2 -pn rac2-priv,2 -hn rac2,2
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Attempting to add 1 new nodes to the configuration
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
++ /bin/awk '{
        nElems = split($1, nodeList, ",");
        for (i = 1; i < nElems;)
        {
            print nodeList[i];
            i += 2;
        }
      }'
++ /bin/echo rac2,2
+ NODES_LIST=rac2
++ /u01/app/oracle/product/10.2.0/crs_1/bin/olsnodes -l
+ LOCALNODE=rac1
++ /u01/app/oracle/product/10.2.0/crs_1/bin/srvctl config nodeapps -a -n rac1
+ VIP_STRING='VIP exists.: /rac1-vip/192.168.100.47/255.255.255.0/eth0'
++ /bin/echo VIP exists.: /rac1-vip/192.168.100.47/255.255.255.0/eth0
++ /bin/awk -F/ '{ print $(NF-1)}'
+ NETMASK=255.255.255.0
++ /bin/echo VIP exists.: /rac1-vip/192.168.100.47/255.255.255.0/eth0
++ /bin/awk -F/ '{ print $NF}'
+ NETIFs=eth0
++ /bin/echo eth0
++ /bin/sed 's|:|\||g'
+ NETIFs=eth0
+ Ni=1
++ /bin/echo rac2
+ for i in '`$ECHO $NODES_LIST`'
+ NODE_NAME=rac2
++ /bin/echo rac2-vip
++ /usr/bin/cut -d, -f1
+ NODE_VIP=rac2-vip
+ NODEVIP=rac2-vip/255.255.255.0/eth0
+ /bin/echo /u01/app/oracle/product/10.2.0/crs_1/bin/srvctl add nodeapps -n rac2 -A rac2-vip/255.255.255.0/eth0 -o /u01/app/oracle/product/10.2.0/crs_1
/u01/app/oracle/product/10.2.0/crs_1/bin/srvctl add nodeapps -n rac2 -A rac2-vip/255.255.255.0/eth0 -o /u01/app/oracle/product/10.2.0/crs_1
+ /u01/app/oracle/product/10.2.0/crs_1/bin/srvctl add nodeapps -n rac2 -A rac2-vip/255.255.255.0/eth0 -o /u01/app/oracle/product/10.2.0/crs_1
++ expr 1 + 1
+ Ni=2
+ . /u01/app/oracle/product/10.2.0/crs_1/install/paramfile.crs
++ ORA_CRS_HOME=/u01/app/oracle/product/10.2.0/crs_1
++ CRS_ORACLE_OWNER=oracle
++ CRS_DBA_GROUP=oinstall
++ CRS_VNDR_CLUSTER=false
++ CRS_OCR_LOCATIONS=/dev/raw/raw1,/dev/raw/raw2
++ CRS_CLUSTER_NAME=crs
++ CRS_HOST_NAME_LIST=rac1,1,rac2,2
++ CRS_NODE_NAME_LIST=rac1,1,rac2,2
++ CRS_PRIVATE_NAME_LIST=rac1-priv,1,rac2-priv,2
++ CRS_LANGUAGE_ID=AMERICAN_AMERICA.WE8ISO8859P1
++ CRS_VOTING_DISKS=/dev/raw/raw3,/dev/raw/raw4,/dev/raw/raw5
++ CRS_NODELIST=rac1,rac2
++ CRS_NODEVIPS=rac1/rac1-vip/255.255.255.0/eth0,rac2/rac2-vip/255.255.255.0/eth0
++ eval echo '$CRS_ORACLE_OWNER'
+++ echo oracle
+ PARAM_VALUE=oracle
++ /bin/echo oracle
++ /bin/awk '/^%/ { print "false"; }'
+ valid=
+ '[' '' = false ']'
++ /bin/echo rac2
++ /bin/sed 's| |,|g'
+ COMMASEPARATED_NODE_LIST=rac2
+ /bin/su -l oracle -c '/u01/app/oracle/product/10.2.0/crs_1/bin/cluutil -sourcefile /etc/oracle/ocr.loc -destfile /u01/app/oracle/product/10.2.0/crs_1/srvm/admin/ocr.loc -nodelist rac2'
[root@rac1 install]# cat /u01/app/oracle/product/10.2.0/crs_1/opmn/conf/ons.config
localport=6113 
remoteport=6200 
loglevel=3
useocr=on

[root@rac2 ~]# cd /u01/app/oracle/product/10.2.0/crs_1/
[root@rac2 crs_1]# sh -x root.sh 
+ /u01/app/oracle/product/10.2.0/crs_1/install/rootinstall
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
+ /u01/app/oracle/product/10.2.0/crs_1/install/rootconfig
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
OCR LOCATIONS =  /dev/raw/raw1,/dev/raw/raw2
OCR backup directory '/u01/app/oracle/product/10.2.0/crs_1/cdata/crs' does not exist. Creating now
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps

Creating VIP application resource on (0) nodes.
Creating GSD application resource on (0) nodes.
Creating ONS application resource on (0) nodes.
Starting VIP application resource on (2) nodes.1:CRS-0233: Resource or relatives are currently involved with another operation.
Check the log file "/u01/app/oracle/product/10.2.0/crs_1/log/rac2/racg/ora.rac2.vip.log" for more details
..
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes.1:CRS-0233: Resource or relatives are currently involved with another operation.
Check the log file "/u01/app/oracle/product/10.2.0/crs_1/log/rac2/racg/ora.rac2.ons.log" for more details
..


Done.

The two CRS-0233 messages are usually transient; the crs_stat output below confirms that all rac2 resources came ONLINE.

[oracle@rac1 ~]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.orcl.db    application    ONLINE    ONLINE    rac1        
ora....l1.inst application    ONLINE    ONLINE    rac1        
ora....SM1.asm application    ONLINE    ONLINE    rac1        
ora....C1.lsnr application    ONLINE    ONLINE    rac1        
ora.rac1.gsd   application    ONLINE    ONLINE    rac1        
ora.rac1.ons   application    ONLINE    ONLINE    rac1        
ora.rac1.vip   application    ONLINE    ONLINE    rac1        
ora.rac2.gsd   application    ONLINE    ONLINE    rac2        
ora.rac2.ons   application    ONLINE    ONLINE    rac2        
ora.rac2.vip   application    ONLINE    ONLINE    rac2        

7. On rac1, add the remote node's ONS port back into the CRS configuration

[root@rac1 bin]# pwd
/u01/app/oracle/product/10.2.0/crs_1/bin
[root@rac1 bin]# ./racgons add_config rac2:6200

[root@rac2 crs_1]# cat /u01/app/oracle/product/10.2.0/crs_1/opmn/conf/ons.config
localport=6113 
remoteport=6200 
loglevel=3
useocr=on
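
To verify that the ONS daemon on the new node is actually up with this configuration, onsctl from the CRS home can ping it (a quick check):

[root@rac2 crs_1]# /u01/app/oracle/product/10.2.0/crs_1/opmn/bin/onsctl ping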

8. On rac1, run the node-addition wizard for the database home (GUI), then run root.sh on rac2 when prompted

[oracle@rac1 ~]$ $ORACLE_HOME/oui/bin/addNode.sh
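
As with the CRS home, this wizard can also run silently; a sketch (only the node list is needed for the database home):

[oracle@rac1 ~]$ $ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={rac2}"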

[root@rac2 crs_1]# cd /u01/app/oracle/product/10.2.0/db_1/
[root@rac2 db_1]# sh root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/10.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

9. Add the instance. There are two methods: via the GUI, which is simple and convenient, or manually.


9.1 Add the instance with the GUI:
    dbca  -->  instance management  -->  add an instance

At this point the procedure is complete.



9.2 The following is the manual method for adding the instance. If 9.1 succeeds, none of this is necessary.

On rac2, manually create the directories the ASM instance needs:

[oracle@rac2 oracle]$ mkdir -p /u01/app/oracle/admin/+ASM
[oracle@rac2 oracle]$ cd admin/+ASM/
[oracle@rac2 +ASM]$ mkdir bdump cdump hdump pfile udump

Copy the init and password files for +ASM1 and orcl1 from rac1 to rac2's $ORACLE_HOME/dbs, renaming the 1 suffix to 2:

[oracle@rac1 ~]$ scp $ORACLE_HOME/dbs/init+ASM1.ora rac2:$ORACLE_HOME/dbs/init+ASM2.ora
[oracle@rac1 ~]$ scp $ORACLE_HOME/dbs/orapw+ASM1 rac2:$ORACLE_HOME/dbs/orapw+ASM2
[oracle@rac1 ~]$ scp $ORACLE_HOME/dbs/initorcl1.ora rac2:$ORACLE_HOME/dbs/initorcl2.ora
[oracle@rac1 ~]$ scp $ORACLE_HOME/dbs/orapworcl1 rac2:$ORACLE_HOME/dbs/orapworcl2
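
If these init files are just pointers to a shared spfile (typical on 10g RAC), the rename is all that is needed; a quick check (the spfile path shown is an assumption -- whatever the file actually contains is what matters):

[oracle@rac2 ~]$ cat $ORACLE_HOME/dbs/initorcl2.ora
SPFILE='+DATA/orcl/spfileorcl.ora'

If they are full pfiles instead, edit any instance-specific entries first (e.g. +ASM1.instance_number=1 should become +ASM2.instance_number=2).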


Register ASM for rac2 in the OCR and start it (run from rac1):

[oracle@rac1 ~]$ srvctl add asm -n rac2 -i +ASM2 -o $ORACLE_HOME
[oracle@rac1 ~]$ srvctl start asm -n rac2
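
A quick status check before querying the disk groups:

[oracle@rac1 ~]$ srvctl status asm -n rac2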

On rac2, check that the ASM disk groups are mounted correctly:

[oracle@rac2 ~]$ export ORACLE_SID=+ASM2
[oracle@rac2 ~]$ 
[oracle@rac2 ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Thu Jun 4 12:11:39 2015

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> select name,state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DATA                           MOUNTED
FRA                            MOUNTED

SQL> select name, value from v$spparameter where sid = 'orcl1';

no rows selected

On rac2, manually create the required admin directories for the database (watch the ownership and permissions):

[oracle@rac2 admin]$ mkdir orcl
[oracle@rac2 admin]$ cd orcl
[oracle@rac2 orcl]$ mkdir adump bdump cdump dpdump hdump pfile udump

On rac1, create the parameters and database objects for the rac2 instance through sqlplus. This is where re-adding the original rac2 pays off: if you were instead adding a brand-new node (say rac4), the equivalent objects would have to be created for that instance anyway (dbca creates them automatically when adding an instance, but the leftover rac2 objects, such as undotbs2, would still have to be dropped from the database by hand).


alter system set instance_number=2 scope=spfile sid='orcl2';

create undo tablespace undotbs2 datafile '+DATA' size 200M;      -- optional: skip if undotbs2 still exists

alter system set undo_tablespace='undotbs2' scope=spfile sid='orcl2';

alter database add logfile thread 2 group 3 '+DATA' size 50M;

alter database add logfile thread 2 group 4 '+DATA' size 50M;

alter system set thread=2 scope=spfile sid='orcl2';

alter database enable thread 2;

(Group numbers 3 and 4 assume the usual layout for a second thread; pick numbers not already listed in v$log.)
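
Before bouncing the instances, a quick verification that the new thread and its redo groups look right (run from instance 1):

SQL> select thread#, status, enabled from v$thread;
SQL> select group#, thread#, status from v$log order by group#;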

Restart instance 1 so that the spfile changes take effect, then start instance 2.


On rac1, use netca (GUI) to configure the listener for rac2.


On rac1, use dbca (GUI) to add the rac2 instance.
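
If you prefer to stay on the command line instead of dbca, the instance still has to be registered in the OCR; a minimal sketch:

[oracle@rac1 ~]$ srvctl add instance -d orcl -i orcl2 -n rac2
[oracle@rac1 ~]$ srvctl start instance -d orcl -i orcl2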


[oracle@rac1 orcl]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.orcl.db    application    ONLINE    ONLINE    rac1        
ora....l1.inst application    ONLINE    ONLINE    rac1        
ora....l2.inst application    ONLINE    ONLINE    rac2        
ora....SM1.asm application    ONLINE    ONLINE    rac1        
ora....C1.lsnr application    ONLINE    ONLINE    rac1        
ora.rac1.gsd   application    ONLINE    ONLINE    rac1        
ora.rac1.ons   application    ONLINE    ONLINE    rac1        
ora.rac1.vip   application    ONLINE    ONLINE    rac1        
ora....SM2.asm application    ONLINE    ONLINE    rac2        
ora....C2.lsnr application    ONLINE    ONLINE    rac2        
ora.rac2.gsd   application    ONLINE    ONLINE    rac2        
ora.rac2.ons   application    ONLINE    ONLINE    rac2        
ora.rac2.vip   application    ONLINE    ONLINE    rac2       

At this point the entire procedure is complete.


References:

Steps to rejoin a node to the cluster after an OS reinstall (10g RAC):
https://blogs.oracle.com/Database4CN/entry/%E8%8A%82%E7%82%B9os%E9%87%8D%E8%A3%85%E5%90%8E%E5%8A%A0%E5%9B%9E%E9%9B%86%E7%BE%A4%E7%9A%84%E6%AD%A5%E9%AA%A4_10g_rac

Removing a Node from a 10gR1 RAC Cluster (Doc ID 269320.1)

Adding New Nodes to Your Oracle RAC 10g Cluster on Linux:
http://www.oracle.com/technetwork/articles/vallath-nodes-095339.html