Removing a Node from a 10gR1 RAC Cluster [ID 269320.1]

 Modified 20-OCT-2010     Type BULLETIN     Status PUBLISHED 


Note:  This article is only relevant for 10gR1 RAC environments.  

For 10gR2 RAC environments please follow the documented procedures in the manual: 
Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide
10g Release 2 (10.2)
Part Number B14197-03

PURPOSE
-------

The purpose of this note is to provide the user with a document that 
can be used as a guide to remove a cluster node from an Oracle 10g Real
Application Clusters (RAC) environment.
 
SCOPE & APPLICATION
-------------------

This document can be used by DBAs and support analysts who need to 
either remove a cluster node or assist another in removing a cluster
node in a 10g Unix Real Application Clusters environment.

REMOVING A NODE FROM A 10g RAC CLUSTER
--------------------------------------

If you have to remove a node from a RAC 10g database, a certain amount 
of cleanup needs to be done even if the node will no longer be 
available to the environment.  The remaining nodes need to be informed 
of the change of status of the departing node.  Any step that must be 
run on the node being removed can be skipped if that node is no 
longer available.

The three most important steps that need to be followed are:

A.	Remove the instance using DBCA.
B.	Remove the node from the cluster.
C.	Reconfigure the OS and remaining hardware.

Here is a breakdown of the above steps.

A.	Remove the instance using DBCA.
--------------------------------------

1.      Verify that you have a good backup of the OCR (Oracle Configuration
        Repository) using ocrconfig -showbackup.
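
        For example, as the root user on one of the surviving nodes (the
        backup locations and timestamps shown will vary by environment):

        # cd <CRS_HOME>/bin
        # ./ocrconfig -showbackup

        If no recent automatic backup is listed, a manual backup can be
        taken with "ocrconfig -export <file>" before continuing.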

2.	Run DBCA from one of the nodes you are going to keep.  Leave the    
        database up and also leave the departing instance up and running.

3.	Choose "Instance Management"

4.	Choose "Delete an instance"

5.      On the next screen, select the cluster database from which you
	will delete an instance.  Supply the system privilege username
        and password.

6.	On the next screen, a list of cluster database instances will 
        appear.  Highlight the instance you would like to delete then 
        click next.

7.      If you have services configured, reassign them.  Modify each 
        service so that it can run on one of the remaining instances, 
        and set it to "Not Used" for the instance that is to be deleted.
        Click Finish.
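
        Once DBCA has finished, the new service placement can optionally 
        be confirmed from the command line (a quick check; substitute 
        your database name):

        srvctl config service -d <db_name>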

8.      If your database is in archive log mode you may encounter the 
        following errors:
        ORA-350  
        ORA-312  
        This may occur because DBCA cannot drop the current log, as 
        it needs archiving.  This issue is fixed in the 10.1.0.3 
        patchset.  Prior to that patchset you should click the 
        Ignore button and, when DBCA completes, manually archive 
        the logs for the deleted instance and drop the log group.

        SQL>  alter system archive log all;
        SQL>  alter database drop logfile group 2;  

9.	Verify that the dropped instance's redo thread has been removed by
	querying v$log.  If for any reason the redo thread is not disabled 
        then disable the thread.  

        SQL> alter database disable thread 2;
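
        To confirm, you can also check which redo threads still have 
        log groups and whether the thread is now disabled (a simple 
        verification using the standard v$log and v$thread views):

        SQL> select thread#, group#, status from v$log;
        SQL> select thread#, status, enabled from v$thread;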

10.     Verify that the instance was removed from the OCR (Oracle 
        Configuration Repository) with the following commands:

   	srvctl config database -d <db_name>
	cd <CRS_HOME>/bin
	./crs_stat

11.	If this node had an ASM instance and the node will no longer be a 
	part of the cluster you will now need to remove the ASM instance with:

        srvctl stop asm -n <nodename>
        srvctl remove asm -n <nodename>

        Verify that ASM is removed with:

	srvctl config asm -n <nodename>


B.	Remove the Node from the Cluster
----------------------------------------

Once the instance has been deleted, removing the node from the cluster 
is a manual process.  This is accomplished by running scripts on the 
deleted node to remove the CRS install, as well as scripts on the 
remaining nodes to update the node list.  The following 
steps assume that the node to be removed is still functioning.

1.      To delete the node, first stop the nodeapps on the node you are 
        removing.  Assuming that you have already removed the ASM 
        instance, run the following as the root user on a remaining node:

        # srvctl stop nodeapps -n <nodename>

2.      Run netca.  Choose "Cluster Configuration". 

3.	Only select the node you are removing and click next.

4.	Choose "Listener Configuration" and click next.

5. 	Choose "Delete" and delete any listeners configured on the node 
	you are removing.
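
        After netca completes, you can optionally verify that no 
        listener resource remains registered for the departing node 
        (in 10g the listener resource is typically named 
        ora.<nodename>.LISTENER_<NODENAME>.lsnr, but the exact name 
        may vary):

        <CRS_HOME>/bin/crs_stat | grep -i lsnr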

6.  	Run <CRS_HOME>/bin/crs_stat.  Make sure that all database 
	resources are running on nodes that are going to be kept.  For
	example:

	NAME=ora.<db_name>.db
	TYPE=application
	TARGET=ONLINE
	STATE=ONLINE on <node2>

        Ensure that this resource is not running on a node that will be 
        removed.  Use <CRS_HOME>/bin/crs_relocate to relocate it if necessary.
	Example:

	crs_relocate ora.<db_name>.db

7. 	As the root user, remove the nodeapps on the node you are removing.

        # srvctl remove nodeapps -n <nodename>

8.      Next, as the oracle user, run the installer with the 
        updateNodeList option on any remaining node in the cluster.

        a.  DISPLAY=ipaddress:0.0; export DISPLAY
        This should be set even though the GUI does not run.
        
        b.  $ORACLE_HOME/oui/bin/runInstaller -updateNodeList \
            ORACLE_HOME=<Oracle Home Location> \
            CLUSTER_NODES=<node1>,<node3>,<node4>

        This command updates the Oracle inventory with the list of nodes 
        on which the RDBMS $ORACLE_HOME is now installed.  If there is 
        no $ORACLE_HOME this step can be skipped.

9.      As the root user, finish the removal on the node that is being 
        removed.  The following command will stop the CRS stack 
        and delete the ocr.loc file on the node to be removed.  The 
        nosharedvar option assumes the ocr.loc file is not on a shared 
        file system.  If it does exist on a shared file system then 
        specify sharedvar instead.  The nosharedhome option specifies 
        that the CRS_HOME is on a local filesystem.  If the CRS_HOME is 
        on a shared file system, specify sharedhome instead.
        Run the rootdelete.sh script from <CRS_HOME>/install.  Example:

        # cd <CRS_HOME>/install
        # ./rootdelete.sh local nosharedvar nosharedhome

10.     On a node that will be kept, the root user should run the 
        rootdeletenode.sh script from the <CRS_HOME>/install directory.  
        When running this script from the CRS home, specify both the node
        name and the node number.  The node name and the node number are 
        visible in the output of olsnodes -n.  Also, do NOT put a space 
        after the comma between the two. 
 
	# olsnodes -n
	<node1>       1
	<node2>       2

        # cd <CRS_HOME>/install
	# ./rootdeletenode.sh <node2name>,2

11.	Confirm success by running OLSNODES.

        <CRS_HOME>/bin>: ./olsnodes -n
        <node1>	      1

12.	Now switch back to the oracle user account and run the same 
        runInstaller command as before.  Run it this time from the 
        <CRS_HOME> instead of the ORACLE_HOME.  Specify all of the 
        remaining nodes.

        a.  DISPLAY=ipaddress:0.0; export DISPLAY

        b.  <CRS_HOME>/oui/bin/runInstaller -updateNodeList \
            ORACLE_HOME=<CRS Home> \
            CLUSTER_NODES=<node1>,<node3>,<node4> CRS=TRUE

        This command updates the Oracle inventory with the list of nodes 
        on which the CRS_HOME is now installed.  

13.	Once the node updates are done you will need to manually delete
        the $ORACLE_HOME and $CRS_HOME from the node to be expunged, 
        unless, of course, either of these is on a shared file system 
        that is still being used.

        a.  $ORACLE_HOME>: rm -rf *
        b.  $CRS_HOME> : rm -rf *   (as root)

14.	Next, as root, from the deleted node, verify that all init scripts
	and soft links are removed:
        
Sun:

	rm /etc/init.d/init.cssd 
	rm /etc/init.d/init.crs 
	rm /etc/init.d/init.crsd 
	rm /etc/init.d/init.evmd 
	rm /etc/rc3.d/K96init.crs
	rm /etc/rc3.d/S96init.crs
        rm -Rf /var/opt/oracle/scls_scr 
        rm -Rf /var/opt/oracle/oprocd

Linux:

	rm -f /etc/init.d/init.cssd 
	rm -f /etc/init.d/init.crs 
	rm -f /etc/init.d/init.crsd 
	rm -f /etc/init.d/init.evmd 
	rm -f /etc/rc2.d/K96init.crs
	rm -f /etc/rc2.d/S96init.crs
	rm -f /etc/rc3.d/K96init.crs
	rm -f /etc/rc3.d/S96init.crs
	rm -f /etc/rc5.d/K96init.crs
	rm -f /etc/rc5.d/S96init.crs
        rm -Rf /etc/oracle/scls_scr

HP-UX:

	rm /sbin/init.d/init.cssd 
	rm /sbin/init.d/init.crs 
	rm /sbin/init.d/init.crsd 
	rm /sbin/init.d/init.evmd 
	rm /sbin/rc3.d/K960init.crs
	rm /sbin/rc3.d/S960init.crs
	rm /sbin/rc2.d/K960init.crs
	rm /sbin/rc2.d/K001init.crs
        rm -Rf /var/opt/oracle/scls_scr 
        rm -Rf /var/opt/oracle/oprocd

HP Tru64:

	rm /sbin/init.d/init.cssd 
	rm /sbin/init.d/init.crs 
	rm /sbin/init.d/init.crsd 
	rm /sbin/init.d/init.evmd 
	rm /sbin/rc3.d/K96init.crs
	rm /sbin/rc3.d/S96init.crs
        rm -Rf /var/opt/oracle/scls_scr 
        rm -Rf /var/opt/oracle/oprocd

IBM AIX:

	rm /etc/init.cssd 
	rm /etc/init.crs 
	rm /etc/init.crsd 
	rm /etc/init.evmd 
	rm /etc/rc.d/rc2.d/K96init.crs
	rm /etc/rc.d/rc2.d/S96init.crs
        rm -Rf /etc/oracle/scls_scr
        rm -Rf /etc/oracle/oprocd

15.     You can also remove the /etc/oracle directory, the 
        /etc/oratab file, and the Oracle inventory (if desired).
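
        For example, as root on the deleted node (a minimal sketch 
        assuming a Linux system where the central inventory location 
        is recorded in /etc/oraInst.loc; adjust the paths for your 
        platform):

        # rm -rf /etc/oracle
        # rm -f /etc/oratab
        # cat /etc/oraInst.loc        (inventory_loc shows the inventory path)
        # rm -rf <inventory_loc>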

16.     To remove an ADDITIONAL ORACLE_HOME, ASM_HOME, or EM_HOME from the 
        inventory on all remaining nodes, run the installer to update the 
        node list.  Example (if removing node 2):

        runInstaller -updateNodeList -local \
        ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=node1,node3,node4

        (If you are using private home installations, please ignore the 
        "-local" flag.)



RELATED DOCUMENTS
-----------------

Oracle® Real Application Clusters Administrator's Guide 10g Release 1 (10.1)
Part Number B10765-02
Chapter 5
Oracle Press Series: Oracle Database 10g High Availability, Chapter 5, pages 28-34.
Note 239998.1
Oracle Clusterware and RAC Admin and Deployment Guide - Ch. 10 and 11