11gR203 add node for RAC

This article provides a detailed workflow for Oracle RAC (Real Application Clusters) cluster node operations, covering node cleanup, the reconfiguration steps, and solutions to problems encountered along the way. It focuses on how to remove a failed node efficiently without affecting the other running nodes, keeping the cluster stable and available.

Key points, steps, and problems encountered when adding a RAC node:

Key points:

1. The other nodes do not need to be stopped; the nodes that are running are essentially unaffected.

2. No installation media needs to be downloaded; the existing homes are copied over from a running node.

Steps:

--delete

Cleaning up the failed node:

1. Reconfigure the RDBMS services in the cluster to take into account that node 2 is gone.

	1.1 Reconfigure the service plb1 so that it runs only on the remaining instance.
		[oracle@sage ~]$ srvctl modify service -d db112i -s plb1 -n -i db112i1 -f
	
	1.2 Examine the configuration to ensure the service is removed from instance db112i2 and node thyme.
	
	
	[oracle@sage ~]$ srvctl status service -d db112i -s plb1
	Service plb1 is running on instance(s) db112i1
	
	[root@sage ~]# /opt/app/oracle/product/grid/bin/crsctl stat res -t
	..
	ora.db112i.plb1.svc
	1 ONLINE ONLINE sage 
2. Reconfigure the RDBMS instances in the cluster to take into account that node 2 is gone.

	2.1 Remove the database instance. As this is an administrator-managed database, this can be performed through dbca: from the RAC Instance Management section in dbca, follow the wizard to remove the instance db112i2 from node 2, thyme.
	
	
	[oracle@sage ~]$ dbca
	[oracle@sage ~]$ 
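
	If a GUI session is not available, dbca can also perform the instance removal in silent mode. A minimal sketch, assuming the database and node names shown above (the SYS password is a placeholder):
	# a sketch: replace <sys_password> with the real SYS password; -gdbName may need
	# the domain suffix (e.g. db112i.vmdom) depending on the configuration
	dbca -silent -deleteInstance -nodeList thyme -gdbName db112i -instanceName db112i2 -sysDBAUserName sys -sysDBAPassword <sys_password>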
	
	[oracle@sage ~]$ srvctl config database -d db112i
	Database unique name: db112i
	Database name: db112i
	Oracle home: /opt/app/oracle/database/11.2/db_1
	Oracle user: oracle
	Spfile: +DATA1/db112i/spfiledb112i.ora
	Domain: vmdom
	Start options: open
	Stop options: immediate
	Database role: PRIMARY
	Management policy: AUTOMATIC
	Server pools: db112i
	Database instances: db112i1
	Disk Groups: DATA1
	Services: plb1
	Database is administrator managed
	
	[root@sage ~]# /opt/app/oracle/product/grid/bin/crsctl stat res -t
	..
	ora.db112i.db
	1 ONLINE ONLINE sage Open 
	ora.db112i.plb1.svc
	1 ONLINE ONLINE sage 
	..
	
3. Remove the Node from the RAC Cluster

	3.1 Using the installer, remove the failed node from the inventory of the remaining node(s):
	
	[oracle@sage ~]$ cd $ORACLE_HOME/oui/bin
	[oracle@sage bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/opt/app/oracle/database/11.2/db_1 "CLUSTER_NODES={sage}" 
	Starting Oracle Universal Installer...
	
	Checking swap space: must be greater than 500 MB. Actual 2601 MB Passed
	The inventory pointer is located at /etc/oraInst.loc
	The inventory is located at /opt/app/oracle/oraInventory
	'UpdateNodeList' was successful.

4. Remove the Node from the Grid Cluster
The process for removing a failed node is based on the node deletion procedures documented in the Grid Infrastructure and RAC administration guides.

From any node that you are not deleting, run the following commands from the Grid_home/bin directory as root to delete the node from the cluster:

	4.1 Stop the VIP resource for the node thyme
	
		[root@sage bin]# ./srvctl stop vip -i thyme
	
	4.2 Remove the VIP for the node thyme
	
		[root@sage bin]# ./srvctl remove vip -i thyme -f
	
	4.3 Check the state of the environment and ensure the VIP for node thyme is removed.
	
		[root@sage bin]# ./crsctl stat res -t
		..
		ora.sage.vip
		1 ONLINE ONLINE sage 
		..

	4.4 Remove node 2, thyme from the Grid Infrastructure/clusterware

		# crsctl delete node -n thyme
	
	4.5 As the owner of the Grid Infrastructure installation, perform the following to clean up the Grid Infrastructure inventory on the remaining nodes (in this case node 1, sage).
	
		[root@sage bin]# su - oracle
		[oracle@sage ~]$ . oraenv db112i1
		
		[oracle@sage ~]$ cd $ORACLE_HOME/oui/bin
		
		[oracle@sage ~]$ ./runInstaller -updateNodeList ORACLE_HOME=/opt/app/oracle/product/grid "CLUSTER_NODES={sage}" CRS=TRUE -silent
	
	4.6 As root, list the nodes that are part of the cluster to confirm that the required node (thyme) has been removed successfully and that the only remaining node in this case is sage.
	At the end of this process only the node sage remains as part of the cluster.
	
		[root@sage bin]# ./olsnodes
		sage


Deleting an existing (running) node:

1. Determine whether the node is active and whether it is pinned:
	$ olsnodes -s -t
	If the node is pinned, then run the crsctl unpin css command. Otherwise, proceed to the next step.
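	For example, if olsnodes -s -t reports the node as pinned (a sketch; node_to_be_deleted is the same placeholder used below):
	# crsctl unpin css -n node_to_be_deleted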
2. Disable the Oracle Clusterware applications and daemons running on the node.
	Run the rootcrs.pl script as root from the Grid_home/crs/install
	directory on the node to be deleted, as follows:
	# ./rootcrs.pl -deconfig -deinstall -force
	Note: Before you run this command, you must stop the EMAGENT, as follows:
	$ emctl stop dbconsole
3. From any node that you are not deleting, run the following command as root:
	# crsctl delete node -n node_to_be_deleted

4. On the node you want to delete, run the following command as the user that
	installed Oracle Clusterware, from the Grid_home/oui/bin directory, where
	node_to_be_deleted is the name of the node that you are deleting:
	$ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={node_to_be_deleted}" CRS=TRUE -silent -local

5. For a local home, deinstall the Oracle Clusterware home from the node that
	you want to delete by running the following command, where
	Grid_home is the path defined for the Oracle Clusterware home:
	$ Grid_home/deinstall/deinstall -local
	Caution: If you do not specify the -local flag, then the command removes the Grid Infrastructure home from every node in the cluster.

6. On any node other than the node you are deleting, run the following command
	from the Grid_home/oui/bin directory where remaining_nodes_list is a
	comma-delimited list of the nodes that are going to remain part of your cluster:
	$ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE -silent

--add

	1. Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment.
	2. Verify the integrity of the cluster and node3: 
		$ cluvfy stage -pre nodeadd -n node3 [-fixup [-fixupdir fixup_dir]] [-verbose]
	3. To extend the Grid Infrastructure home to node3, run addNode.sh from the Grid_home/oui/bin directory on node1. To add several nodes at once, list them all, for example "CLUSTER_NEW_NODES={node3,node4,node5}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip,node4-vip,node5-vip}".
		If you are using Grid Naming Service (GNS), run the following command:
		$ ./addNode.sh "CLUSTER_NEW_NODES={node3}"
		If you are not using GNS, run the following command:
		$ ./addNode.sh "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"
	
		Alternatively, put the new node details in a response file and pass it with -responseFile:
		$ ./addNode.sh -responseFile file_name
		$ vi file_name
		RESPONSEFILE_VERSION=2.2.1.0.0
		CLUSTER_NEW_NODES={node3}
		CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}
	4. If you have an Oracle RAC or Oracle RAC One Node database configured on the
	cluster and you have a local Oracle home, then do the following to extend the
	Oracle database home to node3:

		$ $Oracle_home/oui/bin/addNode.sh "CLUSTER_NEW_NODES={node3}"
		Then run the $Oracle_home/root.sh script on node3 as root, where Oracle_home is the Oracle RAC home.
	
	5. Run the Grid_home/root.sh script on node3 as root, and run any subsequent scripts as instructed.
	6. Run the following CVU command to check cluster integrity.
		$ cluvfy stage -post nodeadd -n node3 [-verbose]
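	After the post-check passes, you can also confirm the cluster membership from any existing node. A minimal check (node names and output are illustrative):
		$ olsnodes -s
		node1	Active
		node3	Active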

Problems encountered:

The pre-check fails and the installation cannot continue, even though the prerequisites are actually met. For example, the directory /u01/11.2.0/grid exists on the node to be added, yet the pre-check still reports it as missing.

prod02:/u01/11.2.0/grid/oui/bin$./addNode.sh -ignoreSysPrereqs -force "CLUSTER_NEW_NODES={prod01}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={prod01-vip}"

Performing pre-checks for node addition 

Checking node reachability...
Node reachability check passed from node "prod02"

:::::

Checking CRS home location...
PRVG-1013 : The path "/u01/11.2.0/grid" does not exist or cannot be created on the nodes to be added
Shared resources check for node addition failed

Check failed on nodes: 
        prod01

Checking node connectivity...


This can be worked around by setting the environment variable IGNORE_PREADDNODE_CHECKS=Y. Details below:

prod02:/u01/11.2.0/grid/oui/bin$cat addNode.sh 
#!/bin/sh
OHOME=/u01/11.2.0/grid
INVPTRLOC=$OHOME/oraInst.loc
EXIT_CODE=0
ADDNODE="$OHOME/oui/bin/runInstaller -addNode -invPtrLoc $INVPTRLOC ORACLE_HOME=$OHOME $*"
if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" -o ! -f "$OHOME/cv/cvutl/check_nodeadd.pl" ]      <<<<<<<<<脚本这里可以看到这个参数
then
        $ADDNODE
        EXIT_CODE=$?;
else
        CHECK_NODEADD="$OHOME/perl/bin/perl $OHOME/cv/cvutl/check_nodeadd.pl -pre ORACLE_HOME=$OHOME $*"
        $CHECK_NODEADD
        EXIT_CODE=$?;
        if [ $EXIT_CODE -eq 0 ]
        then
                $ADDNODE
                EXIT_CODE=$?;
        fi
fi
exit $EXIT_CODE ;
                                                              
prod02:/u01/11.2.0/grid/oui/bin$export IGNORE_PREADDNODE_CHECKS=Y
prod02:/u01/11.2.0/grid/oui/bin$./addNode.sh -ignoreSysPrereqs -force "CLUSTER_NEW_NODES={prod01}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={prod01-vip}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4727 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.


Performing tests to see whether nodes prod01 are available
............................................................... 100% Done.

.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/11.2.0/grid
   New Nodes
Space Requirements
   New Nodes
      prod01
         /u01: Required 3.91GB : Available 17.60GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11.2.0.3.0 
      Sun JDK 1.5.0.30.03 
      Installer SDK Component 11.2.0.3.0 
      Oracle One-Off Patch Installer 11.2.0.1.7 
      Oracle Universal Installer 11.2.0.3.0 
      Oracle USM Deconfiguration 11.2.0.3.0 
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0 
      Enterprise Manager Common Core Files 10.2.0.4.4 
      Oracle DBCA Deconfiguration 11.2.0.3.0 
      Oracle RAC Deconfiguration 11.2.0.3.0 
      Oracle Quality of Service Management (Server) 11.2.0.3.0 
      Installation Plugin Files 11.2.0.3.0 
      Universal Storage Manager Files 11.2.0.3.0 
      Oracle Text Required Support Files 11.2.0.3.0 
      Automatic Storage Management Assistant 11.2.0.3.0 
      Oracle Database 11g Multimedia Files 11.2.0.3.0 
      Oracle Multimedia Java Advanced Imaging 11.2.0.3.0 
      Oracle Globalization Support 11.2.0.3.0 
      Oracle Multimedia Locator RDBMS Files 11.2.0.3.0 
      Oracle Core Required Support Files 11.2.0.3.0 
      Bali Share 1.1.18.0.0 
      Oracle Database Deconfiguration 11.2.0.3.0 
      Oracle Quality of Service Management (Client) 11.2.0.3.0 
      Expat libraries 2.0.1.0.1 
      Oracle Containers for Java 11.2.0.3.0 
      Perl Modules 5.10.0.0.1 
      Secure Socket Layer 11.2.0.3.0 
      Oracle JDBC/OCI Instant Client 11.2.0.3.0 
      Oracle Multimedia Client Option 11.2.0.3.0 
      LDAP Required Support Files 11.2.0.3.0 
      Character Set Migration Utility 11.2.0.3.0 
      Perl Interpreter 5.10.0.0.2 
      PL/SQL Embedded Gateway 11.2.0.3.0 
      OLAP SQL Scripts 11.2.0.3.0 
      Database SQL Scripts 11.2.0.3.0 
      Oracle Extended Windowing Toolkit 3.4.47.0.0 
      SSL Required Support Files for InstantClient 11.2.0.3.0 
      SQL*Plus Files for Instant Client 11.2.0.3.0 
      Oracle Net Required Support Files 11.2.0.3.0 
      Oracle Database User Interface 2.2.13.0.0 
      RDBMS Required Support Files for Instant Client 11.2.0.3.0 
      RDBMS Required Support Files Runtime 11.2.0.3.0 
      XML Parser for Java 11.2.0.3.0 
      Oracle Security Developer Tools 11.2.0.3.0 
      Oracle Wallet Manager 11.2.0.3.0 
      Enterprise Manager plugin Common Files 11.2.0.3.0 
      Platform Required Support Files 11.2.0.3.0 
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0 
      RDBMS Required Support Files 11.2.0.3.0 
      Oracle Ice Browser 5.2.3.6.0 
      Oracle Help For Java 4.2.9.0.0 
      Enterprise Manager Common Files 10.2.0.4.3 
      Deinstallation Tool 11.2.0.3.0 
      Oracle Java Client 11.2.0.3.0 
      Cluster Verification Utility Files 11.2.0.3.0 
      Oracle Notification Service (eONS) 11.2.0.3.0 
      Oracle LDAP administration 11.2.0.3.0 
      Cluster Verification Utility Common Files 11.2.0.3.0 
      Oracle Clusterware RDBMS Files 11.2.0.3.0 
      Oracle Locale Builder 11.2.0.3.0 
      Oracle Globalization Support 11.2.0.3.0 
      Buildtools Common Files 11.2.0.3.0 
      Oracle RAC Required Support Files-HAS 11.2.0.3.0 
      SQL*Plus Required Support Files 11.2.0.3.0 
      XDK Required Support Files 11.2.0.3.0 
      Agent Required Support Files 10.2.0.4.3 
      Parser Generator Required Support Files 11.2.0.3.0 
      Precompiler Required Support Files 11.2.0.3.0 
      Installation Common Files 11.2.0.3.0 
      Required Support Files 11.2.0.3.0 
      Oracle JDBC/THIN Interfaces 11.2.0.3.0 
      Oracle Multimedia Locator 11.2.0.3.0 
      Oracle Multimedia 11.2.0.3.0 
      HAS Common Files 11.2.0.3.0 
      Assistant Common Files 11.2.0.3.0 
      PL/SQL 11.2.0.3.0 
      HAS Files for DB 11.2.0.3.0 
      Oracle Recovery Manager 11.2.0.3.0 
      Oracle Database Utilities 11.2.0.3.0 
      Oracle Notification Service 11.2.0.3.0 
      SQL*Plus 11.2.0.3.0 
      Oracle Netca Client 11.2.0.3.0 
      Oracle Net 11.2.0.3.0 
      Oracle JVM 11.2.0.3.0 
      Oracle Internet Directory Client 11.2.0.3.0 
      Oracle Net Listener 11.2.0.3.0 
      Cluster Ready Services Files 11.2.0.3.0 
      Oracle Database 11g 11.2.0.3.0 
-----------------------------------------------------------------------------


Instantiating scripts for add node (Monday, July 6, 2015 7:53:36 PM CST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Monday, July 6, 2015 7:53:39 PM CST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Monday, July 6, 2015 7:57:51 PM CST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/11.2.0/grid/root.sh #On nodes prod01
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
    
The Cluster Node Addition of /u01/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
prod02:/u01/11.2.0/grid/oui/bin$

