11gR2 ( Remove and Add Node for RAC

This note works through cleaning up a two-node 11gR2 cluster after the loss of node 2 (thyme): reconfiguring the services and instances, removing the node from the RAC and Grid Infrastructure configurations, and then adding a node back with

1. Reconfigure the RDBMS Services in the cluster to take into account that node 2 is gone.

	1.1 Reconfigure the Service plb1 so that it is only running on the remaining instance.
		[oracle@sage ~]$ srvctl modify service -d db112i -s plb1 -n -i db112i1 -f
	1.2 Examine the configuration to ensure the service is removed from instance db112i2 and node thyme.
	[oracle@sage ~]$ srvctl status service -d db112i -s plb1
	Service plb1 is running on instance(s) db112i1
	[root@sage ~]# /opt/app/oracle/product/grid/bin/crsctl stat res -t
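For scripted health checks, the srvctl output above can be parsed to confirm the service placement. A minimal sketch, using the sample output shown above rather than a live srvctl call (the variable names are hypothetical):

```shell
# Sample line, as captured above; on a live cluster you would use:
#   STATUS=$(srvctl status service -d db112i -s plb1)
STATUS="Service plb1 is running on instance(s) db112i1"
EXPECTED="db112i1"

# Strip everything up to and including "instance(s) " to get the list.
RUNNING=${STATUS##*"instance(s) "}

if [ "$RUNNING" = "$EXPECTED" ]; then
    echo "OK: plb1 runs only on $EXPECTED"
else
    echo "WARN: plb1 currently on: $RUNNING"
fi
```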
2. Reconfigure the RDBMS Instances in the cluster to take into account that node 2 is gone.

	2.1. Remove the database instance. As this is an administrator-managed database, this can be done through dbca: from the RAC Instance Management section, follow the wizard to remove instance db112i2 from node 2 (thyme).
	[oracle@sage ~]$ dbca
	[oracle@sage ~]$ 
	[oracle@sage ~]$ srvctl config database -d db112i
	Database unique name: db112i
	Database name: db112i
	Oracle home: /opt/app/oracle/database/11.2/db_1
	Oracle user: oracle
	Spfile: +DATA1/db112i/spfiledb112i.ora
	Domain: vmdom
	Start options: open
	Stop options: immediate
	Database role: PRIMARY
	Management policy: AUTOMATIC
	Server pools: db112i
	Database instances: db112i1
	Disk Groups: DATA1
	Services: plb1
	Database is administrator managed
	[root@sage ~]# /opt/app/oracle/product/grid/bin/crsctl stat res -t
	1 ONLINE ONLINE sage Open 
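The same confirmation can be scripted by checking that the removed instance no longer appears in the "Database instances" line. A sketch against the sample output above (live, you would grep the srvctl config database output):

```shell
# The relevant line from "srvctl config database -d db112i" above.
CONFIG="Database instances: db112i1"

# If db112i2 does not appear, the instance removal succeeded.
case "$CONFIG" in
    *db112i2*) echo "WARN: db112i2 is still registered" ;;
    *)         echo "OK: db112i2 has been removed" ;;
esac
```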
3. Remove the Node from the RAC Cluster

	3.1 Using the installer, remove the failed node from the inventory of the remaining node(s).
	[oracle@sage ~]$ cd $ORACLE_HOME/oui/bin
	[oracle@sage bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/opt/app/oracle/database/11.2/db_1 "CLUSTER_NODES={sage}" 
	Starting Oracle Universal Installer...
	Checking swap space: must be greater than 500 MB. Actual 2601 MB Passed
	The inventory pointer is located at /etc/oraInst.loc
	The inventory is located at /opt/app/oracle/oraInventory
	'UpdateNodeList' was successful.
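For repeatability, the -updateNodeList invocation can be assembled from a variable holding the surviving node list. A hypothetical dry-run sketch (echo prints the command for review instead of executing it; paths match the environment above):

```shell
# Paths and node list match the environment above; adjust as needed.
ORACLE_HOME=/opt/app/oracle/database/11.2/db_1
REMAINING="sage"    # comma-delimited list of nodes that stay

CMD="$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME \"CLUSTER_NODES={$REMAINING}\""
echo "$CMD"         # dry run: review before executing
```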

4. Remove the Node from the Grid Cluster
The process for performing the removal of a failed node has been based on the node deletion processes documented in the Grid and RAC administration guides.

From any node that you are not deleting, run the following commands from the Grid_home/bin directory as root to delete the node from the cluster:

	4.1 Stop the VIP resource for the node thyme
		[root@sage bin]# ./srvctl stop vip -i thyme
	4.2 Remove the VIP for the node thyme
		[root@sage bin]# ./srvctl remove vip -i thyme -f
	4.3 Check the state of the environment and ensure the VIP for node thyme is removed.
		[root@sage bin]# ./crsctl stat res -t
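Steps 4.1 and 4.2 can be previewed as a dry run before touching the cluster; this hypothetical loop only prints the srvctl commands:

```shell
NODE=thyme          # the failed node

for STEP in "stop vip -i $NODE" "remove vip -i $NODE -f"; do
    echo "srvctl $STEP"     # dry run: print, do not execute
done
```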

	4.4 Remove node 2, thyme from the Grid Infrastructure/clusterware

		# crsctl delete node -n thyme
	4.5 As the owner of the Grid Infrastructure Installation perform the following to clean up the Grid Infrastructure inventory on the remaining nodes (in this case node 1, sage).
		[root@sage bin]# su - oracle
		[oracle@sage ~]$ . oraenv db112i1
		[oracle@sage ~]$ cd $ORACLE_HOME/oui/bin
		[oracle@sage ~]$ ./runInstaller -updateNodeList ORACLE_HOME=/opt/app/oracle/product/grid "CLUSTER_NODES={sage}" CRS=TRUE -silent
	4.6 As root, list the nodes that are part of the cluster to confirm that thyme has been removed and that sage is the only remaining node.
		[root@sage bin]# ./olsnodes
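A scripted version of this confirmation compares the olsnodes output with the node that was deleted. A sketch using a sample value (live, you would capture NODES=$(olsnodes) on a surviving node):

```shell
NODES="sage"        # sample olsnodes output after the deletion
REMOVED="thyme"

case "$NODES" in
    *"$REMOVED"*) RESULT="still-present" ;;
    *)            RESULT="removed" ;;
esac
echo "node $REMOVED: $RESULT (cluster now: $NODES)"
```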


For reference, the generic node-deletion steps from the Clusterware documentation are:

	1. Determine whether the node to be deleted is pinned:
		$ olsnodes -s -t
	If the node is pinned, then run the crsctl unpin css command. Otherwise, proceed to the next step.
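The pinned check can be sketched as a small helper that inspects one row of olsnodes -s -t output (the sample line and variable names are assumptions):

```shell
# Format of an "olsnodes -s -t" row: <node> <status> <pinned-state>
LINE="thyme Inactive Pinned"

NODE=${LINE%% *}    # first field: node name
PIN=${LINE##* }     # last field: Pinned/Unpinned

if [ "$PIN" = "Pinned" ]; then
    echo "crsctl unpin css -n $NODE"    # dry run: run this as root first
else
    echo "node $NODE is not pinned; continue with the deletion"
fi
```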
	2. Disable the Oracle Clusterware applications and daemons running on the node. Run the script as root from the Grid_home/crs/install directory on the node to be deleted, as follows:
		# ./ -deconfig -deinstall -force
	Note: Before you run this command, stop the EM agent, as follows:
		$ emctl stop dbconsole
	3. From any node that you are not deleting, run the following command as root, where node_to_be_deleted is the node being removed:
		# crsctl delete node -n node_to_be_deleted

	4. On the node you want to delete, run the following command from the Grid_home/oui/bin directory, as the user that installed Oracle Clusterware, where node_to_be_deleted is the name of the node that you are deleting:
		$ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={node_to_be_deleted}" CRS=TRUE -silent -local

	5. For a local home, deinstall the Oracle Clusterware home from the node that you want to delete by running the following command, where Grid_home is the path defined for the Oracle Clusterware home:
		$ Grid_home/deinstall/deinstall -local
	Caution: If you do not specify the -local flag, the command removes the Grid Infrastructure home from every node in the cluster.

	6. On any node other than the node you are deleting, run the following command from the Grid_home/oui/bin directory, where remaining_nodes_list is a comma-delimited list of the nodes that are to remain part of your cluster:
		$ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE -silent
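Steps 2, 3, and 6 above can be collected into a hedged dry-run script. The node names, home paths, and the run helper are assumptions for illustration; echo keeps every command reviewable instead of executed:

```shell
NODE=thyme                      # node being deleted (assumption)
GRID_HOME=/u01/11.2.0/grid      # assumed Grid home path
REMAINING="sage"                # comma-delimited surviving nodes

run() { echo "$@"; }            # dry run: swap echo for real execution

# Step 2: deconfigure clusterware (as root, on the node being deleted)
run "$GRID_HOME/crs/install/ -deconfig -force"
# Step 3: delete the node (as root, on a surviving node)
run "crsctl delete node -n $NODE"
# Step 6: update the inventory on the surviving nodes
run "$GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME \"CLUSTER_NODES={$REMAINING}\" CRS=TRUE -silent"
```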


5. Add the Node Back to the Cluster
The steps below follow the documented node-addition procedure.

	1. Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment.
	2. Verify the integrity of the cluster and node3: 
		$ cluvfy stage -pre nodeadd -n node3 [-fixup [-fixupdir fixup_dir]] [-verbose]
	3. To extend the Grid Infrastructure home to node3, run from the Grid_home/oui/bin directory on node1.
	If you are using Grid Naming Service (GNS), run:
		$ ./ "CLUSTER_NEW_NODES={node3}"
	If you are not using GNS, run:
		$ ./ "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"
	(To add several nodes at once, list them all, for example "CLUSTER_NEW_NODES={node3,node4,node5}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip,node4-vip,node5-vip}".)
	Alternatively, supply the values in a response file:
		$ ./ -responseFile file_name
	4. If you have an Oracle RAC or Oracle RAC One Node database configured on the cluster and you have a local Oracle home, then do the following to extend the Oracle database home to node3:
		$ $Oracle_home/oui/bin/ "CLUSTER_NEW_NODES={node3}"
		Then run the $Oracle_home/ script on node3 as root, where Oracle_home is the Oracle RAC home.
	5. Run the Grid_home/ script on node3 as root, and run the subsequent scripts, as instructed.
	6. Run the following CVU command to check cluster integrity.
		$ cluvfy stage -post nodeadd -n node3 [-verbose]
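The addition checklist can likewise be sketched as a dry run; the node name, home path, and run helper are assumptions, and the echoed commands mirror steps 2, 3, 5, and 6 above:

```shell
NEW=node3                       # node being added (assumption)
GRID_HOME=/u01/11.2.0/grid      # assumed Grid home path

run() { echo "$@"; }            # dry run: print instead of execute

run "cluvfy stage -pre nodeadd -n $NEW -verbose"
run "$GRID_HOME/oui/bin/ \"CLUSTER_NEW_NODES={$NEW}\" \"CLUSTER_NEW_VIRTUAL_HOSTNAMES={$NEW-vip}\""
run "$GRID_HOME/"       # as root, on $NEW
run "cluvfy stage -post nodeadd -n $NEW -verbose"
```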



The following is an example run of, here adding node prod01 from node prod02:

prod02:/u01/11.2.0/grid/oui/bin$ ./ -ignoreSysPrereqs -force "CLUSTER_NEW_NODES={prod01}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={prod01-vip}"

Performing pre-checks for node addition 

Checking node reachability...
Node reachability check passed from node "prod02"


Checking CRS home location...
PRVG-1013 : The path "/u01/11.2.0/grid" does not exist or cannot be created on the nodes to be added
Shared resources check for node addition failed

Check failed on nodes: 

Checking node connectivity...


Because the pre-add-node check failed, the script itself was examined. The relevant excerpt shows that the check can be bypassed with the IGNORE_PREADDNODE_CHECKS environment variable:

	ADDNODE="$OHOME/oui/bin/runInstaller -addNode -invPtrLoc $INVPTRLOC ORACLE_HOME=$OHOME $*"
	if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" -o ! -f "$OHOME/cv/cvutl/check_nodeadd.pl" ]      <<< the parameter can be seen here in the script
	        CHECK_NODEADD="$OHOME/perl/bin/perl $OHOME/cv/cvutl/check_nodeadd.pl -pre ORACLE_HOME=$OHOME $*"
	        if [ $EXIT_CODE -eq 0 ]
	exit $EXIT_CODE ;
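The bypass branch can be reproduced stand-alone to see the effect of the variable; CHECKER is a stand-in path for this sketch, not the real grid home:

```shell
IGNORE_PREADDNODE_CHECKS=Y
CHECKER=/nonexistent/check_nodeadd.pl   # stand-in for the checker path

# Mirrors the branch in when the variable is Y (or the
# checker script is missing), the pre-add-node verification is skipped.
if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" ] || [ ! -f "$CHECKER" ]; then
    DECISION="skip-prechecks"
else
    DECISION="run-prechecks"
fi
echo "$DECISION"
```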
prod02:/u01/11.2.0/grid/oui/bin$ export IGNORE_PREADDNODE_CHECKS=Y
prod02:/u01/11.2.0/grid/oui/bin$ ./ -ignoreSysPrereqs -force "CLUSTER_NEW_NODES={prod01}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={prod01-vip}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4727 MB    Passed
Oracle Universal Installer, Version Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.

Performing tests to see whether nodes prod01 are available
............................................................... 100% Done.

Cluster Node Addition Summary
Global Settings
   Source: /u01/11.2.0/grid
   New Nodes
Space Requirements
   New Nodes
         /u01: Required 3.91GB : Available 17.60GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 
      Sun JDK 
      Installer SDK Component 
      Oracle One-Off Patch Installer 
      Oracle Universal Installer 
      Oracle USM Deconfiguration 
      Oracle Configuration Manager Deconfiguration 
      Enterprise Manager Common Core Files 
      Oracle DBCA Deconfiguration 
      Oracle RAC Deconfiguration 
      Oracle Quality of Service Management (Server) 
      Installation Plugin Files 
      Universal Storage Manager Files 
      Oracle Text Required Support Files 
      Automatic Storage Management Assistant 
      Oracle Database 11g Multimedia Files 
      Oracle Multimedia Java Advanced Imaging 
      Oracle Globalization Support 
      Oracle Multimedia Locator RDBMS Files 
      Oracle Core Required Support Files 
      Bali Share 
      Oracle Database Deconfiguration 
      Oracle Quality of Service Management (Client) 
      Expat libraries 
      Oracle Containers for Java 
      Perl Modules 
      Secure Socket Layer 
      Oracle JDBC/OCI Instant Client 
      Oracle Multimedia Client Option 
      LDAP Required Support Files 
      Character Set Migration Utility 
      Perl Interpreter 
      PL/SQL Embedded Gateway 
      OLAP SQL Scripts 
      Database SQL Scripts 
      Oracle Extended Windowing Toolkit 
      SSL Required Support Files for InstantClient 
      SQL*Plus Files for Instant Client 
      Oracle Net Required Support Files 
      Oracle Database User Interface 
      RDBMS Required Support Files for Instant Client 
      RDBMS Required Support Files Runtime 
      XML Parser for Java 
      Oracle Security Developer Tools 
      Oracle Wallet Manager 
      Enterprise Manager plugin Common Files 
      Platform Required Support Files 
      Oracle JFC Extended Windowing Toolkit 
      RDBMS Required Support Files 
      Oracle Ice Browser 
      Oracle Help For Java 
      Enterprise Manager Common Files 
      Deinstallation Tool 
      Oracle Java Client 
      Cluster Verification Utility Files 
      Oracle Notification Service (eONS) 
      Oracle LDAP administration 
      Cluster Verification Utility Common Files 
      Oracle Clusterware RDBMS Files 
      Oracle Locale Builder 
      Oracle Globalization Support 
      Buildtools Common Files 
      Oracle RAC Required Support Files-HAS 
      SQL*Plus Required Support Files 
      XDK Required Support Files 
      Agent Required Support Files 
      Parser Generator Required Support Files 
      Precompiler Required Support Files 
      Installation Common Files 
      Required Support Files 
      Oracle JDBC/THIN Interfaces 
      Oracle Multimedia Locator 
      Oracle Multimedia 
      HAS Common Files 
      Assistant Common Files 
      HAS Files for DB 
      Oracle Recovery Manager 
      Oracle Database Utilities 
      Oracle Notification Service 
      Oracle Netca Client 
      Oracle Net 
      Oracle JVM 
      Oracle Internet Directory Client 
      Oracle Net Listener 
      Cluster Ready Services Files 
      Oracle Database 11g 

Instantiating scripts for add node (Monday, July 6, 2015 7:53:36 PM CST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Monday, July 6, 2015 7:53:39 PM CST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Monday, July 6, 2015 7:57:51 PM CST)
.                                                               100% Done.
Save inventory complete
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/11.2.0/grid/ #On nodes prod01
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
