Adding a Cluster Node on Linux and UNIX Systems
This procedure describes how to add a node to your cluster. This procedure assumes that:

- There is an existing cluster with two nodes named node1 and node2.
- You are adding a node named node3 using a virtual node name, node3-vip, that resolves to an IP address, if you are not using Grid Naming Service (GNS).
- You have successfully installed Oracle Clusterware on node1 and node2 in a local (non-shared) home, where Grid_home represents the successfully installed home.
- Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment. To perform the following procedure, Grid_home must identify your successfully installed Oracle Clusterware home.

  See Also: Oracle Grid Infrastructure Installation Guide for Oracle Clusterware installation instructions
- Verify the integrity of the cluster and node3:

  $ cluvfy stage -pre nodeadd -n node3 [-fixup [-fixupdir fixup_dir]] [-verbose]

  You can specify the -fixup option and a directory into which CVU prints instructions to fix the cluster or node if the verification fails.
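As a concrete sketch of the pre-check, the following invocation enables fixup-script generation; the fixup directory path is a hypothetical example:

```shell
# Run as the Oracle Clusterware installation owner on an existing node.
# /tmp/cvu_fixup is an illustrative directory for generated fixup scripts.
cluvfy stage -pre nodeadd -n node3 -fixup -fixupdir /tmp/cvu_fixup -verbose
```

If the verification reports fixable failures, CVU writes instructions and scripts into the specified directory for you to run as root before retrying.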
- To extend the Oracle Grid Infrastructure home to node3, navigate to the Grid_home/oui/bin directory on node1 and run the addNode.sh script as the user that installed Oracle Clusterware using the following syntax, where node3 is the name of the node that you are adding and node3-vip is the VIP name for the node:

  If you are using Grid Naming Service (GNS), run the following command:

  $ ./addNode.sh "CLUSTER_NEW_NODES={node3}"

  If you are not using GNS, run the following command:

  $ ./addNode.sh "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"

  Note: You can specify multiple nodes for the CLUSTER_NEW_NODES and the CLUSTER_NEW_VIRTUAL_HOSTNAMES parameters by entering a comma-separated list of nodes between the braces. For example:

  "CLUSTER_NEW_NODES={node3,node4,node5}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip,node4-vip,node5-vip}"
  Alternatively, you can specify the entries shown in Example 4-1 in a response file, where file_name is the name of the file, and run the addNode.sh script, as follows:

  $ ./addNode.sh -responseFile file_name

  Example 4-1 Response File Entries for Adding Oracle Clusterware Home

  RESPONSEFILE_VERSION=2.2.1.0.0
  CLUSTER_NEW_NODES={node3}
  CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}

  See Also: Oracle Universal Installer and OPatch User's Guide for details about how to configure command-line response files

  Notes:

  - If you are not using Oracle Grid Naming Service (GNS), then you must add the name and IP address of node3 to DNS.
  - Command-line values always override response file values.
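The response file is plain text, so it can be created with any editor or scripted. The sketch below writes the Example 4-1 entries with a here-document; the file path /tmp/add_node.rsp is only an illustration:

```shell
# Create a response file containing the Example 4-1 entries.
cat > /tmp/add_node.rsp <<'EOF'
RESPONSEFILE_VERSION=2.2.1.0.0
CLUSTER_NEW_NODES={node3}
CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}
EOF

# Then, from the Grid_home/oui/bin directory on node1:
# ./addNode.sh -responseFile /tmp/add_node.rsp
```

Because command-line values override response file values, any parameter you also pass on the addNode.sh command line takes precedence over the file.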
- If you have an Oracle RAC or Oracle RAC One Node database configured on the cluster and you have a local Oracle home, then do the following to extend the Oracle database home to node3:

  - Navigate to the Oracle_home/oui/bin directory on node1 and run the addNode.sh script as the user that installed Oracle RAC using the following syntax:

    $ ./addNode.sh "CLUSTER_NEW_NODES={node3}"

  - Run the Oracle_home/root.sh script on node3 as root, where Oracle_home is the Oracle RAC home.
  If you have a shared Oracle home that is shared using Oracle Automatic Storage Management Cluster File System (Oracle ACFS), then do the following as the user that installed Oracle RAC to extend the Oracle database home to node3:

  - Start the Oracle ACFS resource on the new node by running the following command as root from the Grid_home/bin directory:

    # srvctl start filesystem -d volume_device_name [-n node_name]

    Note: Make sure the Oracle ACFS resources, including the Oracle ACFS registry resource and the Oracle ACFS file system resource where the ORACLE_HOME is located, are online on the newly added node.

  - Navigate to the Oracle_home/oui/bin directory on node1 and run the addNode.sh script as the user that installed Oracle RAC using the following syntax:

    $ ./addNode.sh -noCopy "CLUSTER_NEW_NODES={node3}"

    Note: Use the -noCopy option because the Oracle home on the destination node is already fully populated with software.
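As a concrete sketch of the Oracle ACFS start-and-verify step, with a hypothetical volume device name (use srvctl config filesystem on your cluster to find the real device):

```shell
# Run as root from the Grid_home/bin directory.
# /dev/asm/acfsvol-123 is a hypothetical volume device name.
./srvctl start filesystem -d /dev/asm/acfsvol-123 -n node3

# Confirm the file system resource is online on the new node
# before running addNode.sh with the -noCopy option.
./srvctl status filesystem -d /dev/asm/acfsvol-123
```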
  If you have a shared Oracle home on a shared file system that is not Oracle ACFS, then you must first create a mount point for the Oracle RAC database home on the target node, mount and attach the Oracle RAC database home, and update the Oracle Inventory, as follows:

  - Run the srvctl config db -d db_name command on an existing node in the cluster to obtain the mount point information.
  - Run the following command as root on node3 to create the mount point:

    # mkdir -p mount_point_path

  - Mount the file system that hosts the Oracle RAC database home.
  - Run the following command as the user that installed Oracle RAC from the Oracle_home/oui/bin directory on the node you are adding to add the Oracle RAC database home:

    $ ./runInstaller -attachHome ORACLE_HOME="ORACLE_HOME" "CLUSTER_NODES={node_list}" LOCAL_NODE="node_name"

  - Update the Oracle Inventory as the user that installed Oracle RAC, as follows:

    $ ./runInstaller -updateNodeList ORACLE_HOME=mount_point_path "CLUSTER_NODES={node_list}"

    In the preceding command, node_list refers to a list of all nodes where the Oracle RAC database home is installed, including the node you are adding.
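With concrete values filled in, the attach and inventory-update steps for a three-node cluster might look like the sketch below; the Oracle home path and node names are assumptions for illustration only:

```shell
# Run as the Oracle RAC installation owner from the Oracle_home/oui/bin
# directory on the node you are adding (node3 here).
# The Oracle home path below is a hypothetical example.
./runInstaller -attachHome ORACLE_HOME="/u01/app/oracle/product/11.2.0/dbhome_1" \
    "CLUSTER_NODES={node1,node2,node3}" LOCAL_NODE="node3"

# Update the Oracle Inventory with the full node list,
# including the newly added node.
./runInstaller -updateNodeList ORACLE_HOME="/u01/app/oracle/product/11.2.0/dbhome_1" \
    "CLUSTER_NODES={node1,node2,node3}"
```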
- Run the Grid_home/root.sh script on node3 as root, and run any subsequent scripts as instructed.
- Run the following CVU command as the user that installed Oracle Clusterware to check cluster integrity. This command verifies that any number of specified nodes has been successfully added to the cluster at the network, shared storage, and clusterware levels:

  $ cluvfy stage -post nodeadd -n node3 [-verbose]

  See Also: "cluvfy stage [-pre | -post] nodeadd" for more information about this CVU command
- Check whether a policy-managed or administrator-managed Oracle RAC database is configured to run on node3 (the newly added node). If you configured an administrator-managed Oracle RAC database, you may need to use DBCA to add a database instance that runs on the newly added node.

  See Also: Oracle Real Application Clusters Administration and Deployment Guide for more information about using DBCA to add administrator-managed Oracle RAC database instances
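For an administrator-managed database, one way to add the instance is DBCA in silent mode. The sketch below is illustrative only: the database name orcl and instance name orcl3 are assumptions, and DBCA prompts for or accepts the SYSDBA credentials:

```shell
# Run as the Oracle RAC installation owner.
# orcl and orcl3 are hypothetical database and instance names.
dbca -silent -addInstance -nodeList node3 \
    -gdbName orcl -instanceName orcl3 \
    -sysDBAUserName sys
```

Alternatively, you can run DBCA interactively and choose the Instance Management operation to add an instance on the new node.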