Adding a RAC Node

Adding a Cluster Node on Linux and UNIX Systems

This procedure describes how to add a node to your cluster. This procedure assumes that:

  • There is an existing cluster with two nodes named node1 and node2

  • You are adding a node named node3 and, if you are not using Grid Naming Service (GNS), a virtual node name, node3-vip, that resolves to an IP address

  • You have successfully installed Oracle Clusterware on node1 and node2 in a local (non-shared) home, where Grid_home represents the successfully installed home

To add a node:

  1. Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment. To perform the following procedure, Grid_home must identify your successfully installed Oracle Clusterware home.

    See Also:

    Oracle Grid Infrastructure Installation Guide for Oracle Clusterware installation instructions
  2. Verify the integrity of the cluster and node3:

    $ cluvfy stage -pre nodeadd -n node3 [-fixup [-fixupdir fixup_dir]] [-verbose]
    

    You can specify the -fixup option and a directory into which CVU prints instructions to fix the cluster or node if the verification fails.
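
    For example, with a hypothetical scratch directory such as /tmp/cvu_fixup, a verbose run that also generates fixup instructions might look like this:

    $ cluvfy stage -pre nodeadd -n node3 -fixup -fixupdir /tmp/cvu_fixup -verbose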

  3. To extend the Oracle Grid Infrastructure home to node3, navigate to the Grid_home/oui/bin directory on node1 and run the addNode.sh script as the user that installed Oracle Clusterware, using the following syntax, where node3 is the name of the node that you are adding and node3-vip is the VIP name for the node:

    If you are using Grid Naming Service (GNS), run the following command:

    $ ./addNode.sh "CLUSTER_NEW_NODES={node3}"
    

    If you are not using GNS, run the following command:

    $ ./addNode.sh "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_
    HOSTNAMES={node3-vip}"
    

    Note:

    You can specify multiple nodes for the CLUSTER_NEW_NODES and the CLUSTER_NEW_VIRTUAL_HOSTNAMES parameters by entering a comma-separated list of nodes between the braces. For example:
    "CLUSTER_NEW_NODES={node3,node4,node5}"
    "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip,node4-vip,node5-vip}"
    

    Alternatively, you can specify the entries shown in Example 4-1 in a response file, where file_name is the name of the file, and run the addNode.sh script, as follows:

    $ ./addNode.sh -responseFile file_name
    

    Example 4-1 Response File Entries for Adding Oracle Clusterware Home

    RESPONSEFILE_VERSION=2.2.1.0.0
    
    CLUSTER_NEW_NODES={node3}
    CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}
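
    For example, if these entries are saved to a hypothetical file named add_node3.rsp in the grid user's home directory, the script would be invoked as:

    $ ./addNode.sh -responseFile /home/grid/add_node3.rsp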
    

    See Also:

    Oracle Universal Installer and OPatch User's Guide for details about how to configure command-line response files

    Notes:

    • If you are not using Oracle Grid Naming Service (GNS), then you must add the name and IP address of node3 to DNS; sample entries are shown after these notes.

    • Command-line values always override response file values.
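
    As a sketch of the DNS requirement in the first note, the public and VIP names for node3 must resolve along these lines (the addresses and domain are placeholders):

    192.0.2.103    node3.example.com      node3
    192.0.2.113    node3-vip.example.com  node3-vip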

  4. If you have an Oracle RAC or Oracle RAC One Node database configured on the cluster and you have a local Oracle home, then do the following to extend the Oracle database home to node3:

    1. Navigate to the Oracle_home/oui/bin directory on node1 and run the addNode.sh script as the user that installed Oracle RAC using the following syntax:

      $ ./addNode.sh "CLUSTER_NEW_NODES={node3}"
      
    2. Run the Oracle_home/root.sh script on node3 as root, where Oracle_home is the Oracle RAC home.
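
    For example, with a hypothetical Oracle RAC home path:

      # /u01/app/oracle/product/11.2.0/dbhome_1/root.sh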

    If you have an Oracle home that is shared by using Oracle Automatic Storage Management Cluster File System (Oracle ACFS), then do the following as the user that installed Oracle RAC to extend the Oracle database home to node3:

    1. Start the Oracle ACFS resource on the new node by running the following command as root from the Grid_home/bin directory:

      # srvctl start filesystem -d volume_device_name [-n node_name]
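
      For instance, if the Oracle ACFS volume device were a hypothetical /dev/asm/dbhomevol-123, the call for node3 would be:

      # srvctl start filesystem -d /dev/asm/dbhomevol-123 -n node3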
      

      Note:

      Make sure the Oracle ACFS resources, including the Oracle ACFS registry resource and the Oracle ACFS file system resource where the ORACLE_HOME is located, are online on the newly added node.
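
      One way to confirm this (a sketch using standard clusterware commands; the device name is the same hypothetical one as above) is to query the resource states:

      $ srvctl status filesystem -d /dev/asm/dbhomevol-123
      $ crsctl stat res -t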
    2. Navigate to the Oracle_home/oui/bin directory on node1 and run the addNode.sh script as the user that installed Oracle RAC using the following syntax:

      $ ./addNode.sh -noCopy "CLUSTER_NEW_NODES={node3}"
      

      Note:

      Use the -noCopy option because the Oracle home on the destination node is already fully populated with software.

    If you have a shared Oracle home on a shared file system that is not Oracle ACFS, then you must first create a mount point for the Oracle RAC database home on the target node, mount and attach the Oracle RAC database home, and update the Oracle Inventory, as follows:

    1. Run the srvctl config database -d db_name command on an existing node in the cluster to obtain the mount point information.

    2. Run the following command as root on node3 to create the mount point:

      # mkdir -p mount_point_path
      
    3. Mount the file system that hosts the Oracle RAC database home.

    4. Run the following command as the user that installed Oracle RAC from the Oracle_home/oui/bin directory on the node you are adding to add the Oracle RAC database home:

      $ ./runInstaller -attachHome ORACLE_HOME="ORACLE_HOME" "CLUSTER
      _NODES={node_list}" LOCAL_NODE="node_name"
      
    5. Update the Oracle Inventory as the user that installed Oracle RAC, as follows:

      $ ./runInstaller -updateNodeList ORACLE_HOME=mount_point_path "CLUSTER_NODES={node_list}"
      

      In the preceding command, node_list refers to a list of all nodes where the Oracle RAC database home is installed, including the node you are adding.
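
    As an end-to-end sketch of steps 2 through 5 for this case, where the home path, the NFS export, and the node list are all hypothetical:

      # mkdir -p /u01/app/oracle/product/11.2.0/dbhome_1
      # mount -t nfs storage1:/export/dbhome_1 /u01/app/oracle/product/11.2.0/dbhome_1

      $ ./runInstaller -attachHome ORACLE_HOME="/u01/app/oracle/product/11.2.0/dbhome_1" "CLUSTER_NODES={node1,node2,node3}" LOCAL_NODE="node3"
      $ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES={node1,node2,node3}"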

  5. Run the Grid_home/root.sh script on node3 as root and run the subsequent script, as instructed.

  6. Run the following CVU command as the user that installed Oracle Clusterware to check cluster integrity. This command verifies that any number of specified nodes has been successfully added to the cluster at the network, shared storage, and clusterware levels:

    $ cluvfy stage -post nodeadd -n node3 [-verbose]
    

    See Also:

    "cluvfy stage [-pre | -post] nodeadd" for more information about this CVU command

Check whether either a policy-managed or an administrator-managed Oracle RAC database is configured to run on node3 (the newly added node). If you configured an administrator-managed Oracle RAC database, you may need to use DBCA to add a database instance that runs on the newly added node.
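
For example, to see how the database is configured and, for an administrator-managed database, to add an instance on node3 with DBCA in silent mode (the database name orcl, the instance name orcl3, and the credentials are placeholders):

    $ srvctl config database -d orcl
    $ dbca -silent -addInstance -nodeList node3 -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword password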

Removing a Failed Oracle 12c RAC Node

Removing a failed node from an Oracle 12c RAC cluster involves the following main steps:

1. Check the failed node: First, confirm whether the failed node truly cannot be recovered. Use the cluster management tools (such as CRSCTL or SRVCTL) to check the node's status and the availability of its resources.

2. Deinstall the software: If the node cannot be repaired, stop the Oracle services and remove the Oracle software using operating system tools. A software management tool such as OPatch can be used to roll back Oracle patches.

3. Remove the node: Remove the failed node from the cluster configuration using the CRSCTL or SRVCTL tools. First delete the node's listener and resources (such as database instances and services) from the cluster configuration, then remove the node itself from the cluster.

4. Clean up related configuration: After the node is removed, update the related configuration on the remaining nodes. CRSCTL can be used to update the OCR and voting disk configuration, and the Oracle Grid Infrastructure installer can be used to reconfigure the cluster.

5. Restore the node: If the failed node is to be returned to the cluster, reinstall the Oracle software as needed and add the node back. Before adding it, complete the required operating system and network configuration, then perform the node addition with the CRSCTL or SRVCTL tools.

In summary, removing a failed Oracle 12c RAC node follows a defined sequence: deinstall the software, remove the node, clean up the configuration, and optionally restore the node. Proceed carefully to keep your data safe and the cluster stable.
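
For the status checks mentioned in step 1, commands along these lines can be used (a sketch; exact output varies by version):

    $ crsctl check cluster -all
    $ crsctl stat res -t
    $ olsnodes -s -t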
