Oracle RAC Node Deletion Test Notes

I. Test Environment

Hostname               Instance   OS                Database Version
rac1 (to be removed)   racdb1     RHEL 6.5 64-bit   11.2.0.4.0
rac2                   racdb2     RHEL 6.5 64-bit   11.2.0.4.0
rac3                   racdb3     RHEL 6.5 64-bit   11.2.0.4.0
rac4                   racdb4     RHEL 6.5 64-bit   11.2.0.4.0

II. Remove the Oracle Instance

1. Back up the OCR

cd /u01/app/11.2.0/grid/bin/

./ocrconfig -showbackup

./ocrconfig -manualbackup
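A quick way to confirm that the manual backup was actually recorded (a minimal sketch; the backup file name and location will differ in your environment, and ocrcheck should be run as root):

./ocrconfig -showbackup manual   # the new manual backup should appear at the top of the list
./ocrcheck                       # verify OCR integrity before making any changes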

 

2. Remove the instance from the RAC database

  • Method 1

Remove the instance interactively through the dbca graphical interface.

  • Method 2

Command-line method (run the following as the oracle user on a node that is being kept):

dbca -silent -deleteInstance -gdbName racdb -instanceName racdb1 -nodelist rac1 -sysDBAUserName sys -sysDBAPassword password
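After dbca completes, it is worth confirming from one of the remaining nodes that the instance and its redo thread are really gone (a hedged sketch; racdb is the database name used in this test, and OS authentication as the oracle user is assumed):

srvctl config database -d racdb    # racdb1 should no longer appear among the configured instances
sqlplus -S / as sysdba <<EOF
select thread#, status, instance from v\$thread;   -- the thread that belonged to racdb1 should have been dropped
select inst_id, instance_name from gv\$instance;   -- only racdb2, racdb3 and racdb4 should be running
EOF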

III. Remove the Oracle Database Software

1. Handle the listener on the node being removed

Run the following as the grid user on a node that is being kept:

srvctl disable listener -l listener -n rac1

srvctl stop listener -l listener -n rac1

srvctl status listener -l listener -n rac1
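The clusterware resource view gives the same confirmation (a sketch; ora.LISTENER.lsnr is assumed to be the resource name of the default listener, adjust if your listener resource is named differently):

crsctl stat res ora.LISTENER.lsnr -t   # should show OFFLINE on rac1 and ONLINE on rac2, rac3 and rac4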

 

2. Update the node list on the node being removed

On the node being removed (rac1), run the following as the oracle user (the owner of the database home):

$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1/ "CLUSTER_NODES=rac1" -local
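To double-check the result, the local inventory on rac1 can be inspected directly (a sketch; /u01/app/oraInventory is the central inventory path used in this environment, confirm yours via /etc/oraInst.loc):

grep -A 4 "dbhome_1" /u01/app/oraInventory/ContentsXML/inventory.xml   # the NODE_LIST under the database home should now contain only rac1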

 

3. Remove the ORACLE_HOME directory

If the ORACLE_HOME is shared, run the following on the node being removed:

cd $ORACLE_HOME/oui/bin
 
./runInstaller -detachHome ORACLE_HOME=Oracle_home_location

If it is not shared, run:

${ORACLE_HOME}/deinstall/deinstall -local

 

4. Update the inventory on the remaining nodes

On any node that is being kept, run the following as the oracle user:

cd $ORACLE_HOME/oui/bin

./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES=rac2,rac3,rac4"

 

IV. Remove the Node from the RAC Cluster

1. Check whether the node status is Unpinned

On the node being removed, run as the grid user:

olsnodes -s -t

 

If the status is not Unpinned, run the following as root on any node to unpin it:

crsctl unpin css -n rac1

2. Deconfigure the node

On the node being removed, run as root:

cd /u01/app/11.2.0/grid/crs/install

./rootcrs.pl -deconfig -force

The output is as follows:

[root@rac1 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.232.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.232.33/192.168.232.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.232.34/192.168.232.0/255.255.255.0/eth0, hosting node rac2
VIP exists: /rac3-vip/192.168.232.39/192.168.232.0/255.255.255.0/eth0, hosting node rac3
VIP exists: /rac4-vip/192.168.232.40/192.168.232.0/255.255.255.0/eth0, hosting node rac4
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac1'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac1' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'
CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.CRS.dg' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node

Note: if the node being removed is the last node in the cluster, that is, if you are removing the entire cluster, run instead:

./rootcrs.pl -deconfig -force -lastnode

3. Delete the node

On a node that is being kept, run as root:

cd /u01/app/11.2.0/grid/bin

./crsctl delete node -n rac1
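From the same node, a quick check confirms that rac1 is gone from the cluster definition (a minimal sketch):

./olsnodes -s -t      # rac1 should no longer be listed
./crsctl stat res -t  # cluster resources should now reference only rac2, rac3 and rac4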

4. Update the node list

On the node being removed (rac1), run the following as the grid user:

cd /u01/app/11.2.0/grid/oui/bin

./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=rac1" CRS=TRUE -silent -local

 

V. Remove the Grid Infrastructure Software

1. Remove the GRID_HOME directory

If the GRID_HOME is shared, run the following as the grid user on the node being removed:

cd $GRID_HOME/oui/bin
./runInstaller -detachHome ORACLE_HOME=Grid_home -silent -local

If it is not shared, run as the grid user:

cd /u01/app/11.2.0/grid/deinstall/
./deinstall -local

Note: be sure to include the -local option; otherwise this command removes the GRID_HOME directory on all nodes.

The output is as follows:

[grid@rac1 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2019-01-21_11-37-09AM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home 
The following nodes are part of this cluster: rac1
Checking for sufficient temp space availability on node(s) : 'rac1'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2019-01-21_11-37-09AM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac1"[rac1-vip]
 > # just press Enter to accept the default

The following information can be collected by running "/sbin/ifconfig -a" on node "rac1"
Enter the IP netmask of Virtual IP "192.168.232.33" on node "rac1"[255.255.255.0]
 > # just press Enter to accept the default

Enter the network interface name on which the virtual IP address "192.168.232.33" is active
 > # just press Enter to accept the default

Enter an address or the name of the virtual IP[]
 > # just press Enter to accept the default


Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2019-01-21_11-37-09AM/logs/netdc_check2019-01-21_11-38-57-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]:LISTENER

At least one listener from the discovered listener list [LISTENER,LISTENER_SCAN1] is missing in the specified listener list [LISTENER]. The Oracle home will be cleaned up, so all the listeners will not be available after deinstall. If you want to remove a specific listener, please use Oracle Net Configuration Assistant instead. Do you want to continue? (y|n) [n]: y

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2019-01-21_11-37-09AM/logs/asmcadc_check2019-01-21_11-40-29-AM.log


######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: 
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac1
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac1', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2019-01-21_11-37-09AM/logs/deinstall_deconfig2019-01-21_11-37-31-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2019-01-21_11-37-09AM/logs/deinstall_deconfig2019-01-21_11-37-31-AM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2019-01-21_11-37-09AM/logs/asmcadc_clean2019-01-21_11-40-50-AM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2019-01-21_11-37-09AM/logs/netdc_clean2019-01-21_11-40-50-AM.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER
    Stopping listener on node "rac1": LISTENER
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rac1".

/tmp/deinstall2019-01-21_11-37-09AM/perl/bin/perl -I/tmp/deinstall2019-01-21_11-37-09AM/perl/lib -I/tmp/deinstall2019-01-21_11-37-09AM/crs/install /tmp/deinstall2019-01-21_11-37-09AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2019-01-21_11-37-09AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------

As prompted, open another session on node 1 as root and run the command above:

[root@rac1 ~]# /tmp/deinstall2019-01-21_11-37-09AM/perl/bin/perl -I/tmp/deinstall2019-01-21_11-37-09AM/perl/lib -I/tmp/deinstall2019-01-21_11-37-09AM/crs/install /tmp/deinstall2019-01-21_11-37-09AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2019-01-21_11-37-09AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2019-01-21_11-37-09AM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware          #
################################################################
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node

-----------------------------------------------------------------

Then return to the previous session window and press Enter to continue:

Remove the directory: /tmp/deinstall2019-01-21_11-37-09AM on node: 
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/metadata_pv'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/sweep'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/incident'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/incpkg'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/stage'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/metadata_dgif'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/cdump'. The directory is in use.
The Oracle Base directory '/u01/app/grid' will not be removed on local node. The directory is not empty.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2019-01-21_11-37-09AM' on node 'rac1'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Oracle Clusterware is stopped and successfully de-configured on node "rac1"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Oracle Universal Installer cleanup was successful.


Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac1' at the end of the session.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac1' at the end of the session.
Run 'rm -rf /etc/oratab' as root on node(s) 'rac1' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

---------------------------------------------------------------

2. Update the node list on the remaining nodes

On any of the remaining nodes (node 2, 3, or 4), run as the grid user:

cd /u01/app/11.2.0/grid/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=rac2,rac3,rac4" CRS=TRUE -silent

 

3. Run a post-deletion check

On node 2, 3, or 4, run as the grid user:

cluvfy stage -post nodedel -n rac1 -verbose
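Besides cluvfy, a final sanity check from any remaining node is worthwhile (a hedged sketch; racdb is the database name used in this test, and srvctl is expected to be run as the oracle user from the database home):

srvctl status database -d racdb   # racdb2, racdb3 and racdb4 should be reported as running
crsctl check cluster -all         # the clusterware stack should be healthy on rac2, rac3 and rac4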

 

This completes the removal of node rac1.

 

Reference: http://www.cnxdug.org/?p=2511


 

 

 

 

