Oracle RAC Node Removal Test Notes


I. Test Environment

Hostname              Instance Name  Operating System  Database Version
rac1 (to be removed)  racdb1         RHEL 6.5 64-bit   11.2.0.4.0
rac2                  racdb2         RHEL 6.5 64-bit   11.2.0.4.0
rac3                  racdb3         RHEL 6.5 64-bit   11.2.0.4.0
rac4                  racdb4         RHEL 6.5 64-bit   11.2.0.4.0

II. Removing the Oracle Instance

1. Back up the OCR

cd /u01/app/11.2.0/grid/bin/

./ocrconfig -showbackup

./ocrconfig -manualbackup
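To sanity-check that the manual backup was taken, the output of `ocrconfig -showbackup` can be filtered for backup file paths. A minimal sketch, using sample text in place of real command output (the sample lines and their column layout are illustrative assumptions, not output from this environment):

```shell
# Sketch: pull OCR backup file paths out of `ocrconfig -showbackup`-style
# text. Assumed format per line: node, date, time, backup file path.
extract_ocr_backups() {
    # keep only lines whose last field looks like an OCR backup file
    awk '$NF ~ /\.ocr$/ {print $NF}'
}

# Sample standing in for real `./ocrconfig -showbackup` output:
sample='rac2 2019/01/21 10:05:23 /u01/app/11.2.0/grid/cdata/rac-cluster/backup00.ocr
rac2 2019/01/21 11:02:11 /u01/app/11.2.0/grid/cdata/rac-cluster/backup_20190121_110211.ocr'

# On a live cluster you would pipe the real command instead:
#   ./ocrconfig -showbackup | extract_ocr_backups
printf '%s\n' "$sample" | extract_ocr_backups
```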

 

2. Remove the instance from the RAC database

  • Method 1

Use the dbca graphical interface to remove the instance.

  • Method 2

Command line (run as the oracle user on a node that is being kept):

dbca -silent -deleteInstance -gdbName racdb -instanceName racdb1 -nodelist rac1 -sysDBAUserName sys -sysDBAPassword password
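Because the node/instance pairing is easy to get wrong in a long one-liner, one option is to assemble the dbca command from variables and review it before executing. A dry-run sketch (the echo only prints the command; remove it to actually run dbca):

```shell
# Dry-run sketch: build the dbca -silent -deleteInstance command from
# variables so the node/instance pairing is easy to review before running.
GDB_NAME=racdb          # global database name
DEL_INSTANCE=racdb1     # instance to remove
DEL_NODE=rac1           # node hosting that instance
SYSDBA_USER=sys

# The echo only prints the assembled command; drop it to execute dbca.
echo dbca -silent -deleteInstance \
    -gdbName "$GDB_NAME" \
    -instanceName "$DEL_INSTANCE" \
    -nodelist "$DEL_NODE" \
    -sysDBAUserName "$SYSDBA_USER" \
    -sysDBAPassword password
```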

III. Removing the Oracle Database Software

1. Handle the listener on the node being removed

Run as the grid user on a node that is being kept:

srvctl disable listener -l listener -n rac1

srvctl stop listener -l listener -n rac1

srvctl status listener -l listener -n rac1

 

2. Update the node list on the node being removed

Run as the grid user on the node being removed (rac1):

$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1/ "CLUSTER_NODES=rac1" -local

 

3. Delete the ORACLE_HOME directory

If ORACLE_HOME is on shared storage, run the following on the node being removed:

cd $ORACLE_HOME/oui/bin
 
./runInstaller -detachHome ORACLE_HOME=Oracle_home_location

If ORACLE_HOME is not shared, run:

${ORACLE_HOME}/deinstall/deinstall -local

 

4. Update the inventory on the remaining nodes

Run as the oracle user on any node that is being kept:

cd $ORACLE_HOME/oui/bin

./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES=rac2,rac3,rac4"

 

IV. Removing the Node from the RAC Cluster

1. Check that the node's status is Unpinned

Run as the grid user on the node being removed:

olsnodes -s -t
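The pin check can be scripted. A sketch that scans `olsnodes -s -t`-style output (the sample lines below mimic the usual name/state/pin-status columns and are an assumption, not output from this environment):

```shell
# Sketch: decide whether a node still needs `crsctl unpin css`.
# `olsnodes -s -t` prints one line per node: name, state, pin status.
node_is_unpinned() {
    # $1 = node name, stdin = olsnodes -s -t output
    awk -v n="$1" '$1 == n && $NF == "Unpinned" {found=1} END {exit !found}'
}

# Sample standing in for real `olsnodes -s -t` output:
sample='rac1 Active Unpinned
rac2 Active Unpinned
rac3 Active Unpinned
rac4 Active Unpinned'

if printf '%s\n' "$sample" | node_is_unpinned rac1; then
    echo "rac1 is Unpinned; no unpin step needed"
else
    echo "run as root: crsctl unpin css -n rac1"
fi
```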

 

If the status is not Unpinned, unpin the node by running the following as root on any node:

crsctl unpin css -n rac1

2. Deconfigure the node

Run as root on the node being removed:

cd /u01/app/11.2.0/grid/crs/install

./rootcrs.pl -deconfig -force

The process looks like this:

[root@rac1 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.232.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.232.33/192.168.232.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.232.34/192.168.232.0/255.255.255.0/eth0, hosting node rac2
VIP exists: /rac3-vip/192.168.232.39/192.168.232.0/255.255.255.0/eth0, hosting node rac3
VIP exists: /rac4-vip/192.168.232.40/192.168.232.0/255.255.255.0/eth0, hosting node rac4
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac1'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac1' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'
CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.CRS.dg' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node

Note: if the node being removed is the last node in the cluster (that is, the entire cluster is being torn down), run instead:

./rootcrs.pl -deconfig -force -lastnode

3. Delete the node

Run as root on a node that is being kept:

cd /u01/app/11.2.0/grid/bin

./crsctl delete node -n rac1
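Once the delete completes, rac1 should no longer appear in `olsnodes` output. A small verification sketch (the sample node list is a stand-in for real output, which prints one node name per line):

```shell
# Sketch: confirm rac1 no longer appears in the cluster node list.
# Sample standing in for real `olsnodes` output after the delete:
sample='rac2
rac3
rac4'

# grep -x matches the whole line, so "rac1" won't match e.g. "rac11"
if printf '%s\n' "$sample" | grep -qx rac1; then
    echo "rac1 still present in olsnodes output"
else
    echo "rac1 removed from the cluster node list"
fi
```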

4. Update the node list

Run the following as the grid user on the node being removed (rac1):

cd /u01/app/11.2.0/grid/oui/bin

./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=rac1" CRS=TRUE -silent -local

 

V. Removing the Grid Software

1. Delete GRID_HOME

If GRID_HOME is on shared storage, run the following as the grid user on the node being removed:

cd $GRID_HOME/oui/bin
./runInstaller -detachHome ORACLE_HOME=Grid_home -silent -local

If it is not shared, run as the grid user:

cd /u01/app/11.2.0/grid/deinstall/
./deinstall -local

Note: the -local flag is mandatory here; without it, the command deletes the GRID_HOME directory on every node.

The process looks like this:

[grid@rac1 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2019-01-21_11-37-09AM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home 
The following nodes are part of this cluster: rac1
Checking for sufficient temp space availability on node(s) : 'rac1'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2019-01-21_11-37-09AM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac1"[rac1-vip]
 > # press Enter to accept the default

The following information can be collected by running "/sbin/ifconfig -a" on node "rac1"
Enter the IP netmask of Virtual IP "192.168.232.33" on node "rac1"[255.255.255.0]
 > # press Enter to accept the default

Enter the network interface name on which the virtual IP address "192.168.232.33" is active
 > # press Enter to accept the default

Enter an address or the name of the virtual IP[]
 > # press Enter to accept the default


Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2019-01-21_11-37-09AM/logs/netdc_check2019-01-21_11-38-57-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]:LISTENER

At least one listener from the discovered listener list [LISTENER,LISTENER_SCAN1] is missing in the specified listener list [LISTENER]. The Oracle home will be cleaned up, so all the listeners will not be available after deinstall. If you want to remove a specific listener, please use Oracle Net Configuration Assistant instead. Do you want to continue? (y|n) [n]: y

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2019-01-21_11-37-09AM/logs/asmcadc_check2019-01-21_11-40-29-AM.log


######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: 
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac1
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac1', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2019-01-21_11-37-09AM/logs/deinstall_deconfig2019-01-21_11-37-31-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2019-01-21_11-37-09AM/logs/deinstall_deconfig2019-01-21_11-37-31-AM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2019-01-21_11-37-09AM/logs/asmcadc_clean2019-01-21_11-40-50-AM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2019-01-21_11-37-09AM/logs/netdc_clean2019-01-21_11-40-50-AM.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER
    Stopping listener on node "rac1": LISTENER
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rac1".

/tmp/deinstall2019-01-21_11-37-09AM/perl/bin/perl -I/tmp/deinstall2019-01-21_11-37-09AM/perl/lib -I/tmp/deinstall2019-01-21_11-37-09AM/crs/install /tmp/deinstall2019-01-21_11-37-09AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2019-01-21_11-37-09AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------

As prompted, open another session on node 1 and run the command above as root:

[root@rac1 ~]# /tmp/deinstall2019-01-21_11-37-09AM/perl/bin/perl -I/tmp/deinstall2019-01-21_11-37-09AM/perl/lib -I/tmp/deinstall2019-01-21_11-37-09AM/crs/install /tmp/deinstall2019-01-21_11-37-09AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2019-01-21_11-37-09AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2019-01-21_11-37-09AM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware          #
################################################################
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node

-----------------------------------------------------------------

Then return to the original session and press Enter to continue:

Remove the directory: /tmp/deinstall2019-01-21_11-37-09AM on node: 
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/metadata_pv'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/sweep'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/incident'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/incpkg'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/stage'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/metadata_dgif'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/cdump'. The directory is in use.
The Oracle Base directory '/u01/app/grid' will not be removed on local node. The directory is not empty.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2019-01-21_11-37-09AM' on node 'rac1'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Oracle Clusterware is stopped and successfully de-configured on node "rac1"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Oracle Universal Installer cleanup was successful.


Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac1' at the end of the session.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac1' at the end of the session.
Run 'rm -rf /etc/oratab' as root on node(s) 'rac1' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

---------------------------------------------------------------

2. Update the node list on the remaining nodes

Run as the grid user on any of the remaining nodes (node 2, 3, or 4):

cd /u01/app/11.2.0/grid/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=rac2,rac3,rac4" CRS=TRUE -silent
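After the update, the node list recorded for the Grid home can be double-checked in the central inventory. A sketch that extracts the node entries from inventory.xml-style text (the sample XML, and the inventory path mentioned in the comment, follow the standard OUI layout and are assumptions here):

```shell
# Sketch: list the nodes recorded for a home in the OUI inventory.
# Assumes the standard inventory.xml layout with <NODE NAME="..."/> entries.
sample='<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rac2"/>
      <NODE NAME="rac3"/>
      <NODE NAME="rac4"/>
   </NODE_LIST>
</HOME>'

# On a live node you would read the real file instead, e.g.:
#   /u01/app/oraInventory/ContentsXML/inventory.xml
printf '%s\n' "$sample" | sed -n 's/.*<NODE NAME="\([^"]*\)".*/\1/p'
```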

 

3. Run the post-deletion check

Run as the grid user on node 2, 3, or 4:

cluvfy stage -post nodedel -n rac1 -verbose

 

This completes the removal of node rac1.

 

Reference: http://www.cnxdug.org/?p=2511


 

 

 

 
