11.2.0.4 RAC: Removing a Node

This article is compiled from online resources as my own study notes; if anything here infringes, please contact me for removal.
Database environment: 11.2.0.4 three-node RAC
Host environment: CentOS 6.8
Removing a node breaks down into three steps:
Delete the instance
Remove the DB software
Remove the GRID software
The three-node RAC uses host names rac01, rac02, and rac03; rac03 is the node to be removed.
Before any major change to CRS-level metadata, always back up the OCR first.
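For example, as root on a surviving node (the path assumes this article's Grid home; -showbackup lists the existing automatic and manual backups):

[root@rac01 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup
[root@rac01 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup
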
Part 1. Deleting the Instance
1. Stop the target instance to be removed

[grid@rac01 ~]$ srvctl stop instance -d racdb -n rac03
[grid@rac01 ~]$ srvctl status instance -d racdb -n rac03

2. Delete the instance

[oracle@rac01 ~]$ dbca -silent -deleteInstance -nodeList rac03  -gdbName racdb -instanceName racdb3 -sysDBAUserName sys -sysDBAPassword oracle
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/PROD.log" for further details.

3. Check again

[grid@rac01 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node rac01
Instance racdb2 is running on node rac02
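
Optionally, confirm from a surviving instance that dbca also removed redo thread 3; a minimal check, assuming SYSDBA access as the oracle user (only threads 1 and 2 should remain):

[oracle@rac01 ~]$ sqlplus -S / as sysdba <<'EOF'
-- list the redo threads still known to the database
SELECT thread#, status, enabled FROM v$thread;
EOF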

Part 2. Removing the DB Software
1. Update the inventory (run on the node being removed)

[oracle@rac03 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=rac03" -local
Starting Oracle Universal Installer...
  
Checking swap space: must be greater than 500 MB.   Actual 6143 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
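
To double-check, the node list recorded for this home can be read straight from the central inventory (a sketch; the file location is the standard 11.2 layout):

[oracle@rac03 ~]$ grep -A 2 'db_1' /u01/app/oraInventory/ContentsXML/inventory.xml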
2. Deinstall the DB software
Run the following on the node being removed. This is the normal flow; if it fails with an error, see the workaround after the output.
[oracle@rac03 ~]$ $ORACLE_HOME/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/
  
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
  
Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/db_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid
The following nodes are part of this cluster: rac03
Checking for sufficient temp space availability on node(s) : 'rac03'
## [END] Install check configuration ##
  
Network Configuration check config START
  
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2019-02-25_12-27-57-AM.log
  
Network Configuration check config END
  
Database Check Configuration START
  
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2019-02-25_12-27-59-AM.log
  
Database Check Configuration END
  
Enterprise Manager Configuration Assistant START
  
EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2019-02-25_12-28-02-AM.log 
  
Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check2332.log
Oracle Configuration Manager check END
  
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac03
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac03', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/db_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.
  
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-02-25_12-27-50-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-02-25_12-27-50-AM.err'
######################## CLEAN OPERATION START ########################
  
Enterprise Manager Configuration Assistant START
  
EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2019-02-25_12-28-02-AM.log 
  
Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2019-02-25_12-29-25-AM.log
  
Network Configuration clean config START
  
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2019-02-25_12-29-25-AM.log
  
De-configuring Local Net Service Names configuration file...
Local Net Service Names configuration file de-configured successfully.
  
De-configuring backup files...
Backup files de-configured successfully.
  
The network configuration has been cleaned up successfully.
  
Network Configuration clean config END
  
Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean2332.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
  
Detach Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node : Done
  
Delete directory '/u01/app/oracle/product/11.2.0/db_1' on the local node : Done
  
Failed to delete the directory '/u01/app/oracle'. The directory is in use.
Delete directory '/u01/app/oracle' on the local node : Failed <<<<
  
Oracle Universal Installer cleanup completed with errors.
  
Oracle Universal Installer clean END
  
## [START] Oracle install clean ##
  
Clean install operation removing temporary directory '/tmp/deinstall2019-02-25_00-27-29AM' on node 'rac03'
  
## [END] Oracle install clean ##
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/11.2.0/db_1' on the local node.
Failed to delete directory '/u01/app/oracle' on the local node.
Oracle Universal Installer cleanup completed with errors.
  
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
The following error can occur at this point:
User equivalence is not set on the node(s) 'grid3'
Run '/tmp/deinstall2014-03-16_06-37-55PM/sshUserSetup.sh -hosts "grid3 " -user oracle' from the local node 'grid3' and rerun the tool again. 
ERROR: Exited from Program.

Workaround: run the suggested script manually, then rerun the deinstall tool:
/tmp/deinstall2014-03-16_06-37-55PM/sshUserSetup.sh -hosts "grid3 " -user oracle
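
Then verify that password-less SSH works before rerunning the tool (substitute your own node name; the command should print the date without prompting for a password):

[oracle@grid3 ~]$ ssh grid3 date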

3. Update the inventory (run on all remaining nodes)

[oracle@rac01 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=rac01,rac02" -local
Starting Oracle Universal Installer...
  
Checking swap space: must be greater than 500 MB.   Actual 6143 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@rac02 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=rac01,rac02" -local
Starting Oracle Universal Installer...
  
Checking swap space: must be greater than 500 MB.   Actual 6143 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

4. If the deinstall did not clean up completely, remove the leftovers manually:

[oracle@rac03 ~]$ rm -rf $ORACLE_HOME/*
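
A slightly safer variant guards against an unset ORACLE_HOME, which would otherwise expand the command above to rm -rf /* (a defensive sketch):

[oracle@rac03 ~]$ echo "$ORACLE_HOME"
[oracle@rac03 ~]$ [ -n "$ORACLE_HOME" ] && rm -rf "$ORACLE_HOME"/*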

Part 3. Removing the GRID Software
1. Check the status of the node to be removed

[grid@rac01 ~]$ olsnodes -s -n -t
rac01   1   Active   Unpinned
rac02   2   Active   Unpinned
rac03   3   Active   Unpinned

2. If the node is pinned, it must be unpinned first (the listing above already shows all nodes Unpinned, so this can be skipped; the command is included for reference):

[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl unpin css -n rac03
CRS-4667: Node rac03 successfully unpinned.

3. Stop and deconfigure the clusterware stack on the node being removed

[root@rac03 ~]# export ORACLE_HOME=/u01/app/11.2.0/grid
[root@rac03 ~]# cd $ORACLE_HOME/crs/install
[root@rac03 install]# perl rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.0.0/255.255.255.0/eth0, type static
VIP exists: /rac03-vip/192.168.0.22/192.168.0.0/255.255.255.0/eth0, hosting node rac03
VIP exists: /vastdata4-vip/192.168.0.23/192.168.0.0/255.255.255.0/eth0, hosting node vastdata4
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac03'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac03' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac03'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac03'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac03'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac03'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'rac03'
CRS-2677: Stop of 'ora.FRA.dg' on 'rac03' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac03'
CRS-2677: Stop of 'ora.asm' on 'rac03' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac03' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac03'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac03'
CRS-2673: Attempting to stop 'ora.crf' on 'rac03'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac03'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac03'
CRS-2673: Attempting to stop 'ora.asm' on 'rac03'
CRS-2677: Stop of 'ora.mdnsd' on 'rac03' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac03' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac03' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac03'
CRS-2677: Stop of 'ora.ctssd' on 'rac03' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac03' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac03'
CRS-2677: Stop of 'ora.cssd' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac03'
CRS-2677: Stop of 'ora.gipcd' on 'rac03' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac03'
CRS-2677: Stop of 'ora.gpnpd' on 'rac03' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac03' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node
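
You can confirm the stack is really down on rac03; a CRS-4639 "Could not contact Oracle High Availability Services" error is the expected result here:

[root@rac03 ~]# /u01/app/11.2.0/grid/bin/crsctl check crs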

4. Check the cluster resource status

[root@rac01 ~]# crsctl stat res -t
Normally rac03 should no longer appear anywhere in the output.
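A quick way to confirm is to filter the resource listing for the removed node (it should print only the fallback message):

[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl stat res -t | grep -i rac03 || echo "no rac03 resources remain"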

5. Check the status of all nodes in the cluster

[root@rac01 ~]# /u01/app/11.2.0/grid/bin/olsnodes -s -n -t
rac03   3   Inactive   Unpinned
rac01   1   Active     Unpinned
rac02   2   Active     Unpinned
6. Update the inventory (run on the node being removed). The normal flow is shown below; if it reports an error, follow the tool's prompts.
[grid@rac03 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=rac03" CRS=TRUE -silent -local
Starting Oracle Universal Installer...
  
Checking swap space: must be greater than 500 MB.   Actual 6143 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
7. Deinstall the GI software (note: several prompts in the output below just need Enter pressed)
[grid@rac03 ~]$ $ORACLE_HOME/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2019-02-25_00-43-06AM/logs/
  
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
  
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
  
Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home 
The following nodes are part of this cluster: rac03
Checking for sufficient temp space availability on node(s) : 'rac03'
  
## [END] Install check configuration ##
  
Traces log file: /tmp/deinstall2019-02-25_00-43-06AM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac03"[rac03-vip]
 > 
  
The following information can be collected by running "/sbin/ifconfig -a" on node "rac03"
Enter the IP netmask of Virtual IP "192.168.0.22" on node "rac03"[255.255.255.0]
 > 
  
Enter the network interface name on which the virtual IP address "192.168.0.22" is active
 > 
  
Enter an address or the name of the virtual IP[]
 > 
  
Network Configuration check config START
  
Network de-configuration trace file location: /tmp/deinstall2019-02-25_00-43-06AM/logs/netdc_check2019-02-25_12-44-14-AM.log
  
Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:
  
Network Configuration check config END
  
Asm Check Configuration START
  
ASM de-configuration trace file location: /tmp/deinstall2019-02-25_00-43-06AM/logs/asmcadc_check2019-02-25_12-44-19-AM.log
  
######################### CHECK OPERATION END #########################
  
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: 
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac03
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac03', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2019-02-25_00-43-06AM/logs/deinstall_deconfig2019-02-25_12-43-22-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2019-02-25_00-43-06AM/logs/deinstall_deconfig2019-02-25_12-43-22-AM.err'
  
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2019-02-25_00-43-06AM/logs/asmcadc_clean2019-02-25_12-44-27-AM.log
ASM Clean Configuration END
  
Network Configuration clean config START
  
Network de-configuration trace file location: /tmp/deinstall2019-02-25_00-43-06AM/logs/netdc_clean2019-02-25_12-44-27-AM.log
  
De-configuring RAC listener(s): LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
  
De-configuring listener: LISTENER
    Stopping listener on node "rac03": LISTENER
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
  
De-configuring listener: LISTENER_SCAN3
    Stopping listener on node "rac03": LISTENER_SCAN3
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
  
De-configuring listener: LISTENER_SCAN2
    Stopping listener on node "rac03": LISTENER_SCAN2
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
  
De-configuring listener: LISTENER_SCAN1
    Stopping listener on node "rac03": LISTENER_SCAN1
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
  
De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.
  
De-configuring backup files...
Backup files de-configured successfully.
  
The network configuration has been cleaned up successfully.
  
Network Configuration clean config END
---------------------------------------->
  
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.
  
Run the following command as the root user or the administrator on node "rac03".
  
/tmp/deinstall2019-02-25_00-43-06AM/perl/bin/perl -I/tmp/deinstall2019-02-25_00-43-06AM/perl/lib -I/tmp/deinstall2019-02-25_00-43-06AM/crs/install /tmp/deinstall2019-02-25_00-43-06AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2019-02-25_00-43-06AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

-- Open a new terminal window here, run the command above as root, then come back and press Enter.
Press Enter after you finish running the above commands
<---------------------------------------- 
[root@rac03 ~]# /tmp/deinstall2019-02-25_00-43-06AM/perl/bin/perl -I/tmp/deinstall2019-02-25_00-43-06AM/perl/lib -I/tmp/deinstall2019-02-25_00-43-06AM/crs/install /tmp/deinstall2019-02-25_00-43-06AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2019-02-25_00-43-06AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2019-02-25_00-43-06AM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly     #
# cleanup the processes started by Oracle clusterware          #
################################################################
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
Remove the directory: /tmp/deinstall2019-02-25_00-43-06AM on node: 
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
  
Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done
  
Delete directory '/u01/app/11.2.0/grid' on the local node : Done
  
Delete directory '/u01/app/oraInventory' on the local node : Done
  
Delete directory '/u01/app/grid' on the local node : Done
  
Oracle Universal Installer cleanup was successful.
  
Oracle Universal Installer clean END
  
## [START] Oracle install clean ##
  
Clean install operation removing temporary directory '/tmp/deinstall2019-02-25_00-43-06AM' on node 'rac03'
  
## [END] Oracle install clean ##
  
######################### CLEAN OPERATION END #########################
  
####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
Oracle Clusterware is stopped and successfully de-configured on node "rac03"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Oracle Universal Installer cleanup was successful.
  
Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac03' at the end of the session.
  
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac03' at the end of the session.
Run 'rm -rf /etc/oratab' as root on node(s) 'rac03' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
  
############# ORACLE DEINSTALL & DECONFIG TOOL END #############

8. Update the inventory (run on all remaining nodes)

[grid@rac01 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=rac01,rac02" CRS=TRUE -silent
Starting Oracle Universal Installer...
  
Checking swap space: must be greater than 500 MB.   Actual 6143 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[grid@rac02 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=rac01,rac02" CRS=TRUE -silent
Starting Oracle Universal Installer...
  
Checking swap space: must be greater than 500 MB.   Actual 6143 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

9. Check the status of all nodes in the cluster

[grid@rac01 ~]$ olsnodes -s
rac03 Inactive
rac01 Active
rac02 Active

10. If the deinstall did not clean up completely, remove the leftovers manually with the commands below:

ps -ef | grep ora | grep -v grep | awk '{print $2}' | xargs kill -9
ps -ef | grep grid | grep -v grep | awk '{print $2}' | xargs kill -9
ps -ef | grep asm | grep -v grep | awk '{print $2}' | xargs kill -9
ps -ef | grep storage | grep -v grep | awk '{print $2}' | xargs kill -9
ps -ef | grep ohasd | grep -v grep | awk '{print $2}' | xargs kill -9
ps -ef |grep grid
ps -ef |grep ora
ps -ef |grep asm
  
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
cd $ORACLE_HOME
rm -rf *
cd $ORACLE_BASE
rm -rf *
  
rm -rf /etc/rc5.d/S96ohasd
rm -rf /etc/rc3.d/S96ohasd
rm -rf /etc/rc.d/init.d/ohasd
rm -rf /etc/oracle
rm -rf /etc/ora*
rm -rf /etc/oratab
rm -rf /etc/oraInst.loc
rm -rf /opt/ORCLfmap/
rm -rf /u01/app/oraInventory
rm -rf /usr/local/bin/dbhome
rm -rf /usr/local/bin/oraenv
rm -rf /usr/local/bin/coraenv
rm -rf /tmp/*
rm -rf /var/tmp/.oracle
rm -rf /home/grid/*
rm -rf /home/oracle/*
rm -rf /etc/init/oracle*
rm -rf /etc/init.d/ora
rm -rf /tmp/.*
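
After the cleanup, a final sanity check that no clusterware processes or init scripts survive (a sketch; both commands should produce no output):

ps -ef | grep -E 'ohasd|crsd|cssd|evmd' | grep -v grep
ls /etc/rc.d/init.d/ | grep -i ohasd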

11. Delete the node from the cluster

[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl delete node -n rac03
CRS-4661: Node rac03 successfully deleted.
12. Check the status of all nodes in the cluster
[root@rac01 ~]# olsnodes -s
rac01 Active
rac02 Active

13. On the retained nodes, update the cluster node list as the grid user

[root@rac01 ~]# su - grid
[grid@rac01 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac01,rac02}" CRS=true
[root@rac02 ~]# su - grid
[grid@rac02 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac01,rac02}" CRS=true

14. Verify that the node removal succeeded
This step is critical: it determines whether a new node can later be added to the cluster smoothly.

[grid@rac01 ~]$ cluvfy stage -post nodedel -n rac03 -verbose
  
Performing post-checks for node removal 
  
Checking CRS integrity...
  
Clusterware version consistency passed
The Oracle Clusterware is healthy on node "rac01"
  
CRS integrity check passed
Result: 
Node removal check passed
  
Post-check for node removal was successful.

15. Back up the OCR

[root@rac01 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup
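
To confirm the backup was recorded, list the manual backups (in 11.2, ocrconfig -showbackup accepts an optional auto|manual qualifier):

[root@rac01 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup manual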