Steps to Remove a Node from an 11gR2 RAC Cluster

1. Environment Overview

The existing RAC environment is a three-node 11.2.0.4 RAC. In this document we demonstrate removing one node, rac3. All removal operations are carried out while the environment is up and running normally.

Removing a node from RAC is essentially the reverse of adding one.

[root@rac1 ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

#node1

192.168.8.221   rac1 rac1.oracle.com

192.168.8.242   rac1-vip 

172.168.0.18    rac1-priv

#node2

192.168.8.223   rac2 rac2.oracle.com

192.168.8.244   rac2-vip

172.168.0.19    rac2-priv

#node3

192.168.8.228   rac3 rac3.oracle.com

192.168.8.247   rac3-vip

172.168.0.15    rac3-priv

#scan-ip

192.168.8.245   rac-cluster rac-cluster-scan

Current information about the RAC environment:

[root@rac1 ~]# olsnodes -s

rac1   Active

rac2   Active

rac3   Active

[root@rac1 ~]# olsnodes -i

rac1   rac1-vip

rac2   rac2-vip

rac3   rac3-vip

[root@rac1 ~]# crsctl stat res -t

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER                   STATE_DETAILS      

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.DATADG.dg

               ONLINE  ONLINE       rac1                                        

               ONLINE  ONLINE       rac2                                        

               ONLINE  ONLINE       rac3                                        

ora.LISTENER.lsnr

               ONLINE  ONLINE       rac1                                        

               ONLINE  ONLINE       rac2                                        

               ONLINE  ONLINE       rac3                                        

ora.SYSTEMDG.dg

               ONLINE  ONLINE       rac1                                        

               ONLINE  ONLINE       rac2                                        

               ONLINE  ONLINE       rac3                                        

ora.asm

               ONLINE  ONLINE       rac1                     Started            

               ONLINE  ONLINE       rac2                     Started            

               ONLINE  ONLINE       rac3                     Started            

ora.gsd

               OFFLINE OFFLINE      rac1                                        

               OFFLINE OFFLINE      rac2                                        

               OFFLINE OFFLINE      rac3                                        

ora.net1.network

               ONLINE  ONLINE       rac1                                        

               ONLINE  ONLINE       rac2                                        

               ONLINE  ONLINE       rac3                                        

ora.ons

               ONLINE  ONLINE       rac1                                        

               ONLINE  ONLINE       rac2                                        

               ONLINE  ONLINE       rac3                                        

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       rac2                                         

ora.cvu

      1        ONLINE  ONLINE       rac2                                        

ora.oc4j

      1        ONLINE  ONLINE       rac2                                        

ora.orcl.db

      1        ONLINE  ONLINE       rac1                     Open               

      2        ONLINE  ONLINE       rac2                     Open               

      3        ONLINE  ONLINE       rac3                     Open               

ora.orcl.orcl_taf.svc

      1        ONLINE  ONLINE       rac1                                        

      2        ONLINE  ONLINE       rac3                                        

      3        ONLINE  ONLINE       rac2                                        

ora.rac1.vip

      1        ONLINE  ONLINE       rac1                                        

ora.rac2.vip

      1        ONLINE  ONLINE       rac2                                        

ora.rac3.vip

      1        ONLINE  ONLINE       rac3                                         

ora.scan1.vip

      1        ONLINE  ONLINE       rac2   
                                     

[root@rac1 ~]# su - oracle

[oracle@rac1 ~]$ sqlplus / as sysdba

SQL> col host_name for a20

SQL> select inst_id,host_name,instance_name,status from gv$instance;

   INST_ID HOST_NAME     INSTANCE_NAME STATUS

---------- -------------------- ---------------- ------------

     1 rac1       orcl1      OPEN

     3 rac3       orcl3      OPEN

     2 rac2       orcl2      OPEN

[root@rac1 ~]# ./crs_stat.sh

Name                             Target     State      Host     

------------------------       ---------- ---------  -------  

ora.DATADG.dg                  ONLINE     ONLINE     rac1     

ora.LISTENER.lsnr              ONLINE     ONLINE     rac1     

ora.LISTENER_SCAN1.lsnr        ONLINE     ONLINE     rac2     

ora.SYSTEMDG.dg                ONLINE     ONLINE     rac1     

ora.asm                        ONLINE     ONLINE     rac1     

ora.cvu                        ONLINE     ONLINE     rac2     

ora.gsd                        OFFLINE    OFFLINE             

ora.net1.network               ONLINE     ONLINE     rac1     

ora.oc4j                       ONLINE     ONLINE     rac2     

ora.ons                        ONLINE     ONLINE     rac1     

ora.orcl.db                    ONLINE     ONLINE     rac1     

ora.orcl.orcl_taf.svc          ONLINE     ONLINE     rac1     

ora.rac1.ASM1.asm              ONLINE     ONLINE     rac1     

ora.rac1.LISTENER_RAC1.lsnr    ONLINE     ONLINE     rac1     

ora.rac1.gsd                   OFFLINE    OFFLINE             

ora.rac1.ons                   ONLINE     ONLINE     rac1     

ora.rac1.vip                   ONLINE     ONLINE     rac1     

ora.rac2.ASM2.asm              ONLINE     ONLINE     rac2     

ora.rac2.LISTENER_RAC2.lsnr    ONLINE     ONLINE     rac2     

ora.rac2.gsd                   OFFLINE    OFFLINE             

ora.rac2.ons                   ONLINE     ONLINE     rac2     

ora.rac2.vip                   ONLINE     ONLINE     rac2     

ora.rac3.ASM3.asm              ONLINE     ONLINE     rac3     

ora.rac3.LISTENER_RAC3.lsnr    ONLINE     ONLINE     rac3     

ora.rac3.gsd                   OFFLINE    OFFLINE             

ora.rac3.ons                   ONLINE     ONLINE     rac3     

ora.rac3.vip                   ONLINE     ONLINE     rac3     

ora.scan1.vip                  ONLINE     ONLINE     rac2  

2. Back Up the OCR

Before removing the node, it is recommended to take a manual backup of the OCR (GRID also backs up the OCR automatically every 4 hours), so that if anything goes wrong the OCR can be restored.

Here the backup is taken on node 1.

Run as the root user:

-- Take a manual backup of the OCR

[root@rac1 ~]# ocrconfig -manualbackup

rac3     2016/06/13 12:40:49     /u01/app/11.2.0/grid/cdata/rac-cluster/backup_20160613_124049.ocr

rac2     2016/06/01 05:41:52     /u01/app/11.2.0/grid/cdata/rac-cluster/backup_20160601_054152.ocr

-- List the manual OCR backups:

[root@rac1 ~]# ocrconfig -showbackup manual

rac3     2016/06/13 12:40:49     /u01/app/11.2.0/grid/cdata/rac-cluster/backup_20160613_124049.ocr

rac2     2016/06/01 05:41:52     /u01/app/11.2.0/grid/cdata/rac-cluster/backup_20160601_054152.ocr
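If a restore were ever needed, the general approach (a hedged sketch only; it is not performed in this walkthrough) is to stop the clusterware on every node, restore one of the backups listed above as root on one node, and restart the stack. ocrcheck can be run at any time to verify OCR integrity:

-- Verify OCR integrity at any time (run as root):
[root@rac1 ~]# ocrcheck
-- Restore sketch, only if recovery is needed; first stop CRS on every node:
[root@rac1 ~]# crsctl stop crs
-- Then restore from the chosen backup on one node and restart CRS on all nodes:
[root@rac1 ~]# ocrconfig -restore /u01/app/11.2.0/grid/cdata/rac-cluster/backup_20160613_124049.ocr
[root@rac1 ~]# crsctl start crs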

3. Delete the Database Instance with DBCA

3.1 Adjust the service configuration

If the RAC environment is configured with server-side TAF and the node to be removed is a preferred instance for the service, the connections on that node need to be moved to other nodes before the node is removed, using the relocate service operation.

When a preferred instance becomes unavailable, the service automatically relocates to an available instance. The same relocation can also be done manually with the following command:

Syntax: srvctl relocate service -d dbname -s servicename -i instancename -t newinstancename [-f]

[root@rac1 ~]# srvctl status service -d orcl

Service orcl_taf is running on instance(s) orcl1,orcl2,orcl3

[root@rac1 ~]# srvctl config service -d orcl

Service name: orcl_taf

Service is enabled

Server pool: orcl_orcl_taf

Cardinality: 3

Disconnect: false

Service role: PRIMARY

Management policy: AUTOMATIC

DTP transaction: false

AQ HA notifications: false

Failover type: SELECT

Failover method: BASIC

TAF failover retries: 180

TAF failover delay: 5

Connection Load Balancing Goal: LONG

Runtime Load Balancing Goal: NONE

TAF policy specification: BASIC

Edition:

Preferred instances: orcl1,orcl2,orcl3

Available instances:

-- Relocate the service on node 3 to another node; run as the oracle user:

[root@rac1 ~]# su - oracle

[oracle@rac1 ~]$ srvctl relocate service -s orcl_taf -d orcl -i orcl3 -t orcl1

PRCR-1106 : Failed to relocate resource ora.orcl.orcl_taf.svc from node rac3 to node rac1

PRCR-1089 : Failed to relocate resource ora.orcl.orcl_taf.svc.

CRS-5702: Resource 'ora.orcl.orcl_taf.svc' is already running on 'rac1'

Because all three instances are preferred for this service, the switch cannot be completed: relocate only moves a service from a preferred instance to an available one.
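An alternative (shown only as a hedged sketch; it is not the approach taken below) would be to first modify the service so that orcl3 is an available instance rather than a preferred one, after which the relocation would be allowed:

-- Make orcl1,orcl2 preferred and orcl3 available (illustrative only):
[oracle@rac1 ~]$ srvctl modify service -d orcl -s orcl_taf -n -i orcl1,orcl2 -a orcl3 -f
-- If the service were still running on orcl3, it could then be relocated:
[oracle@rac1 ~]$ srvctl relocate service -d orcl -s orcl_taf -i orcl3 -t orcl1

Since the instance is being deleted anyway, the simpler route below just stops the service on orcl3 and removes it from the preferred list.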

-- Reconfigure the service to remove node 3's instance:

[oracle@rac1 ~]$ srvctl stop service -d orcl -s orcl_taf -i orcl3

[oracle@rac1 ~]$ srvctl status service -d orcl

Service orcl_taf is running on instance(s) orcl1,orcl2

[oracle@rac1 ~]$ srvctl modify service -d orcl -s orcl_taf -n -i orcl1,orcl2 -f

[oracle@rac1 ~]$ srvctl status service -d orcl

Service orcl_taf is running on instance(s) orcl1,orcl2

[oracle@rac1 ~]$ srvctl config service -d orcl

Service name: orcl_taf

Service is enabled

Server pool: orcl_orcl_taf

Cardinality: 2

Disconnect: true

Service role: PRIMARY

Management policy: AUTOMATIC

DTP transaction: false

AQ HA notifications: false

Failover type: SELECT

Failover method: BASIC

TAF failover retries: 180

TAF failover delay: 5

Connection Load Balancing Goal: LONG

Runtime Load Balancing Goal: NONE

TAF policy specification: BASIC

Edition:

Preferred instances: orcl1,orcl2

Available instances:

3.2 Delete the instance with DBCA

Run dbca as the oracle user on node 1. The instance can be removed through the GUI:

dbca -> RAC database -> Instance Management -> Delete Instance -> select the database and enter the sys username and password -> select the database instance to be deleted

It can also be done with dbca in silent mode:

dbca -silent -deleteInstance [-nodeList node_name] -gdbName gdb_name -instanceName instance_name -sysDBAUserName sysdba -sysDBAPassword password

Run as the oracle user on node 1:

[oracle@rac1 ~]$ dbca -silent -deleteInstance -nodeList rac3 -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword oracle

Deleting instance

1% complete

2% complete

6% complete

13% complete

20% complete

26% complete

33% complete

40% complete

46% complete

53% complete

60% complete

66% complete

Completing instance management.

100% complete

Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/orcl.log" for further details.

3.3 Confirm that orcl3 has been removed from CRS

Note: run this as the oracle user, and also check the oracle user's group membership.

[oracle@rac1 ~]$ srvctl config database -d orcl

Database unique name: orcl

Database name: orcl

Oracle home: /u01/app/oracle/product/11.2.0/db_1

Oracle user: oracle

Spfile: +DATADG/orcl/spfileorcl.ora

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: orcl

Database instances: orcl1,orcl2

Disk Groups: DATADG,SYSTEMDG

Mount point paths:

Services: orcl_taf

Type: RAC

Database is administrator managed

The orcl3 instance no longer appears here.
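As an optional extra check (a hedged sketch; object names depend on your configuration), you can confirm inside the database that DBCA also dropped instance 3's redo thread and undo tablespace:

SQL> select thread#, status, enabled from v$thread;
SQL> select group#, thread# from v$log order by thread#, group#;
SQL> select tablespace_name from dba_tablespaces where tablespace_name like 'UNDO%';

Thread 3 and its undo tablespace (typically UNDOTBS3) should no longer be listed.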

4. Remove the Node at the RAC Level (Oracle Software)

The operations in this section are performed as the oracle user (stopping the listener in 4.1 is done as the grid user).

4.1 Stop the listener on node 3

Run as the grid user:

[root@rac1 ~]# su - grid

[grid@rac1 ~]$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 13-JUN-2016 12:54:51

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))

STATUS of the LISTENER

------------------------

Alias                     LISTENER

Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production

Start Date                13-JUN-2016 12:32:26

Uptime                    0 days 0 hr. 22 min. 24 sec

Trace Level               off

Security                  ON: Local OS Authentication

SNMP                      OFF

Listener Parameter File   /u01/app/11.2.0/grid/network/admin/listener.ora

Listener Log File         /u01/app/grid/diag/tnslsnr/rac1/listener/alert/log.xml

Listening Endpoints Summary...

  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))

  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.8.221)(PORT=1521)))

  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.8.242)(PORT=1521)))

Services Summary...

Service "+ASM" has 1 instance(s).

  Instance "+ASM1", status READY, has 1 handler(s) for this service...

Service "orcl" has 1 instance(s).

  Instance "orcl1", status READY, has 1 handler(s) for this service...

Service "orclXDB" has 1 instance(s).

  Instance "orcl1", status READY, has 1 handler(s) for this service...

Service "orcl_taf" has 1 instance(s).

  Instance "orcl1", status READY, has 1 handler(s) for this service...

The command completed successfully

[grid@rac1 ~]# ./crs_stat.sh

Name                             Target     State      Host     

------------------------       ---------- ---------  -------  

ora.DATADG.dg                  ONLINE     ONLINE     rac1     

ora.LISTENER.lsnr              ONLINE     ONLINE     rac1     

ora.LISTENER_SCAN1.lsnr        ONLINE     ONLINE     rac2     

ora.SYSTEMDG.dg                ONLINE     ONLINE     rac1     

ora.asm                        ONLINE     ONLINE     rac1     

ora.cvu                        ONLINE     ONLINE     rac2     

ora.gsd                        OFFLINE    OFFLINE             

ora.net1.network               ONLINE     ONLINE     rac1     

ora.oc4j                       ONLINE     ONLINE     rac2     

ora.ons                        ONLINE     ONLINE     rac1     

ora.orcl.db                    ONLINE     ONLINE     rac1     

ora.orcl.orcl_taf.svc          ONLINE     ONLINE     rac1     

ora.rac1.ASM1.asm              ONLINE     ONLINE     rac1     

ora.rac1.LISTENER_RAC1.lsnr    ONLINE     ONLINE     rac1     

ora.rac1.gsd                   OFFLINE    OFFLINE             

ora.rac1.ons                   ONLINE     ONLINE     rac1     

ora.rac1.vip                   ONLINE     ONLINE     rac1     

ora.rac2.ASM2.asm              ONLINE     ONLINE     rac2     

ora.rac2.LISTENER_RAC2.lsnr    ONLINE     ONLINE     rac2     

ora.rac2.gsd                   OFFLINE    OFFLINE             

ora.rac2.ons                   ONLINE     ONLINE     rac2     

ora.rac2.vip                   ONLINE     ONLINE     rac2     

ora.rac3.ASM3.asm              ONLINE     ONLINE     rac3     

ora.rac3.LISTENER_RAC3.lsnr    ONLINE     ONLINE     rac3     

ora.rac3.gsd                   OFFLINE    OFFLINE             

ora.rac3.ons                   ONLINE     ONLINE     rac3     

ora.rac3.vip                   ONLINE     ONLINE     rac3     

ora.scan1.vip                  ONLINE     ONLINE     rac2

[grid@rac1 ~]# srvctl disable listener -l LISTENER -n rac3

[grid@rac1 ~]# srvctl stop listener -l LISTENER -n rac3

[grid@rac1 ~]# srvctl status listener -l listener

Listener LISTENER is enabled

Listener LISTENER is running on node(s): rac2,rac1

[root@rac1 ~]# ./crs_stat.sh

Name                             Target     State      Host     

------------------------       ---------- ---------  -------  

ora.DATADG.dg                  ONLINE     ONLINE     rac1     

ora.LISTENER.lsnr              ONLINE     ONLINE     rac1     

ora.LISTENER_SCAN1.lsnr        ONLINE     ONLINE     rac2     

ora.SYSTEMDG.dg                ONLINE     ONLINE     rac1     

ora.asm                        ONLINE     ONLINE     rac1     

ora.cvu                        ONLINE     ONLINE     rac2     

ora.gsd                        OFFLINE    OFFLINE             

ora.net1.network               ONLINE     ONLINE     rac1     

ora.oc4j                       ONLINE     ONLINE     rac2     

ora.ons                        ONLINE     ONLINE     rac1     

ora.orcl.db                    ONLINE     ONLINE     rac1     

ora.orcl.orcl_taf.svc          ONLINE     ONLINE     rac1     

ora.rac1.ASM1.asm              ONLINE     ONLINE     rac1     

ora.rac1.LISTENER_RAC1.lsnr    ONLINE     ONLINE     rac1     

ora.rac1.gsd                   OFFLINE    OFFLINE             

ora.rac1.ons                   ONLINE     ONLINE     rac1     

ora.rac1.vip                   ONLINE     ONLINE     rac1     

ora.rac2.ASM2.asm              ONLINE     ONLINE     rac2     

ora.rac2.LISTENER_RAC2.lsnr    ONLINE     ONLINE     rac2     

ora.rac2.gsd                   OFFLINE    OFFLINE             

ora.rac2.ons                   ONLINE     ONLINE     rac2     

ora.rac2.vip                   ONLINE     ONLINE     rac2     

ora.rac3.ASM3.asm              ONLINE     ONLINE     rac3     

ora.rac3.LISTENER_RAC3.lsnr    OFFLINE    OFFLINE             

ora.rac3.gsd                   OFFLINE    OFFLINE             

ora.rac3.ons                   ONLINE     ONLINE     rac3     

ora.rac3.vip                   ONLINE     ONLINE     rac3     

ora.scan1.vip                  ONLINE     ONLINE     rac2 

4.2 Update the inventory on node 3 as the oracle user

[root@rac3 ~]# su - oracle

[oracle@rac3 ~]$ cd $ORACLE_HOME/oui/bin

[oracle@rac3 bin]$ ls

addLangs.sh  attachHome.sh  filesList.bat         filesList.sh  resource      runInstaller     runSSHSetup.sh

addNode.sh   detachHome.sh  filesList.properties  lsnodes       runConfig.sh  runInstaller.sh

[oracle@rac3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES=rac3"

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

If this step fails with an error:

[oracle@rac3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES=rac3"

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' failed

The fix is as follows.

The log contains the following:

INFO: Setting variable 'INVENTORY_LOCATION' to '/u01/app/oraInventory'. Received the value from a code block.

INFO: Created OiicStandardInventorySession.

INFO: Checkpoint:getting indexSession from checkpoint factory

INFO: Checkpoint:Index file :/u01/app/oracle/11.2.0/db_1/install/checkpoints/oui/index.xml not found.

INFO: Checkpoint:Initializing checkpoint session in oiicUpdateNodeList.

INFO: Checkpoint:Location is- /u01/app/oracle/11.2.0/db_1/install

INFO: Checkpoint:Initializing checkpoint session in oiicUpdateNodeList.

INFO: Checkpoint:Index session object added to oiicexitops.

INFO: Checkpoint:Initializing checkpoint session for UpdateNodeList.

INFO: Checkpoint:checkpointfile :/u01/app/oracle/11.2.0/db_1/install/checkpoints/oui/checkpoint_null.xml not found,creating one for this session

INFO: Checkpoint:constructing checkpoint with name:oracle.installer.updatenodelist in checkpoint factory

SEVERE: oracle.sysman.oii.oiix.OiixException: The Oracle home '/u01/app/oracle/11.2.0/db_1' could not be updated as it does not exist.

at oracle.sysman.oii.oiic.OiicBaseInventoryApp.getOracleHomeInfo(OiicBaseInventoryApp.java:738)

at oracle.sysman.oii.oiic.OiicUpdateNodeList.doOperation(OiicUpdateNodeList.java:206)

at oracle.sysman.oii.oiic.OiicBaseInventoryApp.main_helper(OiicBaseInventoryApp.java:890)

at oracle.sysman.oii.oiic.OiicUpdateNodeList.main(OiicUpdateNodeList.java:399)

-- Check /etc/oraInst.loc:

[root@rac3 logs]# cat /etc/oraInst.loc

inventory_loc=/u01/app/oraInventory

inst_group=oinstall

[root@rac3 logs]#

-- This node was added from node 1, so check node 1's oraInst.loc file:

[oracle@rac1 ~]$ cat /etc/oraInst.loc

inventory_loc=/u01/oraInventory

inst_group=oinstall

-- Modify node 3's oraInst.loc so it matches node 1:

[root@rac3 logs]# cat /etc/oraInst.loc

inventory_loc=/u01/oraInventory

inst_group=oinstall

-- Update the node list again; this time it succeeds:

[oracle@rac3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/11.2.0/db_1 "CLUSTER_NODES=rac3"

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 2925 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/oraInventory

'UpdateNodeList' was successful.
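To double-check the result, the node list recorded for the home can be read straight from the central inventory (a hedged check; the inventory.xml path follows from the inventory_loc value in /etc/oraInst.loc, so adjust it if yours differs):

[oracle@rac3 bin]$ grep -A 3 "db_1" /u01/oraInventory/ContentsXML/inventory.xml
-- The node list under the database home entry should now contain only rac3.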

4.3 Remove node 3's ORACLE_HOME: run the deinstall command as the oracle user

[oracle@rac3 bin]$ $ORACLE_HOME/deinstall/deinstall -local

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################

## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/db_1

Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database

Oracle Base selected for deinstall is: /u01/app/oracle

Checking for existence of central inventory location /u01/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid

The following nodes are part of this cluster: rac3

Checking for sufficient temp space availability on node(s) : 'rac3'

## [END] Install check configuration ##

Network Configuration check config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2016-06-13_01-13-53-PM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2016-06-13_01-13-56-PM.log

Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2016-06-13_01-13-59-PM.log

Enterprise Manager Configuration Assistant END

Oracle Configuration Manager check START

OCM check log file location : /u01/app/oraInventory/logs//ocm_check3898.log

Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid

The cluster node(s) on which the Oracle home deinstallation will be performed are:rac3

Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac3', and the global configuration will be removed.

Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/db_1

Inventory Location where the Oracle home registered is: /u01/app/oraInventory

The option -local will not modify any database configuration for this Oracle home.

No Enterprise Manager configuration to be updated for any database(s)

No Enterprise Manager ASM targets to update

No Enterprise Manager listener targets to migrate

Checking the config status for CCR

Oracle Home exists with CCR directory, but CCR is not configured

CCR check is finished

Do you want to continue (y - yes, n - no)? [n]: y

A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2016-06-13_01-13-46-PM.out'

Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2016-06-13_01-13-46-PM.err'

######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2016-06-13_01-13-59-PM.log

Updating Enterprise Manager ASM targets (if any)

Updating Enterprise Manager listener targets (if any)

Enterprise Manager Configuration Assistant END

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2016-06-13_01-15-56-PM.log

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2016-06-13_01-15-56-PM.log

De-configuring Local Net Service Names configuration file...

Local Net Service Names configuration file de-configured successfully.

De-configuring backup files...

Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START

OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean3898.log

Oracle Configuration Manager clean END

Setting the force flag to false

Setting the force flag to cleanup the Oracle Base

Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/11.2.0/db_1' on the local node : Done

Failed to delete the directory '/u01/app/oracle'. The directory is in use.

Delete directory '/u01/app/oracle' on the local node : Failed <<<<

Oracle Universal Installer cleanup completed with errors.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2016-06-13_01-13-12PM' on node 'rac3'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################

Cleaning the config for CCR

As CCR is not configured, so skipping the cleaning of CCR configuration

CCR clean is finished

Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node.

Successfully deleted directory '/u01/app/oracle/product/11.2.0/db_1' on the local node.

Failed to delete directory '/u01/app/oracle' on the local node.

Oracle Universal Installer cleanup completed with errors.

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############
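The failure to delete '/u01/app/oracle' above is usually benign: something is still holding the path (often a shell whose current directory is under /u01/app/oracle, or diag/admin directories still in use). A hedged cleanup sketch: identify what is using the directory, then remove it manually as root once you are sure nothing on rac3 still needs it:

[root@rac3 ~]# fuser -v /u01/app/oracle
[root@rac3 ~]# lsof +D /u01/app/oracle
[root@rac3 ~]# rm -rf /u01/app/oracle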

4.4 Update the inventory on node 1 as the oracle user

[root@rac1 ~]# su - oracle

[oracle@rac1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES=rac1,rac2"

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 1868 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

5. Remove the Node at the GRID Level (Clusterware)

The operations in this section are performed as the grid or root user.

5.1 Verify that all nodes are unpinned

[root@rac1 ~]# olsnodes -s -t

rac1   Active Unpinned

rac2   Active Unpinned

rac3   Active Unpinned
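All three nodes report Unpinned here, so nothing further is required. If the node being removed showed up as Pinned instead (which can happen when pre-11.2 databases are registered), it would have to be unpinned before proceeding; a hedged sketch, run as root on a surviving node:

[root@rac1 ~]# crsctl unpin css -n rac3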

5.2 Run the deconfig script on node 3 as the root user

[root@rac3 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -deinstall -force

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Network exists: 1/192.168.8.0/255.255.255.0/eth0, type static

VIP exists: /192.168.8.242/192.168.8.242/192.168.8.0/255.255.255.0/eth0, hosting node rac1

VIP exists: /192.168.8.244/192.168.8.244/192.168.8.0/255.255.255.0/eth0, hosting node rac2

VIP exists: /rac3-vip/192.168.8.247/192.168.8.0/255.255.255.0/eth0, hosting node rac3

GSD exists

ONS exists: Local port 6100, remote port 6200, EM port 2016

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac3'

CRS-2673: Attempting to stop 'ora.crsd' on 'rac3'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac3'

CRS-2673: Attempting to stop 'ora.DATADG.dg' on 'rac3'

CRS-2673: Attempting to stop 'ora.SYSTEMDG.dg' on 'rac3'

CRS-2677: Stop of 'ora.DATADG.dg' on 'rac3' succeeded

CRS-2677: Stop of 'ora.SYSTEMDG.dg' on 'rac3' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'rac3'

CRS-2677: Stop of 'ora.asm' on 'rac3' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac3' has completed

CRS-2677: Stop of 'ora.crsd' on 'rac3' succeeded

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac3'

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac3'

CRS-2673: Attempting to stop 'ora.evmd' on 'rac3'

CRS-2673: Attempting to stop 'ora.asm' on 'rac3'

CRS-2677: Stop of 'ora.evmd' on 'rac3' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'rac3' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rac3' succeeded

CRS-2677: Stop of 'ora.asm' on 'rac3' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac3'

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac3' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac3'

CRS-2677: Stop of 'ora.cssd' on 'rac3' succeeded

CRS-2673: Attempting to stop 'ora.crf' on 'rac3'

CRS-2677: Stop of 'ora.crf' on 'rac3' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac3'

CRS-2677: Stop of 'ora.gipcd' on 'rac3' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac3'

CRS-2677: Stop of 'ora.gpnpd' on 'rac3' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac3' has completed

CRS-4133: Oracle High Availability Services has been stopped.

Removing Trace File Analyzer

Successfully deconfigured Oracle clusterware stack on this node

-- Verify:

[root@rac1 ~]# olsnodes -s

rac1   Active

rac2   Active

rac3   Inactive

5.3 On node 1, delete the node from the cluster

Run as the root user:

[root@rac1 ~]# crsctl delete node -n rac3

CRS-4661: Node rac3 successfully deleted.

5.4 On node 3, update the inventory

[root@rac3 ~]# su - grid

[grid@rac3 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=rac3" -silent -local

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

5.5 Remove the GRID HOME: run deinstall on node 3

Run as the grid user:

[grid@rac3 ~]$ $ORACLE_HOME/deinstall/deinstall -local

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /tmp/deinstall2016-06-13_01-38-44PM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################

## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/11.2.0/grid

Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster

Oracle Base selected for deinstall is: /u01/app/grid

Checking for existence of central inventory location /u01/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home

The following nodes are part of this cluster: rac3

Checking for sufficient temp space availability on node(s) : 'rac3'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2016-06-13_01-38-44PM/logs//crsdc.log

Enter an address or the name of the virtual IP used on node "rac3"[rac3-vip]

 >

The following information can be collected by running "/sbin/ifconfig -a" on node "rac3"

Enter the IP netmask of Virtual IP "192.168.8.247" on node "rac3"[255.255.255.0]

 >

Enter the network interface name on which the virtual IP address "192.168.8.247" is active

 >

Enter an address or the name of the virtual IP[]

 >

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2016-06-13_01-38-44PM/logs/netdc_check2016-06-13_01-39-35-PM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2016-06-13_01-38-44PM/logs/asmcadc_check2016-06-13_01-39-38-PM.log

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is:

The cluster node(s) on which the Oracle home deinstallation will be performed are:rac3

Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac3', and the global configuration will be removed.

Oracle Home selected for deinstall is: /u01/app/11.2.0/grid

Inventory Location where the Oracle home registered is: /u01/app/oraInventory

Following RAC listener(s) will be de-configured: LISTENER

Option -local will not modify any ASM configuration.

Do you want to continue (y - yes, n - no)? [n]: y

A log of this session will be written to: '/tmp/deinstall2016-06-13_01-38-44PM/logs/deinstall_deconfig2016-06-13_01-39-02-PM.out'

Any error messages from this session will be written to: '/tmp/deinstall2016-06-13_01-38-44PM/logs/deinstall_deconfig2016-06-13_01-39-02-PM.err'

######################## CLEAN OPERATION START ########################

ASM de-configuration trace file location: /tmp/deinstall2016-06-13_01-38-44PM/logs/asmcadc_clean2016-06-13_01-39-41-PM.log

ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2016-06-13_01-38-44PM/logs/netdc_clean2016-06-13_01-39-41-PM.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER

    Stopping listener on node "rac3": LISTENER

    Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.

De-configuring Naming Methods configuration file...

Naming Methods configuration file de-configured successfully.

De-configuring backup files...

Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rac3".

/tmp/deinstall2016-06-13_01-38-44PM/perl/bin/perl -I/tmp/deinstall2016-06-13_01-38-44PM/perl/lib -I/tmp/deinstall2016-06-13_01-38-44PM/crs/install /tmp/deinstall2016-06-13_01-38-44PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2016-06-13_01-38-44PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------

---------------------------------------->

When the prompt above appears, open a new terminal window and run the command it displays:

[root@rac3 ~]# /tmp/deinstall2016-06-13_01-38-44PM/perl/bin/perl -I/tmp/deinstall2016-06-13_01-38-44PM/perl/lib -I/tmp/deinstall2016-06-13_01-38-44PM/crs/install /tmp/deinstall2016-06-13_01-38-44PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2016-06-13_01-38-44PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Using configuration parameter file: /tmp/deinstall2016-06-13_01-38-44PM/response/deinstall_Ora11g_gridinfrahome1.rsp

****Unable to retrieve Oracle Clusterware home.

Start Oracle Clusterware stack and try again.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

################################################################

# You must kill processes or reboot the system to properly #

# cleanup the processes started by Oracle clusterware          #

################################################################

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall

error: package cvuqdisk is not installed

Successfully deconfigured Oracle clusterware stack on this node

After it completes, return to the original window and press Enter; that session then continues as follows:

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Press Enter after you finish running the above commands

<----------------------------------------

Remove the directory: /tmp/deinstall2016-06-13_01-38-44PM on node:

Setting the force flag to false

Setting the force flag to cleanup the Oracle Base

Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Delete directory '/u01/app/grid' on the local node : Done

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2016-06-13_01-38-44PM' on node 'rac3'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################

Following RAC listener(s) were de-configured successfully: LISTENER

Oracle Clusterware is stopped and successfully de-configured on node "rac3"

Oracle Clusterware is stopped and de-configured successfully.

Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.

Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.

Successfully deleted directory '/u01/app/oraInventory' on the local node.

Successfully deleted directory '/u01/app/grid' on the local node.

Oracle Universal Installer cleanup was successful.

Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac3' at the end of the session.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac3' at the end of the session.

Run 'rm -rf /etc/oratab' as root on node(s) 'rac3' at the end of the session.

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

5.6 On the remaining nodes, update the inventory

Run on node 1 as the grid user.

[grid@rac1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=rac1,rac2" -silent

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 1848 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

5.7 Use CVU to verify that the node removal succeeded

Run on node 1 as the grid user:

[grid@rac1 ~]$ cluvfy stage -post nodedel -n rac3 -verbose

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed

The Oracle Clusterware is healthy on node "rac2"

The Oracle Clusterware is healthy on node "rac1"

CRS integrity check passed

Result:

Node removal check passed

Post-check for node removal was successful.

6. Verification

6.1 Check the cluster state

[grid@rac1 ~]$ olsnodes -s

rac1   Active

rac2   Active

[grid@rac1 ~]$ olsnodes -n

rac1   1

rac2   2

[grid@rac1 ~]$ crsctl stat res -t

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER                   STATE_DETAILS      

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.DATADG.dg

               ONLINE  ONLINE       rac1                                        

               ONLINE  ONLINE       rac2                                         

ora.LISTENER.lsnr

               ONLINE  ONLINE       rac1                                        

               ONLINE  ONLINE       rac2                                        

ora.SYSTEMDG.dg

               ONLINE  ONLINE       rac1                                        

               ONLINE  ONLINE       rac2                                        

ora.asm

               ONLINE  ONLINE       rac1                     Started            

               ONLINE  ONLINE       rac2                     Started            

ora.gsd

               OFFLINE OFFLINE      rac1                                        

               OFFLINE OFFLINE      rac2                                        

ora.net1.network

               ONLINE  ONLINE       rac1                                        

               ONLINE  ONLINE       rac2                                        

ora.ons

               ONLINE  ONLINE       rac1                                        

               ONLINE  ONLINE       rac2                                        

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       rac2                                        

ora.cvu

      1        ONLINE  ONLINE       rac2                                        

ora.oc4j

      1        ONLINE  ONLINE       rac2                                         

ora.orcl.db

      1        ONLINE  ONLINE       rac1                     Open               

      2        ONLINE  ONLINE       rac2                     Open               

ora.orcl.orcl_taf.svc

      1        ONLINE  ONLINE       rac1                                        

      3        ONLINE  ONLINE       rac2                                        

ora.rac1.vip

      1        ONLINE  ONLINE       rac1                                        

ora.rac2.vip

      1        ONLINE  ONLINE       rac2                                        

ora.scan1.vip

      1        ONLINE  ONLINE       rac2       

     

[root@rac1 ~]# ./crs_stat.sh

Name                             Target     State      Host     

------------------------       ---------- ---------  -------  

ora.DATADG.dg                  ONLINE     ONLINE     rac1     

ora.LISTENER.lsnr              ONLINE     ONLINE     rac1     

ora.LISTENER_SCAN1.lsnr        ONLINE     ONLINE     rac2     

ora.SYSTEMDG.dg                ONLINE     ONLINE     rac1     

ora.asm                        ONLINE     ONLINE     rac1     

ora.cvu                        ONLINE     ONLINE     rac2     

ora.gsd                        OFFLINE    OFFLINE             

ora.net1.network               ONLINE     ONLINE     rac1     

ora.oc4j                       ONLINE     ONLINE     rac2     

ora.ons                        ONLINE     ONLINE     rac1     

ora.orcl.db                    ONLINE     ONLINE     rac1     

ora.orcl.orcl_taf.svc          ONLINE     ONLINE     rac1     

ora.rac1.ASM1.asm              ONLINE     ONLINE     rac1     

ora.rac1.LISTENER_RAC1.lsnr    ONLINE     ONLINE     rac1     

ora.rac1.gsd                   OFFLINE    OFFLINE             

ora.rac1.ons                   ONLINE     ONLINE     rac1     

ora.rac1.vip                   ONLINE     ONLINE     rac1     

ora.rac2.ASM2.asm              ONLINE     ONLINE     rac2     

ora.rac2.LISTENER_RAC2.lsnr    ONLINE     ONLINE     rac2     

ora.rac2.gsd                   OFFLINE    OFFLINE             

ora.rac2.ons                   ONLINE     ONLINE     rac2     

ora.rac2.vip                   ONLINE     ONLINE     rac2     

ora.scan1.vip                  ONLINE     ONLINE     rac2  

6.2 Clean up leftover files

Some directories may still be left on node 3; they can be removed with the commands below (an optional further cleanup sketch follows after them).

Remove the home directories:

rm -rf /u01/app/grid_home

rm -rf /home/oracle

Remove the related files:

rm -rf /tmp/.oracle

rm -rf /var/tmp/.oracle

rm -rf /etc/init/oracle-ohasd.conf

rm -rf /etc/init.d/ohasd

rm -rf /etc/init.d/init.ohasd

rm -rf /etc/oraInst.loc

rm -rf /etc/oratab

rm -rf /etc/oracle
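Before (or instead of) deleting these paths wholesale, it is worth confirming that no clusterware or database processes are left running on node 3; and if the server is being retired from Oracle use altogether, the OS accounts can be removed as well. A hedged sketch (keep the accounts if the host may be added back to a cluster later; group names depend on how the accounts were originally created):

[root@rac3 ~]# ps -ef | grep -E 'ohasd|ora_|asm_' | grep -v grep
[root@rac3 ~]# userdel -r oracle
[root@rac3 ~]# userdel -r grid
[root@rac3 ~]# groupdel oinstall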

7. Summary of 11gR2 Node Addition and Removal

Adding a node to 11gR2 RAC consists of three phases:

1) Phase one copies the GRID HOME to the new node, configures and starts GRID, and updates the OCR and inventory information.

2) Phase two copies the RDBMS HOME to the new node and updates the inventory.

3) Phase three uses DBCA to create the new database instance (including the undo tablespace, redo logs, initialization parameters, and so on) and updates the OCR (including registering the new database instance).

Removing a node in 11gR2 follows exactly the reverse of the steps above, again in three phases.

During node addition or removal the existing nodes stay online, no downtime is required, and client workloads are unaffected. The new node's ORACLE_BASE and ORACLE_HOME directories are created automatically during the add; there is no need to create them manually.

Notes:

1) Take a manual OCR backup before adding or removing a node; if the operation fails in certain cases, restoring the original OCR can resolve the problem.

2) During a normal 11.2 GRID installation the OUI GUI can configure SSH for you, but the addNode.sh node-addition script does not, so SSH user equivalence for the oracle and grid users has to be configured manually (a minimal sketch follows below).
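A minimal manual sketch of SSH user equivalence for the grid user (repeat the same steps for the oracle user; the host names are the ones used in this document, and passwordless ssh must work in every direction between all nodes before addNode.sh is run):

[grid@rac1 ~]$ ssh-keygen -t rsa
[grid@rac1 ~]$ ssh-copy-id grid@rac2
[grid@rac1 ~]$ ssh-copy-id grid@rac3
-- Repeat the key generation and ssh-copy-id from rac2 and rac3 as well, then verify, e.g.:
[grid@rac1 ~]$ ssh rac3 date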

Source: ITPUB blog, http://blog.itpub.net/29812844/viewspace-2119822/
