RAC 11.2g Node Deletion

lmocm191 is the node to be deleted; lmocm189 and lmocm190 are the nodes that remain.   --Reference: 《构建最高可用oracle数据库系统》

1 Remove the EM (Enterprise Manager) configuration.  --Run on the node to be deleted.
[oracle@lmocm191 ~]$ emca -deleteNode db
STARTED EMCA at Aug 2, 2013 5:25:12 PM
EM Configuration Assistant, Version 11.2.0.3.0 Production
Copyright (c) 2003, 2011, Oracle.  All rights reserved.

Enter the following information:
Database unique name: lmocm    
Service name: lmocm
Node name: lmocm191
Database SID: lmocm3

Do you wish to continue? [yes(Y)/no(N)]: y
Aug 2, 2013 5:26:04 PM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /u01/app/oracle/cfgtoollogs/emca/lmocm/lmocm3/emca_2013_08_02_17_25_10.log.
Aug 2, 2013 5:26:29 PM oracle.sysman.emcp.util.GeneralUtil initSQLEngineLoacly
SEVERE: No SID specified
Aug 2, 2013 5:26:29 PM oracle.sysman.emcp.DatabaseChecks throwDBUnavailableException
SEVERE: 
Database instance is unavailable. Fix the ORA error thrown and run EM Configuration Assistant again.

Some of the possible reasons may be : 

1) Database may not be up. 
2) Database is started setting environment variable ORACLE_HOME with trailing '/'. Reset ORACLE_HOME and bounce the database. 

For eg. Database is started setting environment variable ORACLE_HOME=/scratch/db/ . Reset ORACLE_HOME=/scratch/db  and bounce the database.
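The failure above means emca could not reach a local instance. A minimal sketch of one possible remediation on lmocm191, assuming the Oracle home shown later in this document and that instance lmocm3 is running (start it with srvctl first if it is not):

[oracle@lmocm191 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2/dbhome_1   # no trailing '/'
[oracle@lmocm191 ~]$ export ORACLE_SID=lmocm3
[oracle@lmocm191 ~]$ export ORACLE_UNQNAME=lmocm
[oracle@lmocm191 ~]$ srvctl start instance -d lmocm -i lmocm3    # only if the instance is down
[oracle@lmocm191 ~]$ $ORACLE_HOME/bin/emca -deleteNode db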

2 Delete the instance.  --Run on a retained node:
[oracle@lmocm189 ~]$ dbca -silent -deleteInstance -nodeList lmocm191  -gdbName lmocm -instanceName lmocm3 -sysDBAUserName sys -sysDBAPassword oracle
Deleting instance
20% complete
21% complete
22% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/lmocm0.log" for further details.
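One way to confirm the instance was removed (a sketch, not part of the original session) is to check the cluster configuration and the surviving instances from a retained node:

[oracle@lmocm189 ~]$ srvctl config database -d lmocm          # lmocm3 should no longer be listed
[oracle@lmocm189 ~]$ sqlplus -S / as sysdba <<'EOF'
select inst_id, instance_name, status from gv$instance;  -- only the instances on lmocm189 and lmocm190 should remain
select thread#, status, instance from v$thread;
EOF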

3 Remove the database software from the node:
3.1 Disable and stop the listener.  --On the node to be deleted.

[root@lmocm191 ~]# su - grid
[grid@lmocm191 ~]$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 05-AUG-2013 14:20:51

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                01-AUG-2013 15:47:39
Uptime                    3 days 22 hr. 33 min. 12 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/11.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/lmocm191/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.103.191)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.103.202)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM3", status READY, has 1 handler(s) for this service...
The command completed successfully
[grid@lmocm191 ~]$ srvctl disable listener -n lmocm191
[grid@lmocm191 ~]$ srvctl stop listener -n lmocm191

[grid@lmocm191 ~]$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 05-AUG-2013 14:39:20

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
TNS-12541: TNS:no listener
 TNS-12560: TNS:protocol adapter error
  TNS-00511: No listener
   Linux Error: 2: No such file or directory

3.2 Update the inventory.  --Run in $ORACLE_HOME/oui/bin on the node to be deleted:
$ ./runInstaller -updateNodeList ORACLE_HOME=oracle_home_location "CLUSTER_NODES={name_of_node_to_delete}" -local
    --{name_of_node_to_delete} is the node being deleted; if there is more than one, separate them with ",". For this cluster, see the sketch below.
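For this cluster the command would look like the following (a sketch; the output of this step was not captured in the original session):

[oracle@lmocm191 ~]$ cd /u01/app/oracle/product/11.2/dbhome_1/oui/bin
[oracle@lmocm191 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2/dbhome_1 "CLUSTER_NODES={lmocm191}" -local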

3.3 Remove the RAC database home.  --Two modes:
a, Shared Oracle RAC home  --Run in $ORACLE_HOME/oui/bin on the node to be deleted:
$ ./runInstaller -detachHome ORACLE_HOME=<oracle_home_location>
          -- <oracle_home_location> is the Oracle home, here /u01/app/oracle/product/11.2/dbhome_1 (see the sketch below).
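With the home path used in this cluster, the detach command would look like this (a sketch for the shared-home case only; this cluster uses a non-shared home, so the deinstall path below was taken instead):

$ ./runInstaller -detachHome ORACLE_HOME=/u01/app/oracle/product/11.2/dbhome_1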

b, Non-shared Oracle RAC home  --Run as the oracle user in $ORACLE_HOME/deinstall on the node to be deleted:
$ ./deinstall -local

[oracle@lmocm191 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2013-08-06_10-49-31AM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
The deinstall tool cannot determine the home type needed to deconfigure the selected home.  Please select the type of Oracle home you are trying to deinstall.
Single Instance database - Enter 1
Real Application Cluster database - Enter 2
Grid Infrastructure for a cluster - Enter 3
Grid Infrastructure for a stand-alone server - Enter 4
Client Oracle Home - Enter 5
Transparent Gateways Oracle Home - Enter 6
1
The product version number of the specified home cannot be determined. Is the product version at least 11.2.0.1.0 (y - yes, n - no)? [n]
y


Checking for existence of the Oracle home location /u01/app/oracle/product/11.2/dbhome_1
Oracle Home type selected for deinstall is: Oracle Single Instance Database
Oracle Base selected for deinstall is: 
Checking for existence of central inventory location /u01/app/oraInventory
Checking for sufficient temp space availability on node(s) : 'lmocm191'

## [END] Install check configuration ##


Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2013-08-06_10-49-31AM/logs/netdc_check2013-08-06_10-50-06-AM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /tmp/deinstall2013-08-06_10-49-31AM/logs/databasedc_check2013-08-06_10-50-07-AM.log

Use comma as separator when specifying list of values as input

Specify the list of database names that are configured in this Oracle home [lmocm]: lmocm
Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /tmp/deinstall2013-08-06_10-49-31AM/logs/emcadc_check2013-08-06_10-50-18-AM.log 

Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /tmp/deinstall2013-08-06_10-49-31AM/logs//ocm_check4176.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2/dbhome_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2013-08-06_10-49-31AM/logs/deinstall_deconfig2013-08-06_10-49-58-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2013-08-06_10-49-31AM/logs/deinstall_deconfig2013-08-06_10-49-58-AM.err'

######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /tmp/deinstall2013-08-06_10-49-31AM/logs/emcadc_clean2013-08-06_10-50-18-AM.log 

Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /tmp/deinstall2013-08-06_10-49-31AM/logs/databasedc_clean2013-08-06_10-50-23-AM.log

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2013-08-06_10-49-31AM/logs/netdc_clean2013-08-06_10-50-23-AM.log

De-configuring Local Net Service Names configuration file...
Local Net Service Names configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /tmp/deinstall2013-08-06_10-49-31AM/logs//ocm_clean4176.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Delete directory '/u01/app/oracle/product/11.2/dbhome_1' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2013-08-06_10-49-31AM' on node 'lmocm191'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully deleted directory '/u01/app/oracle/product/11.2/dbhome_1' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Oracle Universal Installer cleanup was successful.


Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'lmocm191' at the end of the session.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'lmocm191' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

[root@lmocm191 ~]# rm -rf /etc/oraInst.loc
[root@lmocm191 ~]# rm -rf /opt/ORCLfmap


3.4 Update the inventory on the other nodes.  --Run as the oracle user in $ORACLE_HOME/oui/bin on the retained nodes:
$ ./runInstaller -updateNodeList ORACLE_HOME=<oracle_home_location> "CLUSTER_NODES={remaining_node_list}"

[oracle@lmocm190 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2/dbhome_1/ "CLUSTER_NODES={lmocm189,lmocm190}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3129 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.


4 Remove the Clusterware (Grid) software
 4.1 Make sure the $GRID_HOME environment variable is set correctly on every node (a quick check is sketched below).
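A quick sanity check on each node (a sketch, not from the original session):

[grid@lmocm191 ~]$ echo $GRID_HOME    # should print /u01/app/11.2/grid on this cluster
[grid@lmocm191 ~]$ which crsctl       # should resolve under $GRID_HOME/bin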
 4.2 Check whether any node is pinned (run as root or grid):
$ olsnodes -s -t

[grid@lmocm189 ~]$ olsnodes -s -t
lmocm189 Active Unpinned
lmocm190 Active Unpinned
lmocm191 Inactive Unpinned
 
If the command above reports a node as Pinned, unpin it with crsctl unpin css (a sketch follows).
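A sketch of the unpin call for this node, run from $GRID_HOME/bin (not needed here, since all nodes are Unpinned):

# ./crsctl unpin css -n lmocm191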

 4.3 As root, run the following in $GRID_HOME/crs/install on the node to be deleted:
# ./rootcrs.pl -deconfig -force  --(If more than one node is being deleted, run this on each of them; if all nodes are being deleted, add the -lastnode option on the last one, which also clears the OCR and the voting disks. See the sketch below.)
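If this were the last node of the cluster, the call would include -lastnode (a sketch only; it was not needed here):

# ./rootcrs.pl -deconfig -force -lastnode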

 [root@lmocm191 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.103.0/255.255.255.0/eth0, type static
VIP exists: /vip189/192.168.103.200/192.168.103.0/255.255.255.0/eth0, hosting node lmocm189
VIP exists: /vip190/192.168.103.201/192.168.103.0/255.255.255.0/eth0, hosting node lmocm190
VIP exists: /vip191/192.168.103.202/192.168.103.0/255.255.255.0/eth0, hosting node lmocm191
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
PRKO-2425 : VIP is already stopped on node(s): lmocm191

CRS-2673: Attempting to stop 'ora.registry.acfs' on 'lmocm191'
CRS-2677: Stop of 'ora.registry.acfs' on 'lmocm191' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'lmocm191'
CRS-2673: Attempting to stop 'ora.crsd' on 'lmocm191'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'lmocm191'
CRS-2673: Attempting to stop 'ora.ARCHIVELOG.dg' on 'lmocm191'
CRS-2673: Attempting to stop 'ora.DBFILE.dg' on 'lmocm191'
CRS-2673: Attempting to stop 'ora.FLASHBACK.dg' on 'lmocm191'
CRS-2673: Attempting to stop 'ora.LOGFILE.dg' on 'lmocm191'
CRS-2673: Attempting to stop 'ora.VODISK.dg' on 'lmocm191'
CRS-2677: Stop of 'ora.LOGFILE.dg' on 'lmocm191' succeeded
CRS-2677: Stop of 'ora.DBFILE.dg' on 'lmocm191' succeeded
CRS-2677: Stop of 'ora.FLASHBACK.dg' on 'lmocm191' succeeded
CRS-2677: Stop of 'ora.ARCHIVELOG.dg' on 'lmocm191' succeeded
CRS-2677: Stop of 'ora.VODISK.dg' on 'lmocm191' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'lmocm191'
CRS-2677: Stop of 'ora.asm' on 'lmocm191' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'lmocm191' has completed
CRS-2677: Stop of 'ora.crsd' on 'lmocm191' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'lmocm191'
CRS-2673: Attempting to stop 'ora.ctssd' on 'lmocm191'
CRS-2673: Attempting to stop 'ora.evmd' on 'lmocm191'
CRS-2673: Attempting to stop 'ora.asm' on 'lmocm191'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'lmocm191'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'lmocm191'
CRS-2677: Stop of 'ora.crf' on 'lmocm191' succeeded
CRS-2677: Stop of 'ora.evmd' on 'lmocm191' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'lmocm191' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'lmocm191' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'lmocm191' succeeded
CRS-2677: Stop of 'ora.asm' on 'lmocm191' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'lmocm191'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'lmocm191' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'lmocm191'
CRS-2677: Stop of 'ora.cssd' on 'lmocm191' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'lmocm191'
CRS-2677: Stop of 'ora.gipcd' on 'lmocm191' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'lmocm191'
CRS-2677: Stop of 'ora.gpnpd' on 'lmocm191' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'lmocm191' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

4.4 On any retained node, as root, run the following in $GRID_HOME/bin to delete the node:
# ./crsctl delete node -n <node_to_be_deleted>
[root@lmocm189 bin]# ./crsctl delete node -n lmocm191
CRS-4661: Node lmocm191 successfully deleted.

4.5 On any retained node, as the grid user, run the following in $GRID_HOME/oui/bin to update that node's local inventory:
$ ./runInstaller -updateNodeList ORACLE_HOME=GRID_HOME "CLUSTER_NODES={remaining_node_list}" CRS=TRUE -silent -local   --({remaining_node_list} is the list of nodes that remain, here lmocm189,lmocm190.)

[grid@lmocm189 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2/grid/ "CLUSTER_NODES={lmocm189,lmocm190}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2263 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

4.6 As the grid user, remove the Grid software in one of two modes. (Oracle's documented procedure runs this step on the node being deleted; the transcript below was captured on retained node lmocm189, where the tool refuses to proceed because that node still hosts Clusterware-managed databases.)

a, If the Grid home is on shared storage, run the following in $GRID_HOME/oui/bin (see the sketch below):
$ ./runInstaller -detachHome ORACLE_HOME=GRID_HOME -silent -local
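With the Grid home used in this cluster, the shared-home variant would look like this (a sketch only; the non-shared path was taken below):

$ ./runInstaller -detachHome ORACLE_HOME=/u01/app/11.2/grid -silent -local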

b, If the Grid home is not shared, run the following from $GRID_HOME/deinstall:
$ ./deinstall -local
[grid@lmocm189 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/11.2/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2/grid
The following nodes are part of this cluster: lmocm189,lmocm190
Checking for sufficient temp space availability on node(s) : 'lmocm189,lmocm190'

## [END] Install check configuration ##

Traces log file: /u01/app/oraInventory/logs//crsdc.log
ERROR: You must delete or downgrade Clusterware-managed Oracle databases and de-install Clusterware-managed Oracle homes before attempting to remove the Oracle Clusterware home.

Network Configuration check config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2013-08-06_01-06-23-PM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check2013-08-06_01-06-26-PM.log


######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:lmocm189,lmocm190
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'lmocm189', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2013-08-06_01-06-05-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2013-08-06_01-06-05-PM.err'

ERROR: The deconfiguration and deinstallation tool has detected runtime errors when checking the existing configuration due to which the tool cannot continue with clean up operation.  Please check the log files for more information.  Rerun the tool after fixing the errors to proceed with the ORACLE_HOME clean up.

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

4.7 On any retained node, as the grid user, run the following in $GRID_HOME/oui/bin to update the inventory with the list of retained nodes:
$./runInstaller -updateNodeList ORACLE_HOME=GRID_HOME "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE -silent

[grid@lmocm189 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2/grid "CLUSTER_NODES={lmocm189,lmocm190}" CRS=TRUE -silent
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2235 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

4.8 On any retained node, as the grid user, run the CVU command to verify that the node was removed successfully.
cluvfy stage -post nodedel -n node_list [-verbose]

[grid@lmocm189 ~]$ cluvfy stage -post nodedel -n lmocm191 -verbose

Performing post-checks for node removal 

Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "lmocm190"
The Oracle Clusterware is healthy on node "lmocm189"

CRS integrity check passed
Result: 
Node removal check passed

Post-check for node removal was successful.
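As a final check, one can also confirm from a retained node that the deleted node has disappeared from the cluster and from the database configuration (a sketch, not from the original session):

[grid@lmocm189 ~]$ olsnodes -s -t                      # lmocm191 should no longer be listed
[oracle@lmocm189 ~]$ srvctl status database -d lmocm   # only the instances on lmocm189 and lmocm190 should remain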
