In this Document
Purpose
Scope
Details
Unpin before node delete |
A. Grid Infrastructure Cluster - Entire Cluster |
Why is deconfigure needed? |
Steps to deconfigure |
B. Grid Infrastructure Cluster - One or Partial Nodes |
Steps to deconfigure and reconfigure |
C. Grid Infrastructure Standalone (Oracle Restart) |
Why is deconfigure needed? |
Steps to deconfigure |
D. Grid Infrastructure Deinstall |
References |
APPLIES TO:
Oracle Database Exadata Express Cloud Service - Version N/A and later
Oracle Database Cloud Schema Service - Version N/A and later
Gen 1 Exadata Cloud at Customer (Oracle Exadata Database Cloud Machine) - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Backup Service - Version N/A and later
Information in this document applies to any platform.
PURPOSE
This note provides instructions to deconfigure/reconfigure or deinstall 11gR2/12c/18c Grid Infrastructure.
Note: in 12c or higher, use rootcrs.sh instead of rootcrs.pl while performing the procedure, if rootcrs.sh exists in the same location as rootcrs.pl.
SCOPE
DETAILS
Unpin before node delete
Before deconfiguring a node, ensure it is not pinned, for example:
<GI_HOME>/bin/olsnodes -s -t
racnode1 Inactive Pinned
racnode2 Active Unpinned
racnode3 Active Unpinned
If a node is pinned, unpin it first as root user:
<GI_HOME>/bin/crsctl unpin css -n <racnode1>
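As a convenience, a minimal scripted sketch (run as root; assumes GI_HOME is exported to the Grid home) that unpins every node currently reported as Pinned:

# Unpin any node whose third olsnodes column reads "Pinned"
for n in $($GI_HOME/bin/olsnodes -s -t | awk '$3 == "Pinned" {print $1}'); do
  $GI_HOME/bin/crsctl unpin css -n "$n"
done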
A. Grid Infrastructure Cluster - Entire Cluster
Deconfiguring and reconfiguring the entire cluster will rebuild OCR and the Voting Disk; user resources (database, instance, service, listener etc.) will need to be added back to the cluster manually after the reconfigure finishes.
The information collected below may be needed to reconfigure the cluster and re-add those resources manually (see the sketch at the end of this Section).
Why is deconfigure needed?
Deconfigure is needed when:
- OCR is corrupted without any good backup
- Or GI stack will not come up on any node due to missing Oracle Clusterware related files in /etc or /var/opt/oracle, e.g. init.ohasd missing. If GI is able to come up on at least one node, refer to the next Section "B. Grid Infrastructure Cluster - One or Partial Nodes".
- Note: $GRID_HOME itself must be intact, as deconfigure will NOT fix $GRID_HOME corruption
Steps to deconfigure
Before deconfiguring, collect the following as the grid user if possible, to generate a list of user resources to be added back to the cluster after the reconfigure finishes:
$GRID_HOME/bin/crsctl stat res -t
$GRID_HOME/bin/crsctl stat res -p
$GRID_HOME/bin/crsctl query css votedisk
$GRID_HOME/bin/ocrcheck
$GRID_HOME/bin/oifcfg getif
$GRID_HOME/bin/srvctl config nodeapps -a
$GRID_HOME/bin/srvctl config scan
$GRID_HOME/bin/srvctl config asm -a
$GRID_HOME/bin/srvctl config listener -l <listener-name> -a
$DB_HOME/bin/srvctl config database -d <dbname> -a
$DB_HOME/bin/srvctl config service -d <dbname> -s <service-name> -v
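To keep this output for later reference, a capture along the following lines can be used (a sketch only; run as the grid user, with GRID_HOME set and the <dbname>/<listener-name>/<service-name> placeholders substituted first):

# Save key cluster configuration to a timestamped file before deconfiguring
LOG=/tmp/gi_config_$(date +%Y%m%d_%H%M%S).log
{
  $GRID_HOME/bin/crsctl stat res -t
  $GRID_HOME/bin/crsctl stat res -p
  $GRID_HOME/bin/ocrcheck
  $GRID_HOME/bin/oifcfg getif
  $GRID_HOME/bin/srvctl config nodeapps -a
  $GRID_HOME/bin/srvctl config scan
} > "$LOG" 2>&1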
If ACFS (ASM Cluster File System) or AFD (ASM Filter Driver) is used, the following data needs to be collected before deconfig or deinstall:
11gR2:
uname -a
df -k
lsmod | grep oracle
rpm -qa | grep -i kmod
$GRID_HOME/bin/acfsdriverstate supported
$GRID_HOME/bin/acfsdriverstate version
$GRID_HOME/bin/acfsdriverstate installed
$GRID_HOME/bin/acfsdriverstate loaded
$ /sbin/acfsutil info fs
$GRID_HOME/bin/asmcmd volinfo -all
$GRID_HOME/bin/asmcmd lsdg
12c:
uname -a
df -k
lsmod | grep oracle
rpm -qa | grep -i kmod
$GRID_HOME/bin/acfsdriverstate supported
$GRID_HOME/bin/acfsdriverstate version
$GRID_HOME/bin/acfsdriverstate installed
$GRID_HOME/bin/acfsdriverstate loaded
$GRID_HOME/bin/afddriverstate supported
$GRID_HOME/bin/afddriverstate version
$GRID_HOME/bin/afddriverstate installed
$GRID_HOME/bin/afddriverstate loaded
$ /sbin/acfsutil info fs
$GRID_HOME/bin/asmcmd showclustermode
$GRID_HOME/bin/asmcmd volinfo --all
$GRID_HOME/bin/asmcmd lsdg
Note: in 12c or higher, use rootcrs.sh instead of rootcrs.pl while performing the procedure, if rootcrs.sh exists in the same location as rootcrs.pl.
To deconfigure:
- If OCR and Voting Disks are NOT on ASM, or if OCR and Voting Disks are on ASM but there's NO user data in the OCR/Voting Disk ASM diskgroup:
On all remote nodes, as root execute:
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose
Once the above command finishes on all remote nodes, on the local node, as root execute:
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose -lastnode
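Before reconfiguring, it may help to confirm the stack is fully down on every node (a sketch; both checks are expected to show that the stack is not running):

# As root on each node
$GRID_HOME/bin/crsctl check crs
ps -ef | grep -E '[o]hasd|[c]rsd'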
To reconfigure, run $GRID_HOME/crs/config/config.sh; refer to note 1354258.1 for details
- If OCR or Voting Disks are on ASM and there is user data in OCR/Voting Disk ASM diskgroup:
- If GI version is 11.2.0.3 AND fix for bug 13058611 and bug 13001955 has been applied, or GI version is 11.2.0.3.2 GI PSU (includes both fixes) or higher:
On all remote nodes, as root execute:
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose
Once the above command finishes on all remote nodes, on the local node, as root execute:
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose -keepdg -lastnode
To reconfigure, run $GRID_HOME/crs/config/config.sh; refer to note 1354258.1 for details
- If the diskgroups are corrupted, you will also need to zero out the disk headers; check with Oracle Support for advice if you are unsure.
$ dd if=/dev/zero of=/dev/oracleasm/disks/xxx bs=1M count=10
- If fix for bug 13058611 and bug 13001955 has NOT been applied:
On all nodes, as root execute:
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -verbose
To reconfigure:
For 11.2.0.1 - deinstall and reinstall with OCR/Voting Disk on a new ASM diskgroup or supported cluster/network filesystem
For 11.2.0.2 and onward - run $GRID_HOME/crs/config/config.sh and place OCR/Voting Disk on a new ASM diskgroup or supported cluster/network filesystem. Refer to note 1354258.1 for more details of config.sh/config.bat
In 12.2, config.sh is replaced with gridSetup.sh, so use gridSetup.sh instead of config.sh to reconfigure on 12.2 and above. Refer to Doc ID 1354258.1 for more details
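Once the cluster is reconfigured, re-add the user resources recorded before the deconfigure. A minimal sketch (the placeholders and the -r preferred-instance list are illustrative and must match your saved srvctl output):

# Re-register a database and its service, then start the database
$DB_HOME/bin/srvctl add database -d <dbname> -o <oracle_home>
$DB_HOME/bin/srvctl add service -d <dbname> -s <service-name> -r "<preferred-instances>"
$DB_HOME/bin/srvctl start database -d <dbname>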
B. Grid Infrastructure Cluster - One or Partial Nodes
This procedure applies only when all of the following are true:
- One or more nodes (but not all) are having problems while at least one other node is running fine - so there's no need to deconfigure the entire cluster
- And GI is a fresh installation (NOT upgraded) without any patch set applied on top (an interim patch or patch set update (PSU) is fine). A direct patch set installation is considered a fresh installation regardless of how long it has been running, as long as no Oracle Clusterware was running when it was first installed.
- And cluster parameters have not been changed since the original configuration, e.g. OCR/VD in the same location, network configuration unchanged etc.
- And $GRID_HOME is intact, as deconfigure will NOT fix $GRID_HOME corruption
If any of the above is NOT true, the node removal/addition procedure should be used: <note 1332451.1> - How to Add Node/Instance or Remove Node/Instance in Oracle Clusterware and RAC
Steps to deconfigure and reconfigure
If ACFS (ASM Cluster File System) or AFD (ASM Filter Driver) is used, gather the following data first:
11gR2:
uname -a
df -k
lsmod | grep oracle
rpm -qa | grep -i kmod
$GRID_HOME/bin/acfsdriverstate supported
$GRID_HOME/bin/acfsdriverstate version
$GRID_HOME/bin/acfsdriverstate installed
$GRID_HOME/bin/acfsdriverstate loaded
$ /sbin/acfsutil info fs
$GRID_HOME/bin/asmcmd volinfo -all
$GRID_HOME/bin/asmcmd lsdg
12c:
uname -a
df -k
lsmod | grep oracle
rpm -qa | grep -i kmod
$GRID_HOME/bin/acfsdriverstate supported
$GRID_HOME/bin/acfsdriverstate version
$GRID_HOME/bin/acfsdriverstate installed
$GRID_HOME/bin/acfsdriverstate loaded
$GRID_HOME/bin/afddriverstate supported
$GRID_HOME/bin/afddriverstate version
$GRID_HOME/bin/afddriverstate installed
$GRID_HOME/bin/afddriverstate loaded
$ /sbin/acfsutil info fs
$GRID_HOME/bin/asmcmd showclustermode
$GRID_HOME/bin/asmcmd volinfo --all
$GRID_HOME/bin/asmcmd lsdg
As root, on each problematic node, execute:
# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force
# <$GRID_HOME>/root.sh
For Windows platform, since root.sh doesn't exist, use node removal/addition procedure: note 1332451.1 - How to Add Node/Instance or Remove Node/Instance in Oracle Clusterware and RAC
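After root.sh completes on each repaired node, the cluster state can be verified from any node, for example (a sketch):

# Confirm the stack is up everywhere and the repaired node has rejoined
$GRID_HOME/bin/crsctl check cluster -all
$GRID_HOME/bin/olsnodes -s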
C. Grid Infrastructure Standalone (Oracle Restart)
Why is deconfigure needed?
Deconfigure is needed when:
- OLR is corrupted without any good backup
- GI stack will not come up due to missing Oracle Clusterware related files in /etc or /var/opt/oracle, e.g. init.ohasd is missing
- Nodename needs to be changed
Steps to deconfigure
Before deconfiguring, collect the following if possible:
$GRID_HOME/bin/crsctl stat res -t
$GRID_HOME/bin/crsctl stat res -p
$GRID_HOME/bin/srvctl config asm -a
$GRID_HOME/bin/srvctl config listener -l <listener-name> -a
$DB_HOME/bin/srvctl config database -d <dbname> -a
$DB_HOME/bin/srvctl config service -d <dbname> -s <service-name> -v
If ACFS (ASM Cluster File System) or AFD (ASM Filter Driver) is used, the following data needs to be collected before deconfig or deinstall:
11gR2:
uname -a
df -k
lsmod | grep oracle
rpm -qa | grep -i kmod
$GRID_HOME/bin/acfsdriverstate supported
$GRID_HOME/bin/acfsdriverstate version
$GRID_HOME/bin/acfsdriverstate installed
$GRID_HOME/bin/acfsdriverstate loaded
$ /sbin/acfsutil info fs
$GRID_HOME/bin/asmcmd volinfo -all
$GRID_HOME/bin/asmcmd lsdg
12c:
uname -a
df -k
lsmod | grep oracle
rpm -qa | grep -i kmod
$GRID_HOME/bin/acfsdriverstate supported
$GRID_HOME/bin/acfsdriverstate version
$GRID_HOME/bin/acfsdriverstate installed
$GRID_HOME/bin/acfsdriverstate loaded
$GRID_HOME/bin/afddriverstate supported
$GRID_HOME/bin/afddriverstate version
$GRID_HOME/bin/afddriverstate installed
$GRID_HOME/bin/afddriverstate loaded
$ /sbin/acfsutil info fs
$GRID_HOME/bin/asmcmd showclustermode
$GRID_HOME/bin/asmcmd volinfo --all
$GRID_HOME/bin/asmcmd lsdg
To deconfigure:
As root execute:
# <$GRID_HOME>/crs/install/roothas.pl -deconfig -force -verbose
(in 12c or higher, use roothas.sh instead of roothas.pl if it exists in the same location)
To reconfigure, refer to note 1422517.1
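After reconfiguring, re-register the Oracle Restart resources recorded before the deconfigure. A minimal sketch (placeholders are illustrative and must match your saved srvctl output):

# Re-register the listener and database with Oracle Restart, then start the database
$GRID_HOME/bin/srvctl add listener -l <listener-name>
$DB_HOME/bin/srvctl add database -d <dbname> -o <oracle_home>
$DB_HOME/bin/srvctl start database -d <dbname>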
D. Grid Infrastructure Deinstall
As grid user, execute:
$ <$GRID_HOME>/deinstall/deinstall
If you only need to deconfigure one node, use deinstall -local; without -local, it will deinstall all nodes.
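For example (a sketch; -checkonly performs checks and reports what would be removed without deinstalling anything):

$ <$GRID_HOME>/deinstall/deinstall -checkonly
$ <$GRID_HOME>/deinstall/deinstall -local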
For details, refer to the following documentation for your platform: Oracle Grid Infrastructure Installation Guide, section "How to Modify or Deinstall Oracle Grid Infrastructure".
If there's any error, deconfigure the failed GI with the steps in Sections A - C, and deinstall manually with note 1364419.1
For 12.2:
If an admin-managed database is configured, first deinstall the database home (by running the deinstall tool from the <db_home>/deinstall directory as the database home software owner) before attempting to remove the grid home; otherwise the following error is reported:
ERROR: You must delete or downgrade Clusterware-managed Oracle databases and de-install Clusterware-managed Oracle homes before attempting to remove the Oracle Clusterware home.
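In practice the order looks like this (a sketch; the home paths are illustrative):

# 1. As the database home owner, remove the database home first
$ <db_home>/deinstall/deinstall
# 2. Then, as the grid user, remove the grid home
$ <grid_home>/deinstall/deinstall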
For 18c:
To deconfigure, as root execute:
# <Grid_home>/crs/install/rootcrs.sh -deconfig -force
REFERENCES
NOTE:969254.1 - How to Proceed from Failed Upgrade to 11gR2 Grid Infrastructure on Linux/Unix
NOTE:1354258.1 - How to Configure or Re-configure Grid Infrastructure With config.sh/config.bat
NOTE:1069369.1 - How to Delete From or Add Resource to OCR in Oracle Clusterware
NOTE:1364419.1 - How to Deinstall Oracle Clusterware Home Manually
NOTE:399482.1 - Pre-11.2: How to Recreate OCR/Voting Disk Accidentally Deleted
GOAL
In 11gR2, deinstall is the recommended tool to deinstall an Oracle Clusterware (Grid Infrastructure) home; however, it does not apply in certain scenarios or failure cases.
This note provides instructions to manually remove the current/active clusterware home.
Before removing the current/active clusterware home, it's necessary to deconfigure Grid Infrastructure; refer to note 1377349.1 for steps.
To remove a pre-upgrade clusterware home after a successful upgrade to a newer GI version, refer to note 1346305.1.
SOLUTION
Before removing the current/active clusterware home, it's necessary to deconfigure Grid Infrastructure; refer to note 1377349.1 for steps.
To remove a home, as clusterware user execute the following on any node:
export ORACLE_HOME=<clusterware-home>
## detach ORACLE_HOME
$ORACLE_HOME/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=$ORACLE_HOME
## confirm $ORACLE_HOME is removed from central inventory:
$ORACLE_HOME/OPatch/opatch lsinventory -all
## remove files in ORACLE_HOME manually on all nodes
/bin/rm -rf $ORACLE_HOME  ## if grid user fails to remove all files, switch to root user
unset ORACLE_HOME
If it fails for any reason, as clusterware user execute the following on all nodes:
export ORACLE_HOME=<clusterware-home>
## detach ORACLE_HOME
$ORACLE_HOME/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=$ORACLE_HOME
## confirm $ORACLE_HOME is removed from central inventory:
$ORACLE_HOME/OPatch/opatch lsinventory -all
## remove files in ORACLE_HOME manually
/bin/rm -rf $ORACLE_HOME  ## if grid user fails to remove all files, switch to root user
unset ORACLE_HOME