GI Deinstallation
[grid@rac1 grid_1]$ cd deinstall/
[grid@rac1 deinstall]$ ls
bootstrap.pl deinstall deinstall.pl deinstall.xml jlib readme.txt response sshUserSetup.sh
[grid@rac1 deinstall]$ ./deinstall
Network Configuration clean config END
---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Following the prompts, run the script on node 2 first:
Run the following command as the root user or the administrator on node "rac2".
/tmp/deinstall2022-08-14_07-13-25PM/perl/bin/perl -I/tmp/deinstall2022-08-14_07-13-25PM/perl/lib -I/tmp/deinstall2022-08-14_07-13-25PM/crs/install /tmp/deinstall2022-08-14_07-13-25PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2022-08-14_07-13-25PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -- while the script runs, just press Enter at each prompt
Then run the script on node 1:
Run the following command as the root user or the administrator on node "rac1".
/tmp/deinstall2022-08-14_07-13-25PM/perl/bin/perl -I/tmp/deinstall2022-08-14_07-13-25PM/perl/lib -I/tmp/deinstall2022-08-14_07-13-25PM/crs/install /tmp/deinstall2022-08-14_07-13-25PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2022-08-14_07-13-25PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode -- while the script runs, just press Enter at each prompt
Execution on node 2:
[root@rac2 ~]# /tmp/deinstall2022-08-14_07-13-25PM/perl/bin/perl -I/tmp/deinstall2022-08-14_07-13-25PM/perl/lib -I/tmp/deinstall2022-08-14_07-13-25PM/crs/install /tmp/deinstall2022-08-14_07-13-25PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2022-08-14_07-13-25PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2022-08-14_07-13-25PM/response/deinstall_Ora11g_gridinfrahome1.rsp
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
Execution on node 1:
[root@rac1 ~]# /tmp/deinstall2022-08-14_07-13-25PM/perl/bin/perl -I/tmp/deinstall2022-08-14_07-13-25PM/perl/lib -I/tmp/deinstall2022-08-14_07-13-25PM/crs/install /tmp/deinstall2022-08-14_07-13-25PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2022-08-14_07-13-25PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Using configuration parameter file: /tmp/deinstall2022-08-14_07-13-25PM/response/deinstall_Ora11g_gridinfrahome1.rsp
Network exists: 1/192.168.159.0/255.255.255.0/ens33, type static
VIP exists: /rac1-vip/192.168.159.103/192.168.159.0/255.255.255.0/ens33, hosting node rac1
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-4611: Successful deletion of voting disk +CRS.
ASM de-configuration trace file location: /tmp/deinstall2022-08-14_07-13-25PM/logs/asmcadc_clean2022-08-14_07-18-52-PM.log
ASM Clean Configuration START
ASM Clean Configuration END
ASM with SID +ASM1 deleted successfully. Check /tmp/deinstall2022-08-14_07-13-25PM/logs/asmcadc_clean2022-08-14_07-18-52-PM.log for details.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
After both scripts have completed, return to the deinstall program and press Enter:
Press Enter after you finish running the above commands
####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Oracle Clusterware is stopped and successfully de-configured on node "rac1"
Oracle Clusterware is stopped and successfully de-configured on node "rac2"
Oracle Clusterware is stopped and de-configured successfully.
Oracle Universal Installer cleanup completed with errors.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac1' at the end of the session.
Run 'rm -rf /etc/oratab' as root on node(s) 'rac1' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
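The summary above asks for two rm commands to be run as root on rac1 at the end of the session, and it is worth confirming at the same time that no clusterware daemons survived the deconfig. A minimal sketch of that final step; the daemon name patterns are an assumption about the usual 11g process names (ocssd, crsd, evmd, ohasd), not something the deinstall tool prints:

```shell
#!/bin/sh
# Sketch: final root-side cleanup on rac1 after the deinstall tool exits.

# Filter a ps listing (stdin) down to surviving clusterware daemons.
# Empty output means the stack is fully down. The bracketed first
# letters keep the grep process itself from matching its own pattern.
surviving_daemons() {
    grep -E '[o]cssd|[c]rsd\.bin|[e]vmd|[o]hasd' || true
}

final_cleanup() {
    # Paths copied verbatim from the deinstall summary; run as root.
    rm -rf /opt/ORCLfmap
    rm -rf /etc/oratab
    ps -ef | surviving_daemons
}
```

Once the stack is fully deconfigured, `final_cleanup` should print nothing.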
Wipe the ASM disk headers
[root@rac1 ~]# ll -al /dev/sd*
brw-rw---- 1 root disk 8, 0 Aug 14 17:33 /dev/sda
brw-rw---- 1 root disk 8, 1 Aug 14 17:33 /dev/sda1
brw-rw---- 1 root disk 8, 2 Aug 14 17:33 /dev/sda2
brw-rw---- 1 grid asmadmin 8, 16 Aug 14 19:19 /dev/sdb
brw-rw---- 1 grid asmadmin 8, 32 Aug 14 19:19 /dev/sdc
brw-rw---- 1 grid asmadmin 8, 48 Aug 14 19:19 /dev/sdd
brw-rw---- 1 grid asmadmin 8, 64 Aug 14 19:19 /dev/sde
brw-rw---- 1 grid asmadmin 8, 80 Aug 14 19:19 /dev/sdf
[root@rac1 ~]# dd if=/dev/zero of=/dev/sdb bs=512K count=20
20+0 records in
20+0 records out
10485760 bytes (10 MB) copied, 0.0119188 s, 880 MB/s
[root@rac1 ~]# dd if=/dev/zero of=/dev/sdc bs=512K count=20
20+0 records in
20+0 records out
10485760 bytes (10 MB) copied, 0.00998202 s, 1.1 GB/s
[root@rac1 ~]# dd if=/dev/zero of=/dev/sdd bs=512K count=20
20+0 records in
20+0 records out
10485760 bytes (10 MB) copied, 0.00990024 s, 1.1 GB/s
[root@rac1 ~]# dd if=/dev/zero of=/dev/sde bs=512K count=20
20+0 records in
20+0 records out
10485760 bytes (10 MB) copied, 0.00735018 s, 1.4 GB/s
[root@rac1 ~]# dd if=/dev/zero of=/dev/sdf bs=512K count=20
20+0 records in
20+0 records out
10485760 bytes (10 MB) copied, 0.0073563 s, 1.4 GB/s
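The five dd invocations above can be collapsed into one loop. A minimal sketch, assuming the same device list shown by the ll output (/dev/sdb through /dev/sdf); double-check the devices before running this as root, since zeroing the header region irreversibly destroys the ASM disk group metadata:

```shell
#!/bin/sh
# Sketch: zero the first 10 MB (20 x 512 KiB) of every disk passed as
# an argument -- the same region each per-disk dd command above clears.

wipe_header() {
    # conv=notrunc leaves the target's length alone for block devices
    # and also makes the function safe to exercise against plain files.
    dd if=/dev/zero of="$1" bs=512K count=20 conv=notrunc 2>/dev/null
}

for dev in "$@"; do
    wipe_header "$dev"
done
```

Usage (as root): `sh wipe_headers.sh /dev/sd[b-f]` -- the glob expands to the five ASM disks listed above.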