1. Introduction to the rootcrs.pl command
#Command location: $GRID_HOME/crs/install
#Description:
# rootcrs.pl is used to maintain and manage CRS, covering operations such as patch, upgrade, downgrade, and deconfig.
# Run 'perldoc rootcrs.pl' for the full manpage.
[root@rac1 install]# ./rootcrs.pl -h
Unknown option: h
Usage:
rootcrs.pl [-verbose] [-upgrade [-force] | -patch]
[-paramfile <parameter-file>]
[-deconfig [-deinstall] [-keepdg] [-force] [-lastnode]]
[-downgrade -oldcrshome <old crshome path> -version <old crs version> [-force] [-lastnode]]
[-unlock [-crshome <path to crs home>] [-nocrsstop]]
[-init]
Options:
-verbose Run this script in verbose mode
-upgrade Oracle HA is being upgraded from previous version
-patch Oracle HA is being upgraded to a patch version
-paramfile Complete path of file specifying HA parameter values
-lastnode Force the node this script is executing on to be considered
as the last node of deconfiguration or downgrade, and perform
actions associated with deconfiguring or downgrading the
last node
-downgrade Downgrade the clusterware
-version For use with downgrade; special handling is required if
downgrading to 9i. This is the old crs version in the format
A.B.C.D.E (e.g 11.1.0.6.0).
-deconfig Remove Oracle Clusterware to allow it to be uninstalled or reinstalled
-force Force the execution of steps in delete or downgrade that cannot
be verified to be safe
-deinstall Reset the permissions on CRS home during de-configuration
-keepdg Keep existing diskgroups during de-configuration
-unlock Unlock CRS home
-crshome Complete path of crs home. Use with unlock option
-oldcrshome For use with downgrade. Complete path of the old crs home
-nocrsstop used with unlock option to reset permissions on an inactive grid home
-init Reset the permissions of all files and directories under CRS home
If neither -upgrade nor -patch is supplied, a new install is performed
To see the full manpage for this program, execute:
perldoc rootcrs.pl
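For quick reference, the invocations used in the rest of this article can be sketched together. This is a dry-run sketch (the helper only prints each command), since all of them must be run as root on a live cluster node; the Grid home path is this article's example, so substitute your own:

```shell
# Example Grid home from this article; substitute your own path.
GRID_HOME=/u01/app/11.2.0/grid
ROOTCRS=$GRID_HOME/crs/install/rootcrs.pl

# Print the commands rather than run them: every one of these must be
# executed as root on a live cluster node.
show() { echo "perl $*"; }

show "$ROOTCRS" -verbose -deconfig -force            # deconfigure one node
show "$ROOTCRS" -verbose -deconfig -force -lastnode  # last node: also clears OCR/voting disks
show "$ROOTCRS" -unlock -crshome "$GRID_HOME"        # unlock the Grid home, e.g. before manual patching
```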
2. Reconfiguring Grid Infrastructure and ASM
#Reconfiguring Grid Infrastructure does not remove the installed binaries; it simply returns the system to the state it was in before CRS was configured. The steps are as follows:
a. Log in as root and run the command below on every node except the last one:
# perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
b. Again as root, run the command below on the last node; it clears the OCR configuration and the voting disks:
# perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode
c. If ASM disks were used, continue with the steps below so the disks become ASM candidates again (this wipes all ASM disk groups):
# dd if=/dev/zero of=/dev/sdc1 bs=1024 count=100
# /sbin/start_udev
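Step c generalizes to any number of ASM disks. A small sketch, with the dd wrapped in a function (the device name is this cluster's example; verify it first, since zeroing the wrong device destroys its data):

```shell
# Zero the first 100 KB of a device so ASM treats it as a candidate again.
wipe_header() {
    dd if=/dev/zero of="$1" bs=1024 count=100 2>/dev/null
}

# On a real node (as root), wipe every former ASM disk, then re-trigger
# udev so the cleared disks are re-detected:
#   for d in /dev/sdc1; do wipe_header "$d"; done
#   /sbin/start_udev
```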
3. Completely removing Grid Infrastructure
#11g R2 Grid Infrastructure also provides a full uninstall tool, deinstall. It replaces the OUI-based procedure for removing the clusterware and ASM, returning the environment to its pre-installation state.
#The command stops the cluster and removes the binaries along with all related configuration.
#Command location: $GRID_HOME/deinstall
#A complete example follows. The run is interactive, and partway through you must open a new session as root to run some cleanup commands under /tmp.
[root@rac1 deinstall]# ./deinstall
You must not be logged in as root to run ./deinstall.
Log in as Oracle user and rerun ./deinstall.
[root@rac1 deinstall]# su - grid
[grid@rac1 ~]$ cd /u01/app/11.2.0/grid/deinstall/
[grid@rac1 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2016-09-16_11-31-50AM/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: rac1,rac2
Checking for sufficient temp space availability on node(s) : 'rac1,rac2'
## [END] Install check configuration ##
Traces log file: /tmp/deinstall2016-09-16_11-31-50AM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac1"[rac1-vip]
>
The following information can be collected by running "/sbin/ifconfig -a" on node "rac1"
Enter the IP netmask of Virtual IP "192.168.11.103" on node "rac1"[255.255.255.0]
>
Enter the network interface name on which the virtual IP address "192.168.11.103" is active
>
Enter an address or the name of the virtual IP used on node "rac2"[rac2-vip]
>
The following information can be collected by running "/sbin/ifconfig -a" on node "rac2"
Enter the IP netmask of Virtual IP "192.168.11.104" on node "rac2"[255.255.255.0]
>
Enter the network interface name on which the virtual IP address "192.168.11.104" is active
>
Enter an address or the name of the virtual IP[]
>
Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2016-09-16_11-31-50AM/logs/netdc_check2016-09-16_11-34-25-AM.log
Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER_SCAN1]:
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2016-09-16_11-31-50AM/logs/asmcadc_check2016-09-16_11-34-26-AM.log
ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y
Is OCR/Voting Disk placed in ASM y|n [n]: y
Enter the OCR/Voting Disk diskgroup name []:
Specify the ASM Diagnostic Destination [ ]:
Specify the diskstring []:
Specify the diskgroups that are managed by this ASM instance []:
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac1,rac2
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER_SCAN1
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2016-09-16_11-31-50AM/logs/deinstall_deconfig2016-09-16_11-31-59-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2016-09-16_11-31-50AM/logs/deinstall_deconfig2016-09-16_11-31-59-AM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2016-09-16_11-31-50AM/logs/asmcadc_clean2016-09-16_11-34-50-AM.log
ASM Clean Configuration START
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2016-09-16_11-31-50AM/logs/netdc_clean2016-09-16_11-34-54-AM.log
De-configuring RAC listener(s): LISTENER_SCAN1
De-configuring listener: LISTENER_SCAN1
Stopping listener: LISTENER_SCAN1
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.
De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.
De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.
De-configuring backup files on all nodes...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Run the following command as the root user or the administrator on node "rac2".
/tmp/deinstall2016-09-16_11-31-50AM/perl/bin/perl -I/tmp/deinstall2016-09-16_11-31-50AM/perl/lib -I/tmp/deinstall2016-09-16_11-31-50AM/crs/install /tmp/deinstall2016-09-16_11-31-50AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2016-09-16_11-31-50AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "rac1".
/tmp/deinstall2016-09-16_11-31-50AM/perl/bin/perl -I/tmp/deinstall2016-09-16_11-31-50AM/perl/lib -I/tmp/deinstall2016-09-16_11-31-50AM/crs/install /tmp/deinstall2016-09-16_11-31-50AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2016-09-16_11-31-50AM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Press Enter after you finish running the above commands
<----------------------------------------
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done
Delete directory '/u01/app/11.2.0/grid' on the local node : Done
Delete directory '/u01/app/oraInventory' on the local node : Done
Delete directory '/u01/app/grid' on the local node : Done
Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'rac2' : Done
Delete directory '/u01/app/11.2.0/grid' on the remote nodes 'rac2' : Done
Delete directory '/u01/app/oraInventory' on the remote nodes 'rac2' : Done
Delete directory '/u01/app/grid' on the remote nodes 'rac2' : Done
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2016-09-16_11-31-50AM' on node 'rac1'
Clean install operation removing temporary directory '/tmp/deinstall2016-09-16_11-31-50AM' on node 'rac2'
## [END] Oracle install clean ##
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Following RAC listener(s) were de-configured successfully: LISTENER_SCAN1
Oracle Clusterware is stopped and successfully de-configured on node "rac1"
Oracle Clusterware is stopped and successfully de-configured on node "rac2"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'rac2'.
Successfully deleted directory '/u01/app/11.2.0/grid' on the remote nodes 'rac2'.
Successfully deleted directory '/u01/app/oraInventory' on the remote nodes 'rac2'.
Successfully deleted directory '/u01/app/grid' on the remote nodes 'rac2'.
Oracle Universal Installer cleanup was successful.
Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac1,rac2' at the end of the session.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac1,rac2' at the end of the session.
Run 'rm -rf /etc/oratab' as root on node(s) 'rac1' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
[root@rac2 ~]# perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-4689: Oracle Clusterware is already stopped
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
This ASM diskgroup does not contain voting disks to be deleted
ASM de-configuration trace file location: /tmp/asmcadc_clean2016-09-16_11-10-09-AM.log
ASM Clean Configuration START
ASM Clean Configuration END
ASM with SID +ASM1 deleted successfully. Check /tmp/asmcadc_clean2016-09-16_11-10-09-AM.log for details.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node
[root@rac2 ~]# /tmp/deinstall2016-09-16_11-31-50AM/perl/bin/perl -I/tmp/deinstall2016-09-16_11-31-50AM/perl/lib -I/tmp/deinstall2016-09-16_11-31-50AM/crs/install /tmp/deinstall2016-09-16_11-31-50AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2016-09-16_11-31-50AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2016-09-16_11-31-50AM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware #
################################################################
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
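The final manual step is the set of rm -rf commands the tool printed in its summary above. They can be wrapped in a small helper run as root on each node; the path list comes straight from the deinstall output, and the prefix parameter is an illustration-only addition so the function can be rehearsed against a scratch directory first:

```shell
# Remove the leftover Oracle registry files named in the deinstall output.
# Run as root on each node; remove /etc/oratab only on the node(s) the
# tool listed (rac1 in this session).
cleanup_oracle_leftovers() {
    root="${1:-}"    # optional prefix, only so the cleanup can be rehearsed in a sandbox
    rm -rf "$root/etc/oraInst.loc" "$root/opt/ORCLfmap" "$root/etc/oratab"
}
# On a real node: cleanup_oracle_leftovers
```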
Source: ITPUB blog, http://blog.itpub.net/30258496/viewspace-2125042/