Oracle Real Application Clusters (RAC) Management Commands Explained

Reference: Real Application Clusters Administration and Deployment Guide

 

The Oracle Clusterware command set can be divided into the following four layers: 
Node layer: olsnodes 
Network layer: oifcfg 
Cluster layer: crsctl, ocrcheck, ocrdump, ocrconfig 
Application layer: srvctl, onsctl, crs_stat 
Each of these commands is introduced below.

1. Node Layer

olsnodes --- lists the nodes in the cluster

[grid@rac1 root]$ olsnodes --help
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] [-a] ] | [-f] | [-c] ] [-g] [-v]
	where
		-n print node number with the node name
		-p print private interconnect address for the local node
		-i print virtual IP name or address with the node name
		<node> print information for the specified node
		-l print information for the local node 
		-s print node status - active or inactive 
		-t print node type - pinned or unpinned 
		-g turn on logging 
		-v Run in debug mode; use at direction of Oracle Support only.
		-c print clusterware name
		-a print active node roles of the nodes in the cluster
		-f print historical Leaf Nodes (active and recent)
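As a quick illustration (the node names and status values below are hypothetical, not taken from a real cluster), the output of `olsnodes -n -s -t` can be filtered with standard text tools, for example to list only the active nodes:

```shell
# Hypothetical output of `olsnodes -n -s -t` on a two-node cluster; on a
# real system you would pipe the command itself instead of this heredoc.
olsnodes_output=$(cat <<'EOF'
rac1 1 Active Unpinned
rac2 2 Inactive Unpinned
EOF
)

# Print only the node names (column 1) whose status (column 3) is Active.
active_nodes=$(printf '%s\n' "$olsnodes_output" | awk '$3 == "Active" {print $1}')
printf '%s\n' "$active_nodes"   # prints: rac1
```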

2. Network Layer

oifcfg --- Oracle Interface Configuration Tool

[grid@rac1 ~]$ oifcfg --help
PRIF-9: incorrect usage

Name:
   oifcfg - Oracle Interface Configuration Tool. 
Usage:
oifcfg iflist [-p] [-n] [-hdr]
   Lists the network interfaces known to the operating system on this node that can be configured with 'oifcfg setif'.
   -p displays a heuristic assumption of the interface type (PRIVATE, PUBLIC, or UNKNOWN).
   -n displays the netmask.
   -hdr displays column headings.
oifcfg setif {-node <nodename> | -global} [<if_name>/<subnet>:<if_type>[,<if_type>...]][,...]
   'oifcfg setif -global' or 'oifcfg setif -node <nodename>' is used to synchronize the interface details in the GPnP profile and OCR.
oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>]] [-hdr]
oifcfg delif {{-node <nodename> | -global} [<if_name>[/<subnet>]] [-force] | -force}
oifcfg [-help]

   <nodename> - name of the host, as known to a communications network
   <if_name>  - name by which the interface is configured in the system
   <subnet>   - subnet address of the interface
   <if_type>  - one or more comma-separated interface types { cluster_interconnect | public | asm}

The oifcfg command defines and modifies the network interface attributes that an Oracle cluster requires, including the interface's subnet address, netmask, and interface type. To use this command correctly you must first understand how Oracle defines a network interface: each Oracle network interface has three attributes: interface_name, subnet, and interface_type.

Note that none of these attributes is an IP address. The two main interface types are public and private (cluster_interconnect): the former is used for external communication, i.e. Oracle Net and the VIP addresses, while the latter is used for the interconnect.

There are two configuration scopes:

global and node-specific. A global setting means every node in the cluster uses the same configuration, i.e. the configuration is symmetric; a node-specific setting means that node's configuration differs from the others and is asymmetric. 
iflist: list the network interfaces known to the OS 
getif: display a configured interface 
setif: configure an interface 
delif: delete an interface 
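A hedged sketch of the usual inspection flow (the interface names and subnets below are illustrative assumptions, not from the source system): capture the getif output and pick out the interconnect interface.

```shell
# Hypothetical `oifcfg getif` output; on a real node you would run
# `oifcfg getif` directly instead of using this heredoc.
getif_output=$(cat <<'EOF'
eth0  192.168.56.0  global  public
eth1  10.0.0.0  global  cluster_interconnect
EOF
)

# Column 4 is the interface type; extract the private interconnect NIC.
private_if=$(printf '%s\n' "$getif_output" | awk '$4 == "cluster_interconnect" {print $1}')
echo "private interconnect: $private_if"   # prints: private interconnect: eth1

# The matching (hypothetical) reconfiguration commands would then be, e.g.:
#   oifcfg delif -global eth1/10.0.0.0
#   oifcfg setif -global eth1/10.0.1.0:cluster_interconnect
```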

3. Cluster Layer

The cluster layer is the core cluster formed by Clusterware. This layer maintains the shared devices within the cluster and presents a complete view of cluster state to the application layer, which adjusts itself based on that view. There are four commands at this layer: crsctl, ocrcheck, ocrdump, and ocrconfig; the last three operate on the OCR disk.

3.1 crsctl

The crsctl command checks the CRS stack and the status of each CRS daemon, manages the voting disk, and traces CRS daemon activity.

crsctl

3.1.1 Checking CRS status

crsctl check crs

crsctl check cssd

crsctl check crsd

crsctl check evmd

crsctl check cluster -all

 

3.1.2 Configuring CRS autostart

The CRS stack starts automatically with the operating system by default. Sometimes this behavior must be disabled for maintenance; run the following commands as root.

crsctl disable crs

crsctl enable crs

These commands modify the setting in /etc/oracle/scls_scr/rac1/root/crsstart, although in this test the contents of crsstart did not change afterwards; this remains to be investigated.

3.1.3 Starting and stopping CRS

crsctl start crs

crsctl stop crs

 

3.1.4 Checking voting disk status

[grid@rac1 ~]$ crsctl query --help
Usage:
  crsctl query crs administrator
     Display admin list

  crsctl query crs autostart 
    Gets the value of automatic start delay and server count

  crsctl query crs activeversion [-f]
     Lists the Oracle Clusterware active version
where
     -f              List the cluster state and active patch level

  crsctl query crs releaseversion
     Lists the Oracle Clusterware release version

  crsctl query crs softwareversion [<nodename>| -all]
     Lists the version of Oracle Clusterware software installed
where
     Default         List software version of the local node
     nodename        List software version of named node
     -all            List software version for all the nodes in the cluster

  crsctl query crs releasepatch
     Lists the Oracle Clusterware release patch level

  crsctl query crs softwarepatch [<host>]
     Lists the patch level of Oracle Clusterware software installed
where
     Default         List software patch level of the local host
     host            List software path level of the named host

  crsctl query crs lastactivetimestamp [<host>]
     Lists the Last active timestamp of a leaf node.
where
     Default         Lists the last active timestamp of the local leaf node
     host            Lists the last active timestamp of the named leaf node

  crsctl query crs site {-n <node_name> | -d <disk_name>}
     List the site with which the node or disk is associated. 
Where
     node_name       The name of the node to be queried
     disk_name       The name of the disk to be queried

  crsctl query css ipmiconfig
     Checks whether Oracle Clusterware has been configured for IPMI

  crsctl query css ipmidevice
     Checks whether the IPMI device/driver is present

  crsctl query css votedisk
     Lists the voting files used by Cluster Synchronization Services

  crsctl query wallet -type <wallet_type> [-name <name>] [-user <user_name>]
     Check if the designated wallet or user exists
where
     wallet_type     Type of wallet i.e. APPQOSADMIN, APPQOSUSER, APPQOSDB, MGMTDB, OSUSER or CVUDB.
     name            Name is required for APPQOSDB and CVUDB wallets.
     user_name       User to be queried from wallet.

  crsctl query dns -servers
     Lists the system configured DNS server, search paths, attempt and timeout values

  crsctl query dns -name <name> [-dnsserver <DNS_server_address>] [-type <query_type>] [-port <port>] [-attempts <attempts>] [-timeout <timeout>] [-v]
    Returns a list of addresses returned by DNS lookup of the name with the specified DNS server
Where
    name                Fully qualified domain name to lookup
    DNS_server_address  Address of the DNS server on which name needs to be looked up
    query_type          The query type is A or AAAA for IPv4 or IPv6 respectively. The default query_type is A.
    port                Port on which DNS server is listening
    attempts            Number of retry attempts
    timeout             Timeout in seconds

  crsctl query socket udp [-address <address>] [-port <port>]
     Verifies that a daemon can listen on specified address and port
Where
       address             IP address on which socket needs to be created
       port                port on which socket needs to be created

  crsctl query calog [-aftertime <after_timestamp>] [-beforetime <before_timestamp>] [-duration <time_interval> | -follow] [-filter <filter_expression>] [-fullfmt | -xmlfmt]
     Lists the cluster activity log activities matching the specified criteria

  crsctl query credentials targetlist
     Lists the valid credentials targets to be used with credentials commands

  crsctl query credentials -target <target>
     Lists the valid credentials under a specific target

  crsctl query cluster site {<site_name> | -all}
     List the nodes and disks associated with the sites. 
Where
       site_name             The site name to be queried
       -all                  List all sites.

  crsctl query member_cluster_configuration [<member_cluster_name>]
  where 
        member_cluster_name    Name of the Member Cluster
         If no member cluster name is provided then the details of all
         the member clusters are displayed.

  crsctl query driver activeversion [<node_name> | -all] [-f]
     List the active version of the drivers. 
Where
       node_name             List drivers of the named node
       -all                  List drivers of all nodes.
       Default               List drivers of the local node.
       -f                    Display the BugHash and BugsFixed.

  crsctl query driver softwareversion [<node_name> | -all] [-f]
     List the software installed version on the drivers. 
Where
       node_name             List drivers of the named node
       -all                  List drivers of all nodes.
       Default               List drivers of the local node.
       -f                    Display the BugHash and BugsFixed.

 

crsctl query css votedisk
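The output of this query can also be checked programmatically. The sketch below counts the ONLINE voting files in a sample of the 11g-and-later output format (the file universal IDs, device paths, and disk group name are made up for illustration):

```shell
# Hypothetical `crsctl query css votedisk` output; a real session would
# pipe the command itself instead of this heredoc.
votedisk_output=$(cat <<'EOF'
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   6e5850f12c9a4fc2bf66b646e5ab20ba (/dev/asmdisk05) [OCRVOTE]
 2. ONLINE   7f6961a23d0b5ad3c077c757f6bc31cb (/dev/asmdisk06) [OCRVOTE]
Located 2 voting disk(s).
EOF
)

# Count the voting files whose STATE column reads ONLINE.
online=$(printf '%s\n' "$votedisk_output" | grep -c ' ONLINE ')
echo "online voting disks: $online"   # prints: online voting disks: 2
```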

3.1.5 Viewing and setting CSS parameters

Use get to view a parameter: 

[grid@rac1 ~]$ crsctl get --help
Usage:
  crsctl get log {mdns|gpnp|css|crf|crs|ctss|evm|gipc} "<name1>,..."
    Get the log levels for specific modules

  crsctl get log res <resname> 
    Get the log level for an agent

  crsctl get hostname
    Displays the host name

  crsctl get nodename
    Displays the node name

  crsctl get cluster mode {config|status} 
    Get the cluster mode

  crsctl get clientid dhcp -cluname <cluster_name> -viptype <vip_type> [-vip <VIPResName>] [-n <nodename>]
    Generate client ID's as used by RAC agents for configured cluster resources
where
    cluster_name    name of the cluster to be configured

    vip_type        Type of VIP resource: HOSTVIP, SCANVIP, or APPVIP
    VIPResName      User defined application VIP name (required for APPVIP vip_type)
    nodename        Node for which the client ID is required (required for HOSTVIP vip_type)

  crsctl get calog <parameter> 
    Gets the value of an Oracle cluster activity log parameter

  crsctl get history
     Gets the cluster history

  crsctl get cluster type
   Display the type of resources the current cluster can run.

  crsctl get cluster extended
   Display whether the current cluster is extended.

  crsctl get cluster class
     Get the cluster class

  crsctl get cluster name
    Displays the current cluster name

  crsctl get node role {config|status} [-node <nodename> | -all]
    Gets the current role of nodes in the cluster

  crsctl get credentials <options>
    Exports the specified credentials to an XML file

  crsctl get css <parameter>
    Displays the value of a Cluster Synchronization Services parameter

    clusterguid
    disktimeout
    misscount
    reboottime
    noautorestart
    priority

  crsctl get css ipmiaddr
    Displays the IP address of the local IPMI device as set in the Oracle registry.

  crsctl get {css <parameter>|hostname|nodename}

  crsctl get tracefileopts {mdns|gpnp|css|crf|crs|ctss|evm|gipc}

  crsctl get diagstat {css | gipc | ohas | crs | evm | gns | gpnp | ologger | osysmon | mdns | evmlogger | ctss | scriptagent | jagent | [crsd_oraagent | crsd_orarootagent | ohasd_oraagent | ohasd_orarootagent | ohasd_cssdmonitor | ohasd_cssdagent | crsd_appagent [-user <username>]] }  {-all | -k <key_name> }
     Gets the diagnostic shared memory statistics for the specific component

  crsctl get cpu equivalency
    Gets the current configured value for server attribute CPU_EQUIVALENCY

  crsctl get resource use
    Gets the current configured value for server attribute RESOURCE_USE_ENABLED

  crsctl get server label
    Gets the current configured value for server attribute SERVER_LABEL

  crsctl get server css_critical
    Gets the current configured value for server attribute CSS_CRITICAL

  crsctl get ipmi binaryloc
    Gets the current configured value for server attribute IPMI_BIN_PATH

Use set to configure a parameter:

[grid@rac1 ~]$ crsctl set --help
Usage:
  crsctl set log {mdns|gpnp|css|crf|crs|ctss|evm|gipc} "<name1>=<lvl1>,..."
    Set the log levels for specific modules within daemons

  crsctl set log res <resname>=<lvl>
    Set the log levels for agents

  crsctl set css <parameter> <value>
    Sets the value of a Cluster Synchronization Services parameter

  crsctl set css {ipmiaddr|ipmiadmin} <value>
    Sets IPMI configuration data in the Oracle registry

  crsctl set css votedisk asm <diskgroup>[...]
    Defines the set of voting disks to be used by CRS

  crsctl set crs autostart [delay <delayTime>] [servercount <count>] 
    Sets the Oracle Clusterware automatic resource start criteria

  crsctl set calog <parameter> <value> [-f]
    Sets the value of an Oracle cluster activity log parameter

  crsctl set tracefileopts {mdns|gpnp|css|crf|crs|ctss|evm|gipc} [-filesize <file_size>[K|k|M|m|G|g]] [-numsegments <num_of_segments>]
  Note: You must specify at least one of the clauses -filesize or -numsegments

  crsctl set credentials <options>
    Imports the specified credentials from an XML file

  crsctl set cluster mode flex 
    Set the cluster mode

  crsctl set cluster class {standalone|domainservices|member}
    Sets the cluster class

  crsctl set cpu equivalency
    Sets the configuration value for server attribute CPU_EQUIVALENCY

  crsctl set resource use
    Sets the configuration value for server attribute RESOURCE_USE_ENABLED

  crsctl set server label
    Sets the configuration value for server attribute SERVER_LABEL

  crsctl set server css_critical {yes|no}
    Sets the configuration value for server attribute CSS_CRITICAL

  crsctl set ipmi binaryloc <ipmi_binary_path>
    Sets the configuration value for server attribute IPMI_BIN_PATH

Note: change CRS parameters with caution.

3.1.6 Tracing CRS and auxiliary features

CRS comprises three services: CRS, CSS, and EVM, each of which is made up of a series of modules. crsctl can enable tracing for each module and record the trace output to a log file.

--- List the modules of each service 

[grid@rac1 ~]$ crsctl lsmodules --help
Usage:
  crsctl lsmodules {mdns|gpnp|css|crf|crs|ctss|evm|gipc}
 where
   mdns  multicast Domain Name Server
   gpnp  Grid Plug-n-Play Service
   css   Cluster Synchronization Services
   crf   Cluster Health Monitor
   crs   Cluster Ready Services
   ctss  Cluster Time Synchronization Service
   evm   EventManager
   gipc  Grid Interprocess Communications

--- Dump service state

[grid@rac1 ~]$ crsctl debug --help
Usage:
  crsctl debug statedump {crs|css|evm}
 where
   crs           Cluster Ready Services
   css           Cluster Synchronization Services
   evm           Event Manager

[grid@rac1 ~]$ crsctl debug log --help
Usage:
  DEPRECATED: use crsctl set log {css|crs|evm}


--- example

[grid@rac1 ~]$ crsctl debug log css "CSSD:1"

3.1.7 Maintaining the voting disk

[grid@rac1 ~]$ crsctl add --help
Usage:
  crsctl add {resource|type|resourcegroup|resourcegrouptype|serverpool|policy} <name> <options> 
where 
   name          Name of the CRS entity 
   options       Options to be passed to the add command

   See individual CRS entity help for more details

  crsctl add crs administrator -u <user_name> [-f] 
where  
   user_name     User name to be added to the admin list or "*"
   -f            Override user name validity check

  crsctl add wallet -type <wallet_type> <options>
where 
   wallet_type   Type of wallet i.e. APPQOS, APPQOSUSER, APPQOSDB, MGMTDB, OSUSER or CVUDB.
   options       Options to be passed to the add command

 crsctl add css votedisk <vdisk>[...] <options>
where 
   vdisk [...]   One or more blank-separated voting file paths
   options       Options to be passed to the add command

  crsctl add category <categoryName> [-attr "<attrName>=<value>[,...]"] [-i] 
where 
    categoryName Name of server category to be added
    attrName     Attribute name 
    value        Attribute value 
    -i           Fail if request cannot be processed immediately

  crsctl add credentials -target <target> [[-user <user_name>] [-passwd]|[-keylength <nbits>]]
where
     target        Target as is listed by 'crsctl query credentials targetlist'
     user_name     User to be added to wallet. If not specified, will 
                   be pseudo-randomly generated. Can only be specified with 'userpass' credentials.

     -passwd       Indication that password will be specified. Password 
                   will be read from standard input. If not specified, 
                   will be pseudo-randomly generated. Can only be specified with 'userpass' credentials.
     nbits         Length of the key. If not specified, defaults to 128. 
                   Can only be specified with 'keypair' credentials.

  crsctl add cluster site <site_name> [-guid <site_guid>]
     Add the site to the cluster. 
Where
       site_name             The site name of the new site
       site_guid             The site GUID (global unique ID) of the new site 

Try adding a voting disk:

--- Check voting disk status
 
[root@rac1 bin]# ./crsctl query css votedisk 

--- Stop CRS on all nodes: 

[root@rac1 bin]# ./crsctl stop crs 

--- Add the voting disk 

[root@rac1 bin]# ./crsctl add css votedisk /dev/asmdisk07 -force 

Note: even with CRS stopped, the -force option is required to add or remove a voting disk, and -force is only safe to use while CRS is down; otherwise you will get: Cluster is not in a ready state for online disk addition. 

--- Confirm the status after the addition: 

[root@rac1 bin]# ./crsctl query css votedisk 

--- Start CRS 

[root@rac1 bin]# ./crsctl start crs

Note: adding or removing voting disks is very dangerous. Stop the database, ASM, and CRS before the operation, and always use the -force option.

3.2 ocrcheck, ocrdump, ocrconfig

Oracle Clusterware stores the configuration of the entire cluster on shared storage, known as the OCR disk. Only one node in the cluster can read and write the OCR disk; this node is called the Master Node. Every node keeps a copy of the OCR in memory, and an OCR process reads from that in-memory copy. When the OCR content changes, the OCR process on the Master Node propagates the change to the OCR processes on the other nodes. 
Because the OCR content is so important, Oracle backs it up every 4 hours and keeps the last 3 backups, plus the last backup of the previous day and of the previous week. The backup is performed by the CRSD process on the Master Node; the default backup location is the $CRS_HOME/crs/cdata/<cluster_name> directory. After each backup, the file names are rotated automatically to reflect backup order; the most recent backup is called backup00.ocr. Besides keeping these backup files locally, the DBA should keep a copy on another storage device to guard against unexpected storage failure.
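A hedged sketch of putting that advice into practice (the hostname, timestamps, and paths below are invented for illustration): parse `ocrconfig -showbackup` output to find the most recent automatic backup, which is the file to copy to secondary storage.

```shell
# Hypothetical `ocrconfig -showbackup` output; a real session would pipe
# the command itself. The newest backup is listed first.
showbackup_output=$(cat <<'EOF'
rac1  2024/01/10 04:00:00  /app/product/19.2.0/crs/cdata/mycluster/backup00.ocr
rac1  2024/01/10 00:00:00  /app/product/19.2.0/crs/cdata/mycluster/backup01.ocr
rac1  2024/01/09 20:00:00  /app/product/19.2.0/crs/cdata/mycluster/backup02.ocr
rac1  2024/01/09 04:00:00  /app/product/19.2.0/crs/cdata/mycluster/day.ocr
rac1  2024/01/04 04:00:00  /app/product/19.2.0/crs/cdata/mycluster/week.ocr
EOF
)

# The last field of the first line is the newest backup's path.
latest=$(printf '%s\n' "$showbackup_output" | awk 'NR == 1 {print $NF}')
echo "$latest"
# A copy to other storage would then be, e.g.:  cp "$latest" /backup/ocr/
```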

3.2.1 ocrdump

This command prints the contents of the OCR in ASCII form. It cannot serve as an OCR backup: the file it produces is for reading only and cannot be used for recovery.

[grid@rac1 ~]$ ocrdump --help
Name:
	ocrdump - Dump contents of Oracle Cluster/Local Registry to a file.

Synopsis:
	ocrdump [-local] [<filename>|-stdout] [-backupfile <backupfilename>] [-keyname <keyname>] [-xml] [-noheader]

Description:
	Default filename is OCRDUMPFILE. Examples are:

	prompt> ocrdump
	writes cluster registry contents to OCRDUMPFILE in the current directory

	prompt> ocrdump MYFILE
	writes cluster registry contents to MYFILE in the current directory

	prompt> ocrdump -stdout -keyname SYSTEM
	writes the subtree of SYSTEM in the cluster registry to stdout

	prompt> ocrdump -local -stdout -xml
	writes local registry contents to stdout in xml format

	prompt> ocrdump -backupfile /oracle/CRSHOME/backup.ocr -stdout -xml
	writes registry contents in the backup file to stdout in xml format

Notes:
	* The header information will be retrieved based on best effort basis.
	* Use option '-local' to indicate that the operation is to be performed on the Oracle Local Registry.

3.2.2 ocrcheck

The ocrcheck command checks the consistency of the OCR contents; it requires no arguments.

[grid@rac1 ~]$ ocrcheck --help
Name:
	ocrcheck - Displays health of Oracle Cluster/Local Registry.

Synopsis:
	ocrcheck [-config | -backupfile <backupfilename>] [-details] [-local]

  -config	Displays the configured locations of the Oracle Cluster Registry.
		This can be used with the -local option to display the configured
		location of the Oracle Local Registry
  -details	Displays detailed configuration information.
  -local	The operation will be performed on the Oracle Local Registry.
  -backupfile <backupfilename>	The operation will be performed on the backup file.

Notes:
	* This command for Oracle Cluster Registry is not supported from a Leaf node.

3.2.3 ocrconfig

This command maintains the OCR disks. During Clusterware installation, if you choose the External Redundancy option you can enter only one OCR location, but Oracle allows two OCR disks mirroring each other to avoid a single point of failure. Unlike voting disks, there can be at most two OCR disks: one Primary OCR and one Mirror OCR.

[grid@rac1 ~]$ ocrconfig --help
Name:
	ocrconfig - Configuration tool for Oracle Cluster/Local Registry.

Synopsis:
	ocrconfig [option]
	option:
		[-local] -export <filename>
		                                    - Export OCR/OLR contents to a file
		[-local] -import <filename>         - Import OCR/OLR contents from a file
		[-local] -upgrade [<user> [<group>]]
		                                    - Upgrade OCR from previous version
		-downgrade [-version <version string>]
		                                    - Downgrade OCR to the specified version
		[-local] -backuploc { <dirname> | +<diskgroupname> }
		                                    - Configure OCR/OLR backup location
		[-local] -showbackuploc             - Show OCR/OLR backup location 
		[-local] -showbackup [auto|manual]  - Show OCR/OLR backup information
		[-local] -manualbackup              - Perform OCR/OLR backup
		[-local] -restore { +<diskgroupname> | <filename> }
		                                    - Restore OCR/OLR from physical backup
		-replace { +<current diskgroupname> | <current filename> } -replacement { +<new diskgroupname> | <new filename> }
		                                    - Replace first specified OCR device or file with second specified device/file
		-add { +<diskgroupname> | <filename> }
		                                    - Add a new OCR device/file
		-delete { +<diskgroupname> | <filename> }
		                                    - Remove an OCR device/file
		-overwrite                          - Overwrite OCR configuration on disk
		-repair -add { +<diskgroupname> | <filename> } | -delete { +<diskgroupname> | <filename> } | -replace { +<current diskgroupname> | <current filename> } -replacement { +<new diskgroupname> | <new filename> }
		                                    - Repair OCR configuration on the local node
		-copy <source_filename> <destination_filename>
		                                    - Copy OCR physical backup from source to destination
		[-local] -delbackup { +<diskgroupname> | <filename> }
		                                    - Delete OCR/OLR physical backup file
		-help                               - Print out this help information

Notes:
	* Use option '-local' to indicate that the operation is to be performed on the Oracle Local Registry.

Checking OCR backup status

3.2.4 OCR backup and recovery

Oracle recommends backing up the OCR before making cluster changes such as adding or removing a node; you can use export to back it up to a file. After operations such as replace or restore, Oracle recommends running the cluvfy comp ocr -n all command for a full check. The cluvfy command is located in the Clusterware installation directory.

--- Stop CRS on all nodes
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/crsctl stop crs
 
--- Export the OCR contents

[root@rac1 bin]# /app/product/19.2.0/crs/bin/ocrconfig -export /app/ocr.exp

--- Start CRS 

[root@rac1 bin]# /app/product/19.2.0/crs/bin/crsctl start crs

--- Check CRS status 

[root@rac1 bin]# /app/product/19.2.0/crs/bin/crsctl check crs 
CSS appears healthy 
CRS appears healthy 
EVM appears healthy

--- Simulate OCR corruption 

[root@rac1 bin]# dd if=/dev/zero of=/dev/asmdisk01 bs=1024 count=102400 
102400+0 records in 
102400+0 records out 

--- Check OCR consistency 

[root@rac1 bin]# /app/product/19.2.0/crs/bin/ocrcheck 
PROT-601: Failed to initialize ocrcheck

--- Check consistency with the cluvfy tool 

[root@rac1 cluvfy]# /app/product/19.2.0/crs/bin/runcluvfy.sh comp ocr -n all 
Verifying OCR integrity 
Unable to retrieve nodelist from Oracle clusterware. 
Verification cannot proceed.
 
--- Restore the OCR contents with import 

[root@rac1 bin]# /app/product/19.2.0/crs/bin/ocrconfig -import /app/ocr.exp 

--- Check OCR consistency again 

[root@rac1 bin]# /app/product/19.2.0/crs/bin/ocrcheck
 
--- Check consistency with the cluvfy tool again 

[root@rac1 cluvfy]# /app/product/19.2.0/crs/bin/runcluvfy.sh comp ocr -n all

3.2.5 Migrating the OCR disk

This exercise migrates the OCR from /dev/asmdisk01 to /dev/asmdisk08.

--- Check OCR backups
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/ocrconfig -showbackup 

If there is no backup, perform an export immediately to create one: 

[root@rac1 bin]# /app/product/19.2.0/crs/bin/ocrconfig -export /u01/ocrbackup -s online
 
--- Check the OCR configuration
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/ocrcheck 

--- Add a mirror OCR
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/ocrconfig -replace ocrmirror /dev/asmdisk08
 
--- Confirm the addition succeeded 

[root@rac1 bin]# /app/product/19.2.0/crs/bin/ocrcheck

--- Change the primary OCR location 

[root@rac1 bin]# /app/product/19.2.0/crs/bin/ocrconfig -replace ocr /dev/asmdisk08 

Confirm the change succeeded: 

[root@rac1 bin]# /app/product/19.2.0/crs/bin/ocrcheck

--- After modification with ocrconfig, the /etc/oracle/ocr.loc file on every RAC node is synchronized automatically; if it is not, you can edit it by hand to the following content.

[root@rac1 bin]# more /etc/oracle/ocr.loc 
ocrconfig_loc=/dev/asmdisk01 
ocrmirrorconfig_loc=/dev/asmdisk08 
local_only=FALSE
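To double-check the synchronization described above, the registry pointer file can be parsed. This sketch embeds the same content as the example for a self-contained demonstration; on a real node you would read /etc/oracle/ocr.loc itself.

```shell
# Content of /etc/oracle/ocr.loc as shown above, embedded via heredoc so
# the example runs without an Oracle installation.
ocr_loc=$(cat <<'EOF'
ocrconfig_loc=/dev/asmdisk01
ocrmirrorconfig_loc=/dev/asmdisk08
local_only=FALSE
EOF
)

# Extract the mirror OCR location from the key=value pairs.
mirror=$(printf '%s\n' "$ocr_loc" | awk -F= '$1 == "ocrmirrorconfig_loc" {print $2}')
echo "mirror OCR: $mirror"   # prints: mirror OCR: /dev/asmdisk08
```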

4. Application Layer

The application layer is the RAC database itself. This layer consists of resources; each resource is a complete service made up of one or more processes, and management and maintenance at this layer revolve around these resources. There are three commands: srvctl, onsctl, and crs_stat.

4.1 crs_stat

crs_stat shows the run-time status of every resource maintained by CRS. With no arguments, it displays summary information for all resources, showing each resource's attributes: name, type, target, and state. This command is no longer supported as of 18c.

crs_stat -t

crs_stat -ls

4.2 onsctl

This command manages and configures ONS (Oracle Notification Service). ONS is the foundation of Oracle Clusterware's FAN event push model. In the traditional model, a client polls the server periodically to learn the server's state, which is essentially a pull model. Oracle 10g introduced a new push mechanism, FAN (Fast Application Notification): when certain events occur on the server side, the server actively notifies clients of the change, so clients learn of it as early as possible. This mechanism relies on ONS; before using the onsctl command, the ONS service must be configured.

[grid@rac1 bin]$ onsctl

usage: onsctl [verbose] <command> [<options>]

The verbose option enables debug tracing and logging (for the server start).

Permitted <command>/<options> combinations are:

command   options
-------   ---------
start                       - Start ons
shutdown                    - Shutdown ons
reload                      - Trigger ons to reread its configuration file
debug     [<attr>=<val> ..] - Display ons server debug information
set       [<attr>=<val> ..] - Set ons log parameters
query     [<attr>=<val>]    - Query ons log parameters
ping      [<max-retry>]     - Ping local ons
help                        - Print brief usage description (this)
usage     [<command>]       - Print detailed usage description


--- example

--- Check the ONS processes
 
[root@rac1 bin]#ps -aef|grep ons
 
--- Check the status of the ONS service
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/onsctl ping
 
--- Start the ONS service
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/onsctl start

--- The debug option shows detailed information; most usefully, it lists all connections
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/onsctl debug 

ONS status check

4.3 srvctl

This is the most commonly used and also the most complex command in RAC maintenance. The tool can operate on the following resource types: Database, Instance, ASM, Service, Listener, and Node Application; Node Applications include GSD, ONS, and VIP. Besides being managed uniformly with srvctl, some resources have their own management tools, for example ONS can be managed with onsctl, and listeners with lsnrctl.

[grid@rac1 bin]$ srvctl -help
Usage: srvctl {-version | -version -fullversion | -fullversion}
Usage: srvctl config all
Usage: srvctl add database -db <db_unique_name> -oraclehome <oracle_home> [-dbtype {RACONENODE | RAC | SINGLE} [-server "<server_list>"] [-instance <inst_name>] [-timeout <timeout>]] [-domain <domain_name>] [-spfile <spfile>] [-pwfile <password_file_path>] [-role {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY | FAR_SYNC}] [-startoption <start_options>] [-stopoption <stop_options>] [-startconcurrency <start_concurrency>] [-stopconcurrency <stop_concurrency>] [-dbname <db_name>] [-policy {AUTOMATIC | MANUAL | NORESTART | USERONLY}] [-serverpool "<serverpool_list>" [-pqpool <pq_server_pools>]] [-node <node_name>] [-diskgroup "<diskgroup_list>"] [-acfspath "<acfs_path_list>"] [-eval] [-fixed] [-css_critical {YES | NO}] [-cpucount <cpu_count>] [-memorytarget <memory_target>] [-maxmemory <max_memory>] [-defaultnetnum <network_number>] [-verbose]
Usage: srvctl config database [-db <db_unique_name> [-all] | -serverpool <serverpool_name> | -home] [-verbose]
Usage: srvctl start database -db <db_unique_name> [-startoption <start_options>] [-startconcurrency <start_concurrency>] [-node <node> | -serverpool "<serverpool_list>"] [-eval] [-verbose]
Usage: srvctl stop database -db <db_unique_name> [-stopoption <stop_options>] [-stopconcurrency <stop_concurrency>] [-serverpool "<serverpool_list>"] [-drain_timeout <timeout>] [-force] [-eval] [-verbose]
Usage: srvctl status database {-db <db_unique_name> {[-serverpool <serverpool_name>] | [-sid] [-home]}  | -serverpool <serverpool_name> | -thisversion | -thishome} [-force] [-detail] [-verbose]
Usage: srvctl enable database -db <db_unique_name> [-node <node_name>]
Usage: srvctl disable database -db <db_unique_name> [-node <node_name>]
Usage: srvctl modify database -db <db_unique_name> [-dbname <db_name>] [-instance <instance_name>] [-oraclehome <oracle_home>] [-user <oracle_user>] [-server "<server_list>"] [-timeout <timeout>] [-domain <domain>] [-spfile <spfile>] [-pwfile <password_file_path>] [-role {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-startoption <start_options>] [-stopoption <stop_options>] [-startconcurrency <start_concurrency>] [-stopconcurrency <stop_concurrency>] [-policy {AUTOMATIC | MANUAL | NORESTART | USERONLY}] [-serverpool "<serverpool_list>" [-node <node_name>]] [-pqpool <pq_server_pools>] [-diskgroup "<diskgroup_list>"|-nodiskgroup] [-acfspath "<acfs_path_list>"] [-css_critical {YES | NO}] [-cpucount <cpu_count> [-overridepools <overridepool_list>]] [-memorytarget <memory_target>] [-maxmemory <max_memory>] [-defaultnetnum <network_number>] [-disabledreason {DECOMMISSIONED}] [-force] [-eval] [-verbose]
Usage: srvctl remove database -db <db_unique_name> [-force] [-noprompt] [-verbose]
Usage: srvctl getenv database -db <db_unique_name> [-envs "<name>[,...]"]
Usage: srvctl setenv database -db <db_unique_name> {-envs "<name>=<val>[,...]" | -env "<name>=<val>"}
Usage: srvctl unsetenv database -db <db_unique_name> -envs "<name>[,...]"
Usage: srvctl predict database -db <database_name> [-verbose]
Usage: srvctl convert database -db <db_unique_name> -dbtype RAC [-node <node>]
Usage: srvctl convert database -db <db_unique_name> -dbtype RACONENODE [-instance <inst_name>] [-timeout <timeout>]
Usage: srvctl relocate database -db <db_unique_name> {[-node <target>] [-timeout <timeout>] [-stopoption <stop_option>] | -abort [-revert]} [-drain_timeout <timeout>] [-verbose]
Usage: srvctl upgrade database -db <db_unique_name> -oraclehome <oracle_home>
Usage: srvctl downgrade database -db <db_unique_name> -oraclehome <oracle_home> -targetversion <to_version>
Usage: srvctl update database -db <db_unique_name> [-startoption <start_options> [-node <node_name> | -serverpool "<serverpool_list>"]]
Usage: srvctl add instance -db <db_unique_name> -instance <inst_name> -node <node_name> [-force]
Usage: srvctl start instance {-node "<node_list>" | -db <db_unique_name> {-node <node_name> [-instance <inst_name>] | -node "<node_list>" | -instance "<inst_name_list>"}} [-startoption <start_options>]
Usage: srvctl stop instance {-node "<node_list>" | -db <db_unique_name> {-node "<node_list>" | -instance "<inst_name_list>"}} [-stopoption <stop_options>] [-drain_timeout <timeout>] [-force] [-failover] [-verbose]
Usage: srvctl status instance -db <db_unique_name> {-node <node_list> | -instance <inst_name_list>} [-force] [-detail] [-verbose]
Usage: srvctl enable instance -db <db_unique_name> -instance "<inst_name_list>"
Usage: srvctl disable instance -db <db_unique_name> -instance "<inst_name_list>"
Usage: srvctl modify instance -db <db_unique_name> -instance <inst_name> -node <node_name>
Usage: srvctl remove instance -db <db_unique_name> -instance <inst_name> [-force] [-noprompt]
Usage: srvctl update instance -db <db_unique_name> [-instance "<instance_name_list>" | -node "<node_list>"] [-startoption <start_options>] [-targetinstance <instance_name>]
Usage: srvctl add service -db <db_unique_name> -service "<service_name_list>" 
       {-preferred "<preferred_list>" [-available "<available_list>"] [-tafpolicy {BASIC | NONE | PRECONNECT}] | -serverpool <pool_name> [-cardinality {UNIFORM | SINGLETON}] } 
       [-netnum <network_number>] [-role "[PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]"] [-policy {AUTOMATIC | MANUAL}] 
       [-notification {TRUE | FALSE}] [-dtp {TRUE | FALSE}] [-clbgoal {SHORT | LONG}] [-rlbgoal {NONE | SERVICE_TIME | THROUGHPUT}] 
       [-failovertype {NONE | SESSION | SELECT | TRANSACTION | AUTO}] [-failovermethod {NONE | BASIC}] [-failoverretry <failover_retries>] [-failoverdelay <failover_delay>] [-failover_restore {NONE | LEVEL1}] [-failback {YES | NO}] 
       [-edition <edition>] [-pdb <pluggable_database>] [-global {TRUE | FALSE}] [-maxlag <max_lag_time>] [-sql_translation_profile <sql_translation_profile>] 
       [-commit_outcome {TRUE | FALSE}] [-retention <retention>] [-replay_init_time <replay_initiation_time>] [-session_state {STATIC | DYNAMIC}] 
       [-pqservice <pq_service>] [-pqpool "<pq_pool_list>"] [-gsmflags <gsm_flags>] [-tablefamilyid <table_family_id>] [-drain_timeout <drain_timeout>] [-stopoption <stop_option>] [-css_critical {YES | NO}] [-rfpool <pool_name> -hubsvc <hub_service>] [-force] [-eval] [-verbose]
Usage: srvctl add service -db <db_unique_name> -service "<service_name_list>" -update {-preferred "<new_pref_inst>" | -available "<new_avail_inst>"} [-force] [-verbose]
Usage: srvctl config service {-db <db_unique_name> [-service <service_name>] | -serverpool <serverpool_name> [-db <db_unique_name>]} [-verbose]
Usage: srvctl enable service -db <db_unique_name> -service  "<service_name_list>" [-instance <inst_name> | -node <node_name>] [-global_override]
Usage: srvctl disable service -db <db_unique_name> -service  "<service_name_list>" [-instance <inst_name> | -node <node_name>] [-global_override]
Usage: srvctl status service {-db <db_unique_name> [-service  "<service_name_list>" | -pdb <pluggable_database>] | -serverpool <serverpool_name> [-db <db_unique_name>]} [-force] [-verbose]
Usage: srvctl predict service -db <database_name> -service <service_name> [-verbose]
Usage: srvctl modify service -db <db_unique_name> -service <service_name> 
       {[-oldinst <old_inst_name> -newinst <new_inst_name>  | -available <avail_inst_name> -toprefer | -modifyconfig -preferred "<preferred_list>" [-available "<available_list>"]] 
       | [-serverpool <pool_name>] [-pqservice <pqsvc_name>] [-pqpool "<pq_pool_list>"] [-cardinality {UNIFORM | SINGLETON}] [-tafpolicy {BASIC | NONE}] 
       [-role [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-policy {AUTOMATIC | MANUAL}] [-notification {TRUE | FALSE}] 
       [-dtp {TRUE | FALSE}] [-clbgoal {SHORT | LONG}] [-rlbgoal {NONE | SERVICE_TIME | THROUGHPUT}] 
       [-failovertype {NONE | SESSION | SELECT | TRANSACTION | AUTO}] [-failoverretry <integer>] [-failoverdelay <integer>] [-failover_restore {NONE | LEVEL1 | AUTO}] [-failback {YES | NO}] 
       [-edition <edition>] [-pdb <pluggable_database>] [-sql_translation_profile <sql_translation_profile>] [-commit_outcome {TRUE|FALSE}] [-retention <retention>] 
       [-replay_init_time <replay_initiation_time>] [-session_state {STATIC | DYNAMIC | AUTO}] [-maxlag <max_lag_time>] [-gsmflags <gsm_flags>] [-tablefamilyid <table_family_id>] [-drain_timeout <timeout>] [-stopoption <stop_option>] 
       [-global_override] [-css_critical {YES | NO}] [-rfpool <pool_name> -hubsvc <hub_service>]} [-eval] [-verbose] [-force]
Usage: srvctl relocate service -db <db_unique_name> [-service <service_name> | -pdb <pluggable_database>] {-oldinst <old_inst_name> [-newinst <new_inst_name>] | -currentnode <current_node> [-targetnode <target_node>]} [-drain_timeout <timeout>] [-wait <wait_option>] [-pq] [-force [-noreplay] [-stopoption <stop_option>]] [-eval] [-verbose]
Usage: srvctl remove service -db <db_unique_name> -service <service_name> [-global_override] [-force]
Usage: srvctl start service { -node <node_name> | -db <db_unique_name> [-node <node_name> | -instance <inst_name>] [-service "<service_name_list>" [-pq] | -pdb <pluggable_database> | -serverpool <pool_name>] [-global_override] [-role] [-startoption <start_options>] [-eval]} [-verbose]
Usage: srvctl stop service {-node <node_name> | -db <db_unique_name> [-pq] [-pdb <pluggable_database> | -service "<service_name_list>" [-eval]] [-node <node_name> | -instance <inst_name> | -serverpool <pool_name>]} [-stopoption <stop_option>] [-drain_timeout <timeout>] [-wait {YES | NO}] [-force [-noreplay]] [-global_override] [-verbose]
Usage: srvctl add nodeapps { { -node <node_name> -address {<vip_name>|<ip>}/<netmask>[/if1[|if2...]] [-skip]} | { -subnet <subnet>/<netmask>[/if1[|if2...]] } } [-emport <em_port>] [-onslocalport <ons_local_port>] [-onsremoteport <ons_remote_port>] [-remoteservers <host>[:<port>][,<host>[:<port>]...]] [-clientdata <file> [-scanclient]] [-pingtarget "<pingtarget_list>"] [-vipless] [-verbose]
Usage: srvctl config nodeapps [-viponly] [-onsonly]
Usage: srvctl modify nodeapps {[-node <node_name> -address {<vip_name>|<ip>}/<netmask>[/if1[|if2...]] [-skip]] | [-subnet <subnet>/<netmask>[/if1[|if2|...]]]} [-nettype {STATIC|DHCP|AUTOCONFIG|MIXED}] [-emport <em_port>] [-onslocalport <ons_local_port>] [-onsremoteport <ons_remote_port>] [-remoteservers <host>[:<port>][,<host>[:<port>]...]] [-clientdata <file>] [-pingtarget "<pingtarget_list>"] [-verbose]
Usage: srvctl start nodeapps [-node <node_name>] [-adminhelper | -onsonly] [-verbose]
Usage: srvctl stop nodeapps [-node <node_name>] [-adminhelper | -onsonly | -relocate] [-force] [-verbose]
Usage: srvctl status nodeapps [-node <node_name>]
Usage: srvctl enable nodeapps [-adminhelper] [-verbose]
Usage: srvctl disable nodeapps [-adminhelper] [-verbose]
Usage: srvctl remove nodeapps [-force] [-noprompt] [-verbose]
Usage: srvctl getenv nodeapps [-viponly] [-onsonly] [-envs "<name>[,...]"]
Usage: srvctl setenv nodeapps {-envs "<name>=<val>[,...]" | -env "<name>=<val>"} [-viponly | -onsonly] [-verbose]
Usage: srvctl unsetenv nodeapps -envs "<name>[,...]" [-viponly | -onsonly] [-verbose]
Usage: srvctl add vip -node <node_name> -netnum <network_number> -address {<name>|<ip>}/<netmask>[/if1[|if2...]] [-skip] [-verbose]
Usage: srvctl config vip {-node <node_name> | -vip <vip_name>}
Usage: srvctl disable vip -vip <vip_name> [-verbose]
Usage: srvctl enable vip -vip <vip_name> [-verbose]
Usage: srvctl remove vip -vip <"vip_name_list"> [-force] [-noprompt] [-verbose]
Usage: srvctl getenv vip -vip <vip_name> [-envs "<name>[,...]"]
Usage: srvctl start vip {-node <node_name> [-netnum <network_number>] | -vip <vip_name>} [-verbose]
Usage: srvctl stop vip {-node <node_name> [-netnum <network_number>] | -vip <vip_name>} [-force] [-relocate] [-verbose]
Usage: srvctl relocate vip -vip <vip_name> [-node <node_name>] [-force] [-verbose]
Usage: srvctl status vip {-node <node_name> | -vip <vip_name>} [-verbose]
Usage: srvctl setenv vip -vip <vip_name> {-envs "<name>=<val>[,...]" | -env "<name>=<val>"} [-verbose]
Usage: srvctl unsetenv vip -vip <vip_name> -envs "<name>[,...]" [-verbose]
Usage: srvctl predict vip -vip <vip_name> [-verbose]
Usage: srvctl add network [-netnum <network_number>] -subnet <subnet>/<netmask>[/if1[|if2...]] [-nettype {STATIC|DHCP|AUTOCONFIG|MIXED}] [-pingtarget "<pingtarget_list>"] [-verbose]
Usage: srvctl config network [-netnum <network_number>]
Usage: srvctl modify network [-netnum <network_number>] [-subnet <subnet>/<netmask>[/if1[|if2...]]] [-nettype {STATIC|DHCP|AUTOCONFIG|MIXED}] [-iptype {IPV4|IPV6|BOTH}] [-pingtarget "<pingtarget_list>"] [-verbose]
Usage: srvctl remove network {-netnum <network_number> | -all} [-force] [-verbose]
Usage: srvctl predict network [-netnum <network_number>] [-verbose]
Usage: srvctl add asm {[-proxy [-spfile <spfile>]] | [-listener <lsnr_name>] [-pwfile <password_file_path>] [-pwfilebackup <backup_password_file_path>] [-flex [-count {<number_of_instances>|ALL}] [-spfile <spfile>]]} 
Usage: srvctl start asm [-proxy] [-node <node_name>] [-startoption <start_options>]
Usage: srvctl stop asm [-proxy] [-node <node_name>] [-stopoption <stop_options>] [-force]
Usage: srvctl config asm [-proxy] [-detail]
Usage: srvctl status asm [-proxy] [-node <node_name>] [-detail] [-verbose]
Usage: srvctl enable asm [-proxy] [-node <node_name>]
Usage: srvctl disable asm [-proxy] [-node <node_name>]
Usage: srvctl modify asm {[-proxy -spfile <spfile_path>] | [-pwfile <password_file_path>] [-pwfilebackup <backup_password_file_path>] [-listener <lsnr_name>] [-count {<number_of_instances>|ALL}] [-spfile <spfile_path>]} [-force]
Usage: srvctl remove asm [-proxy] [-force]
Usage: srvctl relocate asm -currentnode <current_node> [-targetnode <target_node>] [-force]
Usage: srvctl getenv asm [-envs "<name>[,...]"]
Usage: srvctl setenv asm {-envs "<name>=<val>[,...]" | -env "<name>=<value>"}
Usage: srvctl unsetenv asm -envs "<name>[,...]"
Usage: srvctl predict asm [-node <node_name>] [-verbose]
Usage: srvctl add ioserver [-spfile <spfile>] [-count <number_of_ioserver_instances>] [-listener <lsnr_name>]
Usage: srvctl start ioserver [-node <node_name>]
Usage: srvctl stop ioserver [-node <node_name>] [-force]
Usage: srvctl config ioserver [-detail]
Usage: srvctl status ioserver [-node <node_name>] [-detail] [-verbose]
Usage: srvctl enable ioserver [-node <node_name>]
Usage: srvctl disable ioserver [-node <node_name>]
Usage: srvctl modify ioserver [-spfile <spfile>] [-count <number_of_ioserver_instances>] [-listener <lsnr_name>] [-force]
Usage: srvctl remove ioserver [-force]
Usage: srvctl relocate ioserver -currentnode <current_node> [-targetnode <target_node>] [-force]
Usage: srvctl start diskgroup -diskgroup <dg_name> [-node "<node_list>"]
Usage: srvctl stop diskgroup -diskgroup <dg_name> [-node "<node_list>"] [-force]
Usage: srvctl status diskgroup -diskgroup <dg_name> [-node "<node_list>"] [-detail] [-verbose]
Usage: srvctl enable diskgroup -diskgroup <dg_name> [-node "<node_list>"]
Usage: srvctl disable diskgroup -diskgroup <dg_name> [-node "<node_list>"]
Usage: srvctl remove diskgroup -diskgroup <dg_name> [-force]
Usage: srvctl predict diskgroup -diskgroup <diskgroup_name> [-verbose]
Usage: srvctl add listener [-listener <lsnr_name>] {[-netnum <network_number>] [-oraclehome <path>] [-user <oracle_user>] | -asmlistener [-subnet <subnet>]} [-skip] [-endpoints "[TCP:]<port>[, ...][:FIREWALL={ON|OFF}][/IPC:<key>][/NMP:<pipe_name>][/{TCPS|SDP|EXADIRECT}<port>[:FIREWALL={ON|OFF}]]" [-group <group>]] [-invitednodes "<node_list>"] [-invitedsubnets "<subnet_list>"]
Usage: srvctl config listener [-listener <lsnr_name> | -asmlistener] [-all]
Usage: srvctl start listener [-listener <lsnr_name>] [-node <node_name>]
Usage: srvctl stop listener [-listener <lsnr_name>] [-node <node_name>] [-force]
Usage: srvctl status listener [-listener <lsnr_name>] [-node <node_name>] [-verbose]
Usage: srvctl enable listener [-listener <lsnr_name>] [-node <node_name>]
Usage: srvctl disable listener [-listener <lsnr_name>] [-node <node_name>]
Usage: srvctl modify listener [-listener <lsnr_name>] [-oraclehome <path>] [-endpoints "[TCP:]<port>[, ...][:FIREWALL={ON|OFF}][/IPC:<key>][/NMP:<pipe_name>][/{TCPS|SDP|EXADIRECT}<port>[:FIREWALL={ON|OFF}]]"] [-group <group>] [-user <oracle_user>] [-netnum <network_number>]
Usage: srvctl remove listener [-listener <lsnr_name> | -all] [-force]
Usage: srvctl getenv listener [-listener <lsnr_name>] [-envs "<name>[,...]"]
Usage: srvctl setenv listener [-listener <lsnr_name>] {-envs "<name>=<val>[,...]" | -env "<name>=<value>"}
Usage: srvctl unsetenv listener [-listener <lsnr_name>] -envs "<name>[,...]"
Usage: srvctl predict listener -listener <listener_name> [-verbose]
Usage: srvctl add scan {-scanname <scan_name> | -clientdata <filename>} [-netnum <network_number>]
Usage: srvctl config scan [[-netnum <network_number>] [-scannumber <scan_ordinal_number>] | -all]
Usage: srvctl start scan [-netnum <network_number>] [-scannumber <scan_ordinal_number>] [-node <node_name>]
Usage: srvctl stop scan [-netnum <network_number>] [-scannumber <scan_ordinal_number>] [-force]
Usage: srvctl relocate scan -scannumber <scan_ordinal_number> [-netnum <network_number>] [-node <node_name>]
Usage: srvctl status scan [[-netnum <network_number>] [-scannumber <scan_ordinal_number>] | -all] [-verbose]
Usage: srvctl enable scan [-netnum <network_number>] [-scannumber <scan_ordinal_number>]
Usage: srvctl disable scan [-netnum <network_number>] [-scannumber <scan_ordinal_number>]
Usage: srvctl modify scan -scanname <scan_name> [-netnum <network_number>]
Usage: srvctl remove scan [-netnum <network_number>] [-force] [-noprompt]
Usage: srvctl predict scan -scannumber <scan_ordinal_number> [-netnum <network_number>] [-verbose]
Usage: srvctl add scan_listener [-netnum <network_number>] [-listener <lsnr_name_prefix>] [-skip] [-endpoints [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>][/SDP:<port>][/EXADIRECT:<port>]] [-invitednodes "<node_list>"] [-invitedsubnets "<subnet_list>"] [-clientcluster <cluster_name>] [-clientdata <filename>]
Usage: srvctl config scan_listener [[-netnum <network_number>] [-scannumber <scan_ordinal_number>] [-clientcluster <cluster_name>] | -all]
Usage: srvctl start scan_listener [-netnum <network_number>] [-scannumber <scan_ordinal_number>] [-node <node_name>] [-clientcluster <cluster_name>]
Usage: srvctl stop scan_listener [-netnum <network_number>] [-scannumber <scan_ordinal_number>] [-clientcluster <cluster_name>] [-force]
Usage: srvctl relocate scan_listener -scannumber <scan_ordinal_number> [-netnum <network_number>] [-node <node_name>]
Usage: srvctl status scan_listener [[-netnum <network_number>] [-scannumber <scan_ordinal_number>] | [-clientcluster <cluster_name>] | -all] [-verbose]
Usage: srvctl enable scan_listener [-netnum <network_number>] [-scannumber <scan_ordinal_number>] [-clientcluster <cluster_name>]
Usage: srvctl disable scan_listener [-netnum <network_number>] [-scannumber <scan_ordinal_number>] [-clientcluster <cluster_name>]
Usage: srvctl modify scan_listener {-update|-endpoints [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>][/SDP:<port>][/EXADIRECT:<port>]} [-netnum <network_number>] [-invitednodes "<node_list>"] [-invitedsubnets "<subnet_list>"] [-clientcluster <cluster_name>]
Usage: srvctl update scan_listener
Usage: srvctl remove scan_listener [-netnum <network_number>] [-clientcluster <cluster_name>] [-force] [-noprompt]
Usage: srvctl predict scan_listener -scannumber <scan_ordinal_number> [-netnum <network_number>] [-verbose]
Usage: srvctl export scan_listener -clientcluster <cluster_name> -clientdata <filename>
Usage: srvctl add cdp [-port <port_number>] [-passfile_admin <afile>] [-passfile_readonly <rfile>]
Usage: srvctl enable cdp [-cdpnumber <cdp_ordinal_number>]
Usage: srvctl disable cdp [-cdpnumber <cdp_ordinal_number>]
Usage: srvctl start cdp [-cdpnumber <cdp_ordinal_number>] [-node <node_name>]
Usage: srvctl stop cdp [-cdpnumber <cdp_ordinal_number>]
Usage: srvctl modify cdp [-port <port_number>] [-passfile_admin <afile>] [-passfile_readonly <rfile>]
Usage: srvctl relocate cdp -cdpnumber <cdp_ordinal_number> [-node <node_name>] [-force]
Usage: srvctl remove cdp [-force]
Usage: srvctl config cdp [-cdpnumber <cdp_ordinal_number>]
Usage: srvctl add cdpproxy -clienttype <client_type> -clientname <client_name> [-remotestart {YES|NO}]
Usage: srvctl start cdpproxy -clienttype <client_type> -clientname <client_name> [-node <node_name>]
Usage: srvctl stop cdpproxy -clienttype <client_type> -clientname <client_name> [-node <node_name>]
Usage: srvctl status cdp [-cdpnumber <cdp_ordinal_number>]
Usage: srvctl status cdpproxy -clienttype <client_type> -clientname <client_name>
Usage: srvctl remove cdpproxy -clienttype <client_type> [-clientname <client_name>] [-force]
Usage: srvctl config cdpproxy -clienttype <client_type> [-clientname <client_name>]
Usage: srvctl add srvpool -serverpool <pool_name> [-min <min>] [-max <max>] [-importance <importance>] [-servers "<server_list>" | -category <server_category>] [-force] [-eval] [-verbose]
Usage: srvctl config srvpool [-serverpool <pool_name>]
Usage: srvctl status srvpool [-serverpool <pool_name>] [-detail]
Usage: srvctl status server -servers "<server_list>" [-detail]
Usage: srvctl relocate server -servers "<server_list>" -serverpool <pool_name> [-force] [-eval] [-verbose]
Usage: srvctl modify srvpool -serverpool <pool_name> [-min <min>] [-max <max>] [-importance <importance>] [-servers "<server_list>" | -category <server_category>] [-force] [-eval] [-verbose]
Usage: srvctl remove srvpool -serverpool <pool_name> [-eval] [-verbose]
Usage: srvctl add qosmserver [-secure '{YES|NO}'] [-enableHTTPS '{YES|NO}'] [-verbose]
Usage: srvctl config qosmserver
Usage: srvctl start qosmserver [-node <node_name>] [-verbose]
Usage: srvctl stop qosmserver [-force] [-verbose]
Usage: srvctl relocate qosmserver [-node <node_name>] [-verbose]
Usage: srvctl status qosmserver [-node <node_name>] [-verbose]
Usage: srvctl enable qosmserver [-node <node_name>] [-verbose]
Usage: srvctl disable qosmserver [-node <node_name>] [-verbose]
Usage: srvctl modify qosmserver [-rmiport <qosmserver_rmi_port>] [-httpport <qosmserver_http_port>] [-secure '{YES|NO}'] [-enableHTTPS '{YES|NO}'] [-verbose] [-force]
Usage: srvctl predict qosmserver [-verbose]
Usage: srvctl add rhpserver -storage <base_path> [-diskgroup <dg_name>] [-email <email_address> -mailserver <mail_server_address> -mailserverport <mail_server_port>] [-pl_port <RHP_progress_listener_port>] [-clport <RHP_copy_listener_port>] [-enableTLS {YES|NO}] [-enableHTTPS '{YES|NO}'] [-port_range <low_val-high_val>] [-tmploc <temporary_location>] [-verbose]
Usage: srvctl config rhpserver
Usage: srvctl start rhpserver [-node <node_name>]
Usage: srvctl stop rhpserver
Usage: srvctl relocate rhpserver [-node <node_name>]
Usage: srvctl status rhpserver
Usage: srvctl enable rhpserver [-node <node_name>]
Usage: srvctl disable rhpserver [-node <node_name>]
Usage: srvctl modify rhpserver [-port <rmi_port> [-force]] [-email <email_address> -mailserver <mail_server_address> -mailserverport <mail_server_port>] [-pl_port <RHP_progress_listener_port>] [-clport <RHP_copy_listener_port>] [-enableTLS {YES|NO}] [-enableHTTPS '{YES|NO}'] [-port_range <low_val-high_val>] [-tmploc <temporary_location>]
Usage: srvctl remove rhpserver [-resource] [-force] [-verbose]
Usage: srvctl add havip -id <id> -address {<ip>|<name>} [-netnum <network_number>] [-description <text>] [-skip] [-homenode <node_name>]
Usage: srvctl config havip [-id <id> | -transport]
Usage: srvctl start havip {-id <id> [-node <node_name>] | -transport}
Usage: srvctl stop havip {-id <id> [-node <node_name>] | -transport} [-force]
Usage: srvctl relocate havip -id <id> [-node <node_name>] [-force]
Usage: srvctl status havip [-id <id> | -transport]
Usage: srvctl enable havip {-id <id> [-node <node_name>] | -transport}
Usage: srvctl disable havip {-id <id> [-node <node_name>] | -transport}
Usage: srvctl modify havip -id <id> [-address {<name>|<ip>} [-netnum <network_number>] [-skip]] [-description <text>] [-homenode <node_name>]
Usage: srvctl remove havip -id <id> [-force]
Usage: srvctl add exportfs -name <expfs_name>  -id <id> -path <export_path> [-clients <export_clients>] [-options <export_options>] [-type {NFS|SMB}]
Usage: srvctl config exportfs [-name <expfs_name> | -id <havip_id>]
Usage: srvctl start exportfs {-name <expfs_name> | -id <havip_id>}
Usage: srvctl stop exportfs {-name <expfs_name> | -id <havip_id> [-type {NFS|SMB}]} [-force]
Usage: srvctl status exportfs [-name <expfs_name> |-id <havip_id>]
Usage: srvctl enable exportfs -name <expfs_name>
Usage: srvctl disable exportfs -name <expfs_name>
Usage: srvctl modify exportfs -name <expfs_name> [-path <exportpath>] [-clients <export_clients>] [-options <export_options>]
Usage: srvctl remove exportfs -name <expfs_name> [-force]
Usage: srvctl add rhpclient -clientdata <file> [-diskgroup <dg_name> -storage <base_path>] [-email <email_address> -mailserver <mail_server_address> -mailserverport <mail_server_port>] [-clport <RHP_copy_listener_port>] [-enableTLS {YES|NO}] [-enableHTTPS '{YES|NO}'] [-verbose]
Usage: srvctl config rhpclient
Usage: srvctl start rhpclient [-node <node_name>]
Usage: srvctl stop rhpclient
Usage: srvctl relocate rhpclient [-node <node_name>]
Usage: srvctl status rhpclient
Usage: srvctl enable rhpclient [-node <node_name>]
Usage: srvctl disable rhpclient [-node <node_name>]
Usage: srvctl modify rhpclient [-clientdata <file>] [-port <rmi_port>] [-diskgroup <dg_name> -storage <base_path>] [-email <email_address> -mailserver <mail_server_address> -mailserverport <mail_server_port>] [-clport <RHP_copy_listener_port>] [-enableTLS {YES|NO}] [-enableHTTPS '{YES|NO}']
Usage: srvctl remove rhpclient [-force] [-verbose]
Usage: srvctl start home -oraclehome <oracle_home> -statefile <state_file> -node <node_name>
Usage: srvctl stop home -oraclehome <oracle_home> -statefile <state_file> -node <node_name> [-stopoption <stop_options>] [-force]
Usage: srvctl status home -oraclehome <oracle_home> -statefile <state_file> -node <node_name>
Usage: srvctl add filesystem {-device <volume_device> | -volume <volume_name> -diskgroup <dg_name>} -path <mountpoint_path> [-node "<node_list>" | -serverpool "<serverpool_list>"] [-user "<user_list>"] [-mountowner <user_name>] [-mountgroup <group_name>] [-mountperm <octal_permission>] [-fstype {ACFS|EXT3|EXT4}] [-fsoptions <options>] [-description <description>] [-appid <application_id>] [-autostart {ALWAYS|NEVER|RESTORE}] [-acceleratorvols <volume_device>]
Usage: srvctl config filesystem [-device <volume_device> | -volume <volume_name> -diskgroup <dg_name>]
Usage: srvctl start filesystem {-device <volume_device_list> | -volume <volume_name_list> -diskgroup <dg_name_list>} [-node <node_name>]
Usage: srvctl stop filesystem {-device <volume_device> | -volume <volume_name> -diskgroup <dg_name>} [-node <node_name>] [-force]
Usage: srvctl relocate filesystem {-device <volume_device> | -volume <volume_name> -diskgroup <dg_name>} [-node <node_name>] [-force] [-verbose]
Usage: srvctl status filesystem [-device <volume_device> | -volume <volume_name> -diskgroup <dg_name>] [-verbose]
Usage: srvctl enable filesystem {-device <volume_device> | -volume <volume_name> -diskgroup <dg_name>}
Usage: srvctl disable filesystem {-device <volume_device> | -volume <volume_name> -diskgroup <dg_name>}
Usage: srvctl modify filesystem {-device <volume_device> | -volume <volume_name> -diskgroup <dg_name>} [-user {/+|/-}<user_name> | "<user_list>"] [-mountowner <user_name>] [-mountgroup <group_name>] [-mountperm <octal_permission>] [-path <mountpoint_path>] [-node "<node_list>" | -serverpool "<serverpool_list>"] [-fsoptions <options>] [-description <description>] [-autostart {ALWAYS|NEVER|RESTORE}] [-force]
Usage: srvctl remove filesystem {-device <volume_device> | -volume <volume_name> -diskgroup <dg_name>} [-force]
Usage: srvctl predict filesystem {-device <volume_device> | -volume <volume_name> -diskgroup <dg_name>} [-verbose]
Usage: srvctl config volume [-volume <volume_name>] [-diskgroup <group_name>] [-device <volume_device>]
Usage: srvctl start volume {-volume <volume_name> -diskgroup <group_name> | -device <volume_device>} [-node "<node_list>"]
Usage: srvctl stop volume {-volume <volume_name> -diskgroup <group_name> | -device <volume_device>} [-node "<node_list>"] [-force]
Usage: srvctl status volume [-device <volume_device>] [-volume <volume_name>] [-diskgroup <group_name>] [-node "<node_list>" | -all]
Usage: srvctl enable volume {-volume <volume_name> -diskgroup <group_name> | -device <volume_device>} [-node <node_name>]
Usage: srvctl disable volume {-volume <volume_name> -diskgroup <group_name> | -device <volume_device>} [-node <node_name>]
Usage: srvctl start gns [-loglevel <log_level>] [-node <node_name>] [-verbose]
Usage: srvctl stop gns [-node <node_name>] [-force] [-verbose]
Usage: srvctl config gns [-detail] [-subdomain] [-multicastport] [-node <node_name>] [-port] [-status] [-version] [-query <name>] [-list] [-clusterguid] [-clustername] [-clustertype] [-loglevel] [-network] [-role] [-serialnumber] [-instances] [-querycluster [-name <clustername>]] [-verbose]
Usage: srvctl status gns [-node <node_name>] [-verbose]
Usage: srvctl enable gns [-node <node_name>] [-verbose]
Usage: srvctl disable gns [-node <node_name>] [-verbose]
Usage: srvctl relocate gns [-node <node_name>] [-verbose]
Usage: srvctl add gns {-vip {<vip_name> | <ip>} [-skip] [-domain <domain>] [-clientdata <filename>] [-verbose] | -clientdata <filename>}
Usage: srvctl modify gns {-loglevel <log_level> | [-resolve <name>] [-verify <name>] [-parameter <name>:<value>[,<name>:<value>...]] [-vip {<vip_name> | <ip>} [-skip]] [-clientdata <filename>] [-role {PRIMARY} [-force]] [-verbose]}
Usage: srvctl remove gns [-clustername <name>] [-force] [-verbose]
Usage: srvctl import gns -instance <filename>
Usage: srvctl export gns {-instance <filename> | {-clientdata <filename> -role {CLIENT|SECONDARY} [-version]}}
Usage: srvctl update gns {-advertise <name> -address <address> [-timetolive <time_to_live>] | -delete <name> [-address <address>] | -alias <alias> -name <name> [-timetolive <time_to_live>] | -deletealias <alias> | -createsrv <service> -target <target> -protocol <protocol> [-weight <weight>] [-priority <priority>] [-port <port_number>] [-timetolive <time_to_live>] [-instance <instance_name>] | -deletesrv <service_name> -target <target> -protocol <protocol> [-instance <instance_name>] | -createtxt <name> -target <target> [-timetolive <time_to_live>] [-namettl <name_ttl>] | -deletetxt <name> -target <target> | -createptr <name> -target <target> [-timetolive <time_to_live>] [-namettl <name_ttl>] | -deleteptr <name> -target <target>} [-verbose]
Usage: srvctl add cvu [-checkinterval <check_interval_in_minutes>] [-destloc <path>]
Usage: srvctl config cvu
Usage: srvctl start cvu [-node <node_name>]
Usage: srvctl stop cvu [-force]
Usage: srvctl relocate cvu [-node <node_name>]
Usage: srvctl status cvu [-node <node_name>]
Usage: srvctl enable cvu [-node <node_name>]
Usage: srvctl disable cvu [-node <node_name>]
Usage: srvctl modify cvu [-checkinterval <check_interval_in_minutes>] [-destloc <path>]
Usage: srvctl remove cvu [-force]
Usage: srvctl add mgmtdb [-domain <domain_name>]
Usage: srvctl config mgmtdb [-verbose] [-all]
Usage: srvctl start mgmtdb [-startoption <start_option>] [-node <node_name>]
Usage: srvctl stop mgmtdb [-stopoption <stop_option>] [-force]
Usage: srvctl update mgmtdb -startoption <start_options>
Usage: srvctl status mgmtdb [-verbose]
Usage: srvctl modify mgmtdb [-pwfile <password_file_path>] [-spfile <server_parameter_file>]
Usage: srvctl getenv mgmtdb [-envs "<name>[,...]"]
Usage: srvctl setenv mgmtdb {-envs "<name>=<value>[,...]" | -env "<name=value>"}
Usage: srvctl unsetenv mgmtdb -envs "<name>[,..]"
Usage: srvctl relocate mgmtdb [-node <node_name>]
Usage: srvctl add mgmtlsnr [-endpoints "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>][/SDP:<port>][/EXADIRECT:<port>]"] [-skip]
Usage: srvctl config mgmtlsnr [-all]
Usage: srvctl start mgmtlsnr [-node <node_name>]
Usage: srvctl stop mgmtlsnr [-node <node_name>] [-force]
Usage: srvctl status mgmtlsnr [-verbose]
Usage: srvctl modify mgmtlsnr -endpoints "[TCP:]<port>[,...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>][/SDP:<port>][/EXADIRECT:<port>]"
Usage: srvctl getenv mgmtlsnr [ -envs "<name>[,...]"]
Usage: srvctl setenv mgmtlsnr { -envs "<name>=<val>[,...]" | -env "<name>=<value>"}
Usage: srvctl unsetenv mgmtlsnr -envs "<name>[,...]"
Usage: srvctl add cha
Usage: srvctl config cha
Usage: srvctl start cha [-node <node_name>]
Usage: srvctl stop cha [-node <node_name>] [-force]
Usage: srvctl status cha [-node <node_name>]
Usage: srvctl remove cha [-force]
Usage: srvctl getenv cha [-envs "<name>[,...]"]
Usage: srvctl setenv cha -envs "<name>=<val>[,...]"
Usage: srvctl unsetenv cha -envs "<name>[,...]"
Usage: srvctl add mountfs -name <mountfs_name> -path <mount_path> -exportserver <server_name> -exportpath <path> [-mountoptions <mount_options>] [-user <user>]
Usage: srvctl config mountfs [-name <mountfs_name>]
Usage: srvctl start mountfs -name <mountfs_name> [-node <node_list>]
Usage: srvctl stop mountfs -name <mountfs_name> [-node <node_list>] [-force]
Usage: srvctl status mountfs -name <mountfs_name>
Usage: srvctl enable mountfs -name <mountfs_name> [-node <node_list>]
Usage: srvctl disable mountfs -name <mountfs_name> [-node <node_list>]
Usage: srvctl modify mountfs -name <mountfs_name> [-path <mount_path>] [-exportserver <server_name>] [-exportpath <path>] [-mountoptions <mount_options>] [-user <user>]
Usage: srvctl remove mountfs -name <mountfs_name> [-force]
Usage: srvctl add vm -name <unique_name> -vm "<vm_list>" [-serverpool <pool_name> | -category <server_category> | -node "<node_list>"] [-stoptimeout <timeout>] [-checkinterval <interval>]
Usage: srvctl config vm [-name <unique_name>]
Usage: srvctl disable vm -name <unique_name> [-vm <name_or_id> | -node <node_name>]
Usage: srvctl enable vm -name <unique_name> [-vm <name_or_id> | -node <node_name>]
Usage: srvctl modify vm -name <unique_name> [-addvm "<vm_list>" | -removevm "<vm_list>"] [-serverpool <server_pool> | -category <server_category> | -node "<node_list>"] [-stoptimeout <timeout>] [-checkinterval <interval>]
Usage: srvctl relocate vm -name <unique_name> {-vm <name_or_id> | -srcnode <source_node>} -node <destination_node>
Usage: srvctl remove vm -name <unique_name> [-force]
Usage: srvctl start vm -name <unique_name> [-vm <name_or_id> -node <node_name> | -vm <name_or_id> | -node <node_name>]
Usage: srvctl stop vm -name <unique_name> [-vm <name_or_id> | -node <node_name>]
Usage: srvctl status vm -name <unique_name> [-vm <name_or_id> | -node <node_name>]
Usage: srvctl add ovmm -username <username> -wallet <wallet_path> -ovmmhost <host_or_IP> -ovmmport <port>
Usage: srvctl config ovmm
Usage: srvctl modify ovmm [-username <username>] [-wallet <wallet_path>] [-ovmmhost <host_or_IP>] [-ovmmport <port>]
Usage: srvctl remove ovmm
Usage: srvctl add acfsrapps
Usage: srvctl config acfsrapps
Usage: srvctl start acfsrapps [-node <node_list>]
Usage: srvctl stop acfsrapps [-node <node_list>] [-force]
Usage: srvctl status acfsrapps
Usage: srvctl enable acfsrapps [-node <node_list>]
Usage: srvctl disable acfsrapps [-node <node_list>]
Usage: srvctl remove acfsrapps [-force]
Usage: srvctl add oraclehome -name <home_name> -path <path> [-type {ADMIN|POLICY}] [-node <node_list>]
Usage: srvctl config oraclehome [-name <home_name>]
Usage: srvctl disable oraclehome -name <home_name> [-node <node_name>]
Usage: srvctl enable oraclehome -name <home_name> [-node <node_name>]
Usage: srvctl modify oraclehome -name <home_name> [-path <path>] [-type {ADMIN|POLICY}] [-node <node_list> | [-addnode <node_name>] [-deletenode <node_name>]]
Usage: srvctl remove oraclehome -name <home_name> [-force]
Usage: srvctl start oraclehome -name <home_name> [-node <node_list>]
Usage: srvctl stop oraclehome -name <home_name> [-node <node_list>] [-force]
Usage: srvctl status oraclehome -name <home_name>
Usage: srvctl config rhpplsnr
Usage: srvctl disable rhpplsnr [-node <node_name>]
Usage: srvctl enable rhpplsnr [-node <node_name>]
Usage: srvctl modify rhpplsnr [-pl_port <RHP_progress_listener_port>] [-use '{YES|NO}'] [-force]
Usage: srvctl start rhpplsnr [-node <node_name>]
Usage: srvctl stop rhpplsnr
Usage: srvctl status rhpplsnr
Usage: srvctl add ons [-emport <em_port>] [-onslocalport <ons_local_port>]  [-onsremoteport <ons_remote_port>] [-remoteservers <host>[:<port>][,<host>[:<port>]...]] [-clientcluster <cluster_name> | -clientdata <filename>]
Usage: srvctl config ons [-all] [-clientcluster <cluster_name>]
Usage: srvctl disable ons [-clientcluster <cluster_name>]
Usage: srvctl enable ons [-clientcluster <cluster_name>]
Usage: srvctl modify ons [-emport <em_port>] [-onslocalport <ons_local_port>]  [-onsremoteport <ons_remote_port>] [-remoteservers <host>[:<port>][,<host>[:<port>]...]] [-clientcluster <cluster_name>]
Usage: srvctl remove ons [-clientcluster <cluster_name>] [-force]
Usage: srvctl start ons [-clientcluster <cluster_name>]
Usage: srvctl stop ons [-clientcluster <cluster_name>] [-force]
Usage: srvctl status ons [-clientcluster <cluster_name>]
Usage: srvctl export ons -clientcluster <cluster_name> -clientdata <filename>
Usage: srvctl add tfa -diskgroup <dg_name>
Usage: srvctl config tfa
Usage: srvctl start tfa [-node <node_name>]
Usage: srvctl stop tfa [-node <node_name>] [-force]
Usage: srvctl status tfa
Usage: srvctl enable tfa [-node <node_name>]
Usage: srvctl disable tfa [-node <node_name>]
Usage: srvctl remove tfa [-force]
Usage: srvctl add netstorageservice -device <volume_device>
Usage: srvctl config netstorageservice
Usage: srvctl start netstorageservice [-node <node>]
Usage: srvctl stop netstorageservice [-node <node>] [-force]
Usage: srvctl status netstorageservice
Usage: srvctl enable netstorageservice [-node <node>]
Usage: srvctl disable netstorageservice [-node <node>]
Usage: srvctl remove netstorageservice

4.3.1 config: view configuration

--- View the database configuration
 
-- With no options, list all databases registered in OCR
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl config database

 

-- Use the -db option to view the configuration of a specific database
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl config database -db ocean
 
Note: the output shows that database ocean has two instances, named ocean1 and ocean2, and that both instances use the $ORACLE_HOME /app/oracle/product/19.2.0/dbhome_1.

-- Use the -all option to view the detailed configuration
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl config database -db ocean -all
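Since `srvctl config database` with no options prints one db_unique_name per line, the two queries above can be combined into a small loop that dumps the detailed configuration of every registered database. A minimal sketch, assuming srvctl is on the PATH of the grid or oracle user:

```shell
# Sketch: print the detailed configuration of every database registered in OCR.
# Assumes 'srvctl config database' with no options prints one db_unique_name
# per line, as the text above describes.
list_db_configs() {
  srvctl config database | while read -r db; do
    echo "== ${db} =="
    srvctl config database -db "${db}" -all
  done
}
# list_db_configs   # run this on a cluster node
```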

--- View the Node Application configuration
 
-- With no options, shows the network, VIP, and ONS configuration of the cluster
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl config nodeapps

-- Use the -viponly option to show only the VIP configuration
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl config nodeapps -viponly

-- Use the -onsonly option to show only the ONS configuration
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl config nodeapps -onsonly

Note: the 10g-style options -n <node>, -a, -s and -l are gone in this release; per the usage shown above, config nodeapps accepts only -viponly and -onsonly, and the listener is now queried as its own resource.

--- 查看Listener.
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl config listener -n rac1 

[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl config listener -n rac2

--- View ASM

[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl config asm -n rac1
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl config asm -n rac2 

--- View services

[grid@rac1 bin]$ srvctl config service -help

Displays the configuration for the service.

Usage: srvctl config service {-db <db_unique_name> [-service <service_name>] | -serverpool <serverpool_name> [-db <db_unique_name>]} [-verbose]
    -db <db_unique_name>           Unique name for the database
    -serverpool <pool_name>        Display information on nodes within server pool
    -service <service>             Service name
    -verbose                       Verbose output
    -help                          Print usage


--- Example

-- View the configuration of every service in the database

[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl config service -d ocean 

4.3.2 add: register objects

Normally, application-layer resources are registered in the OCR with the help of graphical tools: the VIP and ONS are created during the final stage of installation, the database and ASM are registered automatically while DBCA runs, and the Listener is registered by netca. Sometimes, however, a resource has to be registered in the OCR by hand, and that is what the add command is for.

--- Add a database
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl add database -d luffy -o $ORACLE_HOME
 
--- Add instances
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl add instance -d luffy -n rac1 -i luffy1 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl add instance -d luffy -n rac2 -i luffy2 

--- Add a service; four options are required:
-s : service name
-r : preferred instance(s)
-a : available (backup) instance(s)
-P : TAF policy; valid values are NONE (the default), BASIC, and PRECONNECT.
[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl add service -d dmm -s dmmservice -r rac1 -a rac2 -P BASIC
 
--- Confirm the service was added

[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl config service -d dmm -s dmmservice -a
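The registration steps above can be collected into one reviewable script. A minimal sketch, assuming srvctl is on the PATH; the database name luffy, the node names, and the ORACLE_HOME are taken from the examples above, and every command is echoed for review rather than executed (change run() to execute them for real).

```shell
#!/bin/sh
# Sketch: register a database, its two instances, and a TAF service
# in one pass. Commands are only echoed; replace the echo in run()
# with '"$@"' to actually run them against the OCR.
DB=luffy
ORACLE_HOME=/app/oracle/product/19.2.0/dbhome_1

run() { echo "$@"; }

run srvctl add database -d "$DB" -o "$ORACLE_HOME"
run srvctl add instance -d "$DB" -n rac1 -i "${DB}1"
run srvctl add instance -d "$DB" -n rac2 -i "${DB}2"
run srvctl add service  -d "$DB" -s "${DB}service" -r "${DB}1" -a "${DB}2" -P BASIC
# Verify what was registered:
run srvctl config service -d "$DB" -s "${DB}service" -a
```

Reviewing the echoed plan before execution is useful here because a typo in an add command leaves a half-registered resource in the OCR that then has to be removed by hand.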

4.3.3 enable/disable: control autostart

By default the database, instances, services, and ASM all start automatically when CRS starts. For maintenance it is sometimes necessary to turn this behavior off first.

--- Control whether the database starts automatically with CRS

-- Enable database autostart:

[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl enable database -d ocean 

-- View the configuration
 
[root@rac1 bin]#  /app/product/19.2.0/crs/bin/srvctl config database -d ocean -a
 
-- Disable autostart; after this the database has to be started manually once CRS is up

[root@rac1 bin]#  /app/product/19.2.0/crs/bin/srvctl disable database -d ocean 

--- Disable autostart for a single instance

[root@rac1 bin]#  /app/product/19.2.0/crs/bin/srvctl disable instance -d ocean -i ocean1 
[root@rac1 bin]#  /app/product/19.2.0/crs/bin/srvctl enable instance -d ocean -i ocean1 

-- View the result

[root@rac1 bin]#  /app/product/19.2.0/crs/bin/srvctl config database -d ocean -a 

--- Disable a service on a particular instance

[root@rac1 bin]#  /app/product/19.2.0/crs/bin/srvctl enable service -d ocean -s oceanservice -i ocean1 
[root@rac1 bin]#  /app/product/19.2.0/crs/bin/srvctl disable service -d ocean -s oceanservice -i ocean1 

-- View the result

[root@rac1 bin]#  /app/product/19.2.0/crs/bin/srvctl config service -d ocean -a 
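When a maintenance window covers several databases, the enable/disable calls can be generated in a loop instead of typed one by one. A minimal sketch; the database names are illustrative, and the commands are echoed for review rather than executed.

```shell
#!/bin/sh
# Sketch: build the autostart enable/disable commands for a list of
# databases. Echoed for review; remove the echo to execute for real.
autostart() {            # usage: autostart <enable|disable> <db> ...
  action=$1; shift
  for db in "$@"; do
    echo srvctl "$action" database -d "$db"
  done
}

autostart disable ocean luffy   # before maintenance
autostart enable  ocean luffy   # afterwards
```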

4.3.4 remove: deregister objects

The remove command deletes only the object's definition in the OCR; the object itself (for example, the database's data files) is not removed and can be re-registered at any time with the add command.

--- Remove a service; the command prompts for confirmation first
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl remove service -d ocean -s oceanservice
 
--- Remove an instance; again with a confirmation prompt

[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl remove instance -d ocean -i ocean1

--- Remove the database

[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl remove database -d ocean

4.3.5 start/stop/status: control objects and check their state

In a RAC environment the database can still be started and stopped with SQL*Plus, but srvctl is the recommended tool for this because it also keeps the runtime information recorded in CRS up to date. Use the start and stop commands to start and stop an object, then use the status command to check its state.

--- Start the database; by default it is started to the open state
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl start database -d ocean 
 
--- Specify the startup mode for an instance

[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl start database -d ocean -i ocean1 -o mount 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl start database -d ocean -i ocean1 -o nomount 

--- Stop an instance, specifying the shutdown mode

[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl stop instance -d ocean -i ocean1 -o immediate 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl stop instance -d ocean -i ocean1 -o abort 

--- Start a service on a specific instance:

[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl start service -d ocean -s oceanservice -i ocean1
 
-- Check the service status

[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl status service -d ocean -v 

--- Stop the service on a specific instance

[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl stop service -d ocean -s oceanservice -i ocean1 

-- Check the service status

[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl status service -d ocean -v
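The ordering implied above matters: a service should be stopped before its database, and the database started before its service. The sequence can be captured in a pair of helpers. A sketch with the illustrative names from this section; the commands are echoed for review, not executed.

```shell
#!/bin/sh
# Sketch: stop a service before its database, then start them in the
# reverse order, mirroring the sequence shown above. Echoed for review;
# replace each echo with the bare command to execute for real.
stop_ordered() {    # usage: stop_ordered <db> <service>
  echo srvctl stop service  -d "$1" -s "$2"
  echo srvctl stop database -d "$1" -o immediate
}
start_ordered() {   # usage: start_ordered <db> <service>
  echo srvctl start database -d "$1"
  echo srvctl start service  -d "$1" -s "$2"
  echo srvctl status service -d "$1" -v
}

stop_ordered  ocean oceanservice
start_ordered ocean oceanservice
```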

4.3.6 srvctl tracing

Tracing srvctl is simple: set the OS environment variable SRVM_TRACE=TRUE, and every srvctl invocation will print its internal call trace to the screen, which helps with diagnosis.

[root@rac1 bin]# export SRVM_TRACE=TRUE
 
[root@rac1 bin]# /app/product/19.2.0/crs/bin/srvctl config database -d ocean
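Rather than exporting SRVM_TRACE for the whole session, the variable can be scoped to a single command and the trace kept in a file for later review. A sketch: the srvctl call itself is shown as a comment (it needs a live cluster), a placeholder command stands in for it here, and the log path is illustrative.

```shell
#!/bin/sh
# Sketch: scope SRVM_TRACE to one invocation and capture its output.
TRACE_LOG=/tmp/srvctl_trace.$$.log    # illustrative location

# Against a real cluster this would be:
#   SRVM_TRACE=TRUE srvctl config database -d ocean > "$TRACE_LOG" 2>&1
# A placeholder stands in for srvctl here; the prefix assignment
# exports the variable to that one child process only:
SRVM_TRACE=TRUE sh -c 'echo "SRVM_TRACE is $SRVM_TRACE"' > "$TRACE_LOG" 2>&1

cat "$TRACE_LOG"     # prints: SRVM_TRACE is TRUE
rm -f "$TRACE_LOG"
```

Scoping the variable this way avoids the common pitfall of leaving SRVM_TRACE exported and having every later srvctl call flood the terminal with trace output.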

4.3.7 CRS recovery

If the OCR and the voting disks are all destroyed and there are no backups, the simplest approach is to reinitialize the OCR and the voting disk.

--- Stop the Clusterware stack on every node

[root@rac1 bin]# /app/product/19.2.0/crs/bin/crsctl stop crs

--- As root, run the $CRS_HOME/install/rootdelete.sh script on each node in turn

--- As root, run the $CRS_HOME/install/rootinstall.sh script on any one node

--- On that same node, run $CRS_HOME/root.sh as root

--- Run $CRS_HOME/root.sh as root on the other node

--- Reconfigure the listeners with netca and confirm they are registered in Clusterware

[grid@rac1 ~]$ crsctl stat res -t
 
At this point only the Listener, ONS, GSD, and VIP are registered in the OCR; ASM and the database still need to be added.

--- Add ASM to the OCR

[grid@rac1 ~]$ srvctl add asm -n rac1 -i +ASM1 -o /app/product/19.2.0/crs
[grid@rac1 ~]$ srvctl add asm -n rac2 -i +ASM2 -o /app/product/19.2.0/crs

--- Start ASM

[grid@rac1 ~]$ srvctl start asm -n rac1
[grid@rac1 ~]$ srvctl start asm -n rac2

If startup fails with ORA-27550, RAC could not determine which NIC to use as the private interconnect. The fix is to add the following parameters to the pfile of each ASM instance:

+ASM1.cluster_interconnects='172.16.33.1'
+ASM2.cluster_interconnects='172.16.33.2'

--- Manually add the Database object to the OCR

[grid@rac1 ~]$ srvctl add database -d ocean -o /app/oracle/product/19.2.0/dbhome_1

--- Add the two instance objects

[grid@rac1 ~]$ srvctl add instance -d ocean -i ocean1 -n rac1
[grid@rac1 ~]$ srvctl add instance -d ocean -i ocean2 -n rac2

--- Configure the dependency between the database instances and the ASM instances

[grid@rac1 ~]$ srvctl modify instance -d ocean -i ocean1 -s +ASM1
[grid@rac1 ~]$ srvctl modify instance -d ocean -i ocean2 -s +ASM2

--- Start the database

[grid@rac1 ~]$ srvctl start database -d ocean

If ORA-27550 appears here as well, it is the same private-interconnect ambiguity; set the parameter in the spfile and restart:
SQL> alter system set cluster_interconnects='172.16.33.1' scope=spfile sid='ocean1';
SQL> alter system set cluster_interconnects='172.16.33.2' scope=spfile sid='ocean2';

5. Cluster shutdown and startup

5.1 Shutdown sequence (stop the database instances first, then the CRS cluster)

5.1.1 Stop the database:

--- Stop the database instances

[grid@rac1 ~]$ srvctl stop database -d ocean

--- Check the database instance status

[grid@rac1 ~]$ srvctl status database -d ocean

5.1.2 Stop HAS (High Availability Services); this must be done as root

--- Stop HAS

[root@rac1 bin]# /app/product/19.2.0/crs/bin/crsctl stop has -f

[root@rac2 bin]# /app/product/19.2.0/crs/bin/crsctl stop has -f

Note: this command stops the stack only on the local node, so it must be run on every node of the RAC; the same applies to startup. Here 'has' is effectively equivalent to 'crs'.

The cluster is now fully shut down.

5.1.3 Stop the cluster services on the nodes; this must be done as root

--- Stop the cluster

-- Stop the local node only

[root@rac1 bin]# /app/product/19.2.0/crs/bin/crsctl stop cluster

-- Stop all nodes

[root@rac1 bin]# /app/product/19.2.0/crs/bin/crsctl stop cluster -all

-- Stop specified nodes

[root@rac1 bin]# /app/product/19.2.0/crs/bin/crsctl stop cluster -n rac1 rac2

 

Any of these commands stops all the Clusterware-managed processes in one step: with no option it affects only the current node, with -all it affects every node, and with -n it affects the nodes you list.
 

5.2 Startup sequence (start the CRS cluster first, then the database instances)

5.2.1 Start HAS

--- Start on a single node

[root@rac1 bin]# ./crsctl start has

[root@rac1 bin]# ./crsctl check has

--- Start on multiple or all nodes

[root@rac1 bin]# ./crsctl start cluster -n rac1 rac2

[root@rac1 bin]# ./crsctl start cluster -all

[root@rac1 bin]# ./crsctl check cluster 
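The full startup order of this section (HAS on every node, a cluster check, then the database) can be laid out in one script. A sketch with the illustrative node and database names used above; driving remote nodes over ssh as root is an assumption about the environment, so every command is echoed as a plan rather than executed.

```shell
#!/bin/sh
# Sketch: print the cluster startup plan in order. Review the output,
# then run the commands (as root, with working ssh access) by hand or
# by piping the plan into sh.
startup_plan() {
  nodes="rac1 rac2"
  crs_home=/app/product/19.2.0/crs
  for node in $nodes; do
    # 'crsctl start has' only affects the local node, so it has to be
    # issued on each node in turn:
    echo "ssh root@$node $crs_home/bin/crsctl start has"
  done
  echo "$crs_home/bin/crsctl check cluster -all"
  echo "srvctl start database -d ocean"
}

startup_plan
```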

 
