Oracle cluster startup sequence
1.init.crsd
2.init.cssd
3.init.evmd
[root@rac1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2948
Available space (kbytes) : 259172
ID : 777223343
Device/File Name : +ocr
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
crsctl: Cluster Ready Services control
[grid@rac1 grid]$ crsctl -h
Usage: crsctl add - add a resource, type or other entity
crsctl check - check a service, resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource, type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource, type or other entity
crsctl query - query service state
crsctl pin - pin the nodes in the node list
crsctl relocate - relocate a resource, server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource, server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource, server or other entity
crsctl unpin - unpin the nodes in the node list
crsctl unset - unset an entity value, restoring its default
# Start and stop CRS
[root@rac1 grid]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.OCR.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'rac1'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
CRS-2677: Stop of 'ora.cvu' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'rac2'
CRS-2676: Start of 'ora.cvu' on 'rac2' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac2'
CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.rac1.vip' on 'rac2'
CRS-2676: Start of 'ora.scan1.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac2'
CRS-2677: Stop of 'ora.orcl.db' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ARCH.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'
CRS-2676: Start of 'ora.rac1.vip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ARCH.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac2' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'rac2'
CRS-2677: Stop of 'ora.OCR.dg' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2676: Start of 'ora.oc4j' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac1'
CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'
CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@rac1 grid]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@rac1 grid]# crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.OCR.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.ARCH.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ARCH.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac2'
CRS-2673: Attempting to stop 'ora.OCR.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.ARCH.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac2'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.rac2.vip' on 'rac2'
CRS-2677: Stop of 'ora.cvu' on 'rac2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ARCH.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.rac2.vip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac2' succeeded
CRS-2677: Stop of 'ora.OCR.dg' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac1'
CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'
CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.OCR.dg' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac2'
CRS-2677: Stop of 'ora.ons' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac2'
CRS-2677: Stop of 'ora.net1.network' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
[root@rac1 grid]# crsctl start cluster -all
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
[root@rac1 grid]# crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
[root@rac1 grid]# crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@rac1 ~]# crsctl config crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@rac1 grid]# crsctl query crs -h
Usage:
crsctl query crs administrator
Display admin list
crsctl query crs activeversion
Lists the Oracle Clusterware operating version
crsctl query crs releaseversion
Lists the Oracle Clusterware release version
crsctl query crs softwareversion [<nodename>| -all]
Lists the version of Oracle Clusterware software installed
where
Default List software version of the local node
nodename List software version of named node
-all List software version for all the nodes in the cluster
[root@rac1 grid]# crsctl query crs administrator
CRS Administrator List: *
[root@rac1 grid]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.4.0]
[root@rac1 grid]# crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.4.0]
[root@rac1 grid]# crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [11.2.0.4.0]
[grid@rac1 ~]$ asmcmd
ASMCMD> ls
ARCH/
DATA/
OCR/
ASMCMD>
ASMCMD> help
commands:
--------
md_backup, md_restore
lsattr, setattr
cd, cp, du, find, help, ls, lsct, lsdg, lsof, mkalias
mkdir, pwd, rm, rmalias
chdg, chkdg, dropdg, iostat, lsdsk, lsod, mkdg, mount
offline, online, rebal, remap, umount
dsget, dsset, lsop, shutdown, spbackup, spcopy, spget
spmove, spset, startup
chtmpl, lstmpl, mktmpl, rmtmpl
chgrp, chmod, chown, groups, grpmod, lsgrp, lspwusr, lsusr
mkgrp, mkusr, orapwusr, passwd, rmgrp, rmusr
volcreate, voldelete, voldisable, volenable, volinfo
volresize, volset, volstat
ASMCMD> startup              # mounts the disk groups by default
ASMCMD> shutdown --immediate
srvctl: server control
[grid@rac1 grid]$ srvctl -h
Usage: srvctl [-V]
Usage: srvctl add database -d <db_unique_name> -o <oracle_home> [-c {RACONENODE | RAC | SINGLE} [-e <server_list>] [-i <inst_name>] [-w <timeout>]] [-m <domain_name>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-n <db_name>] [-y {AUTOMATIC | MANUAL | NORESTART}] [-g "<serverpool_list>"] [-x <node_name>] [-a "<diskgroup_list>"] [-j "<acfs_path_list>"]
Usage: srvctl config database [-d <db_unique_name> [-a] ] [-v]
Usage: srvctl start database -d <db_unique_name> [-o <start_options>] [-n <node>]
Usage: srvctl stop database -d <db_unique_name> [-o <stop_options>] [-f]
Usage: srvctl status database -d <db_unique_name> [-f] [-v]
Usage: srvctl enable database -d <db_unique_name> [-n <node_name>]
Usage: srvctl disable database -d <db_unique_name> [-n <node_name>]
Usage: srvctl modify database -d <db_unique_name> [-n <db_name>] [-o <oracle_home>] [-u <oracle_user>] [-e <server_list>] [-w <timeout>] [-m <domain>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-y {AUTOMATIC | MANUAL | NORESTART}] [-g "<serverpool_list>" [-x <node_name>]] [-a "<diskgroup_list>"|-z] [-j "<acfs_path_list>"] [-f]
Usage: srvctl remove database -d <db_unique_name> [-f] [-y]
Usage: srvctl getenv database -d <db_unique_name> [-t "<name_list>"]
Usage: srvctl setenv database -d <db_unique_name> {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}
Usage: srvctl unsetenv database -d <db_unique_name> -t "<name_list>"
Usage: srvctl convert database -d <db_unique_name> -c RAC [-n <node>]
Usage: srvctl convert database -d <db_unique_name> -c RACONENODE [-i <inst_name>] [-w <timeout>]
Usage: srvctl relocate database -d <db_unique_name> {[-n <target>] [-w <timeout>] | -a [-r]} [-v]
Usage: srvctl upgrade database -d <db_unique_name> -o <oracle_home>
Usage: srvctl downgrade database -d <db_unique_name> -o <oracle_home> -t <to_version>
Usage: srvctl add instance -d <db_unique_name> -i <inst_name> -n <node_name> [-f]
Usage: srvctl start instance -d <db_unique_name> {-n <node_name> [-i <inst_name>] | -i <inst_name_list>} [-o <start_options>]
Usage: srvctl stop instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>} [-o <stop_options>] [-f]
Usage: srvctl status instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>} [-f] [-v]
Usage: srvctl enable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl disable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl modify instance -d <db_unique_name> -i <inst_name> { -n <node_name> | -z }
Usage: srvctl remove instance -d <db_unique_name> -i <inst_name> [-f] [-y]
Usage: srvctl add service -d <db_unique_name> -s <service_name> {-r "<preferred_list>" [-a "<available_list>"] [-P {BASIC | NONE | PRECONNECT}] | -g <pool_name> [-c {UNIFORM | SINGLETON}] } [-k <net_num>] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}] [-q {TRUE|FALSE}] [-x {TRUE|FALSE}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <failover_retries>] [-w <failover_delay>] [-t <edition>] [-f]
Usage: srvctl add service -d <db_unique_name> -s <service_name> -u {-r "<new_pref_inst>" | -a "<new_avail_inst>"} [-f]
Usage: srvctl config service -d <db_unique_name> [-s <service_name>] [-v]
Usage: srvctl enable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl disable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl status service -d <db_unique_name> [-s "<service_name_list>"] [-f] [-v]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <avail_inst_name> -r [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -n -i "<preferred_list>" [-a "<available_list>"] [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> [-g <pool_name>] [-c {UNIFORM | SINGLETON}] [-P {BASIC|NONE}] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}][-q {true|false}] [-x {true|false}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <integer>] [-w <integer>] [-t <edition>]
Usage: srvctl relocate service -d <db_unique_name> -s <service_name> {-i <old_inst_name> -t <new_inst_name> | -c <current_node> -n <target_node>} [-f]
Usage: srvctl remove service -d <db_unique_name> -s <service_name> [-i <inst_name>] [-f]
Usage: srvctl start service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-o <start_options>]
Usage: srvctl stop service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-f]
Usage: srvctl add nodeapps { { -n <node_name> -A <name|ip>/<netmask>/[if1[|if2...]] } | { -S <subnet>/<netmask>/[if1[|if2...]] } } [-e <em-port>] [-l <ons-local-port>] [-r <ons-remote-port>] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl config nodeapps [-a] [-g] [-s]
Usage: srvctl modify nodeapps {[-n <node_name> -A <new_vip_address>/<netmask>[/if1[|if2|...]]] | [-S <subnet>/<netmask>[/if1[|if2|...]]]} [-u {static|dhcp|mixed}] [-e <em-port>] [ -l <ons-local-port> ] [-r <ons-remote-port> ] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl start nodeapps [-n <node_name>] [-g] [-v]
Usage: srvctl stop nodeapps [-n <node_name>] [-g] [-f] [-r] [-v]
Usage: srvctl status nodeapps
Usage: srvctl enable nodeapps [-g] [-v]
Usage: srvctl disable nodeapps [-g] [-v]
Usage: srvctl remove nodeapps [-f] [-y] [-v]
Usage: srvctl getenv nodeapps [-a] [-g] [-s] [-t "<name_list>"]
Usage: srvctl setenv nodeapps {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"} [-v]
Usage: srvctl unsetenv nodeapps -t "<name_list>" [-v]
Usage: srvctl add vip -n <node_name> -k <network_number> -A <name|ip>/<netmask>/[if1[|if2...]] [-v]
Usage: srvctl config vip { -n <node_name> | -i <vip_name> }
Usage: srvctl disable vip -i <vip_name> [-v]
Usage: srvctl enable vip -i <vip_name> [-v]
Usage: srvctl remove vip -i "<vip_name_list>" [-f] [-y] [-v]
Usage: srvctl getenv vip -i <vip_name> [-t "<name_list>"]
Usage: srvctl start vip { -n <node_name> | -i <vip_name> } [-v]
Usage: srvctl stop vip { -n <node_name> | -i <vip_name> } [-f] [-r] [-v]
Usage: srvctl relocate vip -i <vip_name> [-n <node_name>] [-f] [-v]
Usage: srvctl status vip { -n <node_name> | -i <vip_name> } [-v]
Usage: srvctl setenv vip -i <vip_name> {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"} [-v]
Usage: srvctl unsetenv vip -i <vip_name> -t "<name_list>" [-v]
Usage: srvctl add network [-k <net_num>] -S <subnet>/<netmask>/[if1[|if2...]] [-w <network_type>] [-v]
Usage: srvctl config network [-k <network_number>]
Usage: srvctl modify network [-k <network_number>] [-S <subnet>/<netmask>[/if1[|if2...]]] [-w <network_type>] [-v]
Usage: srvctl remove network {-k <network_number> | -a} [-f] [-v]
Usage: srvctl add asm [-l <lsnr_name>]
Usage: srvctl start asm [-n <node_name>] [-o <start_options>]
Usage: srvctl stop asm [-n <node_name>] [-o <stop_options>] [-f]
Usage: srvctl config asm [-a]
Usage: srvctl status asm [-n <node_name>] [-a] [-v]
Usage: srvctl enable asm [-n <node_name>]
Usage: srvctl disable asm [-n <node_name>]
Usage: srvctl modify asm [-l <lsnr_name>]
Usage: srvctl remove asm [-f]
Usage: srvctl getenv asm [-t <name>[, ...]]
Usage: srvctl setenv asm -t "<name>=<val> [,...]" | -T "<name>=<value>"
Usage: srvctl unsetenv asm -t "<name>[, ...]"
Usage: srvctl start diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl stop diskgroup -g <dg_name> [-n "<node_list>"] [-f]
Usage: srvctl status diskgroup -g <dg_name> [-n "<node_list>"] [-a] [-v]
Usage: srvctl enable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl disable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl remove diskgroup -g <dg_name> [-f]
Usage: srvctl add listener [-l <lsnr_name>] [-s] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-o <oracle_home>] [-k <net_num>]
Usage: srvctl config listener [-l <lsnr_name>] [-a]
Usage: srvctl start listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl stop listener [-l <lsnr_name>] [-n <node_name>] [-f]
Usage: srvctl status listener [-l <lsnr_name>] [-n <node_name>] [-v]
Usage: srvctl enable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl disable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl modify listener [-l <lsnr_name>] [-o <oracle_home>] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-u <oracle_user>] [-k <net_num>]
Usage: srvctl remove listener [-l <lsnr_name> | -a] [-f]
Usage: srvctl getenv listener [-l <lsnr_name>] [-t <name>[, ...]]
Usage: srvctl setenv listener [-l <lsnr_name>] -t "<name>=<val> [,...]" | -T "<name>=<value>"
Usage: srvctl unsetenv listener [-l <lsnr_name>] -t "<name>[, ...]"
Usage: srvctl add scan -n <scan_name> [-k <network_number>] [-S <subnet>/<netmask>[/if1[|if2|...]]]
Usage: srvctl config scan [-i <ordinal_number>]
Usage: srvctl start scan [-i <ordinal_number>] [-n <node_name>]
Usage: srvctl stop scan [-i <ordinal_number>] [-f]
Usage: srvctl relocate scan -i <ordinal_number> [-n <node_name>]
Usage: srvctl status scan [-i <ordinal_number>] [-v]
Usage: srvctl enable scan [-i <ordinal_number>]
Usage: srvctl disable scan [-i <ordinal_number>]
Usage: srvctl modify scan -n <scan_name>
Usage: srvctl remove scan [-f] [-y]
Usage: srvctl add scan_listener [-l <lsnr_name_prefix>] [-s] [-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]]
Usage: srvctl config scan_listener [-i <ordinal_number>]
Usage: srvctl start scan_listener [-n <node_name>] [-i <ordinal_number>]
Usage: srvctl stop scan_listener [-i <ordinal_number>] [-f]
Usage: srvctl relocate scan_listener -i <ordinal_number> [-n <node_name>]
Usage: srvctl status scan_listener [-i <ordinal_number>] [-v]
Usage: srvctl enable scan_listener [-i <ordinal_number>]
Usage: srvctl disable scan_listener [-i <ordinal_number>]
Usage: srvctl modify scan_listener {-u|-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]}
Usage: srvctl remove scan_listener [-f] [-y]
Usage: srvctl add srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"] [-f]
Usage: srvctl config srvpool [-g <pool_name>]
Usage: srvctl status srvpool [-g <pool_name>] [-a]
Usage: srvctl status server -n "<server_list>" [-a]
Usage: srvctl relocate server -n "<server_list>" -g <pool_name> [-f]
Usage: srvctl modify srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"] [-f]
Usage: srvctl remove srvpool -g <pool_name>
Usage: srvctl add oc4j [-v]
Usage: srvctl config oc4j
Usage: srvctl start oc4j [-v]
Usage: srvctl stop oc4j [-f] [-v]
Usage: srvctl relocate oc4j [-n <node_name>] [-v]
Usage: srvctl status oc4j [-n <node_name>] [-v]
Usage: srvctl enable oc4j [-n <node_name>] [-v]
Usage: srvctl disable oc4j [-n <node_name>] [-v]
Usage: srvctl modify oc4j -p <oc4j_rmi_port> [-v] [-f]
Usage: srvctl remove oc4j [-f] [-v]
Usage: srvctl start home -o <oracle_home> -s <state_file> -n <node_name>
Usage: srvctl stop home -o <oracle_home> -s <state_file> -n <node_name> [-t <stop_options>] [-f]
Usage: srvctl status home -o <oracle_home> -s <state_file> -n <node_name>
Usage: srvctl add filesystem -d <volume_device> -v <volume_name> -g <dg_name> [-m <mountpoint_path>] [-u <user>]
Usage: srvctl config filesystem -d <volume_device>
Usage: srvctl start filesystem -d <volume_device> [-n <node_name>]
Usage: srvctl stop filesystem -d <volume_device> [-n <node_name>] [-f]
Usage: srvctl status filesystem -d <volume_device> [-v]
Usage: srvctl enable filesystem -d <volume_device>
Usage: srvctl disable filesystem -d <volume_device>
Usage: srvctl modify filesystem -d <volume_device> -u <user>
Usage: srvctl remove filesystem -d <volume_device> [-f]
Usage: srvctl start gns [-l <log_level>] [-n <node_name>] [-v]
Usage: srvctl stop gns [-n <node_name>] [-f] [-v]
Usage: srvctl config gns [-a] [-d] [-k] [-m] [-n <node_name>] [-p] [-s] [-V] [-q <name>] [-l] [-v]
Usage: srvctl status gns [-n <node_name>] [-v]
Usage: srvctl enable gns [-n <node_name>] [-v]
Usage: srvctl disable gns [-n <node_name>] [-v]
Usage: srvctl relocate gns [-n <node_name>] [-v]
Usage: srvctl add gns -d <domain> -i <vip_name|ip> [-v]
Usage: srvctl modify gns {-l <log_level> | [-i <ip_address>] [-N <name> -A <address>] [-D <name> -A <address>] [-c <name> -a <alias>] [-u <alias>] [-r <address>] [-V <name>] [-p <parameter>:<value>[,<parameter>:<value>...]] [-F <forwarded_domains>] [-R <refused_domains>] [-X <excluded_interfaces>] [-v]}
Usage: srvctl remove gns [-f] [-v]
Usage: srvctl add cvu [-t <check_interval_in_minutes>]
Usage: srvctl config cvu
Usage: srvctl start cvu [-n <node_name>]
Usage: srvctl stop cvu [-f]
Usage: srvctl relocate cvu [-n <node_name>]
Usage: srvctl status cvu [-n <node_name>]
Usage: srvctl enable cvu [-n <node_name>]
Usage: srvctl disable cvu [-n <node_name>]
Usage: srvctl modify cvu -t <check_interval_in_minutes>
Usage: srvctl remove cvu [-f]
srvctl start asm -n rac1
srvctl stop asm -n rac1
Enabling archive log mode in RAC
# in SQL*Plus on one instance (e.g. orcl1):
alter system set cluster_database=false scope=spfile sid='orcl1';
# from the shell, stop the whole database:
srvctl stop database -d orcl
# in SQL*Plus on that instance:
startup mount
alter database archivelog;
alter system set cluster_database=true scope=spfile sid='orcl1';
shutdown immediate
# from the shell, restart the database:
srvctl start database -d orcl
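After the restart, archive mode can be confirmed from any instance; a minimal check (run in SQL*Plus as SYSDBA, column names per GV$DATABASE):

```sql
-- LOG_MODE should now report ARCHIVELOG on every instance
SELECT inst_id, log_mode FROM gv$database;
```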
Start and stop the database
[grid@rac1 grid]$ srvctl stop database -d orcl
[grid@rac1 grid]$ srvctl status database -d orcl
Instance orcl1 is not running on node rac1
Instance orcl2 is not running on node rac2
[grid@rac1 grid]$ srvctl start database -d orcl
[grid@rac1 grid]$ srvctl status database -d orcl
Instance orcl1 is running on node rac1
Instance orcl2 is running on node rac2
Start and stop a single instance
[grid@rac1 grid]$ srvctl stop instance -d orcl -i orcl2
[grid@rac1 grid]$ srvctl status database -d orcl
Instance orcl1 is running on node rac1
Instance orcl2 is not running on node rac2
[grid@rac1 grid]$ srvctl start instance -d orcl -i orcl2
[grid@rac1 grid]$ srvctl status database -d orcl
Instance orcl1 is running on node rac1
Instance orcl2 is running on node rac2
# List the configured databases and their details
[grid@rac1 grid]$ srvctl config database
orcl
[root@rac1 ~]# srvctl config database -d orcl -a
Database unique name: orcl
Database name: orcl
Oracle home: /opt/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DATA/orcl/spfileorcl.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl1,orcl2
Disk Groups: DATA,ARCH
Mount point paths:
Services:
Type: RAC
Database is enabled          # this line is what enable/disable toggles
Database is administrator managed
# Check the SCAN configuration and listener status
[grid@rac2 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rac1
[grid@rac2 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node rac1
[grid@rac2 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac2,rac1
[grid@rac2 ~]$ srvctl config scan
SCAN name: rac-cluster-scan, Network: 1/192.168.100.0/255.255.255.0/ens33
SCAN VIP name: scan1, IP: /rac-cluster-scan/192.168.100.11
OCR backup and recovery
ocrconfig -manualbackup          # take a manual backup
ocrconfig -showbackup            # list manual and automatic backups
# OCR recovery steps (run as root):
crsctl stop crs
ocrconfig -restore <file_name>   # restore the OCR from the chosen backup
crsctl start crs
Wait event analysis
Resource master: always knows the complete state of a resource (where it is and what locks it holds).
1. Block-oriented waits: the requested block is served by another instance (roles: block requester, block holder, resource master).
There is no block contention; most of the time goes to network transfer and inter-instance messaging, so these are usually not a performance bottleneck.
gc current block 2-way   # 2-way: the resource master is also the holder (two nodes involved); 3-way: master and holder are different nodes (three nodes involved)
gc current block 3-way   # cr = consistent read; current = current-mode read
gc cr block 2-way
gc cr block 3-way
2. Message-oriented waits: no other instance holds the requested block, so it must be read from disk.
There is no block contention; these waits are followed by disk read events (db file sequential read or db file scattered read) and are usually not a bottleneck either.
gc current grant 2-way   # grant: permission to read the block from disk
gc current grant 3-way
gc cr grant 2-way
gc cr grant 3-way
3. Contention-oriented waits: the requested block is in use by another instance (and not yet written to disk) while this instance runs DML against it, causing contention for the block.
These events do involve block contention and need tuning; treat them like buffer busy waits and read by other session in a single instance.
gc current block busy
gc cr block busy
gc current buffer busy
4. Load-oriented waits: the system is short of CPU; add CPU capacity or nodes.
gc current block congested
gc cr block congested
gc current grant congested
gc cr grant congested
5. Other events
gc cr failure / gc cr request / gc block lost / gc claim blocks lost: lost blocks; check the network (netstat) or OS logs to assess the system.
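The wait classes above can be summarized cluster-wide from GV$SYSTEM_EVENT; a minimal sketch (assumes a user with access to the GV$ views):

```sql
-- Time spent in global cache (gc) waits per instance, worst first
SELECT inst_id, event, total_waits,
       time_waited_micro / 1e6 AS seconds_waited
FROM   gv$system_event
WHERE  event LIKE 'gc%'
ORDER  BY time_waited_micro DESC;
```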
Global cache statistics
gc current blocks received
gc current blocks served
gc cr blocks served
gc cr blocks received
gc prepare failures
gc blocks lost     # should be 0; anything else indicates a serious performance problem
gc blocks corrupt  # checksums are computed on transferred blocks to catch corruption (disk damage or network problems)
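These counters are exposed in GV$SYSSTAT; a quick sketch for watching them per instance:

```sql
-- 'gc blocks lost' should stay at 0; a rising value points at the interconnect
SELECT inst_id, name, value
FROM   gv$sysstat
WHERE  name LIKE 'gc%block%'
ORDER  BY inst_id, name;
```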
AWR (Automatic Workload Repository): enabled automatically after installation, with no special setup required. The collected statistics are stored in the SYSAUX tablespace under the SYS schema, in tables named WRM$_* and WRH$_*, and are kept for 7 days by default. Snapshots are written to the database every hour by the MMON background process.
ADDM (Automatic Database Diagnostic Monitor): a built-in advisor that automatically analyzes AWR data and produces tuning recommendations, such as SQL tuning, index creation, and statistics gathering.
ASH (Active Session History): built on V$SESSION, sampled once per second, recording the wait events of active sessions. Inactive sessions are not sampled; sampling is done by the new MMNL background process. The in-memory ASH buffer is at least 1 MB and at most 30 MB, sized with the goal of holding about one hour of activity.
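A typical ASH drill-down is a grouped query over the sampled rows; for example, the top wait events over the last 10 minutes:

```sql
-- One row per 1-second sample of an active session; EVENT is NULL when the session was on CPU
SELECT event, COUNT(*) AS samples
FROM   gv$active_session_history
WHERE  sample_time > SYSTIMESTAMP - INTERVAL '10' MINUTE
GROUP  BY event
ORDER  BY samples DESC;
```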
Lock modes
NULL (N)
Row Share (RS)
Row Exclusive (RX)
Share (S)
Share Row Exclusive (SRX)
Exclusive (X)
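These modes correspond to the LMODE and REQUEST values in V$LOCK; a quick way to see them live:

```sql
-- LMODE/REQUEST: 0=none, 1=NULL(N), 2=RS, 3=RX, 4=S, 5=SRX, 6=X
SELECT inst_id, sid, type, id1, id2, lmode, request
FROM   gv$lock
WHERE  lmode > 0 OR request > 0;
```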
Workload and connection management
Server-side load balancing: listener.ora
Client-side load balancing: tnsnames.ora
rac =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)   # RAC-specific; defaults to on, spreads connections across nodes, rarely needs setting
    (FAILOVER = ON)       # enable connect-time failover
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1541))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1541))
    (CONNECT_DATA =
      (SERVICE_NAME = rac)
      (FAILOVER_MODE = (TYPE = SELECT|SESSION|NONE)(METHOD = BASIC|PRECONNECT)(BACKUP = RAC2))
    )
  )
Relocating a service after a failure: move service ERP back to instance orcl1 (srvctl relocate service takes instance names with -i/-t)
srvctl relocate service -d orcl -s ERP -i orcl2 -t orcl1
TAF (Transparent Application Failover): when an established connection fails, the application is transparently reconnected to an available service. Active transactions are rolled back during the reconnection, but under "specific conditions" TAF keeps SELECT statements from being interrupted, which is one of RAC's highlights.
The "specific conditions" are the FAILOVER_MODE settings visible in V$SESSION: METHOD set to BASIC and TYPE set to SELECT.
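Whether TAF has kicked in for a session can be checked from (G)V$SESSION:

```sql
-- FAILED_OVER flips to 'YES' once a session has been failed over
SELECT inst_id, username, failover_type, failover_method, failed_over
FROM   gv$session
WHERE  username IS NOT NULL;
```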
Troubleshooting: log files
1. Database log files
SELECT * FROM V$DIAG_INFO;
Diag Enabled TRUE
ADR Base /home/oracle
ADR Home /home/oracle/diag/rdbms/cjdwdb/cjdwdb1
Diag Trace /home/oracle/diag/rdbms/cjdwdb/cjdwdb1/trace
Diag Alert /home/oracle/diag/rdbms/cjdwdb/cjdwdb1/alert
Diag Incident /home/oracle/diag/rdbms/cjdwdb/cjdwdb1/incident
Diag Cdump /home/oracle/diag/rdbms/cjdwdb/cjdwdb1/cdump
Health Monitor /home/oracle/diag/rdbms/cjdwdb/cjdwdb1/hm
Default Trace File /home/oracle/diag/rdbms/cjdwdb/cjdwdb1/trace/cjdwdb1_ora_124340.trc
Active Problem Count 1
Active Incident Count 4
2. Grid Infrastructure log files
[root@rac1 client]# cd $GRID_HOME/log
[root@rac1 log]# pwd
/opt/app/11.2.0.4/grid/log
[root@rac1 log]# ll
total 0
drwxr-xr-x  2 grid oinstall   6 Apr  7 13:39 crs
drwxrwx--T  5 grid asmadmin  71 Apr  9 14:15 diag
drwxr-xr-t 24 root oinstall 326 Apr  7 13:43 rac1