Oracle RAC Common Maintenance Tools and Commands

1) Node layer: olsnodes
2) Network layer: oifcfg
3) Cluster layer: crsctl, ocrcheck, ocrdump, ocrconfig
4) Application layer: srvctl, onsctl

1) [grid@node1 ~]$ olsnodes -n -i -s -t
node1   1       node1-vip       Active  Unpinned
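A few other olsnodes switches are handy in scripts; a minimal sketch of options available in 11gR2 (run them on your own cluster, the output is not reproduced here):
[grid@node1 ~]$ olsnodes -c        # cluster name
[grid@node1 ~]$ olsnodes -l -p     # local node name and its private interconnect address
[grid@node1 ~]$ olsnodes -s -t     # node status plus pinned/unpinned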
2) [grid@node1 ~]$ oifcfg getif
eth0  192.168.6.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
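Beyond getif, oifcfg can also list and modify the interface configuration. A hedged sketch of the common subcommands (eth2/10.10.11.0 below is a hypothetical second interconnect, not part of this cluster):
[grid@node1 ~]$ oifcfg iflist                                               # interfaces and subnets visible on this node
[grid@node1 ~]$ oifcfg setif -global eth2/10.10.11.0:cluster_interconnect  # register an additional interconnect
[grid@node1 ~]$ oifcfg delif -global eth2/10.10.11.0                       # remove it again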

3) Configure whether CRS starts automatically

CRS autostart is recorded in /etc/inittab:
h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
By default CRS starts together with the operating system; sometimes, for maintenance purposes, this behavior needs to be disabled:
[root@node1 bin]# ./crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
This command actually modifies /etc/oracle/scls_scr/node1/root/ohasdstr:
[grid@node1 root]$ more ohasdstr 
disable
[root@node1 bin]# ./crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
[grid@node1 root]$ more ohasdstr 
enable
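To check the autostart flag on every node in one pass, a minimal sketch assuming passwordless ssh as root between the two nodes of this cluster:
[root@node1 ~]# for h in node1 node2; do echo -n "$h: "; ssh $h cat /etc/oracle/scls_scr/$h/root/ohasdstr; echo; done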
Stop CRS:
[root@node1 bin]# ./crsctl stop crs
[root@node1 bin]# ./crsctl check crs
CRS-4639: Could not contact Oracle High Availability Services
Start CRS:
[root@node1 bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@node1 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
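crsctl stop/start crs acts on the local node only and takes the entire stack, including OHASD, down with it. In 11gR2 the cluster stack on all nodes can also be stopped or started from a single session; a short sketch, run as root:
[root@node1 bin]# ./crsctl stop cluster -all       # stops CRS, CSS and EVM on every node, OHASD keeps running
[root@node1 bin]# ./crsctl start cluster -all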
For OCR and VOTEDISK, see: http://blog.csdn.net/u013820054/article/details/41178719
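A few quick health checks for the OCR and the voting disks (standard 11gR2 commands; ocrcheck needs root to verify the full OCR contents):
[root@node1 bin]# ./ocrcheck                       # OCR integrity, size and usage
[root@node1 bin]# ./ocrconfig -showbackup          # automatic OCR backups kept by the cluster
[grid@node1 ~]$ crsctl query css votedisk          # location and state of the voting disks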

4) Application layer

Before 11g, crs_stat -t was used to view the cluster status. From 11g onward it is deprecated:
[grid@node1 root]$ crs_stat -h
This command is deprecated and has been replaced by 'crsctl status resource'
This command remains for backward compatibility only
[grid@node1 ~]$ crsctl status resource -w "TYPE co 'ora'" -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DG1.dg
               ONLINE  ONLINE       node1                                        
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1                                        
ora.OCR_VOTE.dg
               ONLINE  ONLINE       node1                                        
ora.RCY1.dg
               ONLINE  ONLINE       node1                                        
ora.asm
               ONLINE  ONLINE       node1                    Started             
ora.eons
               ONLINE  ONLINE       node1                                        
ora.gsd
               OFFLINE OFFLINE      node1                                        
ora.net1.network
               ONLINE  ONLINE       node1                                        
ora.ons
               ONLINE  ONLINE       node1                                        
ora.registry.acfs
               ONLINE  ONLINE       node1                                        
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node1                                        
ora.node1.vip
      1        ONLINE  ONLINE       node1                                        
ora.node2.vip
      1        ONLINE  INTERMEDIATE node1                    FAILED OVER         
ora.oc4j
      1        OFFLINE OFFLINE                                                   
ora.prod.db
      1        ONLINE  ONLINE       node1                    Open                
      2        OFFLINE OFFLINE                                                   
ora.scan1.vip
      1        ONLINE  ONLINE       node1      
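The same crsctl status syntax can also target a single resource, or the clusterware stack daemons themselves (which only appear with -init); a short sketch:
[grid@node1 ~]$ crsctl status resource ora.prod.db -v      # verbose state of one resource
[grid@node1 ~]$ crsctl status resource -t -init            # ora.cssd, ora.crsd, ora.evmd and the other stack daemons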
Using srvctl
[grid@node1 ~]$ srvctl -h
Usage: srvctl [-V]
Usage: srvctl add database -d <db_unique_name> -o <oracle_home> [-m <domain_name>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-n <db_name>] [-y {AUTOMATIC | MANUAL}] [-g "<serverpool_list>"] [-x <node_name>] [-a "<diskgroup_list>"]
Usage: srvctl config database [-d <db_unique_name> [-a] ]
Usage: srvctl start database -d <db_unique_name> [-o <start_options>]
Usage: srvctl stop database -d <db_unique_name> [-o <stop_options>] [-f]
Usage: srvctl status database -d <db_unique_name> [-f] [-v]
Usage: srvctl enable database -d <db_unique_name> [-n <node_name>]
Usage: srvctl disable database -d <db_unique_name> [-n <node_name>]
Usage: srvctl modify database -d <db_unique_name> [-n <db_name>] [-o <oracle_home>] [-u <oracle_user>] [-m <domain>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-y {AUTOMATIC | MANUAL}] [-g "<serverpool_list>" [-x <node_name>]] [-a "<diskgroup_list>"|-z]
Usage: srvctl remove database -d <db_unique_name> [-f] [-y]
Usage: srvctl getenv database -d <db_unique_name> [-t "<name_list>"]
Usage: srvctl setenv database -d <db_unique_name> {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}
Usage: srvctl unsetenv database -d <db_unique_name> -t "<name_list>"
Usage: srvctl add instance -d <db_unique_name> -i <inst_name> -n <node_name> [-f]
Usage: srvctl start instance -d <db_unique_name> {-n <node_name> [-i <inst_name>] | -i <inst_name_list>} [-o <start_options>]
Usage: srvctl stop instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>}  [-o <stop_options>] [-f]
Usage: srvctl status instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>} [-f] [-v]
Usage: srvctl enable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl disable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl modify instance -d <db_unique_name> -i <inst_name> { -n <node_name> | -z }
Usage: srvctl remove instance -d <db_unique_name> [-i <inst_name>] [-f] [-y]
Usage: srvctl add service -d <db_unique_name> -s <service_name> {-r "<preferred_list>" [-a "<available_list>"] [-P {BASIC | NONE | PRECONNECT}] | -g <server_pool> [-c {UNIFORM | SINGLETON}] } [-k   <net_num>] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}] [-q {TRUE|FALSE}] [-x {TRUE|FALSE}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <failover_retries>] [-w <failover_delay>]
Usage: srvctl add service -d <db_unique_name> -s <service_name> -u {-r "<new_pref_inst>" | -a "<new_avail_inst>"}
Usage: srvctl config service -d <db_unique_name> [-s <service_name>] [-a]
Usage: srvctl enable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl disable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl status service -d <db_unique_name> [-s "<service_name_list>"] [-f] [-v]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <avail_inst_name> -r [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -n -i "<preferred_list>" [-a "<available_list>"] [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> [-c {UNIFORM | SINGLETON}] [-P {BASIC|PRECONNECT|NONE}] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}][-q {true|false}] [-x {true|false}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <integer>] [-w <integer>]
Usage: srvctl relocate service -d <db_unique_name> -s <service_name> {-i <old_inst_name> -t <new_inst_name> | -c <current_node> -n <target_node>} [-f]
       Specify instances for an administrator-managed database, or nodes for a policy managed database
Usage: srvctl remove service -d <db_unique_name> -s <service_name> [-i <inst_name>] [-f]
Usage: srvctl start service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-o <start_options>]
Usage: srvctl stop service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-f]
Usage: srvctl add nodeapps { { -n <node_name> -A <name|ip>/<netmask>/[if1[|if2...]] } | { -S <subnet>/<netmask>/[if1[|if2...]] } } [-p <portnum>] [-m <multicast-ip-address>] [-e <eons-listen-port>] [-l <ons-local-port>]  [-r <ons-remote-port>] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl config nodeapps [-a] [-g] [-s] [-e]
Usage: srvctl modify nodeapps {[-n <node_name> -A <new_vip_address>/<netmask>[/if1[|if2|...]]] | [-S <subnet>/<netmask>[/if1[|if2|...]]]} [-m <multicast-ip-address>] [-p <multicast-portnum>] [-e <eons-listen-port>] [ -l <ons-local-port> ] [-r <ons-remote-port> ] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl start nodeapps [-n <node_name>] [-v]
Usage: srvctl stop nodeapps [-n <node_name>] [-f] [-r] [-v]
Usage: srvctl status nodeapps
Usage: srvctl enable nodeapps [-v]
Usage: srvctl disable nodeapps [-v]
Usage: srvctl remove nodeapps [-f] [-y] [-v]
Usage: srvctl getenv nodeapps [-a] [-g] [-s] [-e] [-t "<name_list>"]
Usage: srvctl setenv nodeapps {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl unsetenv nodeapps -t "<name_list>" [-v]
Usage: srvctl add vip -n <node_name> -k <network_number> -A <name|ip>/<netmask>/[if1[|if2...]] [-v]
Usage: srvctl config vip { -n <node_name> | -i <vip_name> }
Usage: srvctl disable vip -i <vip_name> [-v]
Usage: srvctl enable vip -i <vip_name> [-v]
Usage: srvctl remove vip -i "<vip_name_list>" [-f] [-y] [-v]
Usage: srvctl getenv vip -i <vip_name> [-t "<name_list>"]
Usage: srvctl start vip { -n <node_name> | -i <vip_name> } [-v]
Usage: srvctl stop vip { -n <node_name>  | -i <vip_name> } [-f] [-r] [-v]
Usage: srvctl status vip { -n <node_name> | -i <vip_name> }
Usage: srvctl setenv vip -i <vip_name> {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl unsetenv vip -i <vip_name> -t "<name_list>" [-v]
Usage: srvctl add asm [-l <lsnr_name>]
Usage: srvctl start asm [-n <node_name>] [-o <start_options>]
Usage: srvctl stop asm [-n <node_name>] [-o <stop_options>] [-f]
Usage: srvctl config asm [-a]
Usage: srvctl status asm [-n <node_name>] [-a]
Usage: srvctl enable asm [-n <node_name>]
Usage: srvctl disable asm [-n <node_name>]
Usage: srvctl modify asm [-l <lsnr_name>] 
Usage: srvctl remove asm [-f]
Usage: srvctl getenv asm [-t <name>[, ...]]
Usage: srvctl setenv asm -t "<name>=<val> [,...]" | -T "<name>=<value>"
Usage: srvctl unsetenv asm -t "<name>[, ...]"
Usage: srvctl start diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl stop diskgroup -g <dg_name> [-n "<node_list>"] [-f]
Usage: srvctl status diskgroup -g <dg_name> [-n "<node_list>"] [-a]
Usage: srvctl enable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl disable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl remove diskgroup -g <dg_name> [-f]
Usage: srvctl add listener [-l <lsnr_name>] [-s] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-o <oracle_home>] [-k <net_num>]
Usage: srvctl config listener [-l <lsnr_name>] [-a]
Usage: srvctl start listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl stop listener [-l <lsnr_name>] [-n <node_name>] [-f]
Usage: srvctl status listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl enable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl disable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl modify listener [-l <lsnr_name>] [-o <oracle_home>] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-u <oracle_user>] [-k <net_num>]
Usage: srvctl remove listener [-l <lsnr_name> | -a] [-f]
Usage: srvctl getenv listener [-l <lsnr_name>] [-t <name>[, ...]]
Usage: srvctl setenv listener [-l <lsnr_name>] -t "<name>=<val> [,...]" | -T "<name>=<value>"
Usage: srvctl unsetenv listener [-l <lsnr_name>] -t "<name>[, ...]"
Usage: srvctl add scan -n <scan_name> [-k <network_number> [-S <subnet>/<netmask>[/if1[|if2|...]]]]
Usage: srvctl config scan [-i <ordinal_number>]
Usage: srvctl start scan [-i <ordinal_number>] [-n <node_name>]
Usage: srvctl stop scan [-i <ordinal_number>] [-f]
Usage: srvctl relocate scan -i <ordinal_number> [-n <node_name>]
Usage: srvctl status scan [-i <ordinal_number>]
Usage: srvctl enable scan [-i <ordinal_number>]
Usage: srvctl disable scan [-i <ordinal_number>]
Usage: srvctl modify scan -n <scan_name>
Usage: srvctl remove scan [-f] [-y]
Usage: srvctl add scan_listener [-l <lsnr_name_prefix>] [-s] [-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]] 
Usage: srvctl config scan_listener [-i <ordinal_number>]
Usage: srvctl start scan_listener [-n <node_name>] [-i <ordinal_number>]
Usage: srvctl stop scan_listener [-i <ordinal_number>] [-f]
Usage: srvctl relocate scan_listener -i <ordinal_number> [-n <node_name>]
Usage: srvctl status scan_listener [-i <ordinal_number>]
Usage: srvctl enable scan_listener [-i <ordinal_number>]
Usage: srvctl disable scan_listener [-i <ordinal_number>]
Usage: srvctl modify scan_listener {-u|-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]} 
Usage: srvctl remove scan_listener [-f] [-y]
Usage: srvctl add srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"]
Usage: srvctl config srvpool [-g <pool_name>]
Usage: srvctl status srvpool [-g <pool_name>] [-a]
Usage: srvctl status server -n "<server_list>" [-a]
Usage: srvctl relocate server -n "<server_list>" -g <pool_name> [-f]
Usage: srvctl modify srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"]
Usage: srvctl remove srvpool -g <pool_name>
Usage: srvctl add oc4j [-v]
Usage: srvctl config oc4j
Usage: srvctl start oc4j [-v]
Usage: srvctl stop oc4j [-f] [-v]
Usage: srvctl relocate oc4j [-n <node_name>] [-v]
Usage: srvctl status oc4j [-n <node_name>]
Usage: srvctl enable oc4j [-n <node_name>] [-v]
Usage: srvctl disable oc4j [-n <node_name>] [-v]
Usage: srvctl modify oc4j -p <oc4j_rmi_port> [-v]
Usage: srvctl remove oc4j [-f] [-v]
Usage: srvctl start home -o <oracle_home> -s <state_file> -n <node_name>
Usage: srvctl stop home -o <oracle_home> -s <state_file> -n <node_name> [-t <stop_options>] [-f]
Usage: srvctl status home -o <oracle_home> -s <state_file> -n <node_name>
Usage: srvctl add filesystem -d <volume_device> -v <volume_name> -g <dg_name> [-m <mountpoint_path>] [-u <user>]
Usage: srvctl config filesystem -d <volume_device>
Usage: srvctl start filesystem -d <volume_device> [-n <node_name>]
Usage: srvctl stop filesystem -d <volume_device> [-n <node_name>] [-f]
Usage: srvctl status filesystem -d <volume_device>
Usage: srvctl enable filesystem -d <volume_device>
Usage: srvctl disable filesystem -d <volume_device>
Usage: srvctl modify filesystem -d <volume_device> -u <user>
Usage: srvctl remove filesystem -d <volume_device> [-f]
Usage: srvctl start gns [-v] [-l <log_level>] [-n <node_name>]
Usage: srvctl stop gns [-v] [-n <node_name>] [-f]
Usage: srvctl config gns [-v] [-a] [-d] [-k] [-m] [-n <node_name>] [-p] [-s] [-V]
Usage: srvctl status gns -n <node_name>
Usage: srvctl enable gns [-v] [-n <node_name>]
Usage: srvctl disable gns [-v] [-n <node_name>]
Usage: srvctl relocate gns [-v] [-n <node_name>] [-f]
Usage: srvctl add gns [-v] -d <domain> -i <vip_name|ip> [-k <network_number> [-S <subnet>/<netmask>[/<interface>]]]
Usage: srvctl modify gns [-v] [-f] [-l <log_level>] [-d <domain>] [-i <ip_address>] [-N <name> -A <address>] [-D <name> -A <address>] [-c <name> -a <alias>] [-u <alias>] [-r <address>] [-V <name>] [-F <forwarded_domains>] [-R <refused_domains>] [-X <excluded_interfaces>]
Usage: srvctl remove gns [-f] [-d <domain_name>]  
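Only a handful of these are needed day to day. A short sketch against this cluster, assuming the database unique name is prod (as the ora.prod.db resource above suggests):
[grid@node1 ~]$ srvctl status database -d prod -v       # instances and services
[grid@node1 ~]$ srvctl config database -d prod -a       # full database configuration
[grid@node1 ~]$ srvctl status nodeapps                  # VIPs, ONS, listeners
[grid@node1 ~]$ srvctl status scan -i 1                 # SCAN VIP
[grid@node1 ~]$ srvctl status scan_listener             # SCAN listeners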


Oracle RAC daily basic maintenance commands

Status of all instances and services:
$ srvctl status database -d orcl
Instance orcl1 is running on node linux1
Instance orcl2 is running on node linux2

Status of a single instance:
$ srvctl status instance -d orcl -i orcl2
Instance orcl2 is running on node linux2

Status of a global named service in the database:
$ srvctl status service -d orcl -s orcltest
Service orcltest is running on instance(s) orcl2, orcl1

Status of the node applications on a particular node:
$ srvctl status nodeapps -n linux1
VIP is running on node: linux1
GSD is running on node: linux1
Listener is running on node: linux1
ONS daemon is running on node: linux1

Status of an ASM instance:
$ srvctl status asm -n linux1
ASM instance +ASM1 is running on node linux1.

List all configured databases:
$ srvctl config database
orcl

Display the configuration of the RAC database:
$ srvctl config database -d orcl
linux1 orcl1 /u01/app/oracle/product/10.2.0/db_1
linux2 orcl2 /u01/app/oracle/product/10.2.0/db_1

Display all services of the specified cluster database:
$ srvctl config service -d orcl
orcltest PREF: orcl2 orcl1 AVAIL:

Display the configuration of the node applications (VIP, GSD, ONS, listener):
$ srvctl config nodeapps -n linux1 -a -g -s -l
VIP exists.: /linux1-vip/192.168.1.200/255.255.255.0/eth0:eth1
GSD exists.
ONS daemon exists.
Listener exists.

Display the configuration of the ASM instance:
$ srvctl config asm -n linux1
+ASM1 /u01/app/oracle/product/10.2.0/db_1

All running instances in the cluster:
SELECT inst_id, instance_number inst_no, instance_name inst_name, parallel,
       status, database_status db_status, active_state state, host_name host
FROM   gv$instance
ORDER  BY inst_id;

 INST_ID  INST_NO INST_NAME  PAR STATUS  DB_STATUS    STATE     HOST
-------- -------- ---------- --- ------- ------------ --------- -------
       1        1 orcl1      YES OPEN    ACTIVE       NORMAL    rac1
       2        2 orcl2      YES OPEN    ACTIVE       NORMAL    rac2

All database files located in the disk groups:
select name from v$datafile
union
select member from v$logfile
union
select name from v$controlfile
union
select name from v$tempfile;

NAME
-------------------------------------------
+FLASH_RECOVERY_AREA/orcl/controlfile/current.258.570913191
+FLASH_RECOVERY_AREA/orcl/onlinelog/group_1.257.570913201
+FLASH_RECOVERY_AREA/orcl/onlinelog/group_2.256.570913211
+FLASH_RECOVERY_AREA/orcl/onlinelog/group_3.259.570918285
+FLASH_RECOVERY_AREA/orcl/onlinelog/group_4.260.570918295
+ORCL_DATA1/orcl/controlfile/current.259.570913189
+ORCL_DATA1/orcl/datafile/example.257.570913311
+ORCL_DATA1/orcl/datafile/indx.270.570920045
+ORCL_DATA1/orcl/datafile/sysaux.260.570913287
+ORCL_DATA1/orcl/datafile/system.262.570913215
+ORCL_DATA1/orcl/datafile/undotbs1.261.570913263
+ORCL_DATA1/orcl/datafile/undotbs1.271.570920865
+ORCL_DATA1/orcl/datafile/undotbs2.265.570913331
+ORCL_DATA1/orcl/datafile/undotbs2.272.570921065
+ORCL_DATA1/orcl/datafile/users.264.570913355
+ORCL_DATA1/orcl/datafile/users.269.570919829
+ORCL_DATA1/orcl/onlinelog/group_1.256.570913195
+ORCL_DATA1/orcl/onlinelog/group_2.263.570913205
+ORCL_DATA1/orcl/onlinelog/group_3.266.570918279
+ORCL_DATA1/orcl/onlinelog/group_4.267.570918289
+ORCL_DATA1/orcl/tempfile/temp.258.570913303

21 rows selected.

All ASM disks belonging to the 'ORCL_DATA1' disk group:
SELECT path
FROM   v$asm_disk
WHERE  group_number IN (SELECT group_number
                        FROM   v$asm_diskgroup
                        WHERE  name = 'ORCL_DATA1');

PATH
----------------------------------
ORCL:VOL1
ORCL:VOL2

Part 2: Starting and stopping the RAC cluster

Make sure you are logged in as the oracle UNIX user. All commands below are run from node rac1:
# su - oracle
$ hostname
rac1

Stopping the Oracle RAC 10g environment
The first step is to stop the Oracle instance. Once the instance (and its related services) is down, shut down the ASM instance. Finally, shut down the node applications (virtual IP, GSD, TNS listener and ONS).
$ export ORACLE_SID=orcl1
$ emctl stop dbconsole
$ srvctl stop instance -d orcl -i orcl1
$ srvctl stop asm -n rac1
$ srvctl stop nodeapps -n rac1

Starting the Oracle RAC 10g environment
The first step is to start the node applications (virtual IP, GSD, TNS listener and ONS). Once the node applications have started successfully, start the ASM instance. Finally, start the Oracle instance (and its related services) and Enterprise Manager Database Control.
$ export ORACLE_SID=orcl1
$ srvctl start nodeapps -n rac1
$ srvctl start asm -n rac1
$ srvctl start instance -d orcl -i orcl1
$ emctl start dbconsole

Starting and stopping all instances with SRVCTL
Start or stop all instances and their enabled services. This is included simply as an alternative way of shutting down or bringing up every instance at once:
$ srvctl start database -d orcl
$ srvctl stop database -d orcl
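The same shutdown order can be wrapped in a small script. A minimal sketch for the two-node 10g environment above (database orcl, nodes rac1 and rac2, as in the examples; adjust the names for your own cluster):
#!/bin/bash
# stop_rac.sh -- ordered shutdown of the whole RAC environment, run as the oracle user
export ORACLE_SID=orcl1
emctl stop dbconsole                  # Enterprise Manager Database Control first
srvctl stop database -d orcl          # all instances and their services
for n in rac1 rac2; do
  srvctl stop asm -n $n               # ASM after the database
  srvctl stop nodeapps -n $n          # VIP, GSD, listener and ONS last
done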