Basic RAC Maintenance Commands

RAC Maintenance

Configuring OEM

With Oracle Enterprise Manager (Database Control) configured, you can use it to view the configuration and current status of the database.

The URL is: https://racnode1:1158/em
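If Database Control is not running, it can be checked and started with emctl as the oracle user. A minimal sketch, assuming ORACLE_UNQNAME is set to racdb (11g Database Control requires it) and the database home's bin directory is on the PATH:

[oracle@racnode1 ~]$ export ORACLE_UNQNAME=racdb
[oracle@racnode1 ~]$ emctl status dbconsole
[oracle@racnode1 ~]$ emctl start dbconsole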


Checking Cluster Health (Clusterized Command)

Run the following command as the grid user:

[grid@racnode1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
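To check every node at once rather than only the local stack, crsctl in 11.2 also accepts an -all option (or -n with a space-separated list of nodes):

[grid@racnode1 ~]$ crsctl check cluster -all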

All Oracle Instances (Database Status)

[oracle@racnode1 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node racnode1
Instance racdb2 is running on node racnode2

A Single Oracle Instance (Status of a Specific Instance)

[oracle@racnode1 ~]$ srvctl status instance -d racdb -i racdb1
Instance racdb1 is running on node racnode1
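An individual instance can be stopped and started the same way, which is useful when taking one node down for maintenance while the other continues to serve connections:

[oracle@racnode1 ~]$ srvctl stop instance -d racdb -i racdb1
[oracle@racnode1 ~]$ srvctl start instance -d racdb -i racdb1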

 

Node Applications

Node Applications (Status)

[oracle@racnode1 ~]$ srvctl status nodeapps
VIP racnode1-vip is enabled
VIP racnode1-vip is running on node: racnode1
VIP racnode2-vip is enabled
VIP racnode2-vip is running on node: racnode2
Network is enabled
Network is running on node: racnode1
Network is running on node: racnode2
GSD is disabled
GSD is not running on node: racnode1
GSD is not running on node: racnode2
ONS is enabled
ONS daemon is running on node: racnode1
ONS daemon is running on node: racnode2
eONS is enabled
eONS daemon is running on node: racnode1
eONS daemon is running on node: racnode2

Node Applications (Configuration)

[oracle@racnode1 ~]$ srvctl config nodeapps
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 24057, multicast IP address 234.194.43.168, listening port 2016

Listing All Configured Databases

[oracle@racnode1 ~]$ srvctl config database
racdb

Database (Configuration)

[oracle@racnode1 ~]$ srvctl config database -d racdb -a
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +RACDB_DATA/racdb/spfileracdb.ora
Domain: idevelopment.info
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: RACDB_DATA,FRA
Services: 
Database is enabled
Database is administrator managed

 

ASM

ASM (Status)

[oracle@racnode1 ~]$ srvctl status asm
ASM is running on racnode1,racnode2

ASM (Configuration)

$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.

 

TNS

TNS Listener (Status)

[oracle@racnode1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): racnode1,racnode2
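For more detail than srvctl reports (endpoints, service handlers, uptime), the listener can also be queried directly with lsnrctl as the grid user, since it runs out of the Grid Infrastructure home:

[grid@racnode1 ~]$ lsnrctl status LISTENER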

TNS Listener (Configuration)

[oracle@racnode1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
 /u01/app/11.2.0/grid on node(s) racnode2,racnode1
End points: TCP:1521

 

SCAN

SCAN (Status)

[oracle@racnode1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node racnode1

SCAN (Configuration)

[oracle@racnode1 ~]$ srvctl config scan
SCAN name: racnode-cluster-scan, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /racnode-cluster-scan/192.168.1.187
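The SCAN listener is a separate resource from the SCAN VIP shown above and has its own status and configuration commands:

[oracle@racnode1 ~]$ srvctl status scan_listener
[oracle@racnode1 ~]$ srvctl config scan_listener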

 

VIP

VIP (Status of a Specific Node)

[oracle@racnode1 ~]$ srvctl status vip -n racnode1
VIP racnode1-vip is enabled
VIP racnode1-vip is running on node: racnode1 

[oracle@racnode1 ~]$ srvctl status vip -n racnode2
VIP racnode2-vip is enabled
VIP racnode2-vip is running on node: racnode2

VIP (Configuration of a Specific Node)

[oracle@racnode1 ~]$ srvctl config vip -n racnode1
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0 

[oracle@racnode1 ~]$ srvctl config vip -n racnode2
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0

Node Application Configuration (VIP, GSD, ONS, Listener)

[oracle@racnode1 ~]$ srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
 /u01/app/11.2.0/grid on node(s) racnode2,racnode1
End points: TCP:1521

Verifying Clock Synchronization Across All Cluster Nodes

[oracle@racnode1 ~]$ cluvfy comp clocksync -verbose 

Verifying Clock Synchronization across the cluster nodes  

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed 

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes 
  Node Name                             Status                    
  ------------------------------------  ------------------------
  racnode1                              passed
Result: CTSS resource check passed  

Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed 

Check CTSS state started...
Check: CTSS state
  Node Name                             State                   
  ------------------------------------  ------------------------
  racnode1                              Active                  
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status                  
  ------------  ------------------------  ------------------------
  racnode1      0.0                       passed

Time offset is within the specified limits on the following set of nodes:  "[racnode1]" 
Result: Check of clock time offsets passed  

Oracle Cluster Time Synchronization Services check passed 

Verification of Clock Synchronization across the cluster nodes was successful.

All Running Instances in the Cluster (SQL)


SELECT
    inst_id
  , instance_number inst_no
  , instance_name inst_name
  , parallel
  , status
  , database_status db_status
  , active_state state
  , host_name host
FROM gv$instance
ORDER BY inst_id; 
 INST_ID  INST_NO INST_NAME  PAR STATUS  DB_STATUS    STATE     HOST
-------- -------- ---------- --- ------- ------------ --------- -------
       1        1 racdb1     YES OPEN    ACTIVE       NORMAL    racnode1
       2        2 racdb2     YES OPEN    ACTIVE       NORMAL    racnode2

All Database Files and the ASM Disk Groups They Reside In (SQL)


select name from v$datafile
union
select member from v$logfile
union
select name from v$controlfile
union
select name from v$tempfile; 
NAME
-------------------------------------------
+FRA/racdb/controlfile/current.256.703530389
+FRA/racdb/onlinelog/group_1.257.703530391
+FRA/racdb/onlinelog/group_2.258.703530393
+FRA/racdb/onlinelog/group_3.259.703533497
+FRA/racdb/onlinelog/group_4.260.703533499
+RACDB_DATA/racdb/controlfile/current.256.703530389
+RACDB_DATA/racdb/datafile/example.263.703530435
+RACDB_DATA/racdb/datafile/indx.270.703542993
+RACDB_DATA/racdb/datafile/sysaux.260.703530411
+RACDB_DATA/racdb/datafile/system.259.703530397
+RACDB_DATA/racdb/datafile/undotbs1.261.703530423
+RACDB_DATA/racdb/datafile/undotbs2.264.703530441
+RACDB_DATA/racdb/datafile/users.265.703530447
+RACDB_DATA/racdb/datafile/users.269.703542943
+RACDB_DATA/racdb/onlinelog/group_1.257.703530391
+RACDB_DATA/racdb/onlinelog/group_2.258.703530393
+RACDB_DATA/racdb/onlinelog/group_3.266.703533497
+RACDB_DATA/racdb/onlinelog/group_4.267.703533499
+RACDB_DATA/racdb/tempfile/temp.262.703530429 

19 rows selected.

ASM Disk Volumes (SQL)
SELECT path
FROM   v$asm_disk;
 
PATH
----------------------------------
ORCL:CRSVOL1
ORCL:DATAVOL1
ORCL:FRAVOL1
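Disk group capacity can also be checked without SQL*Plus at all; as the grid user, asmcmd lists each mounted disk group along with its total and free space:

[grid@racnode1 ~]$ asmcmd lsdg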

Starting/Stopping the Cluster

Oracle Grid Infrastructure was installed by the grid user, and the Oracle RAC software was installed by the oracle user. A fully functional clustered database named racdb is up and running.

All services (including Oracle Clusterware, ASM, the network, SCAN, the VIPs, the Oracle Database, and so on) should start automatically every time the Linux nodes are rebooted.

Sometimes, for maintenance, you need to shut down the Oracle services on a node and restart the Oracle Clusterware stack later. Or you may find that Enterprise Manager is not running and need to start it. The stop/start operations must be performed as root.

Stopping the Oracle Clusterware Stack on the Local Server

Use the crsctl stop cluster command on the racnode1 node to stop the Oracle Clusterware stack:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster
CRS-2673: Attempting to stop 'ora.crsd' on 'racnode1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'racnode1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'racnode1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.racnode1.vip' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'racnode1'
CRS-2677: Stop of 'ora.scan1.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'racnode2'
CRS-2677: Stop of 'ora.racnode1.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.racnode1.vip' on 'racnode2'
CRS-2677: Stop of 'ora.registry.acfs' on 'racnode1' succeeded
CRS-2676: Start of 'ora.racnode1.vip' on 'racnode2' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'racnode2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'racnode2'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'racnode2' succeeded
CRS-2677: Stop of 'ora.CRS.dg' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.racdb.db' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.RACDB_DATA.dg' on 'racnode1'
CRS-2677: Stop of 'ora.RACDB_DATA.dg' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'racnode1'
CRS-2673: Attempting to stop 'ora.eons' on 'racnode1'
CRS-2677: Stop of 'ora.ons' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'racnode1'
CRS-2677: Stop of 'ora.net1.network' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.eons' on 'racnode1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racnode1' has completed
CRS-2677: Stop of 'ora.crsd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'racnode1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.evmd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racnode1'
CRS-2677: Stop of 'ora.cssd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'racnode1'
CRS-2677: Stop of 'ora.diskmon' on 'racnode1' succeeded

 

Note: after you run the crsctl stop cluster command, if any of the resources managed by Oracle Clusterware are still running, the entire command fails. Use the -f option to unconditionally stop all resources and stop the Oracle Clusterware stack.
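For example, the forced variant described in the note above:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -f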

Also note that the Oracle Clusterware stack can be stopped on all servers in the cluster by specifying the -all option. The following command stops the Oracle Clusterware stack on both racnode1 and racnode2:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all

 

Starting the Oracle Clusterware Stack on the Local Server

Use the crsctl start cluster command on the racnode1 node to start the Oracle Clusterware stack:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racnode1'
CRS-2676: Start of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'racnode1'
CRS-2672: Attempting to start 'ora.diskmon' on 'racnode1'
CRS-2676: Start of 'ora.diskmon' on 'racnode1' succeeded
CRS-2676: Start of 'ora.cssd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'racnode1'
CRS-2676: Start of 'ora.ctssd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'racnode1'
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2676: Start of 'ora.evmd' on 'racnode1' succeeded
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded

 

Note: the Oracle Clusterware stack can be started on all servers in the cluster by specifying the -all option:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all

 

You can also start the Oracle Clusterware stack on one or more named servers in the cluster by listing the servers separated by spaces:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -n racnode1 racnode2

 

Starting/Stopping All Instances with SRVCTL

The following commands can be used to stop/start all instances and their associated services:

[oracle@racnode1 ~]$ srvctl stop database -d racdb 
[oracle@racnode1 ~]$ srvctl start database -d racdb
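srvctl also passes stop/start options through to the instances; for example, an immediate shutdown instead of the default, and an explicit open on startup:

[oracle@racnode1 ~]$ srvctl stop database -d racdb -o immediate
[oracle@racnode1 ~]$ srvctl start database -d racdb -o open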

On the Openfiler server, use the lvscan command to check the status of all logical volumes:

[root@openfiler ~]# lvscan
  ACTIVE            '/dev/rac.sharedisk1/ocrvdisk1' [4.00 GB] inherit
  ACTIVE            '/dev/rac.sharedisk1/ocrvdisk2' [4.00 GB] inherit
  ACTIVE            '/dev/rac.sharedisk1/ocrvdisk3' [4.00 GB] inherit
  ACTIVE            '/dev/rac.sharedisk1/ractest_dbfile1' [11.72 GB] inherit
  ACTIVE            '/dev/rac.sharedisk1/fra1' [8.16 GB] inherit

Note: on a working system, the status of each logical volume should be set to ACTIVE. If any volume is reported as inactive, it must be activated before the cluster can use the shared storage.
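If a volume does come up inactive after an Openfiler reboot, it can usually be activated with the standard LVM tools. A sketch using one of the volume paths above:

[root@openfiler ~]# lvchange -a y /dev/rac.sharedisk1/ocrvdisk1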

Tips

[oracle@racnode1 ~]$ su - grid -c "crs_stat -t -v"
Password: *********
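Note that crs_stat is deprecated in 11.2; crsctl provides an equivalent tabular view of all registered resources:

[grid@racnode1 ~]$ crsctl status resource -t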

 

Checking the Oracle TNS Listener Processes on Both Nodes

[grid@racnode1 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_SCAN1
LISTENER

[grid@racnode2 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER

Confirming Oracle ASM Function for Oracle Clusterware Files

If the OCR and voting disk files were installed on Oracle ASM, then, as the Grid Infrastructure installation owner, use the following command syntax to confirm that the installed Oracle ASM is currently running:

[grid@racnode1 ~]$ srvctl status asm -a
ASM is running on racnode1,racnode2
ASM is enabled.

Checking the Oracle Cluster Registry (OCR)

[grid@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120         
         Used space (kbytes)      :       2404         
         Available space (kbytes) :     259716         
         ID                       : 1259866904          
         Device/File Name         :       +CRS
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user
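Oracle Clusterware also backs up the OCR automatically; the available backups can be listed with ocrconfig (listing normally works as the grid user, although most other OCR operations require root):

[grid@racnode1 ~]$ ocrconfig -showbackup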

Checking the Voting Disks

[grid@racnode1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   4cbbd0de4c694f50bfd3857ebd8ad8c4 (ORCL:CRSVOL1) [CRS]
Located 1 voting disk(s).

Source: ITPUB blog, http://blog.itpub.net/20976446/viewspace-750971/
