2017-11-21 DBA Diary: A Complete Record of Configuring Oracle ASM with HA

1. Case Description

We wanted high availability for Oracle Database but had no budget for the Oracle RAC option, so an HA (active/passive) architecture was the only choice. In past cases the usual recipe was OS-level HA software + Oracle + a file system (ext4, JFS2, and so on) to hold the data files. That approach is simple to implement, but it gives up the I/O performance that ASM provides. So: is there a way to get HA while still enjoying ASM's I/O performance and convenient datafile management?

2. The Question

Is HA software (Veritas, SFRAC, IBM PowerHA, or HP Serviceguard) + Oracle + ASM feasible?

3. Data Collection and Analysis

With that question in mind, I searched the web for existing cases:

  1. An introduction to installing and configuring PowerHA: https://www.ibm.com/developerworks/cn/aix/library/au-powerhaintro/
  2. And indeed a matching case turned up:
  • Notes on installing Oracle DB 11gR2 + ASM in a two-node (HA) environment: http://ju.outofmemory.cn/entry/31203
  • Key reference: Oracle Support Document ID 1296124.1, "How To Setup ASM (11.2) On An Active/Passive Cluster (Non-RAC)":
A) Setup the Active/Passive Cluster using third party clusterware (which provides the failover) such as Veritas,
SFRAC, IBMPowerHA, or HP Serviceguard.
Note: Please check with your cluster vendor for details about the clusterware configuration.
B) Then in order to setup an ASM 11gR2 or 12cR1 (Grid Infrastructure) Active/Passive Cluster (non-RAC), please
follow the next steps:
1) Ask your SA and/or Storage Administration for 2 candidate physical disks (they can be raw devices or block
devices or ASMLIB disks or NAS devices or LUNs, etc, see also Document 452924.1 [How to Prepare Storage
for ASM] for details), each disk should have a size of 300MB or so and must be located in the same shared
storage among the 2 cluster nodes.
2) On node #1, install the 11.2 or 12.1 Grid Infrastructure Standalone ("Install and Configure Grid Infrastructure
for a Standalone Server"), following the instructions described in the Document 1092213.1 [Section: "1) ASM
11gR2 Installation & Configuration"] on page #18, but only create one diskgroup, at this moment, named
"NODE1DG" thru the OUI (page #20), using only one of the 2 disks (300MB) requested in the step #1 above (as a
candidate disk), then proceed with the installation & configuration following the same document.
3) After completing the 11.2 or 12.1 Grid Infrastructure Standalone installation and configuration, the following
components will be up and running on node #1:
=)> CSS service.
=)> OHAS/SIHA services.
=)> Grid Infrastructure Listener service (port 1521).
=)> ASM instance.
=)> "NODE1DG" d iskgroup will be mounted.
4) Then ask your SA and/or Storage Administration for all the other candidate physical disks, which will be used
to create your DATA, RECO & BACKUP diskgroups, (they can be raw devices or block devices or ASMLIB disks
or NASs device or LUNs, etc, see also Document 452924.1 [How to Prepare Storage for ASM] for details), each
disk must be located in the same shared storage among the 2 cluster nodes (must be visible & accessible from
each node) and the size of each disk will depend on the total space required to allocate all the databases and
the number of disks will depend on the desired ASM redundancy used to create the diskgroups.
5) On node #1, connect to the +ASM instance (thru SQL*Plus or thru ASMCA) and create the new DATA, RECO &
BACKUP diskgroups using the candidate disks from step #4 above, for additional information please check the
Document 1092213.1 [Section: "1) ASM 11gR2 Installation & Configuration"] on page # 42.
6) On node #1, check the CSS & OHAS state:
$> <Grid Infrastructure Home>/bin/crsctl check has
$> <Grid Infrastructure Home>/bin/crsctl check css
7) On node #1, check all the services state:
$> <Grid Infrastructure Home>/bin/crsctl stat res -t
8) Then on node #1, shutdown the OHAS services to setup node #2:
$> <Grid Infrastructure Home>/bin/crsctl stop has

9) On node #2, install the 11.2 or 12.1 Grid Infrastructure Standalone ("Install and Configure Grid Infrastructure
for a Standalone Server"), following the instructions described in the Document 1092213.1 [Section: "1) ASM
11gR2 or 12cR1 Installation & Configuration"] on page #18, but only create one diskgroup, at this moment,
named "NODE2DG" thru the OUI (page #20), using the second disk (300MB) requested in the step #1 above (as
a candidate disk), then proceed with the installation & configuration following the same document.
10) After completing the 11.2 Grid Infrastructure Standalone installation and configuration, the following components
will be up and running on node #2:
=)> CSS service.
=)> OHAS/SIHA services.
=)> Grid Infrastructure Listener service (port 1521).
=)> ASM instance.
=)> "NODE2DG" d iskgroup will be mounted.
11) On node #2 connect to the +ASM instance (thru SQL*Plus) and include in the ASM_DISKSTRING parameter
(discovery path) the list of member disks associated with the DATA, RECO & BACKUP diskgroups as follow:
SQL> alter system set asm_diskstring= <'ORCL:*' or '/dev/rdsk/*' or '/dev/emcpower*', etc..>
scope=both;
Note 1: Include the candidate disk associated with the "NODE2DG" diskgroup.
Note 2: Do not include the candidate disk associated with the "NODE1DG" diskgroup.
12) On node #2 connect to the +ASM instance (thru SQL*Plus) and include in the ASM_DISKGROUPS parameter,
the list of diskgroups NODE2DG, DATA, RECO & BACKUP diskgroups as follow:
SQL> alter system set asm_diskgroups= 'NODE2DG', 'DATA', 'RECO', 'BACKUP' scope=both;
13) Then, on node #2 connect to the +ASM instance (thru SQL*Plus) and mount the diskgroups (to populate the
OHAS stack):

SQL> alter diskgroup DATA mount;
SQL> alter diskgroup RECO mount;
SQL> alter diskgroup BACKUP mount;
Note 1: There is no "srvctl add" command for Oracle ASM diskgroups. Diskgroups are automatically added to
the Oracle Restart configuration when they are first mounted. If you remove a diskgroup from the Oracle
Restart configuration and later want to add it back, connect to the Oracle ASM instance with SQL*Plus and
use an ALTER DISKGROUP ... MOUNT command.
This is documented in the next manual:
http://docs.oracle.com/cd/E11882_01/server.112/e25494/restart.htm#CHDJEEIJ
Note 2: 'NODE2DG' diskgroup is already mounted on +ASM (node #2).
14) On node #2, check the CSS & OHAS state:
$> <Grid Infrastructure Home>/bin/crsctl check has
$> <Grid Infrastructure Home>/bin/crsctl check css
15) On node #2, check all the services state:
$> <Grid Infrastructure Home>/bin/crsctl stat res -t
16) Then on node #2, shutdown the OHAS services, startup the OHAS services and validate all the OHAS services
started again:
$> <Grid Infrastructure Home>/bin/crsctl stop has
$> <Grid Infrastructure Home>/bin/crsctl start has
$> <Grid Infrastructure Home>/bin/crsctl stat res -t

17) Then on node #2, shutdown the OHAS services (to validate OHAS layer on node #1):
$> <Grid Infrastructure Home>/bin/crsctl stop has
18) And finally on node #1, shutdown the OHAS services, startup the OHAS services and validate all the OHAS
services started again:
$> <Grid Infrastructure Home>/bin/crsctl stop has
$> <Grid Infrastructure Home>/bin/crsctl start has
$> <Grid Infrastructure Home>/bin/crsctl stat res -t
19) Then using your clusterware Active/Passive third party configuration test the failover between nodes.
20) Now you can create the 11.2 or 12.1 Database(s) (from node #1):
20.1) As a first step, please install the 11.2.0.X or 12.1 RDBMS Oracle Home (as oracle OS user).
20.2) Then, create the 11.2 or 12.1 database using the DBCA located on your 11.2.0.X or 12.1 RDBMS Oracle
Home (as oracle OS user).
20.3) You will need to select the +DATA diskgroup (as a main diskgroup) to place the datafiles, controlfiles &
redolog files and +RECO diskgroup to multiplex the controlfile and redolog files.
20.4) +BACKUP diskgroup can be used to place the archivelogs and RMAN backups.
21) Then verify the new database(s) were added in the SIHA/OHAS/Restart stack as follow (as grid user):
$> <Grid Infrastructure Home>/bin/crsctl stat res -t
Or:
$> <Grid Infrastructure Home>/bin/crsctl stat res ora.<db name>.db
Example:
[grid@asmgrid ~]$ crsctl stat res ora.db11202.db
NAME=ora.db11202.db
TYPE=ora.database.type
TARGET=ONLINE
STATE=ONLINE on asmgrid
22) Connect as grid user (in node #1) and restart OHAS stack as follow:
$> <Grid Infrastructure Home>/bin/crsctl stop has
$> <Grid Infrastructure Home>/bin/crsctl start has
23) Wait between 1 and 5 minutes (depending on the # of resources) and verify all the resources (CSS,
DISKMON, ASM, diskgroups, Listeners & Databases) are valid and started (in node #1):
$> <Grid Infrastructure Home>/bin/crsctl stat res -t
24) Then on node #1, shutdown the OHAS services:
$> <Grid Infrastructure Home>/bin/crsctl stop has
25) Connect to node #2 (as grid user) and start the OHAS services:
$> <Grid Infrastructure Home>/bin/crsctl start has
$> <Grid Infrastructure Home>/bin/crsctl stat res -t
26) Connect as oracle OS user (in node #2) and add the database service resource into the SIHA/OHAS/Restart
stack, very important, you must use the srvctl command located at the <RDBMS 11.2 or 12.1 Database Oracle
Home>/bin (do NOT use the one located at the Grid Infrastructure Oracle Home), as follow:
$> <RDBMS database Oracle Home>/bin/srvctl add database -d DB11202 -n DB11202 -o
/u01/app/oracle/product/11.2.0/dbhome_1 -p +DATA/DB11202/spfileDB11202.ora -s OPEN -y
AUTOMATIC -a DATA,RECO -t IMMEDIATE
Example:
[oracle@asmgrid ~]$ . oraenv
ORACLE_SID = [DB11202] ? DB11202
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@asmgrid ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1
[oracle@asmgrid ~]$ srvctl add database -d DB11202 -n DB11202 -o
/u01/app/oracle/product/11.2.0/dbhome_1 -p +DATA/DB11202/spfileDB11202.ora -s OPEN -y
AUTOMATIC -a DATA,RECO -t IMMEDIATE
For additional options, please check the next manual:
http://docs.oracle.com/cd/E11882_01/server.112/e25494/restart.htm#ADMIN12708
Oracle Database Administrator's Guide
11g Release 2 (11.2)
Part Number E25494-02
=)> SRVCTL Command Reference for Oracle Restart
==)> srvctl add database
27) Then verify the new setting associated with your database resource (connected as oracle OS user) from node
#2:
$> <RDBMS database Oracle Home>/bin/srvctl config database -d DB11202
Your database configuration should look similar to this example:
[oracle@asmgrid ~]$ srvctl config database -d DB11202
Database unique name: DB11202
Database name: DB11202
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: grid
Spfile: +DATA/DB11202/spfileDB11202.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA,RECO
Services:
[oracle@asmgrid ~]$
27.1) Copy the initxxx.ora from Node #1 to Node #2 before startup via sqlplus.
27.2) Create the audit directory manually before start the instance on node #2.
mkdir -p $ORACLE_BASE/admin/xxx/adump
28) Manually start up the database instances & open the databases for the first time (thru SQL*Plus); this is 100%
required the first time to set the STATE=ONLINE flag of each associated database resource, so the next time

OHAS is restarted (on node #2), it will automatically open the database:
[oracle@asmgrid ~]$ . oraenv
ORACLE_SID = [DB11202] ? DB11202
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@asmgrid ~]$ sqlplus /as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Fri Feb 24 20:05:44 2012
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 1221992448 bytes
Fixed Size                  1343832 bytes
Variable Size             922750632 bytes
Database Buffers          285212672 bytes
Redo Buffers               12685312 bytes
Database mounted.
Database opened.
SQL> show parameter spfile

NAME     TYPE    VALUE
-------- ------- -------------------------------
spfile   string  +DATA/db11202/spfiledb11202.ora

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options
29) Connect as grid user and restart OHAS stack as follow:
$> <Grid Infrastructure Home>/bin/crsctl stop has
$> <Grid Infrastructure Home>/bin/crsctl start has
30) Wait between 1 and 5 minutes (depending on the # of resources) and verify all the resources (CSS,
DISKMON, ASM, diskgroups, Listeners & Databases) are valid and started:
$> <Grid Infrastructure Home>/bin/crsctl stat res -t
C) Alternatively, you can implement "Oracle RAC One Node", which offers the same or better benefits; for
additional information please check the next document:
http://www.oracle.com/technetwork/database/clustering/overview/ds-rac-one-node-11gr2-185089.pdf
  1. After reading the document above carefully, the key points are:
    • The HA software provides the VIP for the database service, moves the VIP on failover, and performs the cleanup work: making sure the disk resources on the primary node are no longer being written to, and that the database and listener there are fully shut down.
    • As in RAC, both nodes must attach the LUNs in shared mode.
    • While the primary node holds the data disks, the standby node must not mount them; this prevents multi-master writes from corrupting data in a non-RAC setup.
    • When the standby takes over, the HAS stack on the primary must be stopped first: crsctl stop has
    • Most importantly, each node needs its own dedicated diskgroup to hold its ASM spfile.

4. Installation Environment

  • Two hosts, both attached to the same shared storage, with network connectivity between them.
  • OS: IBM AIX 6.1
  • DB: Oracle 11gR2
  • Storage: ASM
  • Disks:
    • rhdiskpower0 100 GB: for diskgroup asmdg1, holding the primary node's ASM spfile
    • rhdiskpower1 100 GB: for diskgroup asmdg2, holding the standby node's ASM spfile
    • rhdiskpower2 100 GB: for diskgroup datadg, holding datafiles
    • rhdiskpower3 500 GB: for diskgroup flashdg, holding archived logs and backups

5. Primary Node: Installing and Configuring Grid

5.1 Pre-installation preparation

Skipped here; refer to the preparation steps for a single-instance Oracle Grid + Database installation.

  1. On AIX the disks must be configured as follows; run these commands on both nodes:
chown oracle:asmadmin /dev/rhdiskpower0
chown oracle:asmadmin /dev/rhdiskpower1 
chown oracle:asmadmin /dev/rhdiskpower2 
chown oracle:asmadmin /dev/rhdiskpower3 


chdev -l hdiskpower0     -a reserve_policy=no_reserve
chdev -l hdiskpower1     -a reserve_policy=no_reserve
chdev -l hdiskpower2     -a reserve_policy=no_reserve
chdev -l hdiskpower3     -a reserve_policy=no_reserve
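To confirm these settings took effect on both nodes, a quick check helps. Below is a sketch wrapped as a function; the ls and lsattr checks are AIX-specific, and the disk names are the ones used in this article:

```shell
#!/bin/sh
# Sketch: verify owner and reserve_policy for each shared disk on AIX.
check_disk() {
    d=$1
    ls -l "/dev/r$d"                    # expect owner oracle, group asmadmin
    lsattr -El "$d" -a reserve_policy   # expect reserve_policy no_reserve
}
# Usage (run on both nodes):
#   for d in hdiskpower0 hdiskpower1 hdiskpower2 hdiskpower3; do check_disk "$d"; done
```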
  2. Oracle RAC had previously been installed on this test machine, so the old software had to be removed first. For some reason the standard deinstall tool could not remove it, so it was done manually:
rm -rf /u01/oracle/grid
rm -rf /u01/oracle/db
rm -rf /u01/oraInventory
rm -rf /u01/base
find /tmp -user oracle -exec rm -rf {} \;
rm -rf /etc/init.ohasd
rm -rf /etc/inittab.crs
rm -rf /etc/inittab.no_crs
rm -rf /etc/oracle
rm -rf /etc/ora*
rm -fr /etc/rc.d/rc2.d/K19ohasd
rm -fr /etc/rc.d/rc2.d/S96ohasd
  • PS: make sure the cleanup is thorough, otherwise the leftovers cause inexplicable errors for a long time afterwards.

5.2 Installing on the primary node

Unpack the installation packages

unzip *.zip

p10404530_112030_AIX64-5L_1of7.zip p10404530_112030_AIX64-5L_2of7.zip p10404530_112030_AIX64-5L_3of7.zip

  • PS: if unzip is not installed on the machine, the archives can also be extracted with Java's jar tool:

jar xvf *.zip
chmod -R 777 ./database
chmod -R 777 ./grid

PS: the permissions after jar extraction are wrong, so they must be fixed manually; otherwise the installation will fail later.

静默安装grid

第一次使用静默安装,觉得不错,不用远程调用图形界面那么痛苦。命令如下:

  1. 修订静默安装的配置文件:
    • 路径如下:./grid/response/grid_install.rsp
    • vi grid_install.rsp 看到如下选项,进行配置,配置好安装项:安装To configure Grid Infrastructure for stand alone server.安装目录,安装用户,其它按提示去做就行了。 简单摘录如下:
###############################################################################
##                                                                           ##
## Instructions to fill this response file                                   ##
## To install and configure 'Grid Infrastructure for Cluster'                ##
##  - Fill out sections A,B,C,D,E,F and G                                    ##
##  - Fill out section G if OCR and voting disk should be placed on ASM      ##
##                                                                           ##
## To install and configure 'Grid Infrastructure for Standalone server'      ##
##  - Fill out sections A,B and G                                           ##
##                                                                           ##
## To install software for 'Grid Infrastructure'                             ##
##  - Fill out sections A,B and C                                           ##
##                                                                           ##
## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                       ##
##                                                                           ##
###############################################################################
#------------------------------------------------------------------------------
# Do not change the following system generated value.
#------------------------------------------------------------------------------

#-------------------------------------------------------------------------------
# Specify the installation option.
# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY
#   CRS_CONFIG - To configure Grid Infrastructure for cluster
#   HA_CONFIG  - To configure Grid Infrastructure for stand alone server
#   UPGRADE    - To upgrade clusterware software of earlier release
#   CRS_SWONLY - To install clusterware files only (can be configured for cluster
#                or stand alone server later)
#-------------------------------------------------------------------------------
oracle.install.option=HA_CONFIG
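For reference, the handful of entries that actually need values for a standalone-server install might look as follows. This is a sketch with values taken from this article's environment; the parameter names come from the stock 11.2 grid_install.rsp, so verify them against your own copy before use:

```properties
oracle.install.option=HA_CONFIG
ORACLE_BASE=/u01/base
ORACLE_HOME=/u01/oracle/grid
INVENTORY_LOCATION=/u01/oraInventory
oracle.install.asm.SYSASMPassword=<password>
oracle.install.asm.monitorPassword=<password>
oracle.install.asm.diskGroup.name=asmdg1
oracle.install.asm.diskGroup.disks=/dev/rhdiskpower0
oracle.install.asm.diskGroup.redundancy=EXTERNAL
```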
  2. Run the silent installer: ./runInstaller -showProgress -ignorePrereq -silent -responseFile /u01/install_file/grid/response/grid_install.rsp
  3. After the installation succeeds, run the root scripts as root. The silent installer may claim they have already been run, but that is not true; run them yourself.
  4. Verify the result:
crsctl status res -t
sqlplus / as sysasm
select name from v$diskgroup;
  5. Once verification passes, continue with the database.

6. Primary Node: Installing and Configuring the Database

  1. Install the software silently as before, editing ./database/response/db_install.rsp as prompted. One parameter needs special handling, otherwise the installer will ask for credentials in order to download security updates:

DECLINE_SECURITY_UPDATES=true

  2. Run the installer:

./runInstaller -ignorePrereq -ignoreSysPrereqs -showProgress -silent -responseFile /u01/install_file/database/response/db_install.rsp

  3. After the installation succeeds, run the root script as root.
  4. Check whether the installation succeeded:

dbca

If the GUI appears, the installation succeeded.

  5. Create the database in DBCA by clicking through the prompts and entering the values; nothing more to say here.
  6. The database was created successfully.

7. [Key Part] Configuring the Standby Node

  1. Create and start Grid the same way as on the primary node, keeping the following in mind:
    • Use /dev/rhdiskpower1 to create asmdg2.
    • The ASM instance name can stay the same as on the primary.
    • At this point the datadg and flashdg created on the primary node are not visible, and their disks cannot be discovered, until the ASM instance has been created.
  2. Install the database software the same way as on the primary node, keeping the following in mind:
    • Install the software only; do not create a database.
    • Keep .profile identical to the primary node.
    • Keep inithatest.ora and orapwhatest under $ORACLE_HOME/dbs identical to the primary.
    • Keep the directory layout identical, especially the directory for audit files; create it manually just as on the primary node:
mkdir -p /u01/base/admin/hatest/adump
mkdir -p /u01/base/admin/hatest/bdump
mkdir -p /u01/base/admin/hatest/pfile
mkdir -p /u01/base/admin/hatest/scripts
PS: if anything is missed, just create each directory as the error messages prompt.
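Copying the parameter and password files over and pre-creating the admin directories can be wrapped in a small script. A sketch: the hostname node1 is hypothetical, and the paths are taken from this article's layout:

```shell
#!/bin/sh
# Sketch: prepare the standby's $ORACLE_HOME/dbs and admin directories.
# "node1" is a hypothetical hostname; paths follow this article's layout.
sync_from_primary() {
    scp node1:/u01/oracle/db/dbs/inithatest.ora /u01/oracle/db/dbs/
    scp node1:/u01/oracle/db/dbs/orapwhatest    /u01/oracle/db/dbs/
    mkdir -p /u01/base/admin/hatest/adump \
             /u01/base/admin/hatest/bdump \
             /u01/base/admin/hatest/pfile \
             /u01/base/admin/hatest/scripts
}
```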

8. Primary/Standby Switchover

8.1 Manual test

  1. Initial state. Primary: ASM instance running; asmdg1, datadg and flashdg mounted; database open. Standby: ASM instance running; asmdg2 mounted; database down.
  2. Shut down the primary: as root, crsctl stop has
  3. On the standby, mount datadg and flashdg:
alter diskgroup datadg mount;
alter diskgroup flashdg mount;

It seems that after this step the diskgroups are automatically added to the configuration stored in the ASM spfile.
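Whether the mounts really were recorded can be checked from the ASM instance. A sketch, run as the Grid software owner on the standby:

```shell
#!/bin/sh
# Sketch: confirm the diskgroups are mounted and recorded on the standby.
check_diskgroups() {
    sqlplus -s / as sysasm <<'EOF'
select name, state from v$asm_diskgroup;
show parameter asm_diskgroups
EOF
}
# check_diskgroups   # datadg and flashdg should show MOUNTED
```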

  4. Start the database on the standby:
lsnrctl start
sqlplus / as sysdba
startup
  5. Startup succeeded, and queries return data.
  6. Add the database to the cluster resources:
srvctl add database -d hatest -o /u01/oracle/db -p +DATADG/hatest/spfilehatest.ora -r primary
srvctl start database -d hatest
srvctl add listener
su -
$ORACLE_CRS_HOME/crsctl stop has
  7. Disable HAS auto-start:
root/>crsctl disable has

8.2 HA + script configuration

  1. The HA control scripts are placed under /HA72/: db_start.sh and db_stop.sh, with the following contents:

db_start.sh

/u01/oracle/grid/bin/crsctl start has

db_stop.sh

/u01/oracle/grid/bin/crsctl stop has
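The one-line scripts work, but HA application-server scripts are easier to debug when they report status and return a meaningful exit code. A hedged sketch of hardened versions; the 60-second wait is an arbitrary assumption, tune it to the number of resources:

```shell
#!/bin/sh
# Sketch: more defensive db_start.sh / db_stop.sh for the HA product to call.
# GRID_HOME follows this article's layout; the sleep value is a guess.
GRID_HOME=/u01/oracle/grid

db_start() {
    "$GRID_HOME/bin/crsctl" start has || return 1
    sleep 60                             # give OHAS time to bring ASM, listener and database online
    "$GRID_HOME/bin/crsctl" stat res -t  # record resource state in the HA log
}

db_stop() {
    "$GRID_HOME/bin/crsctl" stop has -f
}
```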

8.3 Manual switchover with PowerHA

As the root user, perform the following steps on the first node:

  • smit hacmp
  • Select System Management (C-SPOC)
  • Select Resource Group and Applications
  • Select Move Resource Groups to Another Node
  • Select imdb_RG
  • Select the target node

Run crsctl status res -t: the database resources start automatically on the standby node and are shut down automatically on the primary node.

9. Postscript

  1. How to relocate the ASM spfile:
sqlplus / as sysasm
    create spfile='+asmdg1' from pfile='/tmp/asm.ora';
    alter diskgroup asmdg1 set attribute 'COMPATIBLE.ASM'='11.2'; -- without this step, the create spfile='+asmdg1' above fails
    create spfile='+asmdg1' from pfile='/tmp/asm.ora';
asmcmd
    cd +asmdg1   -- go to the directory of the newly created spfile, then run:
    spget        -- after this succeeds and ASM is restarted, the spfile points to the new path