Applies to:
Oracle Database - Enterprise Edition - Version 11.2.0.1 and later
Information in this document applies to any platform.
Goal
The primary goal of this how-to is to establish a Data Guard configuration where both the primary site and the standby site are RAC.
Please Note: The cluster nodes each have 3 network interfaces: one used for the cluster interconnect, one for the public network, and one for log shipping between the primary and standby sites. These interfaces must be in place prior to starting the configuration steps detailed in this document.
This configuration is specific to Oracle Server Enterprise Edition 11g Release 2 only.
This how-to assumes a RAC primary and RAC standby database have already been created. It focuses on building a Data Guard broker configuration that uses a network other than the public network for enabling the broker configuration and shipping logs between the sites.
Solution
Cluster Configuration Information
This section details the configuration prior to establishing the Data Guard broker environment for a RAC Primary to RAC Standby using a second network and set of network interfaces in each GRID cluster and RAC database site.
RAC Primary Nodes
Interface: eth0
Node 1:
10.187.115.125 grid1vm1.au.oracle.com
10.187.115.139 grid1vm1-vip.au.oracle.com
Node 2:
10.187.115.126 grid1vm2.au.oracle.com
10.187.115.140 grid1vm2-vip.au.oracle.com
Single Client Access Name (SCAN)
10.187.115.133 grid1vmscan1.au.oracle.com
Data Guard IP Addresses/Interfaces:
Interface: eth2
Node 1:
192.168.11.125 grid1vm1-dg.au.oracle.com
192.168.11.225 grid1vm1-dg-vip.au.oracle.com
Node 2:
192.168.11.126 grid1vm2-dg.au.oracle.com
192.168.11.226 grid1vm2-dg-vip.au.oracle.com
Database Details
db_name db112a
db_unique_name db112a
instance_names db112a1, db112a2
RAC Standby Nodes
Interface: eth0
Node 1:
10.187.115.128 grid2vm1.au.oracle.com
10.187.115.142 grid2vm1-vip.au.oracle.com grid2vm1-vip
Node 2:
10.187.115.129 grid2vm2.au.oracle.com
10.187.115.143 grid2vm2-vip.au.oracle.com grid2vm2-vip
Single Client Access Name (SCAN)
10.187.115.136 grid2vmscan1.au.oracle.com
Data Guard IP Addresses/Interfaces:
Interface: eth2
Node 1:
192.168.11.128 grid2vm1-dg
192.168.11.228 grid2vm1-dg-vip.au.oracle.com
Node 2:
192.168.11.129 grid2vm2-dg
192.168.11.229 grid2vm2-dg-vip.au.oracle.com
Database Details
db_name db112a
db_unique_name db112a_stb
instance_names db112a1, db112a2
Listener Configuration Details
The way the listeners operate in an 11.2 environment has changed significantly compared with previous releases. Changes relevant to Data Guard include the following:
- The introduction of the SCAN and SCAN listener(s) (there can be up to 3 SCAN listeners) for handling client connections.
- The listener configuration details are now held in both the GRID cluster registry (OCR) and the GRID_HOME/network/admin directory.
- RAC enabled listeners must run out of the 11.2 GRID home
- RAC enabled listeners must have virtual IP addresses configured
- The network the listeners are configured against must be registered in the OCR as a resource
- Administration of the listeners should always be performed through netca
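Given these conventions, the current registrations can be inspected from any node with standard 11.2 srvctl queries, run as the Grid home owner (output will vary with the local configuration):

```shell
# Show the networks registered in the OCR (net number, subnet, interface)
srvctl config network

# Show the node listeners and their ports
srvctl config listener

# Show the SCAN listener(s)
srvctl config scan_listener
```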
The SCAN Listener Configuration
The SCAN listener and SCAN VIP can reside on any node of the cluster and move from node to node as a result of nodes leaving the cluster.
The Primary Site:
In the example below we can see the SCAN listener at the Primary Site is running on node grid1vm2:
[oracle@grid1vm2 ~]$ ps -ef | grep tns
oracle 13775 1 0 Aug18 ? 00:00:02 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_DG -inherit
oracle 14737 1 0 Aug12 ? 00:00:08 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER -inherit
oracle 18728 1 0 Aug19 ? 00:00:02 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
[oracle@grid1vm2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.LISTENER.lsnr
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
..
.
ora.net1.network
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
..
.
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE grid1vm2
ora.db112a.db
1 ONLINE ONLINE grid1vm1 Open
2 ONLINE ONLINE grid1vm2 Open
..
.
ora.grid1vm1.vip
1 ONLINE ONLINE grid1vm1
..
.
ora.grid1vm2.vip
1 ONLINE ONLINE grid1vm2
..
.
ora.scan1.vip
1 ONLINE ONLINE grid1vm2
As a result, the status of the SCAN listener and its service listing can only be checked from the node on which it is running:
[oracle@grid1vm2 ~]$ lsnrctl status listener_scan1
LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 20-AUG-2011 01:07:59
Copyright (c) 1991, 2010, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date 19-AUG-2011 00:04:39
Uptime 1 days 1 hr. 3 min. 19 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/11.2.0.2/grid/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/grid1vm2/listener_scan1/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.133)(PORT=1521)))
Services Summary...
Service "DB112A.au.oracle.com" has 2 instance(s).
Instance "db112a1", status READY, has 2 handler(s) for this service...
Instance "db112a2", status READY, has 2 handler(s) for this service...
..
.
The command completed successfully
[oracle@grid1vm2 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node grid1vm2
[oracle@grid1vm2 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
The Standby Site:
[oracle@grid2vm1 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[oracle@grid2vm1 ~]$ ps -ef | grep tns
oracle 3202 1 0 Aug11 ? 00:00:06 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle 8051 1 0 04:49 ? 00:00:03 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_DG -inherit
oracle 10572 1 0 05:26 ? 00:00:01 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER -inherit
oracle 29833 29788 0 23:18 pts/1 00:00:00 grep tns
[oracle@grid2vm1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.LISTENER.lsnr
ONLINE ONLINE grid2vm1
ONLINE ONLINE grid2vm2
..
.
ora.net1.network
ONLINE ONLINE grid2vm1
ONLINE ONLINE grid2vm2
..
.
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE grid2vm1
..
.
ora.db112a_stb.db
1 ONLINE INTERMEDIATE grid2vm1 Mounted (Closed)
2 ONLINE INTERMEDIATE grid2vm2 Mounted (Closed)
..
.
ora.grid2vm1.vip
1 ONLINE ONLINE grid2vm1
..
.
ora.grid2vm2.vip
1 ONLINE ONLINE grid2vm2
..
.
ora.scan1.vip
1 ONLINE ONLINE grid2vm1
[oracle@grid2vm1 ~]$ lsnrctl status listener_scan1
LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 19-AUG-2011 23:19:09
Copyright (c) 1991, 2010, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date 11-AUG-2011 20:18:32
Uptime 8 days 3 hr. 0 min. 36 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/11.2.0.2/grid/network/admin/listener.ora
Listener Log File /u01/app/11.2.0.2/grid/log/diag/tnslsnr/grid2vm1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.136)(PORT=1521)))
Services Summary...
Service "db112a_stb.au.oracle.com" has 2 instance(s).
Instance "db112a1", status READY, has 1 handler(s) for this service...
Instance "db112a2", status READY, has 1 handler(s) for this service...
..
.
The command completed successfully
[oracle@grid2vm1 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node grid2vm1
[oracle@grid2vm1 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
The Local Listeners Configuration
Each node in the cluster runs a node listener bound to that node's VIP and public host IP.
The Primary Site:
[oracle@grid1vm2 ~]$ ps -ef | grep tns
oracle 14737 1 0 Aug12 ? 00:00:09 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER -inherit
oracle 18728 1 0 Aug19 ? 00:00:02 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
[oracle@grid1vm2 ~]$ /sbin/ifconfig
eth0 Link encap:Ethernet HWaddr 00:16:3E:6D:71:40
inet addr:10.187.115.126 Bcast:10.187.115.255 Mask:255.255.254.0
inet6 addr: fe80::216:3eff:fe6d:7140/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6457652 errors:0 dropped:0 overruns:0 frame:0
TX packets:173719 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:840105961 (801.1 MiB) TX bytes:93958502 (89.6 MiB)
eth0:3 Link encap:Ethernet HWaddr 00:16:3E:6D:71:40
inet addr:10.187.115.140 Bcast:10.187.115.255 Mask:255.255.254.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
[oracle@grid1vm2 ~]$ lsnrctl status
LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 20-AUG-2011 01:34:24
Copyright (c) 1991, 2010, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date 12-AUG-2011 01:26:54
Uptime 8 days 0 hr. 7 min. 30 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/11.2.0.2/grid/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/grid1vm2/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.126)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.140)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "DB112A.au.oracle.com" has 1 instance(s).
Instance "db112a2", status READY, has 2 handler(s) for this service...
..
.
The command completed successfully
The Standby Site:
[oracle@grid2vm1 ~]$ ps -ef | grep tns
oracle 3202 1 0 Aug11 ? 00:00:06 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle 10572 1 0 05:26 ? 00:00:01 /u01/app/11.2.0.2/grid/bin/tnslsnr LISTENER -inherit
[oracle@grid2vm1 ~]$ /sbin/ifconfig
eth0 Link encap:Ethernet HWaddr 00:16:3E:F4:35:04
inet addr:10.187.115.128 Bcast:10.187.115.255 Mask:255.255.254.0
inet6 addr: fe80::216:3eff:fef4:3504/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8149993 errors:0 dropped:0 overruns:0 frame:0
TX packets:951427 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2737221120 (2.5 GiB) TX bytes:4646486293 (4.3 GiB)
eth0:3 Link encap:Ethernet HWaddr 00:16:3E:F4:35:04
inet addr:10.187.115.142 Bcast:10.187.115.255 Mask:255.255.254.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
[oracle@grid2vm1 ~]$ lsnrctl status
LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 19-AUG-2011 23:27:03
Copyright (c) 1991, 2010, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date 19-AUG-2011 05:26:38
Uptime 0 days 18 hr. 0 min. 25 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/11.2.0.2/grid/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/grid2vm1/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.128)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.187.115.142)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "db112a_stb.au.oracle.com" has 1 instance(s).
Instance "db112a1", status READY, has 2 handler(s) for this service...
..
.
The command completed successfully
The RDBMS TNSNAMES.ora
[oracle@grid1vm1 admin]$ cat tnsnames.ora
# tnsnames.ora.grid1vm1 Network Configuration File: /u01/app/oracle/product/11.2.0/dbhome_2/network/admin/tnsnames.ora.grid1vm1
# Generated by Oracle configuration tools.
DB112A_PRM =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = grid1vmscan1.au.oracle.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = db112a.au.oracle.com)
)
)
DB112A_STB =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = grid2vmscan1.au.oracle.com)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = db112a_stb.au.oracle.com)
)
)
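Assuming the aliases above are in place, they can be sanity-checked from any node that holds this tnsnames.ora:

```shell
# Confirm both aliases resolve and a listener responds at each site
tnsping DB112A_PRM
tnsping DB112A_STB
```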
Building the Data Guard Broker and Log Shipping Network
The following process establishes the environment as a Data Guard configuration using a second network other than the public network.
1. Establish the new entries in the hosts file for the second network interfaces and the VIPs required. The following hosts file also includes the SCAN and publicly resolvable hostnames and VIPs, which would normally be resolved through DNS. This hosts file is the same across all cluster nodes at both the primary and standby sites.
[root@ovmsrv1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 ovmsrv1.au.oracle.com ovmsrv1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
#######################
# Public IP Addresses
#######################
#
# Oracle Virtual Servers - Hypervisors
#
10.187.115.123 ovmsrv1.au.oracle.com ovmsrv1
10.187.115.124 ovmsrv2.au.oracle.com ovmsrv2
#
# Virtual Machines
#
10.187.115.125 grid1vm1.au.oracle.com grid1vm1
10.187.115.126 grid1vm2.au.oracle.com grid1vm2
10.187.115.127 grid1vm3.au.oracle.com grid1vm3
10.187.115.128 grid2vm1.au.oracle.com grid2vm1
10.187.115.129 grid2vm2.au.oracle.com grid2vm2
10.187.115.130 grid2vm3.au.oracle.com grid2vm3
10.187.115.131 grid1filer1.au.oracle.com grid1filer1
10.187.115.132 grid2filer1.au.oracle.com grid2filer1
10.187.115.133 grid1vmscan1.au.oracle.com grid1vmscan1
10.187.115.134 grid1vmscan2.au.oracle.com grid1vmscan2
10.187.115.135 grid1vmscan3.au.oracle.com grid1vmscan3
10.187.115.136 grid2vmscan1.au.oracle.com grid2vmscan1
10.187.115.137 grid2vmscan2.au.oracle.com grid2vmscan2
10.187.115.138 grid2vmscan3.au.oracle.com grid2vmscan3
10.187.115.139 grid1vm1-vip.au.oracle.com grid1vm1-vip
10.187.115.140 grid1vm2-vip.au.oracle.com grid1vm2-vip
10.187.115.141 grid1vm3-vip.au.oracle.com grid1vm3-vip
10.187.115.142 grid2vm1-vip.au.oracle.com grid2vm1-vip
10.187.115.143 grid2vm2-vip.au.oracle.com grid2vm2-vip
10.187.115.144 grid2vm3-vip.au.oracle.com grid2vm3-vip
######################
# Private IP Addresses
######################
#
# Interconnect
#
192.168.10.123 ovmsrv1-prv
192.168.10.124 ovmsrv2-prv
192.168.10.125 grid1vm1-prv
192.168.10.126 grid1vm2-prv
192.168.10.127 grid1vm3-prv
192.168.10.128 grid2vm1-prv
192.168.10.129 grid2vm2-prv
192.168.10.130 grid2vm3-prv
192.168.10.131 gridfiler1-prv
192.168.10.132 gridfiler2-prv
##################################
# Data Guard Log Shipping Network
##################################
#
# Data Guard Private IP's
#
192.168.11.125 grid1vm1-dg
192.168.11.126 grid1vm2-dg
192.168.11.127 grid1vm3-dg
192.168.11.128 grid2vm1-dg
192.168.11.129 grid2vm2-dg
192.168.11.130 grid2vm3-dg
#
# Data Guard VIP's
#
192.168.11.225 grid1vm1-dg-vip.au.oracle.com grid1vm1-dg-vip
192.168.11.226 grid1vm2-dg-vip.au.oracle.com grid1vm2-dg-vip
192.168.11.227 grid1vm3-dg-vip.au.oracle.com grid1vm3-dg-vip
192.168.11.228 grid2vm1-dg-vip.au.oracle.com grid2vm1-dg-vip
192.168.11.229 grid2vm2-dg-vip.au.oracle.com grid2vm2-dg-vip
192.168.11.230 grid2vm3-dg-vip.au.oracle.com grid2vm3-dg-vip
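Because identical name resolution on every node is a prerequisite, it is worth verifying the Data Guard entries before proceeding. A quick check, using the hostnames from the hosts file above:

```shell
# Each lookup should return the same address on every node at both sites
for h in grid1vm1-dg grid1vm2-dg grid1vm1-dg-vip grid1vm2-dg-vip \
         grid2vm1-dg grid2vm2-dg grid2vm1-dg-vip grid2vm2-dg-vip; do
  getent hosts "$h"
done
```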
2. Add the new network configuration to the GRID environment. The new network will run on eth2 using the network subnet 192.168.11.0. In this case the same subnet is used at both the primary and standby sites, as there is no router between them and the connection is provided by a network switch only.
At the Primary site on one of the cluster nodes add the new network resource using srvctl.
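The srvctl command itself is not shown in the capture below; on 11.2 the syntax to register a second network would be along the following lines, using the subnet, netmask and interface from this example's configuration (run as root from the GRID home):

```shell
# Register the Data Guard network as network number 2 in the OCR
srvctl add network -k 2 -S 192.168.11.0/255.255.255.0/eth2
```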
Successfully added Network.
[root@grid1vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.net1.network
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
ora.net2.network
OFFLINE OFFLINE grid1vm1
OFFLINE OFFLINE grid1vm2
..
.
At the Standby Site on one of the cluster nodes add the new network resource using srvctl.
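As on the primary, the command is not shown in the capture; since the same subnet and interface layout is used at the standby site, the equivalent registration would be:

```shell
# Register the Data Guard network as network number 2 in the standby OCR
srvctl add network -k 2 -S 192.168.11.0/255.255.255.0/eth2
```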
[root@grid2vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.net1.network
ONLINE ONLINE grid2vm1
ONLINE ONLINE grid2vm2
ora.net2.network
OFFLINE OFFLINE grid2vm1
OFFLINE OFFLINE grid2vm2
..
.
3. Start the new network resources.
From the Primary Site:
[root@grid1vm1 ~]# crsctl start res ora.net2.network
CRS-2672: Attempting to start 'ora.net2.network' on 'grid1vm1'
CRS-2672: Attempting to start 'ora.net2.network' on 'grid1vm2'
CRS-2676: Start of 'ora.net2.network' on 'grid1vm2' succeeded
CRS-2676: Start of 'ora.net2.network' on 'grid1vm1' succeeded
[root@grid1vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.net1.network
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
ora.net2.network
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
..
.
ora.db112a.db
1 ONLINE ONLINE grid1vm1 Open
2 ONLINE ONLINE grid1vm2 Open
From the Standby Site:
[root@grid2vm1 ~]# crsctl start res ora.net2.network
CRS-2672: Attempting to start 'ora.net2.network' on 'grid2vm1'
CRS-2672: Attempting to start 'ora.net2.network' on 'grid2vm2'
CRS-2676: Start of 'ora.net2.network' on 'grid2vm2' succeeded
CRS-2676: Start of 'ora.net2.network' on 'grid2vm1' succeeded
[root@grid2vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.net1.network
ONLINE ONLINE grid2vm1
ONLINE ONLINE grid2vm2
ora.net2.network
ONLINE ONLINE grid2vm1
ONLINE ONLINE grid2vm2
..
.
ora.db112a_stb.db
1 ONLINE INTERMEDIATE grid2vm1 Mounted (Closed)
2 ONLINE INTERMEDIATE grid2vm2 Mounted (Closed)
4. Add the new VIP addresses to the GRID environment
# Data Guard VIP's
192.168.11.225 grid1vm1-dg-vip
192.168.11.226 grid1vm2-dg-vip
192.168.11.227 grid1vm3-dg-vip
192.168.11.228 grid2vm1-dg-vip
192.168.11.229 grid2vm2-dg-vip
192.168.11.230 grid2vm3-dg-vip
At the Primary site from one of the cluster nodes
[root@grid1vm1 ~]# srvctl add vip -n grid1vm2 -A 192.168.11.226/255.255.255.0 -k 2
At the Standby site from one of the cluster nodes
[root@grid2vm1 ~]# srvctl add vip -n grid2vm2 -A 192.168.11.229/255.255.255.0 -k 2
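Only the node-2 commands are captured above; the node-1 VIPs would presumably be added the same way, taking the addresses from the hosts file:

```shell
# Primary site, node 1
srvctl add vip -n grid1vm1 -A 192.168.11.225/255.255.255.0 -k 2
# Standby site, node 1
srvctl add vip -n grid2vm1 -A 192.168.11.228/255.255.255.0 -k 2
```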
5. Start the new VIP resources on each cluster node
At the Primary site from one of the cluster nodes
[root@grid1vm1 ~]# srvctl start vip -i grid1vm2-dg-vip
[root@grid1vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.net1.network
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
ora.net2.network
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
..
.
ora.db112a.db
1 ONLINE ONLINE grid1vm1 Open
2 ONLINE ONLINE grid1vm2 Open
ora.grid1vm1-dg-vip.vip
1 ONLINE ONLINE grid1vm1
ora.grid1vm1.vip
1 ONLINE ONLINE grid1vm1
ora.grid1vm2-dg-vip.vip
1 ONLINE ONLINE grid1vm2
ora.grid1vm2.vip
..
.
At the Standby site from one of the cluster nodes
[root@grid2vm1 ~]# srvctl start vip -i grid2vm2-dg-vip
[root@grid2vm1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.net1.network
ONLINE ONLINE grid2vm1
ONLINE ONLINE grid2vm2
ora.net2.network
ONLINE ONLINE grid2vm1
ONLINE ONLINE grid2vm2
..
.
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
..
.
ora.db112a_stb.db
1 ONLINE INTERMEDIATE grid2vm1 Mounted (Closed)
2 ONLINE INTERMEDIATE grid2vm2 Mounted (Closed)
ora.grid2vm1-dg-vip.vip
1 ONLINE ONLINE grid2vm1
ora.grid2vm1.vip
1 ONLINE ONLINE grid2vm1
ora.grid2vm2-dg-vip.vip
1 ONLINE ONLINE grid2vm2
ora.grid2vm2.vip
1 ONLINE ONLINE grid2vm2
6. Create the new listener for handling Data Guard Broker based connections
From the first node of the Primary site, as the owner of the GRID home, use netca to add the new Data Guard specific listener on the new network. In this case the protocol is TCP and port 1555 is used. The new listener name is LISTENER_DG.
[oracle@grid1vm1 ~]$ netca
Oracle Net Services Configuration:
Oracle Net Configuration Assistant is launched from Grid Infrastructure home. Network configuration will be clusterwide.
Configuring Listener:LISTENER_DG
grid1vm1...
grid1vm2...
Listener configuration complete.
Oracle Net Listener Startup:
Listener started successfully.
Oracle Net Services configuration successful. The exit code is 0
[oracle@grid1vm1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.LISTENER.lsnr
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
ora.LISTENER_DG.lsnr
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
..
.
ora.net1.network
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
ora.net2.network
ONLINE ONLINE grid1vm1
ONLINE ONLINE grid1vm2
..
.
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE grid1vm1
ora.db112a.db
1 ONLINE ONLINE grid1vm1 Open
2 ONLINE ONLINE grid1vm2 Open
ora.grid1vm1-dg-vip.vip
1 ONLINE ONLINE grid1vm1
ora.grid1vm1.vip
1 ONLINE ONLINE grid1vm1
ora.grid1vm2-dg-vip.vip
1 ONLINE ONLINE grid1vm2
ora.grid1vm2.vip
1 ONLINE ONLINE grid1vm2
ora.oc4j
1 OFFLINE OFFLINE
ora.scan1.vip
1 ONLINE ONLINE grid1vm1
From the first node of the Standby site, as the owner of the GRID home, use netca to add the new Data Guard specific listener on the new network. In this case the protocol is TCP and port 1555 is used. The new listener name is LISTENER_DG.
Oracle Net Services Configuration:
Oracle Net Configuration Assistant is launched from Grid Infrastructure home. Network configuration will be clusterwide.
Configuring Listener:LISTENER_DG
grid2vm1
grid2vm2
Listener configuration complete.
Oracle Net Listener Startup:
Listener started successfully.
Oracle Net Services configuration successful. The exit code is 0
[oracle@grid2vm1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
..
.
ora.LISTENER.lsnr
ONLINE ONLINE grid2vm1
ONLINE ONLINE grid2vm2
ora.LISTENER_DG.lsnr
ONLINE ONLINE grid2vm1
ONLINE ONLINE grid2vm2
..
.
ora.net1.network
ONLINE ONLINE grid2vm1
ONLINE ONLINE grid2vm2
ora.net2.network
ONLINE ONLINE grid2vm1
ONLINE ONLINE grid2vm2
..
.
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE grid2vm1
ora.db112a_stb.db
1 ONLINE INTERMEDIATE grid2vm1 Mounted (Closed)
2 ONLINE INTERMEDIATE grid2vm2 Mounted (Closed)
ora.grid2vm1-dg-vip.vip
1 ONLINE ONLINE grid2vm1
ora.grid2vm1.vip
1 ONLINE ONLINE grid2vm1
ora.grid2vm2-dg-vip.vip
1 ONLINE ONLINE grid2vm2
ora.grid2vm2.vip
1 ONLINE ONLINE grid2vm2
..
.
ora.scan1.vip
1 ONLINE ONLINE grid2vm1
7. Log shipping between the sites must use the default database service, so that the service remains available for log shipping throughout switchover operations. The default database service is registered with the listeners using the db_unique_name value of the primary and standby databases.
For Example:
The Primary Site listener used in log shipping will show -
LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 19-SEP-2014 11:01:07
Copyright (c) 1991, 2011, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_DG)))
Services Summary...
Service "db112a.au.oracle.com" has 2 instance(s).
Instance "db112a1", status READY, has 2 handler(s) for this service...
Handler(s):
"DEDICATED" established:6 refused:0 state:ready
LOCAL SERVER
"DEDICATED" established:0 refused:0 state:ready
REMOTE SERVER
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.11.225)(PORT=1555)))
Instance "db112a2", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:1 refused:0 state:ready
REMOTE SERVER
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.11.226)(PORT=1555)))
The Standby Site listener used in log shipping will show -
LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 20-SEP-2014 05:29:40
Copyright (c) 1991, 2011, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_DG)))
Services Summary...
Service "db112a_stb.au.oracle.com" has 2 instance(s).
Instance "db112a1", status READY, has 2 handler(s) for this service...
Handler(s):
"DEDICATED" established:5 refused:0 state:ready
LOCAL SERVER
"DEDICATED" established:0 refused:0 state:ready
REMOTE SERVER
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.11.228)(PORT=1555)))
Instance "db112a2", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:2 refused:0 state:ready
REMOTE SERVER
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.11.229)(PORT=1555)))
8. Add the new TNS alias entries required for log shipping and listener registration for each instance in the clusters at the primary and standby sites. These need to be added to the tnsnames.ora in the home from which the RDBMS instances run. This tnsnames.ora then needs to be replicated across all cluster nodes at both the primary and standby sites.
# Generated by Oracle configuration tools.
# Public network interface based connections via the SCAN
DB112A_PRM =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = grid1vmscan1.au.oracle.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = db112a.au.oracle.com)
)
)
DB112A_STB =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = grid2vmscan1.au.oracle.com)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = db112a_stb.au.oracle.com)
)
)
# Private network interface aliases used to facilitate log shipping and switchover operations
DB112A_PRM_DGMGRL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = grid1vm1-dg-vip.au.oracle.com)(PORT = 1555))
(ADDRESS = (PROTOCOL = TCP)(HOST = grid1vm2-dg-vip.au.oracle.com)(PORT = 1555))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = db112a_dgmgrl.au.oracle.com)
)
)
DB112A_STB_DGMGRL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = grid2vm1-dg-vip.au.oracle.com)(PORT = 1555))
(ADDRESS = (PROTOCOL = TCP)(HOST = grid2vm2-dg-vip.au.oracle.com)(PORT = 1555))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = db112a_stb_dgmgrl.au.oracle.com)
)
)
DB112A_STB_SHIP =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = grid2vm1-dg-vip.au.oracle.com)(PORT = 1555))
(ADDRESS = (PROTOCOL = TCP)(HOST = grid2vm2-dg-vip.au.oracle.com)(PORT = 1555))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = db112a_stb.au.oracle.com)
)
)
DB112A_PRM_SHIP =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = grid1vm1-dg-vip.au.oracle.com)(PORT = 1555))
(ADDRESS = (PROTOCOL = TCP)(HOST = grid1vm2-dg-vip.au.oracle.com)(PORT = 1555))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = db112a.au.oracle.com)
)
)
#
# LISTENER_NETWORKS Aliases used to implement service registrations with both local and remote listeners across the cluster nodes
# and data guard environment.
#
GRID1VM1_LOCAL_NET1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.187.115.125 )(PORT = 1521)))
GRID1VM1_LOCAL_NET2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.11.225 )(PORT = 1555)))
GRID1VM2_LOCAL_NET1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.187.115.126 )(PORT = 1521)))
GRID1VM2_LOCAL_NET2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.11.226 )(PORT = 1555)))
GRID1_REMOTE_NET2 =
(DESCRIPTION_LIST =
(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.11.225 )(PORT = 1555)))
(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.11.226 )(PORT = 1555))))
GRID2VM1_LOCAL_NET1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.187.115.128 )(PORT = 1521)))
GRID2VM1_LOCAL_NET2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.11.228 )(PORT = 1555)))
GRID2VM2_LOCAL_NET1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.187.115.129 )(PORT = 1521)))
GRID2VM2_LOCAL_NET2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.11.229 )(PORT = 1555)))
GRID2_REMOTE_NET2 =
(DESCRIPTION_LIST =
(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.11.228 )(PORT = 1555)))
(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.11.229 )(PORT = 1555))))
9. In order for the RDBMS instances to be automatically restarted after a switchover operation, static connection endpoints and services must be defined in each listener.ora for the new listener, LISTENER_DG.
The following entries need to be placed in the listener.ora in the GRID home on the appropriate node of each cluster at each site.
For Primary site node 1 and instance 1
SID_LIST_LISTENER_DG =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = db112a_dgmgrl.au.oracle.com)
(ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_2)
(SID_NAME = db112a1)
)
)
For Primary site node 2 and instance 2
SID_LIST_LISTENER_DG =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = db112a_dgmgrl.au.oracle.com)
(ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_2)
(SID_NAME = db112a2)
)
)
For Standby site node 1 and instance 1
SID_LIST_LISTENER_DG =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = db112a_stb_dgmgrl.au.oracle.com)
(ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_2)
(SID_NAME = db112a1)
)
)
For Standby site node 2 and instance 2
SID_LIST_LISTENER_DG =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = db112a_stb_dgmgrl.au.oracle.com)
(ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_2)
(SID_NAME = db112a2)
)
)
10. Set the listener_networks parameter so that the instances register the broker-related services with the appropriate listeners within the cluster.
For the Primary site, set the listener_networks parameter for each cluster instance, using the aliases appropriate to that node. For example, for node 2 and instance 2:
SQL> ALTER SYSTEM SET listener_networks='((NAME=network1)(LOCAL_LISTENER=GRID1VM2_LOCAL_NET1)(REMOTE_LISTENER=grid1vmscan1:1521))','((NAME=network2)(LOCAL_LISTENER=GRID1VM2_LOCAL_NET2)(REMOTE_LISTENER=GRID1_REMOTE_NET2))' SCOPE=BOTH SID='db112a2';
For the Standby site, set the listener_networks parameter for each cluster instance, using the aliases appropriate to that node. For example, for node 2 and instance 2:
SQL> ALTER SYSTEM SET listener_networks='((NAME=network1)(LOCAL_LISTENER=GRID2VM2_LOCAL_NET1)(REMOTE_LISTENER=grid2vmscan1:1521))','((NAME=network2)(LOCAL_LISTENER=GRID2VM2_LOCAL_NET2)(REMOTE_LISTENER=GRID2_REMOTE_NET2))' SCOPE=BOTH SID='db112a2';
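The four ALTER SYSTEM commands (one per instance, two per site) differ only in the node alias prefix, the cluster-wide remote alias, the SCAN name, and the SID. A small generator sketch (a hypothetical helper, using the alias naming convention from this document) can emit the command for each instance and reduce the risk of typos in the long parameter strings:

```python
def listener_networks_cmd(node_prefix: str, cluster_prefix: str,
                          scan: str, sid: str) -> str:
    """Build the ALTER SYSTEM SET listener_networks command for one instance.

    node_prefix    e.g. 'GRID1VM1' -> uses GRID1VM1_LOCAL_NET1/NET2 aliases
    cluster_prefix e.g. 'GRID1'    -> uses GRID1_REMOTE_NET2 alias
    """
    return ("ALTER SYSTEM SET listener_networks="
            f"'((NAME=network1)(LOCAL_LISTENER={node_prefix}_LOCAL_NET1)"
            f"(REMOTE_LISTENER={scan}:1521))',"
            f"'((NAME=network2)(LOCAL_LISTENER={node_prefix}_LOCAL_NET2)"
            f"(REMOTE_LISTENER={cluster_prefix}_REMOTE_NET2))' "
            f"SCOPE=BOTH SID='{sid}';")

# Primary site, node 1, instance 1
cmd = listener_networks_cmd("GRID1VM1", "GRID1", "grid1vmscan1", "db112a1")
print(cmd)
```

Looping this over the node/SID pairs of both sites yields all four commands; each is then run in the instance it targets.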
11. Set the Data Guard Broker-specific parameters in the instances across the cluster at each site in the Broker configuration. In RAC the broker configuration files must be placed on cluster-aware storage, ideally ASM-based storage in 11.2.
From the Primary site, set the location of the broker configuration files and enable the broker.
SQL> ALTER SYSTEM SET dg_broker_config_file1='+FRA/db112a/dg_broker1.dat' SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM SET dg_broker_config_file2='+FRA/db112a/dg_broker2.dat' SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM SET dg_broker_start=TRUE SCOPE=SPFILE SID='*';
From the Standby site, set the location of the broker configuration files and enable the broker.
SQL> ALTER SYSTEM SET dg_broker_config_file1='+DATA/db112a_stb/dg_broker1.dat' SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM SET dg_broker_config_file2='+DATA/db112a_stb/dg_broker2.dat' SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM SET dg_broker_start=TRUE SCOPE=SPFILE SID='*';
12. Create the Broker configuration. On Primary site node 1, with the environment set to the RDBMS home and the SID for instance 1, perform the following:
[oracle@grid1vm1 ~]$ . oraenv
ORACLE_SID = [oracle] ? db112a1
The Oracle base has been set to /u01/app/oracle
[oracle@grid1vm1 ~]$ dgmgrl
DGMGRL for Linux: Version 11.2.0.2.0 - 64bit Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /
Connected.
DGMGRL> create configuration dg112 as
> primary database is db112a connect identifier is db112a_prm_ship;
Configuration "dg112" created with primary database "db112a".
DGMGRL> add database db112a_stb as connect identifier is db112a_stb_ship maintained as physical;
Database "db112a_stb" added.
13. Set the Static Connect Identifier in the Broker configuration for each instance at each site.
DGMGRL> edit instance db112a1 on database db112a set PROPERTY StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=grid1vm1-dg-vip.au.oracle.com)(PORT=1555))(CONNECT_DATA=(SERVICE_NAME=db112a_dgmgrl.au.oracle.com)(INSTANCE_NAME=db112a1)(SERVER=DEDICATED)))';
DGMGRL> edit instance db112a2 on database db112a set PROPERTY StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=grid1vm2-dg-vip.au.oracle.com)(PORT=1555))(CONNECT_DATA=(SERVICE_NAME=db112a_dgmgrl.au.oracle.com)(INSTANCE_NAME=db112a2)(SERVER=DEDICATED)))';
DGMGRL> edit instance db112a1 on database db112a_stb set PROPERTY StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=grid2vm1-dg-vip.au.oracle.com)(PORT=1555))(CONNECT_DATA=(SERVICE_NAME=db112a_stb_dgmgrl.au.oracle.com)(INSTANCE_NAME=db112a1)(SERVER=DEDICATED)))';
DGMGRL> edit instance db112a2 on database db112a_stb set PROPERTY StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=grid2vm2-dg-vip.au.oracle.com)(PORT=1555))(CONNECT_DATA=(SERVICE_NAME=db112a_stb_dgmgrl.au.oracle.com)(INSTANCE_NAME=db112a2)(SERVER=DEDICATED)))';
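Every StaticConnectIdentifier above follows the same pattern, varying only in the Data Guard VIP host, the static service name, and the instance name. A small sketch (a hypothetical helper, reproducing the descriptor layout used in this note) that assembles the string, helpful for avoiding typos in these long one-liners:

```python
def static_connect_identifier(host: str, service: str,
                              instance: str, port: int = 1555) -> str:
    """Assemble a StaticConnectIdentifier descriptor for one instance."""
    return (f"(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST={host})(PORT={port}))"
            f"(CONNECT_DATA=(SERVICE_NAME={service})"
            f"(INSTANCE_NAME={instance})(SERVER=DEDICATED)))")

# Primary site, instance 1, over the dedicated Data Guard VIP
ident = static_connect_identifier(
    "grid1vm1-dg-vip.au.oracle.com",
    "db112a_dgmgrl.au.oracle.com",
    "db112a1",
)
print(ident)
```

Note that the HOST is always the -dg-vip address on the log-shipping network and the SERVICE_NAME is the static _dgmgrl service registered in listener_dg, never the public VIP or SCAN.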
14. Enable the Broker configuration
DGMGRL> enable configuration;
Enabling the configuration takes a little time, as the broker must create the broker configuration files and distribute them to the primary and standby sites.
If the TNS aliases, connect strings, and instance parameters are set correctly at all sites, the configuration should enable without error.
DGMGRL> show configuration
Configuration - dg112
Protection Mode: MaxPerformance
Databases:
db112a - Primary database
db112a_stb - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
DGMGRL> show database verbose db112a
Database - db112a
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
db112a1
db112a2
Properties:
DGConnectIdentifier = 'db112a_prm_ship'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'MANUAL'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
SidName(*)
StaticConnectIdentifier(*)
StandbyArchiveLocation(*)
AlternateLocation(*)
LogArchiveTrace(*)
LogArchiveFormat(*)
TopWaitEvents(*)
(*) - Please check specific instance for the property value
Database Status:
SUCCESS
DGMGRL> show database verbose db112a_stb
Database - db112a_stb
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds
Apply Lag: 0 seconds
Real Time Query: OFF
Instance(s):
db112a1 (apply instance)
db112a2
Properties:
DGConnectIdentifier = 'db112a_stb_ship'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'MANUAL'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
SidName(*)
StaticConnectIdentifier(*)
StandbyArchiveLocation(*)
AlternateLocation(*)
LogArchiveTrace(*)
LogArchiveFormat(*)
TopWaitEvents(*)
(*) - Please check specific instance for the property value
Database Status:
SUCCESS