Creating a Standby using RMAN Duplicate (RAC or Non-RAC) (Doc ID 1617946.1)
APPLIES TO: Oracle Database - Enterprise Edition - Version 11.2.0.x to 12.1.0.x [Release 11.2 to 12.1]
Information in this document applies to any platform.
Maximum Availability Architecture
The Maximum Availability Architecture (MAA) defines Oracle’s most comprehensive architecture for reducing downtime for scheduled outages as well as preventing, detecting and recovering from unscheduled outages. Real Application Clusters (RAC) and Oracle Data Guard are integral components of the Database MAA reference architectures and solutions.
More detailed information, such as a discussion of the purpose of MAA and the benefits it provides, can be found on the Oracle Technology Network (OTN) at http://www.oracle.com/technetwork/database/features/availability/maa-096107.html
Purpose of this Document
The purpose of this document is to provide a step-by-step guide for creating a standby database in an 11.2 or 12c environment integrating the MAA Data Guard configuration best practices. This paper assumes that the following conditions exist:
This document applies to both Oracle Server versions 11.2.0.x and 12.1.0.x or higher. There are some important differences in how the DUPLICATE FOR STANDBY FROM ACTIVE DATABASE functions between the two releases which are noted below:
All of the examples illustrated in this document use the following naming: the primary database is chicago (instances chicago1 and chicago2) and the standby database is boston (instances boston1 and boston2).
To expedite active database RMAN duplicate commands, use multiple RMAN channels spread across the primary nodes. This allows the data transfer to use more network interfaces.
Parallelize backups across all primary database nodes leveraging disk and network bandwidth as available.
MAA best practices recommend using a minimum of 8 channels, or 2 channels per node.
Make the following RMAN configuration changes at the Primary. Our example uses 16 preconfigured channels for RMAN to use during the standby creation step. These 16 channels will suffice for 2-4 node RAC clusters but will need to be increased if your Production RAC cluster is larger than 4 nodes.
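As a sketch (the degree of parallelism shown is the 16 channels used in this example; adjust for your cluster size), the persistent RMAN configuration might look like:

```
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 16;
```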
You will use Oracle Net service load balancing to distribute these RMAN channels evenly among the allocated instances.
1) Create a service to run RMAN on allocated instances in the cluster:
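A minimal sketch using srvctl; the service name chicago_dup is a placeholder, and the database/instance names follow this document's chicago naming:

```
srvctl add service -d chicago -s chicago_dup -r chicago1,chicago2
srvctl start service -d chicago -s chicago_dup
```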
2) When running RMAN, use the service name in the connect string for the “target” parameter:
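For example (the chicago_dup service and boston_dup auxiliary alias are the placeholder names used in this paper; RMAN will prompt for the passwords):

```
rman target sys@chicago_dup auxiliary sys@boston_dup
```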
Steps to Duplicate the Primary Database
The following are the steps used to create the Data Guard Standby database:
1. Create standby redo logs on the primary database that are the same size as the online redo logs. This will ensure that standby redo log files are automatically created at the standby during the RMAN duplication process and that standby redo log files are available on the current Primary after a role transition occurs in the future and the Primary becomes a standby.
Oracle recommends having the same number of standby redo log files as there are online redo log files plus one additional standby redo log for each thread. Our primary database has 6 online redo log files, 3 per thread. We therefore need 4 Standby Redo Log files per thread (Primary 3 plus 1) for a total of 8 Standby Redo Log files. For example:
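A sketch of the commands, assuming 1 GB online redo logs (match your actual online redo log size):

```
-- Run on the primary; repeat each statement 4 times, once per standby redo log
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 SIZE 1G;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 SIZE 1G;
```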
2. Database force logging is recommended as an MAA Data Guard best practice. To enable force logging, use the following command on the primary:
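For example:

```
SQL> ALTER DATABASE FORCE LOGGING;
```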
3. For the RMAN duplication process it is necessary to set up a temporary listener, which will be removed later.
In the standby database home, create and start a listener that offers a static SID entry for the standby database with the ORACLE_SID for the standby (boston1) and ORACLE_HOME for the standby.
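A sketch of such a listener.ora in the standby database home (host name, port and ORACLE_HOME path are assumptions for this example):

```
LISTENER_BOSTON1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = boston-host1)(PORT = 1521)))

SID_LIST_LISTENER_BOSTON1 =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = boston1)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)))
```

Then start it with `lsnrctl start LISTENER_BOSTON1` from the standby database home.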
4. In the database home on the primary node, create an Oracle Net alias to connect to the listener created in the above step.
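For example, a tnsnames.ora entry on the primary node (the alias name boston_dup, host and port are placeholders; note it uses the SID, matching the static entry above):

```
boston_dup =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = boston-host1)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SID = boston1)))
```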
5. For active database duplication, you must create a password file for the auxiliary instance and establish Oracle Net connectivity. This is a temporary password file as it will be overwritten during the duplicate operation.
Create a password file on the standby host in the $ORACLE_HOME/dbs directory with the same SYS (or SYSDG, if you use that username in 12c) password as the Primary database. For example:
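A sketch using orapwd (supply the same password as on the primary when prompted):

```
orapwd file=$ORACLE_HOME/dbs/orapwboston1
```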
6. On the standby host in the $ORACLE_HOME/dbs directory create a pfile (initboston1.ora) with the following parameters. It is recommended to set sga_target to the same value as on the primary database. For example:
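A minimal sketch of initboston1.ora (the sga_target value is illustrative; match your primary):

```
db_name=chicago
db_unique_name=boston
sga_target=10g
```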
7. On all standby hosts create the audit directory for the boston database:
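For example (the Oracle base path is an assumption; use your audit_file_dest location):

```
mkdir -p /u01/app/oracle/admin/boston/adump
```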
8. On all primary hosts create an Oracle Net alias to reach the boston database on the standby nodes. Make sure that all hosts have both a chicago and boston Oracle Net alias and that all the aliases reference the scan listener and not the node vip. Also, if local_listener is set to an alias on the primary create a corresponding entry on the standby side that points to the local listener on that system. For example:
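A sketch of the boston alias referencing the standby SCAN (the SCAN name is a placeholder; create a corresponding chicago alias the same way):

```
boston =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = boston-scan)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = boston)))
```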
9. On the standby host set the SID to the standby SID (boston1) and startup nomount the standby/auxiliary instance.
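For example:

```
export ORACLE_SID=boston1
sqlplus / as sysdba
SQL> startup nomount
```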
10. If the cluster_interconnects parameter is set on the primary it will be necessary to temporarily unset it in the SPFILE only. It will be returned to the original values in Step 12.
The values of cluster_interconnects for each instance should be noted before resetting the SPFILE as they will be replaced after standby has been created. For example, execute the following commands and note down the values for each instance in the Primary cluster.
Now reset the cluster_interconnects parameter temporarily in the SPFILE only:
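A sketch of both commands (the SID values are this document's chicago instance names):

```
-- First note the current values for each instance
SQL> SELECT inst_id, value FROM gv$parameter WHERE name = 'cluster_interconnects';

-- Then clear the parameter in the SPFILE only
SQL> ALTER SYSTEM RESET cluster_interconnects SCOPE=SPFILE SID='chicago1';
SQL> ALTER SYSTEM RESET cluster_interconnects SCOPE=SPFILE SID='chicago2';
```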
11. On the primary host run an RMAN script that duplicates the primary database using the “duplicate target database for standby from active database” command. Note that the contents of the duplicate command will vary depending on your environment. The following example covers the majority of use cases. For any case not covered by the examples please consult chapter 24 of the Oracle® Database Backup and Recovery User's Guide for complete information.
The first example illustrates how to duplicate between two systems where the ASM diskgroup names are the same:
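A sketch of such a script, assuming the chicago/boston naming and that the preconfigured channels are used (the SET clauses shown are common Data Guard settings, not an exhaustive list; adjust for your environment):

```
rman target sys@chicago_dup auxiliary sys@boston_dup

RMAN> duplicate target database for standby from active database
        spfile
          parameter_value_convert 'chicago','boston'
          set db_unique_name='boston'
          set fal_server='chicago'
          set log_archive_config='dg_config=(chicago,boston)'
          set log_archive_dest_2=''
        nofilenamecheck;
```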
The following example illustrates how to duplicate between two systems where the source is on a file system and the target is using an ASM diskgroup. The same process will work where both databases use ASM but they have different Disk Group names.
In our example we use the file system names for the Primary but if this was ASM to ASM with different disk group names you would substitute the Primary Disk group name.
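A sketch for the file system to ASM case (the /u01/oradata/chicago/ path and +DATA/+RECO diskgroup names are assumptions; for ASM to ASM with different diskgroup names, substitute the primary diskgroup name for the file system path):

```
RMAN> duplicate target database for standby from active database
        spfile
          parameter_value_convert 'chicago','boston'
          set db_unique_name='boston'
          set db_create_file_dest='+DATA'
          set db_recovery_file_dest='+RECO'
          set db_file_name_convert='/u01/oradata/chicago/','+DATA'
          set log_file_name_convert='/u01/oradata/chicago/','+DATA'
        nofilenamecheck;
```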
12. Reset CLUSTER_INTERCONNECTS to the original values noted in Step 10 in the SPFILE with the following commands. Do NOT restart any primary instance.
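For example (the interconnect addresses shown are placeholders; use the values you noted in Step 10):

```
SQL> ALTER SYSTEM SET cluster_interconnects='10.0.0.1' SCOPE=SPFILE SID='chicago1';
SQL> ALTER SYSTEM SET cluster_interconnects='10.0.0.2' SCOPE=SPFILE SID='chicago2';
```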
13. Stop and remove the listener created in step 3. Also remove the TNS entry created in Step 4.
14. If the standby is version 11.2, no action is required.
If the standby database is version 12.1 or higher, copy the password file into ASM. For example:
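A sketch using asmcmd pwcopy (the ASM path and db_unique_name are assumptions for this example):

```
asmcmd pwcopy --dbuniquename boston $ORACLE_HOME/dbs/orapwboston1 +DATA/boston/orapwboston
```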
15. Create the standby spfile in ASM.
Create a spfile in +DATA for the standby database:
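A sketch of the commands, run from the standby instance (the ASM path is an assumption):

```
SQL> CREATE PFILE='/tmp/boston.ora' FROM SPFILE;
SQL> CREATE SPFILE='+DATA/boston/spfileboston.ora' FROM PFILE='/tmp/boston.ora';
```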
16. On standby host create an initboston1.ora file that points to the spfile created in the above step.
17. Restart the standby instance.
18. Execute the following steps now only if the standby database is single instance (SI). If the standby is a RAC configuration, this step will be executed later as part of the RAC configuration.
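For a single instance standby, redo apply can be started with real-time apply as follows:

```
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
```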
Complete Standby RAC Configuration
After completing the steps in the above section the standby database has been created on the standby host. When the primary is RAC configuration and you want the standby to be RAC configuration, additional configuration steps need to be completed on the standby. These remaining configuration steps are covered below.
1. Create a temporary pfile from spfile on the standby.
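For example (the /tmp path is illustrative):

```
SQL> CREATE PFILE='/tmp/boston_temp.ora' FROM SPFILE;
```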
2. Modify the parameters on the Standby to update the instance specific RAC parameters. For example:
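A sketch of the instance-specific entries to add to the temporary pfile (undo tablespace names and instance/thread mappings are assumptions; match your primary's layout):

```
*.cluster_database=true
boston1.instance_number=1
boston2.instance_number=2
boston1.thread=1
boston2.thread=2
boston1.undo_tablespace='UNDOTBS1'
boston2.undo_tablespace='UNDOTBS2'
```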
3. Create a spfile in +DATA for the standby database:
4. If the standby database is version 11.2, you must copy the password file to the $ORACLE_HOME/dbs directory on all the other standby hosts and name it per the standby SID on each host. For example, on node2 you would name it $ORACLE_HOME/dbs/orapwboston2.
If the standby database is version 12.1 then no action is needed. The password was copied to ASM in Step 14, which is visible to all nodes across the cluster.
5. On all standby hosts create an initboston<SID Number>.ora file that points to the spfile created in the above step. For example:
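For example, on the second standby host (the ASM path is the one assumed in the earlier spfile step):

```
echo "spfile='+DATA/boston/spfileboston.ora'" > $ORACLE_HOME/dbs/initboston2.ora
```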
6. Restart all standby instances. Register the database with CRS.
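A sketch of the CRS registration, assuming 11.2 srvctl syntax and the host names used in this example:

```
srvctl add database -d boston -o $ORACLE_HOME -r PHYSICAL_STANDBY -s MOUNT
srvctl add instance -d boston -i boston1 -n boston-host1
srvctl add instance -d boston -i boston2 -n boston-host2
srvctl start database -d boston
```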
Creating a Data Guard Broker configuration:
The following section describes the basic steps on how to create a Data Guard broker configuration. For complete information on the Data Guard broker consult the Oracle® Data Guard Broker guide.
1. On both the primary and standby configure the Data Guard broker metadata files and enable the broker:
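A sketch for the primary (the ASM paths are assumptions; run the equivalent commands on the standby with boston paths):

```
SQL> ALTER SYSTEM SET dg_broker_config_file1='+DATA/chicago/dr1chicago.dat' SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET dg_broker_config_file2='+DATA/chicago/dr2chicago.dat' SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET dg_broker_start=TRUE SCOPE=BOTH SID='*';
```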
2. Add static SID entries into the local node listener.ora located in the grid infrastructure home on all hosts in the configuration. Please refer to <Note 1387859.1> for instructions on how to complete this:
Note that each static entry references the SID for each node, boston1, boston2 or chicago1, chicago2 for an ADMIN managed RAC cluster. If the RAC is a Policy Managed RAC please refer to the note above for specific instructions on static entries in 11g Release 2.
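A sketch of such a static entry on the first standby host (the ORACLE_HOME path is an assumption; the GLOBAL_DBNAME uses the db_unique_name with the _DGMGRL suffix the broker expects):

```
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = boston_DGMGRL)
      (SID_NAME = boston1)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)))
```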
3. Bounce or reload all the listeners where the above modification was made (primary and standby nodes):
4. On a primary host connect with dgmgrl and create the configuration:
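A sketch of the DGMGRL session (the configuration name dg_config is a placeholder):

```
dgmgrl sys@chicago
DGMGRL> create configuration 'dg_config' as primary database is 'chicago' connect identifier is chicago;
DGMGRL> add database 'boston' as connect identifier is boston maintained as physical;
DGMGRL> enable configuration;
```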
5. Verify that the configuration created successfully by using the show configuration command:
6. Flashback database is required to reinstate a failed primary after a failover role transition. Optionally enable flashback on both the primary and standby:
To enable flashback database on the standby the redo apply process must first be stopped. Once flashback has been enabled redo apply can be restarted:
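For example:

```
-- On the primary:
SQL> ALTER DATABASE FLASHBACK ON;

-- On the standby: stop redo apply, enable flashback, then restart apply
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE FLASHBACK ON;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
```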
The above steps can also be accomplished using the broker with the following commands. (you must be connected to the standby database via DGMGRL):
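A sketch of the broker equivalent (while connected to the standby via DGMGRL; the ALTER DATABASE is still issued in SQL*Plus on the standby):

```
DGMGRL> edit database 'boston' set state='APPLY-OFF';
SQL>    ALTER DATABASE FLASHBACK ON;
DGMGRL> edit database 'boston' set state='APPLY-ON';
```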
Application Service Considerations with Data Guard:
In an Oracle Data Guard configuration, applications must use services to connect to the databases, because more than one database, each with a different SID, could be the Primary database. With Oracle Active Data Guard, any physical standby database could also be made available to provide read-only access to the data. To enable client connections, create services for the workload that performs updates, started only on the database currently in the Primary role, as well as services for read-only reports and queries that are present only on physical standbys open in Active Data Guard.
The following is an example of how to configure a Read/Write service (workload) and a Read/Only service (reports) on the Primary and Standby database.
If the primary / standby is configured with Oracle Restart, RAC One Node or RAC, the services are created using srvctl. (Note: for JDBC applications, the server-side TAF attributes should be set to NONE.) If you are creating services for a multitenant database, use service commands that include the -pdb option.
For example, here we show adding two services to each database in the configuration: WORKLOAD for Read/Write connections and REPORTS for Read/Only connections. WORKLOAD will only run on the database that is running in the Primary role and REPORTS will only run on the database that is in the PHYSICAL STANDBY role.
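A sketch of the srvctl commands, assuming 11.2 syntax where -l restricts the service to the given database role:

```
srvctl add service -d chicago -s workload -r chicago1,chicago2 -l PRIMARY
srvctl add service -d chicago -s reports  -r chicago1,chicago2 -l PHYSICAL_STANDBY
srvctl add service -d boston  -s workload -r boston1,boston2   -l PRIMARY
srvctl add service -d boston  -s reports  -r boston1,boston2   -l PHYSICAL_STANDBY
```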
2. Create Oracle Net aliases that reference the above services for the client applications to use.
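A sketch of a client alias for the WORKLOAD service listing both SCAN addresses, so clients can reach the service wherever it is currently running (SCAN names are placeholders; create a similar alias for REPORTS):

```
workload =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = chicago-scan)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = boston-scan)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = workload)))
```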