Patch 12419353 - 11.2.0.2.3 GI Patch Set Update (Includes Database PSU 11.2.0.2.3)

Released: July 19, 2011, Last Updated: August 9, 2011

This document is accurate at the time of release. For any changes and additional information regarding GI PSU 11.2.0.2.3, see these related documents that are available at My Oracle Support (http://support.oracle.com/):

  • Note 854428.1 Patch Set Updates for Oracle Products

  • Note 1272288.1 11.2.0.2.X Grid Infrastructure Bundle/PSU Known Issues

This document includes the following sections:

1 Patch Information

GI Patch Set Update (PSU) patches are cumulative. That is, the content of all previous PSUs (if any) is included in the latest GI PSU 11.2.0.2.3 patch.

Table 1 describes installation types and security content. For each installation type, it indicates the most recent PSU that includes new security fixes pertinent to that installation type. If there are no security fixes to be applied to an installation type, then "None" is indicated. If a specific PSU is listed, then apply that or any later PSU patch to be current with security fixes.

Table 1 Installation Types and Security Content

Installation Type              Latest PSU with Security Fixes
-----------------------------  ------------------------------
Server homes                   11.2.0.2.3 GI PSU
Client-Only Installations      None
Instant Client Installations   None

(The Instant Client installation is not the same as the client-only installation. For additional information about Instant Client installations, see Oracle Database Concepts.)


2 Patch Installation and Deinstallation

This section includes the following sections:

2.1 Patch Installation Prerequisites

You must satisfy the conditions in the following sections before applying the patch:

2.1.1 OPatch Utility Information

You must use OPatch utility version 11.2.0.1.5 or later to apply this patch. Oracle recommends that you use the latest released OPatch for 11.2 releases, which is available for download from My Oracle Support patch 6880880 by selecting the ARU link for the 11.2.0.0.0 release. It is recommended that you download the OPatch utility and the GI PSU 11.2.0.2.3 patch to a shared location so that they can be accessed from any node in the cluster when the patch is applied on each node.


Note:

When patching the GI Home, a shared location on ACFS only needs to be unmounted on the node where the GI Home is being patched.

The new OPatch utility must be installed in all the Oracle RAC database homes and the GI home that are being patched. To update OPatch, use the following instructions.


  1. Download the OPatch utility to a temporary directory.

  2. For each Oracle RAC database home and the GI home that is being patched, run the following commands as the home owner to extract the OPatch utility.

    unzip <OPATCH-ZIP> -d <ORACLE_HOME>
    <ORACLE_HOME>/OPatch/opatch version
    

The version output of the previous command should be 11.2.0.1.5 or later.
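The update steps above can be outlined as a small dry-run loop. The zip file name and home paths below are hypothetical placeholders for your environment; the loop only echoes the commands so you can review them before running each as the respective home owner.

```shell
# Hypothetical paths; substitute your own. Commands are echoed, not executed.
OPATCH_ZIP=/u01/oracle/patches/p6880880_112000_platform.zip
HOMES="/u01/app/11.2.0/grid /u01/app/oracle/product/11.2.0/dbhome_1"

for home in $HOMES; do
  echo "unzip -o $OPATCH_ZIP -d $home"   # extract OPatch into the home
  echo "$home/OPatch/opatch version"     # verify version is 11.2.0.1.5 or later
done
```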

For information about OPatch documentation, including any known issues, see My Oracle Support Note 293369.1 OPatch documentation list.

2.1.2 OCM Configuration

The OPatch utility prompts for your OCM (Oracle Configuration Manager) response file when it is run. Enter the complete path of the OCM response file if you have already created one in your environment.

If you do not have the OCM response file (ocm.rsp) and you wish to use one during patch application, run the following command to create it.

As the Grid home owner execute:

%<ORACLE_HOME>/OPatch/ocm/bin/emocmrsp

You can also invoke opatch auto with the -ocmrf <OCM response file path> option to run opatch auto in silent mode.
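The two steps above can be sketched as follows. The Grid home and response-file paths are assumptions for illustration, and the commands are echoed rather than executed so the sequence can be reviewed first.

```shell
# Dry-run sketch: create the OCM response file once, then pass it to
# opatch auto for silent patching. All paths are hypothetical.
GRID_HOME=/u01/app/11.2.0/grid
OCM_RSP=/u01/oracle/patches/ocm.rsp

echo "cd /u01/oracle/patches && $GRID_HOME/OPatch/ocm/bin/emocmrsp"  # writes ocm.rsp, as Grid home owner
echo "opatch auto /u01/oracle/patches -ocmrf $OCM_RSP"               # silent mode, as root
```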

2.1.3 Validation of Oracle Inventory

Before beginning patch application, check the consistency of inventory information for GI home and each database home to be patched. Run the following command as respective Oracle home owner to check the consistency.

%<ORACLE_HOME>/OPatch/opatch lsinventory -detail -oh <ORACLE_HOME>

If this command succeeds, it lists the Oracle components that are installed in the home. The command will fail if the Oracle Inventory is not set up properly. If this happens, contact Oracle Support Services for assistance.
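The inventory check above can be repeated over every home that will be patched; a dry-run loop with hypothetical home paths might look like this (each echoed command is run as that home's owner):

```shell
# Dry-run sketch: check inventory consistency for each home to be patched.
for home in /u01/app/11.2.0/grid /u01/app/oracle/product/11.2.0/dbhome_1; do
  echo "$home/OPatch/opatch lsinventory -detail -oh $home"
done
```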

2.1.4 Downloading OPatch

If you have not already done so, download OPatch 11.2.0.1.5 or later, as explained in Section 2.1.1, "OPatch Utility Information".

2.1.5 Unzipping the GI PSU 11.2.0.2.3 Patch

The patch application requires you to explicitly run the 'opatch auto' command on each node of the Oracle clusterware. It is therefore recommended that you download and unzip the GI PSU 11.2.0.2.3 patch in a shared location so that it can be accessed from any node in the cluster, and then execute the unzip command as the Grid home owner.


Note:

Do not unzip the GI PSU 11.2.0.2.3 patch in the top level /tmp directory.

The unzipped patch location must have read permission for the ORA_INSTALL group in order to patch Oracle homes owned by different owners. The ORA_INSTALL group is the primary group of the user who owns the GI home, or the group owner of the Oracle central inventory.

(In this readme, the downloaded patch location directory is referred to as <UNZIPPED_PATCH_LOCATION>.)

%cd <UNZIPPED_PATCH_LOCATION>

Unzip the GI PSU 11.2.0.2.3 patch in a shared location. As the Grid home owner execute:

%unzip p12419353_112020_Linux.zip

For example, if <UNZIPPED_PATCH_LOCATION> in your environment is /u01/oracle/patches, enter the following commands:

%cd /u01/oracle/patches

%unzip p12419353_112020_Linux.zip
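The unzip and group-permission requirements above can be sketched as a dry run. PATCH_TOP and the group name oinstall are assumptions (your ORA_INSTALL group may differ); the commands are echoed so you can adapt them before running.

```shell
# Dry-run sketch: unzip in a shared location and give the install group
# read access. PATCH_TOP and the group name 'oinstall' are hypothetical.
PATCH_TOP=/u01/oracle/patches
echo "cd $PATCH_TOP && unzip p12419353_112020_Linux.zip"  # as Grid home owner
echo "chgrp -R oinstall $PATCH_TOP"                        # ORA_INSTALL group
echo "chmod -R g+r $PATCH_TOP"                             # readable by all home owners
```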

2.2 OPatch Automation for GI

The OPatch utility automates patch application for the Oracle Grid Infrastructure (GI) home and the Oracle RAC database homes. It operates by querying existing configurations and automating the steps required to patch each Oracle RAC database home of the same version and the GI home.

The utility must be executed by an operating system (OS) user with root privileges (usually the user root), and it must be executed on each node in the cluster if the GI home or Oracle RAC database home is on non-shared storage. The utility must not be run in parallel on the cluster nodes.

Depending on the command-line options specified, one invocation of OPatch can patch the GI home, one or more Oracle RAC database homes, or both the GI and Oracle RAC database homes of the same Oracle release version. You can also roll back the patch with the same selectivity.

Add the directory containing the opatch to the $PATH environment variable. For example:

export PATH=$PATH:<GI_HOME path>/OPatch

To patch GI home and all Oracle RAC database homes of the same version:

#opatch auto <UNZIPPED_PATCH_LOCATION>

To patch only the GI home:

#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI_HOME>

To patch one or more Oracle RAC database homes:

#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <path to RAC database 1 home>,<path to RAC database 2 home>

To roll back the patch from the GI home and each Oracle RAC database home:

#opatch auto <UNZIPPED_PATCH_LOCATION> -rollback

To roll back the patch from the GI home:

#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <path to GI home> -rollback

To roll back the patch from the Oracle RAC database home:

#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <path to RAC database home> -rollback

For more information about opatch auto, see My Oracle Support Note 293369.1 OPatch documentation list.

For detailed patch installation instructions, see Section 2.4, "Patch Installation".

2.3 One-off Patch Conflict Detection and Resolution

For an introduction to the PSU one-off patch concepts, see "Patch Set Updates Patch Conflict Resolution" in My Oracle Support Note 854428.1 Patch Set Updates for Oracle Products.

The fastest and easiest way to determine whether you have one-off patches in the Oracle home that conflict with the PSU, and to get the necessary conflict resolution patches, is to use the Patch Recommendations and Patch Plans features on the Patches & Updates tab in My Oracle Support. These features work in conjunction with the My Oracle Support Configuration Manager. Recorded training sessions on these features can be found in Note 603505.1.

However, if you are not using My Oracle Support Patch Plans, follow these steps:

  1. Determine whether any currently installed one-off patches conflict with the PSU patch as follows:

    From the directory where you unzipped the patch (see Section 2.1.5), run the following command:

    opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./12419331
    
  2. The report will indicate the patches that conflict with PSU 12419331 and the patches for which PSU 12419331 is a superset.

    Note that Oracle proactively provides PSU 11.2.0.2.3 one-off patches for common conflicts.

  3. Use My Oracle Support Note 1061295.1 Patch Set Updates - One-off Patch Conflict Resolution to determine, for each conflicting patch, whether a conflict resolution patch is already available, and if you need to request a new conflict resolution patch or if the conflict may be ignored.

  4. When all the one-off patches that you have requested are available at My Oracle Support, proceed with Section 2.4, "Patch Installation".
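The conflict check in Step 1 can be repeated for each home to be patched. The sketch below echoes the commands with hypothetical home paths; PSU 12419331 is the database PSU component referenced above.

```shell
# Dry-run sketch: run the conflict check against each home to be patched.
PATCH_TOP=/u01/oracle/patches   # hypothetical <UNZIPPED_PATCH_LOCATION>
for home in /u01/app/11.2.0/grid /u01/app/oracle/product/11.2.0/dbhome_1; do
  echo "cd $PATCH_TOP && $home/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./12419331 -oh $home"
done
```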

2.4 Patch Installation

This section will guide you through the steps required to apply this GI PSU 11.2.0.2.3 patch to RAC database homes, the Grid home, or all relevant homes on the cluster.


Note:

When patching the GI Home, a shared location on ACFS only needs to be unmounted on the node where the GI Home is being patched.

The patch instructions will differ based on the configuration of the Grid infrastructure and the Oracle RAC database homes.

The patch installations will also differ based on following aspects of your existing configuration:

  • GI home is shared or non-shared

  • The Oracle RAC database home is shared or non-shared

  • The Oracle RAC database home software is on ACFS or non-ACFS file systems.

  • Patch all the Oracle RAC database and the GI homes together, or patch each home individually

You must choose the most appropriate case that is suitable based on the existing configurations and your patch intention.


Note:

You must stop the EM agent processes running from the database home before patching the Oracle RAC database home or the GI home. Execute the following command on the node to be patched.

As the Oracle RAC database home owner execute:

%<ORACLE_HOME>/bin/emctl stop dbconsole

Case 1: Patching Oracle RAC Database Homes and the GI Home Together

Follow the instructions in this section if you would like to patch all the Oracle RAC database homes of release version 11.2.0.2 and the 11.2.0.2 GI home.

Case 1.1: GI Home Is Shared

Follow these instructions in this section if the GI home is shared.


Note:

Patching a shared GI home requires shutting down the Oracle GI stack on all the remote nodes in the cluster. This also means you must stop all Oracle RAC databases that depend on the GI stack, on ASM for data files, or on an ACFS file system.

  1. Make sure to stop the Oracle databases running from the Oracle RAC database homes.

    As Oracle database home owner:

    <ORACLE_HOME>/bin/srvctl stop database -d <db-unique-name>
    

    ORACLE_HOME: Complete path of the Oracle database home.

  2. Make sure the ACFS file systems are unmounted on all the nodes. Use instructions in Section 2.8 for unmounting ACFS file systems.

  3. As root user, execute the following on all the remote nodes to stop the CRS stack:

    <GI_HOME>/bin/crsctl stop crs
    
  4. Patch the GI home.

    On local node, as root user, execute the following command:

    #opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI_HOME>
    
  5. Start the Oracle GI stack on all the remote nodes.

    As root user execute:

    #<GI_HOME>/bin/crsctl start crs
    
  6. Mount ACFS file systems. See Section 2.9.

  7. For each Oracle RAC database home, execute the following command on each node if the database home software is not shared.

    For each database home execute the following as root user:

    #opatch auto <UNZIPPED_PATCH_LOCATION> -oh <ORACLE_HOME>
    

    ORACLE_HOME: Complete path of Oracle database home.


    Note:

    The previous command should be executed only once on any one node if the database home is shared.

  8. Restart the Oracle databases that you have previously stopped in Step 1.

    As the database home owner execute:

    <ORACLE_HOME>/bin/srvctl start database -d <db-unique-name>
    
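The shared-GI-home sequence above can be summarized as a per-step command outline. All names (homes, patch location, database name) are hypothetical, and the commands are only echoed; each one is run by the user noted in the corresponding step.

```shell
# Dry-run outline of Case 1.1 (shared GI home). Names are hypothetical.
GRID_HOME=/u01/app/11.2.0/grid
DB_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
PATCH=/u01/oracle/patches
DB=orcl

echo "$DB_HOME/bin/srvctl stop database -d $DB"    # 1: as DB home owner
echo "unmount ACFS on all nodes (Section 2.8)"     # 2
echo "$GRID_HOME/bin/crsctl stop crs"              # 3: as root, on remote nodes
echo "opatch auto $PATCH -oh $GRID_HOME"           # 4: as root, on local node
echo "$GRID_HOME/bin/crsctl start crs"             # 5: as root, on remote nodes
echo "mount ACFS file systems (Section 2.9)"       # 6
echo "opatch auto $PATCH -oh $DB_HOME"             # 7: as root, on each node
echo "$DB_HOME/bin/srvctl start database -d $DB"   # 8: as DB home owner
```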

Case 1.2: GI Home Is Not Shared

Case 1.2.1: ACFS File System Is Not Configured and Database Homes Are Not Shared

Follow these instructions in this section if the GI home is not shared and none of the Oracle database homes is shared.

As root user execute the following command on each node of the cluster:

#opatch auto <UNZIPPED_PATCH_LOCATION>

Case 1.2.2A: Patching the GI Home and Database Home Together, the GI Home Is Not Shared, the Database Home Is Shared on ACFS

  1. From the Oracle database home, make sure to stop the Oracle RAC databases running on all nodes.

    As the database home owner execute:

    <ORACLE_HOME>/bin/srvctl stop database –d <db-unique-name>
    
  2. On the 1st node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.

  3. On the 1st node, apply the patch to the GI Home using the opatch auto command.

    As root user, execute the following command:

    opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI_HOME>
    
  4. On the 1st node, remount ACFS file systems. See Section 2.9 for instructions.

  5. On the 1st node, apply the patch to the Database home using the opatch auto command. This operation will patch the Database home across the cluster given that it is a shared ACFS home.

    As root user, execute the following command:

    opatch auto <UNZIPPED_PATCH_LOCATION> -oh <DATABASE_HOME>
    
  6. On the 1st node only, restart the Oracle database which you have previously stopped in Step 1.

    As the database home owner execute:

    <ORACLE_HOME>/bin/srvctl start instance -d <db-unique-name> -n <nodename>
    
  7. On the 2nd (next) node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.

  8. On the 2nd node, apply the patch to GI Home using the opatch auto command.

    As root user, execute the following command:

    opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI_HOME>
    
  9. On the 2nd node, the opatch auto command run in Step 8 restarts the stack.

  10. On the 2nd node, remount ACFS file systems. See Section 2.9 for instructions.

  11. On the 2nd node only, restart the Oracle database which you have previously stopped in Step 1.

    As the database home owner execute:

    <ORACLE_HOME>/bin/srvctl start instance -d <db-unique-name> -n <nodename>
    
  12. Repeat Steps 7 through 11 for all remaining nodes of the cluster.

Case 1.2.2B: Patching the GI Home and the Database Home Together, the GI Home Is Not Shared, the Database Home Is Not Shared

For each node, perform the following steps:

  1. On the local node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.

  2. On the local node, apply the patch to the GI home and to the Database home.

    As root user, execute the following command:

    opatch auto <UNZIPPED_PATCH_LOCATION>
    

    This operation will patch both the CRS home and the Database home.

  3. The opatch auto command restarts both the stack and the database on the local node.

  4. Repeat Steps 1 through 3 for all remaining nodes of the cluster.
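The per-node pass in Case 1.2.2B can be sketched as a serial loop. Node names and the patch location are hypothetical, and commands are echoed only; in practice each pass runs on that node, not over ssh from one script.

```shell
# Dry-run sketch of Case 1.2.2B: one full pass per node, serially.
PATCH=/u01/oracle/patches   # shared patch location, hypothetical
for node in rac1 rac2; do
  echo "[$node] unmount ACFS file systems (Section 2.8)"
  echo "[$node] opatch auto $PATCH"   # as root: patches GI and DB homes, restarts stack
done
```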

Case 2: Patching Oracle RAC Database Homes

Use the following instructions if you prefer to patch the Oracle RAC database homes alone with this GI PSU 11.2.0.2.3 patch.

Case 2.1: Non-Shared Oracle RAC Database Homes

  1. Execute the following command on each node of the cluster.

    As root user execute:

    #opatch auto <UNZIPPED_PATCH_LOCATION> -oh <Comma separated Oracle home paths>
    

Case 2.2: Shared Oracle RAC Database Homes

  1. Make sure to stop the databases running from the Oracle RAC database homes that you would like to patch. Execute the following command to stop each database.

    As Oracle database home owner execute:

    <ORACLE_HOME>/bin/srvctl stop database -d <db_unique_name>
    
  2. As root user, execute the following command on the local node only.

    #opatch auto <UNZIPPED_PATCH_LOCATION> -oh <Comma separated Oracle home paths>
    
  3. Restart the Oracle databases that were previously stopped in Step 1. Execute the following command for each database.

    As Oracle database home owner execute:

    <ORACLE_HOME>/bin/srvctl start database -d <db_unique_name>
    
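The shared-database-home sequence above can be outlined as a dry run. The home path, patch location, and database name are hypothetical; commands are echoed for review.

```shell
# Dry-run outline of Case 2.2 (shared Oracle RAC database home).
DB_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
PATCH=/u01/oracle/patches
DB=orcl

echo "$DB_HOME/bin/srvctl stop database -d $DB"    # 1: as DB home owner, per database
echo "opatch auto $PATCH -oh $DB_HOME"             # 2: as root, local node only
echo "$DB_HOME/bin/srvctl start database -d $DB"   # 3: as DB home owner
```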

Case 3: Patching GI Home Alone

Use the following instructions if you prefer to patch the Oracle GI (Grid Infrastructure) home alone with this GI PSU 11.2.0.2.3 patch.

Case 3.1: Shared GI Home

Follow these instructions in this section if the GI home is shared.


Note:

Patching a shared GI home requires shutting down the Oracle GI stack on all the remote nodes in the cluster. This also means you must stop all Oracle RAC databases that depend on the GI stack, on ASM for data files, or on an ACFS file system for database software.

  1. Make sure to stop the Oracle databases running from the Oracle RAC database homes.

    As Oracle database home owner:

    <ORACLE_HOME>/bin/srvctl stop database -d <db-unique-name>
    
  2. Make sure the ACFS file systems are unmounted on all the nodes. Use instructions in Section 2.8 for unmounting ACFS file systems.

  3. As root user, execute the following on all the remote nodes to stop the CRS stack:

    <GI_HOME>/bin/crsctl stop crs
    
  4. Execute the following command on the local node.

    As root user execute:

    #opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI_HOME>
    
  5. Start the Oracle GI stack on all the remote nodes.

    As root user execute:

    #<GI_HOME>/bin/crsctl start crs
    
  6. Mount ACFS file systems. See Section 2.9.

  7. Restart the Oracle databases that you have previously stopped in Step 1.

    As the database home owner execute:

    <ORACLE_HOME>/bin/srvctl start database -d <db-unique-name>
    

Case 3.2: Non-Shared GI Home

If the GI home is not shared then use the following instructions to patch the home.

Case 3.2.1: ACFS File System Is Not Configured

Follow the instructions in this section if the GI home is not shared and none of the Oracle database homes uses an ACFS file system for its software files.

Execute the following on each node of the cluster.

As root user execute:

#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI Home>

Case 3.2.2: ACFS File System Is Configured

Repeat Steps 1 through 5 for each node in the cluster:

  1. From the Oracle database home, stop the Oracle RAC database running on that node.

    As the database home owner execute:

    <ORACLE_HOME>/bin/srvctl stop instance -d <db-unique-name> -n <node_name>
    
  2. Unmount all ACFS filesystems on this node using instructions in Section 2.8.

  3. Apply the patch to the GI home on that node using the opatch auto command.

    Execute the following command on that node in the cluster.

    As root user execute:

    #opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI_HOME>
    
  4. Remount ACFS file systems on that node. See Section 2.9 for instructions.

  5. Restart the Oracle database on that node that you have previously stopped in Step 1.

    As the database home owner execute:

    <ORACLE_HOME>/bin/srvctl start instance -d <db-unique-name> -n <node_name>
    
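The node-by-node sequence of Case 3.2.2 can be sketched as a loop. Node names, home paths, and the database name are hypothetical; the commands are echoed only, and each pass is performed on the node in question.

```shell
# Dry-run sketch of Case 3.2.2: patch the GI home node by node while
# ACFS is unmounted on that node. All names are hypothetical.
GRID_HOME=/u01/app/11.2.0/grid
PATCH=/u01/oracle/patches
for node in rac1 rac2; do
  echo "[$node] srvctl stop instance -d orcl -n $node"   # 1: as DB home owner
  echo "[$node] unmount ACFS (Section 2.8)"              # 2
  echo "[$node] opatch auto $PATCH -oh $GRID_HOME"       # 3: as root
  echo "[$node] remount ACFS (Section 2.9)"              # 4
  echo "[$node] srvctl start instance -d orcl -n $node"  # 5: as DB home owner
done
```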

Case 4: Patching Oracle Restart Home

You must keep the Oracle Restart stack up and running when you are patching. Use the following instructions to patch Oracle Restart home.

As root user execute:

#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <Oracle-Restart-home>

Case 5: Patching a Software Only GI Home Installation

  1. Apply the CRS patch.

    As the GI home owner execute:

    $<GI_HOME>/OPatch/opatch napply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/12419353
    

    As the GI home owner execute:

    $<GI_HOME>/OPatch/opatch napply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/12419331
    

Case 6: Patching a Software Only Oracle RAC Home Installation

  1. Run the pre script for the DB component of the patch.

    As the database home owner execute:

    $<UNZIPPED_PATCH_LOCATION>/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome <ORACLE_HOME>
    
  2. Apply the DB patch.

    As the database home owner execute:

    $<ORACLE_HOME>/OPatch/opatch napply -oh <ORACLE_HOME> -local <UNZIPPED_PATCH_LOCATION>/12419353/custom/server/12419353
    $<ORACLE_HOME>/OPatch/opatch napply -oh <ORACLE_HOME> -local <UNZIPPED_PATCH_LOCATION>/12419331
    
  3. Run the post script for the DB component of the patch.

    As the database home owner execute:

    $<UNZIPPED_PATCH_LOCATION>/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome <ORACLE_HOME>
    

2.5 Patch Post-Installation Instructions

After installing the patch, perform the following actions:

  1. Apply conflict resolution patches as explained in Section 2.5.1.

  2. Load modified SQL files into the database, as explained in Section 2.5.2.

  3. Upgrade Oracle Recovery Manager catalog, as explained in Section 2.5.3.

2.5.1 Applying Conflict Resolution Patches

Apply the patch conflict resolution one-off patches that were determined to be needed when you performed the steps in Section 2.3, "One-off Patch Conflict Detection and Resolution".

2.5.2 Loading Modified SQL Files into the Database

The following steps load modified SQL files into the database. For a RAC environment, perform these steps on only one node.

  1. For each database instance running on the Oracle home being patched, connect to the database using SQL*Plus. Connect as SYSDBA and run the catbundle.sql script as follows:

    cd $ORACLE_HOME/rdbms/admin
    sqlplus /nolog
    SQL> CONNECT / AS SYSDBA
    SQL> STARTUP
    SQL> @catbundle.sql psu apply
    SQL> QUIT
    

    The catbundle.sql execution is reflected in the dba_registry_history view by a row associated with bundle series PSU.

    For information about the catbundle.sql script, see My Oracle Support Note 605795.1 Introduction to Oracle Database catbundle.sql.

  2. Check the following log files in $ORACLE_BASE/cfgtoollogs/catbundle for any errors:

    catbundle_PSU_<database SID>_APPLY_<TIMESTAMP>.log
    catbundle_PSU_<database SID>_GENERATE_<TIMESTAMP>.log
    

    where TIMESTAMP is of the form YYYYMMMDD_HH_MM_SS. If there are errors, refer to Section 3, "Known Issues".
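A quick way to check those logs is to scan them for error markers. The sketch below echoes illustrative grep commands; the error pattern is a non-exhaustive assumption, and ORACLE_BASE is assumed to be set in your environment.

```shell
# Sketch: scan the catbundle apply/generate logs for common error markers.
# The grep pattern is illustrative, not exhaustive.
LOGDIR="${ORACLE_BASE:-/u01/app/oracle}/cfgtoollogs/catbundle"
echo "grep -iE 'ORA-|SP2-|error' $LOGDIR/catbundle_PSU_*_APPLY_*.log"
echo "grep -iE 'ORA-|SP2-|error' $LOGDIR/catbundle_PSU_*_GENERATE_*.log"
```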

2.5.3 Upgrade Oracle Recovery Manager Catalog

If you are using Oracle Recovery Manager, the catalog needs to be upgraded. Enter the following command to upgrade it:

$ rman catalog username/password@alias
RMAN> UPGRADE CATALOG;

2.6 Patch Post-Installation Instructions for Databases Created or Upgraded after Installation of PSU 11.2.0.2.3 in the Oracle Home

These instructions are for a database that is created or upgraded after the installation of PSU 11.2.0.2.3.

You must execute the steps in Section 2.5.2, "Loading Modified SQL Files into the Database" for any new database only if it was created by any of the following methods:

  • Using DBCA (Database Configuration Assistant) to select a sample database (General, Data Warehouse, Transaction Processing)

  • Using a script that was created by DBCA that creates a database from a sample database

There are no actions required for databases that have been upgraded.

2.7 Patch Deinstallation

You can use the following steps to roll back the GI PSU 11.2.0.2.3 patch. Choose the instructions that apply to your needs.


Note:

You must stop the EM agent processes running from the database home before rolling back the patch from the Oracle RAC database home or the GI home. Execute the following command on the node to be patched.

As the Oracle RAC database home owner execute:

%<ORACLE_HOME>/bin/emctl stop dbconsole

Case 1: Rolling Back the Oracle RAC Database Homes and GI Homes Together

Follow the instructions in this section if you would like to roll back the patch from all the Oracle RAC database homes of release version 11.2.0.2 and the 11.2.0.2 GI home.

Case 1.1 GI Home Is Shared

Follow these instructions in this section if the GI home is shared.


Note:

An operation on a shared GI home requires shutting down the Oracle GI stack on all the remote nodes in the cluster. This also means you must stop all Oracle RAC databases that depend on the GI stack, on ASM for data files, or on an ACFS file system.

  1. Make sure to stop the Oracle databases running from the Oracle RAC database homes.

    As Oracle database home owner:

    <ORACLE_HOME>/bin/srvctl stop database -d <db-unique-name>
    

    ORACLE_HOME: Complete path of the Oracle database home.

  2. Make sure the ACFS file systems are unmounted on all the nodes. Use instructions in Section 2.8 for unmounting ACFS file systems.

  3. As root user, execute the following on all the remote nodes to stop the CRS stack:

    <GI_HOME>/bin/crsctl stop crs
    
  4. Roll back the patch from the GI home.

    On local node, as root user, execute the following command:

    #opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI_HOME> -rollback
    
  5. Start the Oracle GI stack on all the remote nodes.

    As root user execute:

    #<GI_HOME>/bin/crsctl start crs
    
  6. Mount ACFS file systems. See Section 2.9.

  7. For each Oracle RAC database home, execute the following command on each node if the database home software is not shared.

    For each database home, execute the following as root user:

    #opatch auto <UNZIPPED_PATCH_LOCATION> -oh <ORACLE_HOME> -rollback
    

    ORACLE_HOME: Complete path of Oracle database home.


    Note:

    The previous command should be executed only once on any one node if the database home is shared.

  8. Restart the Oracle databases that you have previously stopped in Step 1.

    As the database home owner execute:

    <ORACLE_HOME>/bin/srvctl start database -d <db-unique-name>
    

Case 1.2: GI Home Is Not Shared

Case 1.2.1: ACFS File System Is Not Configured and Database Homes Are Not Shared

Follow these instructions in this section if the GI home is not shared and none of the Oracle database homes is shared.

As root user, execute the following command on each node of the cluster.

#opatch auto <UNZIPPED_PATCH_LOCATION> -rollback

Case 1.2.2A: Rolling Back the GI Home and Database Home Together, the GI Home Is Not Shared, the Database Home Is Shared on ACFS

  1. From the Oracle database home, make sure to stop the Oracle RAC databases running on all nodes.

    As the database home owner execute:

    <ORACLE_HOME>/bin/srvctl stop database -d <db-unique-name>
    
  2. On the 1st node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.

  3. On the 1st node, roll back the patch from the GI Home using the opatch auto command.

    As root user, execute the following command:

    opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI_HOME> -rollback
    
  4. On the 1st node, remount ACFS file systems. See Section 2.9 for instructions.

  5. On the 1st node, roll back the patch from the Database home using the opatch auto command. This operation will roll back the patch from the Database home across the cluster given that it is a shared ACFS home.

    As root user, execute the following command:

    opatch auto <UNZIPPED_PATCH_LOCATION> -oh <DATABASE_HOME> -rollback
    
  6. On the 1st node only, restart the Oracle database which you have previously stopped in Step 1.

    As the database home owner execute:

    <ORACLE_HOME>/bin/srvctl start instance -d <db-unique-name> -n <nodename>
    
  7. On the 2nd (next) node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.

  8. On the 2nd node, roll back the patch from the GI Home using the opatch auto command.

    As root user, execute the following command:

    opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI_HOME> -rollback
    
  9. On the 2nd node, the opatch auto command run in Step 8 restarts the stack.

  10. On the 2nd node, remount ACFS file systems. See Section 2.9 for instructions.

  11. On the 2nd node only, restart the Oracle database which you have previously stopped in Step 1.

    As the database home owner execute:

    <ORACLE_HOME>/bin/srvctl start instance -d <db-unique-name> -n <nodename>
    
  12. Repeat Steps 7 through 11 for all remaining nodes of the cluster.

Case 1.2.2B: Rolling Back the GI Home and the Database Home Together, the GI Home Is Not Shared, the Database Home Is Not Shared

For each node, perform the following steps:

  1. On the local node, unmount the ACFS file systems. Use instructions in Section 2.8 for unmounting ACFS file systems.

  2. On the local node, roll back the patch from the GI home and the Database home.

    As root user, execute the following command:

    opatch auto <UNZIPPED_PATCH_LOCATION> -rollback
    

    This operation will roll back the patch from both the CRS home and the Database home.

  3. The opatch auto command restarts both the stack and the database on the local node.

  4. Repeat Steps 1 through 3 for all remaining nodes of the cluster.

Case 2: Rolling Back from the Oracle RAC Database Homes

Use the following instructions if you prefer to roll back the patch from the Oracle RAC database homes alone.

Case 2.1: Non-Shared Oracle RAC Database Homes

Execute the following command on each node of the cluster.

As root user execute:

#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <Comma separated Oracle home paths> -rollback

Case 2.2: Shared Oracle RAC Database Homes

  1. Make sure to stop the databases running from the Oracle RAC database homes from which you would like to roll back the patch. Execute the following command to stop each database.

    As Oracle database home owner execute:

    <ORACLE_HOME>/bin/srvctl stop database -d <db_unique_name>
    
  2. As root user, execute the following command on the local node only.

    #opatch auto <UNZIPPED_PATCH_LOCATION> -oh <Comma separated Oracle home paths> -rollback
    
  3. Restart the Oracle databases that were previously stopped in Step 1. Execute the following command for each database.

    As Oracle database home owner execute:

    <ORACLE_HOME>/bin/srvctl start database -d <db_unique_name>
    

Case 3: Rolling Back from the GI Home Alone

Use the following instructions if you prefer to roll back the patch from the Oracle GI (Grid Infrastructure) home alone.

Case 3.1 Shared GI Home

Follow these instructions in this section if the GI home is shared.


Note:

An operation on a shared GI home requires shutting down the Oracle GI stack on all the remote nodes in the cluster. This also means you must stop all Oracle RAC databases that depend on the GI stack, on ASM for data files, or on an ACFS file system for database software.

  1. Make sure to stop the Oracle databases running from the Oracle RAC database homes.

    As Oracle database home owner:

    <ORACLE_HOME>/bin/srvctl stop database -d <db-unique-name>
    
  2. Make sure the ACFS file systems are unmounted on all the nodes. Use instructions in Section 2.8 for unmounting ACFS file systems.

  3. As root user, execute the following on all the remote nodes to stop the CRS stack:

    <GI_HOME>/bin/crsctl stop crs
    
  4. Execute the following command on the local node.

    As root user execute:

    #opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI_HOME> -rollback
    
  5. Start the Oracle GI stack on all the remote nodes.

    As root user execute:

    #<GI_HOME>/bin/crsctl start crs
    
  6. Mount ACFS file systems. See Section 2.9.

  7. Restart the Oracle databases that you have previously stopped in Step 1.

    As the database home owner execute:

    <ORACLE_HOME>/bin/srvctl start database -d <db-unique-name>
    

Case 3.2: Non-Shared GI Home

If the GI home is not shared, then use the following instructions to roll back the patch from the GI home.

Case 3.2.1: ACFS File System Is Not Configured

Follow these instructions in this section if the GI home is not shared and none of the Oracle database homes is shared.

Execute the following on each node of the cluster.

As root user execute:

#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI Home> -rollback

Case 3.2.2: ACFS File System Is Configured

Repeat Steps 1 through 5 for each node in the cluster:

  1. From the Oracle database home, stop the Oracle RAC database running on that node.

    As the database home owner execute:

    <ORACLE_HOME>/bin/srvctl stop instance -d <db-unique-name> -n <node_name>
    
  2. Make sure the ACFS file systems are unmounted on that node. Use instructions in Section 2.8 for unmounting ACFS file systems.

  3. Roll back the patch from the GI home on that node using the opatch auto command.

    Execute the following command on that node.

    As root user execute:

    #opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI_HOME> -rollback
    
  4. Remount ACFS file systems on that node. See Section 2.9 for instructions.

  5. Restart the Oracle database on that node that you have previously stopped in Step 1.

    As the database home owner execute:

    <ORACLE_HOME>/bin/srvctl start instance -d <db-unique-name> -n <node_name>
    

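The node-by-node loop in Case 3.2.2 can be sketched as follows. This is a hedged dry-run sketch: the node names, database name, GI home, and patch location are placeholder assumptions, and the per-node sequence is only printed, never executed.

```shell
#!/bin/sh
# Dry-run sketch of the per-node loop in Case 3.2.2 (non-shared GI home,
# ACFS configured). Names below are placeholders; nothing is executed.
GI_HOME=/u01/app/11.2.0/grid
PATCH_LOC=/shared/patches/12419353
DB=orcl

rolling_node_plan() {
  for node in "$@"; do
    echo "$node: srvctl stop instance -d $DB -n $node"            # step 1 (DB owner)
    echo "$node: unmount ACFS file systems (Section 2.8)"         # step 2
    echo "$node: opatch auto $PATCH_LOC -oh $GI_HOME -rollback"   # step 3 (root)
    echo "$node: remount ACFS file systems (Section 2.9)"         # step 4
    echo "$node: srvctl start instance -d $DB -n $node"           # step 5 (DB owner)
  done
}
rolling_node_plan node1 node2
```

Because the loop completes all five steps on one node before moving to the next, the database stays available on the other nodes throughout.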
Case 4: Rolling Back the Patch from Oracle Restart Home

The Oracle Restart stack must be up and running when you roll back the patch from the Oracle Restart home. Roll back the patch as follows.

As root user execute:

#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <Oracle-Restart-home> -rollback

Case 5: Rolling Back the Patch from a Software Only GI Home Installation

  1. Roll back the CRS patch.

    As the GI home owner execute:

    $<GI_HOME>/OPatch/opatch rollback -local -id 12419353 -oh <GI_HOME> 
    $<GI_HOME>/OPatch/opatch rollback -local -id 12419331 -oh <GI_HOME> 
    

Case 6: Rolling Back the Patch from a Software Only Oracle RAC Home Installation

  1. Run the pre script for the DB component of the patch.

    As the database home owner execute:

    $<UNZIPPED_PATCH_LOCATION>/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome <ORACLE_HOME>
    
  2. Roll back the DB patch from the database home.

    As the database home owner execute:

    $<ORACLE_HOME>/OPatch/opatch rollback -local -id 12419353 -oh <ORACLE_HOME>
    $<ORACLE_HOME>/OPatch/opatch rollback -local -id 12419331 -oh <ORACLE_HOME>
    
  3. Run the post script for the DB component of the patch.

    As the database home owner execute:

    $<UNZIPPED_PATCH_LOCATION>/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome <ORACLE_HOME>
    

2.8 Unmounting ACFS File Systems

ACFS file systems can be used by Oracle RAC for hosting software files for the database. They can also be used as general purpose file systems for non-database files. ACFS file systems are managed and administered by Oracle Grid Infrastructure, so they are affected when the GI stack is shut down for patching GI homes.

Shut down the processes using the software files on ACFS and then unmount the ACFS file system.


Note:

Make sure to stop the non-Oracle processes that use ACFS file systems.

If the ACFS file system is used by Oracle database software, then perform Steps 1 and 2.

  1. Execute the following command to find the names of the CRS managed ACFS file system resources.

    As root user execute:

    # crsctl stat res -w "TYPE = ora.acfs.type"
    
  2. Execute the following command to stop the CRS managed ACFS file system resource with the resource name found from Step 1.

    As root user execute:

    #crsctl stop res <acfs file system resource name> -n <nodename>
    

If the ACFS file system is not used for Oracle Database software and is registered in the ACFS registry, perform the following steps.

  1. Execute the following command to find all ACFS file system mount points.

    As the root user execute:

    #/sbin/acfsutil registry
    
  2. Unmount ACFS file systems found in Step 1.

    As the root user execute:

    # /bin/umount <mount-point>
    

    Note:

    On Solaris operating system use: /sbin/umount.

    On AIX operating system, use: /etc/umount.


  3. Verify that the ACFS file systems are unmounted by executing the following command.

    As the root user execute:

    #/sbin/acfsutil info fs
    

    The previous command returns the following message if there are no ACFS file systems mounted.

    "acfsutil info fs: ACFS-03036: no mounted ACFS file systems"
    

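The Step 3 verification can be scripted by checking the output of `/sbin/acfsutil info fs` for the ACFS-03036 message quoted above. This is a minimal sketch, not part of the official procedure; the only message text it relies on is the one shown in this section.

```shell
#!/bin/sh
# Minimal sketch: decide from the output of "/sbin/acfsutil info fs"
# whether any ACFS file system is still mounted. ACFS-03036 (quoted in
# the section above) means nothing is mounted.
check_acfs_unmounted() {
  case "$1" in
    *ACFS-03036*) return 0 ;;   # no mounted ACFS file systems
    *)            return 1 ;;   # something is still mounted
  esac
}

# Typical use (as root): check_acfs_unmounted "$(/sbin/acfsutil info fs 2>&1)"
check_acfs_unmounted "acfsutil info fs: ACFS-03036: no mounted ACFS file systems" \
  && echo "safe to proceed with patching"
```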
2.9 Mounting ACFS File Systems

If the ACFS file system is used by Oracle database software, then perform Steps 1 and 2.

  1. Execute the following command to find the names of the CRS managed ACFS file system resources.

    As root user execute:

    # crsctl stat res -w "TYPE = ora.acfs.type"
    
  2. Execute the following command to start and mount the CRS managed ACFS file system resource with the resource name found from Step 1.

    As root user execute:

    #crsctl start res <acfs file system resource name> -n <nodename>
    

If the ACFS file systems are not used for Oracle Database software and are registered in the ACFS registry, they should be mounted automatically when the CRS stack comes up. Perform Steps 1 and 2 if they are not already mounted.

  1. Execute the following command to find all ACFS file system mount points.

    As the root user execute:

    #/sbin/acfsutil registry
    
  2. Mount ACFS file systems found in Step 1.

    As the root user execute:

    # /bin/mount <mount-point>
    

    Note:

    On Solaris operating system use: /sbin/mount.

    On AIX operating system, use: /etc/mount.


2.10 Patch Post-Deinstallation Instructions for a RAC Environment

Follow these steps only on the node for which the steps in Section 2.5.2, "Loading Modified SQL Files into the Database" were executed during the patch application:

  1. Start all database instances running from the Oracle home. (For more information, see Oracle Database Administrator's Guide.)

  2. For each database instance running out of the ORACLE_HOME, connect to the database using SQL*Plus as SYSDBA and run the rollback script as follows:

    cd $ORACLE_HOME/rdbms/admin
    sqlplus /nolog
    SQL> CONNECT / AS SYSDBA
    SQL> STARTUP
    SQL> @catbundle_PSU_<database SID PREFIX>_ROLLBACK.sql
    SQL> QUIT
    

    In a RAC environment, the name of the rollback script will have the format catbundle_PSU_<database SID PREFIX>_ROLLBACK.sql.

  3. Check the log file for any errors. The log file is found in $ORACLE_BASE/cfgtoollogs/catbundle and is named catbundle_PSU_<database SID>_ROLLBACK_<TIMESTAMP>.log where TIMESTAMP is of the form YYYYMMMDD_HH_MM_SS. If there are errors, refer to Section 3, "Known Issues".

All other instances can be started and accessed as usual while you are executing the deinstallation steps.
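The Step 3 log check can be scripted as a simple scan for ORA- errors in the rollback log. This is a hedged sketch only: the log path pattern follows the description above, and `log_has_errors` is a helper name introduced here for illustration.

```shell
#!/bin/sh
# Sketch of the Step 3 log check: scan a catbundle rollback log for
# ORA- errors. "log_has_errors" is an illustrative helper, not an
# Oracle-provided tool.
log_has_errors() {
  # $1: path to a catbundle rollback log; succeeds if any ORA- error is present
  grep -q 'ORA-[0-9]' "$1"
}

# Typical use: check the most recent rollback log under
# $ORACLE_BASE/cfgtoollogs/catbundle, then see Section 3 on errors.
# latest=$(ls -t "$ORACLE_BASE"/cfgtoollogs/catbundle/catbundle_PSU_*_ROLLBACK_*.log | head -1)
# log_has_errors "$latest" && echo "errors found: see Section 3, Known Issues"
```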

3 Known Issues

For information about OPatch issues, see My Oracle Support Note 293369.1 OPatch documentation list.

For issues documented after the release of this PSU, see My Oracle Support Note 1272288.1 11.2.0.2.X Grid Infrastructure Bundle/PSU Known Issues.

Other known issues are as follows.

Issue 1   

Known Issues for Opatch Auto

Bug 10339274 - 'OPATCH AUTO' FAILED TO APPLY 11202 PATCH ON EXADATA RAC CLUSTER WITH 11201 RAC

Bug 10339251 - MIN OPATCH ISSUE FOR DB HOME SETUP IN EXADATA RAC CLUSTER USING 'OPATCH AUTO'

These two issues are observed in environments where lower-version database homes coexist with 11202 clusterware and database homes, and opatch auto is used to apply the 11202 GIBundle.

Workaround:

Apply the 11202 GIBundle to the 11202 GI Home and Oracle RAC database home as follows:

#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI Home path>
#opatch auto <UNZIPPED_PATCH_LOCATION> -oh <11202 ORACLE_HOME1_PATH>,<11202 ORACLE_HOME2_PATH>
Issue 2   

Bug 10226210 (11,11.2.0.2GIBTWO) 11202_GI_OPATCH_AUTO: OPATCH TAKES MORE STORAGE SPACE AFTER ROLLBACK SUCCEED

Workaround:

Execute the following command as Oracle home owner after a successful rollback to recover the storage used by the backup operation:

opatch util cleanup -silent
Issue 3   

Bug 11799240 - 11202_GIBTWO:STEP 3 FAILED,CAN'T ACCESS THE NEXT OUI PAGE DURING SETTING UP GI

After applying GI PSU 11.2.0.2.3, the Grid Infrastructure Configuration Wizard fails with error INS-42017 when choosing the nodes of the cluster.

Workaround:

Apply the one-off patch for bug 10055663.

Issue 4   

Bug 11856928 - 11202_GIBTWO_HPI:PATCH SUC CRS START FAIL FOR PERM DENY TO MKDIR $EXTERNAL_ORACL

This issue is seen only on the HPI platform when the opatch auto command is invoked from a directory in which the root user does not have write permission.

Workaround:

Execute opatch auto from a directory in which the root user has write permission.

Issue 5   

The following ignorable errors may be encountered while running the catbundle.sql script or its rollback script:

ORA-29809: cannot drop an operator with dependent objects
ORA-29931: specified association does not exist
ORA-29830: operator does not exist
ORA-00942: table or view does not exist
ORA-00955: name is already used by an existing object
ORA-01430: column being added already exists in table
ORA-01432: public synonym to be dropped does not exist
ORA-01434: private synonym to be dropped does not exist
ORA-01435: user does not exist
ORA-01917: user or role 'XDB' does not exist
ORA-01920: user name '<user-name>' conflicts with another user or role name
ORA-01921: role name '<role name>' conflicts with another user or role name
ORA-01952: system privileges not granted to 'WKSYS'
ORA-02303: cannot drop or replace a type with type or table dependents
ORA-02443: Cannot drop constraint - nonexistent constraint
ORA-04043: object <object-name> does not exist
ORA-29832: cannot drop or replace an indextype with dependent indexes
ORA-29844: duplicate operator name specified 
ORA-14452: attempt to create, alter or drop an index on temporary table already in use
ORA-06512: at line <line number> (this error can be safely ignored if it follows any of the above errors)
ORA-01927: cannot REVOKE privileges you did not grant
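To separate real problems from the noise above, the catbundle log can be filtered down to ORA- errors that are not on the ignorable list. This is an illustrative sketch only: `non_ignorable_errors` is a helper name introduced here, and ORA-06512 is filtered wholesale even though, per the list above, it is only ignorable when it follows one of the listed errors, so review any ORA-06512 context manually.

```shell
#!/bin/sh
# Sketch: print only the ORA- lines of a catbundle log that are NOT in
# the ignorable list above. ORA-06512 is filtered here too; verify its
# surrounding context by hand.
IGNORABLE='ORA-29809|ORA-29931|ORA-29830|ORA-00942|ORA-00955|ORA-01430|ORA-01432|ORA-01434|ORA-01435|ORA-01917|ORA-01920|ORA-01921|ORA-01952|ORA-02303|ORA-02443|ORA-04043|ORA-29832|ORA-29844|ORA-14452|ORA-06512|ORA-01927'

non_ignorable_errors() {
  # $1: log file; prints ORA- lines not on the ignorable list
  grep 'ORA-[0-9]' "$1" | grep -Ev "$IGNORABLE" || true
}
```

Any line this prints warrants a look at Section 3, "Known Issues", or a service request.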
Issue 6   

Bug 12619571 - 11202_GIBTHREE: PATCH FAILED IN MULTI-BYTES LANG ENV ISSUE SHOULD BE DOCUMENTED

This issue is seen when running opatch auto to apply the GI PSU in a Japanese language environment. The cause is that opatch auto currently supports only the English language environment.

Workaround:

Always use an English language environment when running opatch auto to apply the GI PSU.

4 References

The following documents are references for this patch.

Note 293369.1 OPatch documentation list

Note 360870.1 Impact of Java Security Vulnerabilities on Oracle Products

Note 468959.1 Enterprise Manager Grid Control Known Issues

5 Bugs Fixed by This Patch

This patch includes the following bug fixes:

5.1 CPU Molecules

CPU molecules in GI PSU 11.2.0.2.3:

GI PSU 11.2.0.2.3 contains the following new CPU 11.2.0.2 molecules:

12586486 - DB-11.2.0.2-MOLECULE-004-CPUJUL2011

12586487 - DB-11.2.0.2-MOLECULE-005-CPUJUL2011

12586488 - DB-11.2.0.2-MOLECULE-006-CPUJUL2011

12586489 - DB-11.2.0.2-MOLECULE-007-CPUJUL2011

12586490 - DB-11.2.0.2-MOLECULE-008-CPUJUL2011

12586491 - DB-11.2.0.2-MOLECULE-009-CPUJUL2011

12586492 - DB-11.2.0.2-MOLECULE-010-CPUJUL2011

12586493 - DB-11.2.0.2-MOLECULE-011-CPUJUL2011

12586494 - DB-11.2.0.2-MOLECULE-012-CPUJUL2011

12586495 - DB-11.2.0.2-MOLECULE-013-CPUJUL2011

12586496 - DB-11.2.0.2-MOLECULE-014-CPUJUL2011

5.2 Bugs Fixed in GI PSU 11.2.0.2.3

GI PSU 11.2.0.2.3 contains all fixes previously released in GI PSU 11.2.0.2.2 (see Section 5.3 for a list of these bug fixes) and the following new fixes:


Note:

ACFS is not supported on HP and therefore the bug fixes for ACFS do not apply to the HP GI PSU 3.

Automatic Storage Management

6892311 - PROVIDE REASON FOR MOUNT FORCE FAILURE WITHOUT REQUIRING PST DUMP

9078442 - ORA-19762 FROM ASMCMD CP COPYING FILE WITH DIFFERENT BYTE ORDER FROM FILESYSTEM

9572787 - LONG WAITS FOR ENQ: AM CONTENTION FOLLOWING CELL CRASH CAUSED CLUSTERWIDE OUTAGE

9953542 - TB_SOL_SP: HIT 7445 [KFKLCLOSE()+20] ERROR WHEN DG OFFLINE

10040921 - HUNG DATABASE WORKLOAD AND BACKGROUNDS AFTER INDUCING WRITE ERRORS ON AVD VOLUME

10155605 - 11201-OCE:DISABLE FC IN ONE NODE, ASM DISKGOUP FORCE DISMOUNTED IN OTHER NODES.

10278372 - TB:X:CONSISTENTLY PRINT "WARNING: ELAPSED TIME DID NOT ADVANCE" IN ASM ALERT LOG

10310299 - TB:X:LOST WRITES DUE TO RESYNC MISSING EXTENTS WHEN DISK GO OFFLINE DURING REBAL

10324294 - DBMV2: DBFS INSTANCE WAITS MUCH FOR "ASM METADATA FILE OPERATION"

10356782 - DBMV2+: ASM INSTANCE CRASH WITH ORA-600 : [KFCGET0_04], [25],

10367188 - TB:X:REBOOT 2 CELL NODES,ASM FOREGROUND PROCESS HIT ORA-600[KFNSMASTERWAIT01]

10621169 - FORCE DISMOUNT IN ASM RECOVERY MAY DROP REDO'S AND CAUSE METADATA CORRUPTIONS

11065646 - ASM MAY PICK INCORRECT PST WHEN MULTIPLE COPIES EXTANT

11664719 - 11203_ASM_X64:ARB0 STUCK IN DG REBALANCE

11695285 - ORA-15081 I/O WRITE ERROR OCCURED AFTER CELL NODE FAILURE TEST

11707302 - FOUND CORRUPTED ASM FILES AFTER CELL NODES FAILURE TESTING.

11707699 - DATABASE CANNOT MOUNT DUE TO ORA-00214: CONTROL FILE INCONSISTENCY

11800170 - ASM IN KSV WAIT AFTER APPLICATION OF 11.2.0.2 GRID PSU

11800854 - BUG TO TRACK LRG 5135625

12620422 - FAILED TO ONLINE DISKS BECAUSE OF A POSSIBLE RACING RESYNC

Buffer Cache Management

11674485 - LOST DISK WRITE INCORRECTLY SIGNALLED IN STANDBY DATABASE WHEN APPLYING REDO

Generic

9748749 - ORA-7445 [KOXSS2GPAGE]

10082277 - EXCESSIVE ALLOCATION IN PCUR OF "KKSCSADDCHILDNO" CAUSES ORA-4031 ERRORS

10126094 - ORA-600 [KGLLOCKOWNERSLISTDELETE] OR [KGLLOCKOWNERSLISTAPPEND-OVF]

10142788 - APPS 11I PL/SQL NCOMP:ORA-04030: OUT OF PROCESS MEMORY

10258337 - UNUSABLE INDEX SEGMENT NOT REMOVED FOR "ALTER TABLE MOVE"

10378005 - EXPDP RAISES ORA-00600[KOLRARFC: INVALID LOB TYPE], EXP IS SUCCESSFUL

10636231 - HIGH VERSION COUNT FOR INSERT STATEMENTS WITH REASON INST_DRTLD_MISMATCH

12431716 - UNEXPECTED CHANGE IN MUTEX WAIT BEHAVIOUR IN 11.2.0.2.2 PSU (HIGHER CPU POSSIBLE

High Availability

9869401 - REDO TRANSPORT COMPRESSION (RTC) MESSAGES APPEARING IN ALERT LOG

10157249 - CATALOG UPGRADE TO 11.2.0.2 FAILS WITH ORA-1

10193846 - RMAN DUPLICATE FAILS WITH ORA-19755 WHEN BCT FILE OF PRIMARY IS NOT ACCESSIBLE

10648873 - SR11.2.0.3TXN_REGRESS - TRC - KCRFW_REDO_WRITE

11664046 - STBH: WRONG SEQUENCE NUMBER GENERATED AFTER DB SWITCHOVER FROM STBY TO PRIMARY

Oracle Portable ClusterWare

8906163 - PE: NETWORK AND VIP RESOURCES FAIL TO START IN SOLARIS CONTAINERS

9593552 - GIPCCONNECT() IS NOT ASYNC 11.2.0.2GIBTWO

9897335 - TB-ASM: UNNECCESSARY OCR OPERATION LOG MESSAGES IN ASM ALERT LOG WITH ASM OCR

9902536 - LNX64-11202-MESSAGE: EXCESSIVE GNS LOGGING IN CRS ALERT FILE WHEN SELFCHECK FAIL

9916145 - LX64: INTERNAL ERROR IN CRSD.LOG, MISROUTED REQUEST, ASSERT IN CLSM2M.CPP

9916435 - ROOTCRS.PL FAILS TO CREATE NODEAPPS DURING ADD NODE OPERATION

9939306 - SERVICES NOT COMING UP AFTER SWITCHOVER USING SRVCTL START DATABASE

10012319 - ORA-600 [KFDVF_CSS], [19], [542] ON STARTUP OF ASM DURING ADDNODE

10019726 - MEMORY LEAK 1.2MB/HR IN CRSD.BIN ON NON-N NODE

10056713 - LNX64-11202-CSS: SPLIT BRAIN WHEN START CRS STACK IN PARALLEL WITH PRIV NIC DOWN

10103954 - INTERMITTENT "CANNOT COMMUNICATE WITH CRSD DAEMON" ERRORS

10104377 - GIPC ENSURE INITIAL MESSAGE IS NOT LOST DURING ESTABLISH PHASE

10115514 - SOL-X64-11202: CLIENT REGISTER IN GLOBAL GROUP MASTER#DISKMON#GROUP#MX NOT EXIT

10190153 - HPI-SG-11202 ORA.CTSSD AND ORA.CRSD GOES OFFLINE AFTER KILL GIPC ON CRS MASTER

10231906 - 11202-OCE-SYMANTEC:DOWN ONE OF PRIVAE LINKS ON NODE 3,OCSSD CRASHED ON NODE 3

10233811 - AFTER PATCHING GRID HOME, UNABLE TO START RESOURCES DBFS AND GOLDEN

10253630 - TB:X:HANG DETECTED,"WAITING FOR INSTANCE RECOVERY OF GROUP 2" FOR 45 MINUTES

10272615 - TB:X:SHUTDOWN SERVICE CELLD ON 2 CELL NODES,CSSD ABORT IN CLSSNMRCFGMGRTHREAD

10280665 - TB:X:STOP CELLD ON 2 CELL NODES,CSSD ABORT IN CLSSNMVVERIFYPENDINGCONFIGVFS

10299006 - AFTER 11.2.0.2 UPGRADE, ORAAGENT.BIN CONNECTS TO DATABASE WITH TOO MANY SESSIONS

10322157 - 11202_GIBONE: PERM OF FILES UNDER $CH/CRS/SBS CHANGED AFTER PATCHED

10324594 - STATIC ENDPOINT IN THE LEASE BLOCKS OVERWRITTEN DURING UPGRADE

10331452 - SOL-11202-UD: 10205->11202 NETWORK RES USR_ORA_IF VALUE MISSED AFTER UPGRADE

10357258 - SOL-11202-UD: 10205->11202 [IPMP] HUNDREDS OF DUP IP AFTER INTRA-NODE FAILOVER

10361177 - LNX64-11203-GNS: MANY GNS SELF CHECK FAILURE ALERT MESSAGES

10385838 - TB:X:CSS CORE DUMP AT GIPCHAINTERNALSEND

10397652 - AIX-11202-GIPC:DISABLE SWITCH PORT FOR ONE PRIVATE NIC,HAIP DID NOT FAILOVER

10398810 - DOUBLE FREE IN SETUPWORK DUE TO TIMING

10419987 - PEER LISTENER IS ACCESSING A GROCK THAT IS ALREADY DELETED

10621175 - TB_RAC_X64:X: CLSSSCEXIT: CSSD SIGNAL 11 IN THREAD GMDEATHCHECK

10622973 - LOSS OF LEGACY FEATURES IN 11.2

10631693 - TB:X:CLSSNMHANDLEVFDISCOVERACK: NO PENDINGCONFIGURATION TO COMPLETE. CSS ABORT

10637483 - TB:X:REBOOT ONE CELL NODE, CSS ABORT AT CLSSNMVDDISCTHREAD

10637741 - HARD STOP DEPENDENCY CAN CAUSE WRONG FAIL-OVER ORDER

10638381 - 11202-OCE-SYMANTEC: HAIP FAIL TO START WHEN PRIVATE IP IS PLUMBED ON VIRTUAL NIC

11069614 - RDBMS INSTANCE CRASH DUE TO SLOW REAP OF GIPC MESSAGES ON CMT SYSTEMS

11071429 - PORT 11GR2 CRS TO EL6

11654726 - SCAN LISTENER STARTUP FAILS IF /VAR/OPT/ORACLE/LISTENER.ORA EXISTS.

11663339 - DBMV2:SHARED PROCESS SPINNING CAUSES DELAY IN PRIMARY MEMBER CLEANUP

11682409 - RE-USING OCI MEMORY ACROSS CONNECTIONS CAUSES A MEMORY CORRUPTION

11698552 - SRVCTL REPORT WRONG STATUS FOR DATABASE INSTANCE.

11741224 - INCORRECT ACTIVE VERSION CHECK WHILE ENABLING THE BATCH FUNCTIONALITY

11744313 - LNX64-11203-RACG: UNEXPECTED CRSD RESTART DURING PARALLEL STACK START

11775080 - ORA-29701/29702 OCCURS WHEN WORKLOAD TEST RUNNING FOR A LONG TIME AND IS RESTART

11781515 - EVMD/CRSD FAIL TO START AFTER REBOOT, EVEN AFTER CRSCTL START CLUSTERWARE

11807012 - LNX64-11203-RACG: DB SERVICE RUNS INTO "UNKNOWN" STATE AFTER STACK START

11812615 - LNX64-11203-DIT: INCONSISTENT PERMISSION BEFORE/AFTER ROOTCRS.PL -UNLOCK/-PATCH

11828633 - DATABASE SERVICE DID NOT FAIL OVER AND COULD NOT BE STARTED AFTER NODE FAILURE

11840629 - KERNEL CRASH DUMP AND REBOOT FAIL INSIDE SOLARIS CONTAINER

11866171 - ENABLE CRASHDUMP WHEN REBOOTING THE MACHINE (LINUX)

11877079 - HUNDREDS OF ORAAGENT.BIN@HOSTNAME SESSSIONS IN 11.2.0.2 DATABASE

11899801 - 11202_GIBTWO_HPI:AFTER KILL ASM PMON, POLICY AND ADMIN DB RUNNING ON SAME SERVER

11904778 - LNX64-OEL6-11202: CRS STACK CAN'T BE START AFTER RESTART

11933693 - 11.1.0.7 DATABASE INSTANCE TERMINATED BY 11.2.0.2 CRS AGENT

11936945 - CVU NOT RECOGNIZING THE OEL6 ON LINUX

12332919 - ORAAGENT KEEPS EXITING

12340501 - SRVCTL SHOWS INSTANCE AS DOWN AFTER RELOCATION

12340700 - EVMD CONF FILES CAN HAVE WRONG PERMISSIONS AFTER INSTALL

12349848 - LNX64-11203: VIPS FELL OFFLINE WHEN BRING DOWN 3/4 PUBLIC NICS ONE BY ONE

12378938 - THE LISTENER STOPS WHEN THE ORA.NET1.NETWORK'S STATE IS CHANGED TO UNKNOWN

12380213 - 11203_110415:ERROR EXCEPTION WHILE INSTALLATION 11202 DB WITH DATAFILES ON 11203

12399977 - TYPO IN SUB PERFORM_START_SERVICE RETURNS ZERO (SUCCESS) EVEN WHEN FAILED

12677816 - SCAN LISTENER FAILD TO STARTUP IF /VAR/OPT/ORACLE/LISTENER.ORA EXIST

Oracle Space Management

8223165 - ORA-00600 [KTSXTFFS2] AFTER DATABASE STARTUP

9443361 - WRONG RESULTS (ROWDATA) FOR SELECT IN SERIAL FROM COMPRESSED TABLE

10061015 - LNX64-11202:HIT MANY ORA-600 ARGUMENTS: [KTFBHGET:CLSVIOL_KCBGCUR_9] DURING DBCA

10132870 - INDEX BLOCK CORRUPTION - ORA-600 [KCBZPBUF_2], [6401] ON RECOVER

10324526 - ORA-600 [KDDUMMY_BLKCHK] [6106] WHEN UPDATE SUBPARTITION OF TABLE IN TTS

Oracle Transaction Management

10053725 - TS11.2.0.3V3 - TRC - K2GUPDATEGLOBALPREPARECOUNT

10233732 - ORA-600 [K2GUGPC: PTCNT >= TCNT] OCCURS IN A DATABASE LINK TRANSACTION

Oracle Universal Storage Management

9867867 - SUSE10-LNX64-11202:NODE REBOOT HANG WHILE ORACLE_HOME LOCATED ON ACFS

9936659 - LNX64-11202-CRS: ORACLE HOME PUT ON ACFS, DB INST FAILS TO RESTART AFTER CRASH

9942881 - TIGHTEN UP KILL SEMANTICS FOR 'CLEAN' ACTION.

10113899 - AIX KSPRINTTOBUFFER TIMESTAMPS NEEDS TIME SINCE BOOT AND WALL_CLOCK TIMES

10266447 - ROOTUPGRADE.SH FAILS: 'FATAL: MODULE ORACLEOKS NOT FOUND' , ACFS-9121, ACFS-9310

11789566 - ACFS RECOVERY PHASE 2

11804097 - GBM LOCK TAKEN WHEN DETERMINING WHETHER THE FILE SYSTEM IS MOUNTED AND ONLINE

11846686 - ACFSROOT FAILS ON ORACLELINUX-RELEASE-5-6.0.1 RUNNUNG A 2.6.18 KERNEL

12318560 - ALLOW IOS TO RESTART WHEN WRITE ERROR MESG RETURNS SUCCESS

12326246 - ASM TO RETURN DIFF VALUES WHEN OFFLINE MESG UNSUCCESSFUL

12378675 - AIX-11203-HA-ACFS: HIT INVALID ASM BLOCK HEADER WHEN CONFIGURE DG USING AIX LVS

12398567 - ACFS FILE SYSTEM NOT ACCESSIBLE

12545060 - CHOWN OR RM CMD TO LOST+FOUND DIR IN ACFS FAILS ON LINUX

Oracle Utilities

9735282 - GETTING ORA-31693, ORA-2354, ORA-1426 WHEN IMPORTING PARTITIONED TABLE

Oracle Virtual Operating System Services

10317487 - RMAN CONTROLFILE BACKUP FAILS WITH ODM ERROR ORA-17500 AND ORA-245

11651810 - STBH: HIGH HARD PARSING DUE TO FILEOPENBLOCK EATING UP SHARED POOL MEMORY

XML Database

10368698 - PERF ISSUE WITH UPDATE RESOURCE_VIEW DURING AND AFTER UPGRADING TO 11.2.0.2

5.3 Bugs Fixed in GI PSU 11.2.0.2.2

This section describes bugs fixed in the GI PSU 11.2.0.2.2 release.

ACFS

10015603 - KERNEL PANIC IN OKS DRIVER WHEN SHUTDOWING CRS STACK

10178670 - ACFS VOLUMES ARE NOT MOUNTING ONCE RESTARTED THE SERVER

10019796 - FAIL TO GET ENCRYPTION STATUS OF FILES UNTIL DOING ENCR OP FIRST

10029794 - THE DIR CAN'T READ EVEN IF THE DIR IS NOT IN ANY REALM

10056808 - MOUNT ACFS FS FAILED WHEN FS IS FULL

10061534 - DB INSTANCE TERMINATED DUE TO ORA-445 WHEN START INSTANCE ON ALL NODES

10069698 - THE EXISTING FILE COULD CORRUPT IF INPUT INCORRECT PKCS PASSOWRD

10070563 - MULTIPLE WRITES TO THE SAME BLOCK WITH REPLICATION ON CAN GO OUT OF ORDER

10087118 - UNMOUNT PANICS IF ANOTHER USER IS SITTING IN A SNAPSHOT ROOT DIRECTORY

10216878 - REPLI-RELATED RESOURCE FAILED TO FAILOVER WHEN DG DISMOUNTED

10228079 - MOUTING DG ORA-15196 [KFC.C:25316] [ENDIAN_KFBH] AFTER NODE REBOOT

10241696 - FAILED TO MOUNT ACFS FS TO DIRECTORY CREATED ON ANOTHER ACFS FS

10252497 - ADVM/ACFS FAILS TO INSTALL ON SLES10

9861790 - LX64: ADVM DRIVERS HANGING OS DURING ACFS START ATTEMPTS

9906432 - KERNEL PANIC WHILE DISMOUNT ACFS DG FORCE

9975343 - FAIL TO PREPARE SECURITY IF SET ENCRYPTION FIRST ON THE OTHER NODE

10283549 - FIX AIX PANIC AND REMOVE -DAIX_PERF

10283596 - ACFS:KERNEL PANIC DURING USM LABEL PATCHING - ON AIX

10326548 - WRITE-PROTETED ACFS FILES SHOULD NOT BE DELETED BY NON-ROOT USER

ADVM

10045316 - RAC DB INSTALL ON SHARED ACFS HANGS AT LINKING PHASE

10283167 - ASM INSTANCE CANNOT STARTUP DUE TO EXISTENCE OF VMBX PROCESS

10268642 - NODE PANIC FOR BAD TRAP IN "ORACLEADVM" FOR NULL POINTER

10150020 - LINUX HANGS IN ADVM MIRROR RECOVERY, AFTER ASM EVICTIONS

Automatic Storage Management

9788588 - STALENESS REGISTRY MAY GET CLEARED PREMATURELY

10022980 - DISK NOT EXPELLED WHEN COMPACT DISABLED

10040531 - ORA-600 [KFRHTADD01] TRYING TO MOUNT RECO DISKGROUP

10209232 - STBH: DB STUCK WITH A STALE EXTENT MAP AND RESULTS IN DATA CORRUPTIONS

10073683 - ORA-600 [KFCBINITSLOT40] ON ASM ON DBMV2 WITH BP5

9715581 - DBMV2: EXADATA AUTO MANAGEMENT FAILED TO BRING UP DISKS ONLINE

10019218 - ASM DROPPED DISKS BEFORE DISK_REPAIR_TIME EXPIRED

10084145 - DBMV2: ORA-600 [1427] MOUNTING DISKGROUP AFTER ALL CELLS RESTARTED

11067567 - KEPT GENERATING "ELAPSED TIME DID NOT ADVANCE " IN ASM ALERT LOG

10356513 - DISK OFFLINED WITH NON ZERO TIMEOUT EXPELLED IMMEDIATELY

10332589 - TB:X:MOUNT NORMAL REDUNDANCY DG, FAILED WITH ORA-00600:[KFCINITRQ20]

10329146 - MARKING DIFFERENT SR BITS FROM MULTIPLE DBWS CAN CAUSE A LOST WRITE

10299224 - TB:X:PIVOTING AN EXTENT ON AN OFFLINE DISK CAN CAUSE STALE XMAPS IN RDBMS

10245086 - ORA-01210 DURING CREATE TABLESPACE

10230571 - TB:X:REBOOT ONE CELL NODE, RBAL HIT ORA-600[17183]

10228151 - ASM DISKGROUPS NOT GETTING MOUNTED

10227288 - DG FORCIBLY DISMOUNTED AFTER ONE FG LOST DUE TO "COULD NOT READ PST FOR GRP 4"

10222719 - ASM INSTANCE HANGS WITH RBAL PROCESS WAITS ON "NO FREE BUFFER"

10102506 - DISK RESYNC TAKES A LONG TIME EVEN WITH NO STALE EXTENTS

10094201 - DISK OFFLINE IS SLOW

10190642 - ORA-00600: [1433] FOLLOWED BY INSTANCE CRASH WITH ASM ON EXADATA

Buffer Cache Management

9651350 - ora-00308 and ora-27037 when ora-8103 without event 10736 been set

10110863 - trace files is still generated after applying patch:9651350

10205230 - tb_x64: hit ora-00600: [kclwcrs_6]

10332111 - sql running long in active standby

CRS Group

CLEANUP

9949676 - GNSD.BIN CORE DUMP AFTER KILL ASM PMON ON ALL NODES AT SAME TIME

9975837 - GNS INCORRECTLY PROCESSES IPV6 LOOKUP REQUESTS

10007185 - GNS DUMPS CORE IN CLSKGOPANIC AT CLSKPDVA 717

10028343 - GNS CAN NOT BE RELOCATED AFTER PUBLIC RESTARTED

CRS

9876201 - OHASD AGENT CORE DUMP AT EONSHTTP.C:162

10011084 - 11202 STEP3 MODIFY BINARY AFTER INSTALLATION CANNOT EXCUTE SUCCESSFULLY

10028235 - 'CLSNVIPAGENT.CPP', LINE 1522: ERROR: FORMAL ARGUMENT TYPE OF ...

10045436 - 'ORA.LISTENER.LSNR' FAILED TO BE FENCED OFF DURING CRSD CLEANUP

10062301 - VALUE FOR FIELD 'CLUSTER_NAME' IS MISSING IN CRSCONFIG_PARAMS

10110969 - PORTABILITY ISSUES IN FUNCTION TOLOWER_HOST

10175855 - FAILED TO UGPRADE 11.2.0.1 + ARU 12900951 -> 11.2.0.2

9891341 - CRSD CORE DUMP IN PROATH_MASTER_EXIT_HELPER AT PROATH.C:1834

11655840 - RAC1 DB' STATE_DETAILS IS WRONG AFTER KILL GIPCD

10634513 - OHASD DUMPS CORE WHEN PLUG IN UNPLUGGED PRIVATE NETWORK NIC

10236074 - ASM INSTANCES CRASH SEVERAL TIMES DURING PARALLEL CRS STARTUP

10052529 - DB INST OFFLINE AFTER STOP/START CRS STACK ON ALL NODES IN PARALLEL

10065216 - VIRTUAL MEMORY USAGE OF ORAROOTAGENT IS BIG(1321MB) AND NOT DECREASING

10168006 - ORAAGENT PROCESS MEMORY GROWTH PERIODICALLY.

CSS

9907089 - CSS CORE DUMP DURING EXADATA ROLLING UPGRADE

9926027 - NODE REBOOTED AFTER CRS CLEAN-UP SUCCEEDED 11202 GI + 10205 RAC DB

10014392 - CRSCTL DELETE NODE FAILS WITH CRS-4662 & CRS-4000

10015460 - REMOVAL OF WRONG INCARNATION OF A NODE DUE TO MANUAL SHUTDOWN STATE

10040109 - PMON KILL LEAD TO OS REBOOT

10048027 - ASM UPGRADE FAILS

10052721 - 11201- 11202 NON-ROLLING,CRSCTL.BIN CORE AT CLSSNSQANUM, SIGNAL 11

10083789 - A NODE DOESNT INITIATE A RECONFIG DUE TO INCORRECT RECONFIG STATE

9944978 - FALSE CSS EVICTION AFTER PRIVATE NIC RESUME

9978195 - STOP DB ACTION TIMED OUT AND AGENT EXITS DUE TO FAILURE TO STOP EVENT BRIDGE

10248739 - AFTER APPLY THE PATCH, THE NODE EVICTED DURING START CRS STACK

CVU

9679401 - OUI PREREQ CHECKS FAILED FOR WRONG OWNSHIP OF RESOLV.CONF_`HOST`

9959110 - GNS INTEGRITY PREREQUISITE FAILED WITH PRVF-5213

9979706 - COMP OCR CHECK FAILS TO VERIFY SIZE OF OCR LOCATION

10029900 - CVU PRE NODEADD CHECK VD ERROR

10033106 - ADDNODE.SH SHOULD INDICATE WHAT HAPPENS WHEN ERROR OCCURRING

10075643 - UNABLE TO CONTINUE CONFIG.SH FOR CRS UPGRAD

10083009 - GIPCD FAILS TO RETRIEVE INFORMATION FROM PEERS DUE TO INVALID ENDPOINT

GIPC

9812956 - STATUS OF CRSD AND EVMD GOES INTERMEDIATE FOR EVER WHEN KILL GIPC

9915329 - ORA-600 [603] IN DB AND ORA-603 IN ASM AFTER DOWN INTER-CONNECT NIC

9944948 - START RESOUCE HAIP FAILED WHEN RUN ROOT.SH

9971646 - ORAROOTAGENT CORE DUMPED AT NETWORKHAMAINTHREAD::READROUTEDATA

9974223 - GRID INFRASTRUCTURE NEEDS MULTICAST COMMUNICATION ON 230.0.1.0 ADDRESSES WORKING

10053985 - ERROR IN NETWORK ADDRESS ON SOLARIS 11

10057680 - OHASD ORAROOTAGENT.BIN SPIN CPU AFTER SIMULATE ASM DISK ERROR

10078086 - ROOTUPGRADE.SH FAIL FOR 'CRSCTL STARTUPGRADE' FAIL,10205-> 11202

10260251 - GRID INSTALLATION FAILS TO START HAIP DUE TO CHANGE IN NETWORK INTERFACE NAME

10111010 - CRSD HANGS FOR THE HANAME OF PEER CRSD

11782423 - OHASD.BIN TAKES CPU ABOUT 95% ~ 100%

11077756 - STARTUP FAILURE OF HAIP CAUSES INSTALLATION FAILURE

10375649 - DISABLE HAIP ON PRIMECLUSTER

10284828 - INTERFACE UPDATES GET LOST DURING BOUNCE OF CRSD PROCESS

10284693 - AIX EPIPE FAILURE

10233159 - NEED 20 MINS TO STARTUP CRS WHEN 1/2 GIPC NICS DOWN

10128191 - LRGSRG9 AND LRGSRGE FAILURE

GNS

9864003 - NODE REBOOT DUE TO 'ORA.GNS' FAILED TO BE FENCED OFF DURING CRSD

GPNP

9336825 - GPNPD FLUSH PROFILE PUSH ERROR MESSAGES IN CRS ALERT LOG

10314123 - GPNPD MAY NOT UPDATE PROFILE TO LATEST ON START

10105195 - PROC-32 ACCESSING OCR; CRS DOES NOT COME UP ON NODE

10205290 - DBCA FAILED WITH ERROR ORA-00132

10376847 - [ORA.CRF] [START] ERROR = ERROR 9 ENCOUNTERED WHEN CONNECTING TO MOND

IPD-OS

9812970 - IPD DO NOT MARK TYPE OF DISKS USED FOR VOTING DISK CORRECTLY

10057296 - IPD SPLIT BRAIN AFTER CHANGE BDB LOCATION

10069541 - IPD SPLIT BRAIN AFTER STOPPING ORA.CRF ON MASTER NODE

10071992 - UNREASONABLE VALUES FOR DISK STATISTICS

10072474 - A NODE IS NOT MONITORED AFTER STOP AND START THE ORA.CRF ON IT

10073075 - INVALID DATA RECEIVED FROM THE CLUSTER LOGGER SERVI

10107380 - IPD NOT STARTED DUE TO SCRFOSM_GET_IDS FAILED

OCR

9978765 - ROOTUPGRADE.SH HANG AND CRSD CRASHED ON OTHER NODES,10205-> 11202

10016083 - 'OCRCONFIG -ADD' NEEDS HELPFUL MESSAGE FOR ERROR ORA-15221

OPSM

9918485 - EMCONFIG FAIL WITH NULLPOINTEREXCEPTION AT RACTRANSFERCORE.JAVA

10018215 - RACONE DOES NOT SHUTDOWN INSTANCE DURING RELOCATION

10042143 - ORECORE11 LWSFDSEV CAUSED SEGV IN SRVM NATIVE METHODS

OTHERS

9963327 - CHMOD.PL GETS CALLED INSTEAD OF CHMOD.EXE

10008467 - FAILS DUE TO WRONG VERSION OF PERL USED:

10015210 - OCTSSD LEAK MEMORY 1.7M HR ON PE MASTER DURING 23 HOURS RUNNI

10027079 - CRS_SHUTDOWN_SYNCH EVENT NOT SENT IN SIHA

10028637 - SCLS.C COMPILE ERRORS ON AIX UNDECLARED IDENTIFIERS

10029119 - 11201-11202 CRS UPGRADE OUI ASKS TO RUN ROOTUPGRADE.SH

10036834 - PATCHES NOT FOUND ERROR WHILE UPGRADING GRID FROM 11201 TO 11202

10038791 - HAS SRG SRV GETTING MANY DIFS FOR AIX ON LABEL 100810 AND LATER

10040647 - LNX64-112022-UD; AQ AND RLB DO NOT WORK AFTER UPGRADING FROM 11201

10044622 - EVMD FAILED TO START AFTER KILL OHASD.BIN

10048487 - DIAGCOLLECTION CANNOT RETRIEVE IPD REPORTS

10073372 - DEINSTALL FAILED TO DELETE CRS_HOME ON REMOTE NODE IF OCR VD ON NFS

10089120 - WRONG PROMPT MESSAGE BY DEINSTALL COMMAND WHILE DELETING CRS HOME

10124517 - CRS STACK DOES NOT START AUTOMATICALLY AFTER NODE REBOOT

10157622 - 11.2.0.2 GI BUNDLE 1 HAS-CRS TRACKING BUG

RACG

10036193 - STANDBY NIC DOESN'T WORK IF DOWN PUBLIC NIC

10146768 - NETWORK RESOURCE FAILS TO START WITH IPMP ON SOLARIS 11

USM Miscellaneous

10146744 - ORA.REGISTRY.ACFS BECOME UNKOWN AND ACFS FS DISMOUNT

10283058 - RESOURCES ACFS NEEDS AN OPTION TO DISALLOW THE MOUNTING OF FILE SYSTEMS ON RESOURCE START

10193581 - ROOT.SH CRS-2674: START OF 'ORA.REGISTRY.ACFS' FAIL

10244210 - FAIL TO INSTALL ADVM/ACFS ON SOLARIS CONTAINER

10311856 - APPLY ASSERTION FAILURE:PBOARDENTRY>USRGBOARDRECENTRY_RECORD

Generic

9591812 - incorrect wait events in 11.2 ("cursor: mutex s" instead of "cursor: mutex x")

9905049 - ebr: ora-00600: internal error code, arguments: [kqlhdlod-bad-base-objn]

10052141 - exadata database crash with ora-7445 [_wordcopy_bwd_dest_aligned] and ora-600 [2

10052956 - ora-7445 [kjtdq()+176]

10157402 - lob segment has null data after long to lob conversion in parallel mode

10187168 - obsolete parent cursors if version count exceeds a threshold

10217802 - alter user rename raises ora-4030

10229719 - qrmp:12.2:ora-07445 while performing complete database import on solaris sparc

10264680 - incorrect version_number reported after patch for 10187168 applied

10411618 - add different wait schemes for mutex waits

11069199 - ora-600 [kksobsoletecursor:invalid stub] quering pq when pq is disabled

11818335 - additional changes when wait schemes for mutex waits is disabled

High Availability

10018789 - dbmv2-bigbh:spin in kgllock caused db hung and high library cache lock

10129643 - appsst gsi11g m9000: ksim generic wait event

10170431 - ctwr consuming lots of cpu cycles

Oracle Space Management

6523037 - et11.1dl: ora-600 [kddummy_blkchk] [6110] on update

9724970 - pdml fails with ora-600 [4511]. ora-600 [kdblkcheckerror] by block check

10218814 - dbmv2: ora-00600:[3020] data block corruption on standby

10219576 - ora-600 [ktsl_allocate_disp-fragment]

Oracle Transaction Management

10358019 - invalid results from flashback_transaction_query after applying patch:10322043

Oracle Utilities

10373381 - ora-600 [kkpo_rcinfo_defstg:objnotfound] after rerunning catupgrd.sql

Oracle Virtual Operating System Services

10127360 - dg4msql size increasing to 1.5gb after procedure executed 250 times

Server Manageability

11699057 - ora-00001: unique constraint (sys.wri$_sqlset_plans_tocap_pk) violated

6 Appendix A: Manual Steps for Apply/Rollback Patch

Steps for Applying the Patch


Note:

You must stop the EM agent processes running from the database home prior to patching the Oracle RAC database or GI Home. Execute the following command on each node to be patched.

As the Oracle RAC database home owner execute:

%<ORACLE_HOME>/bin/emctl stop dbconsole

Execute the following steps on each node of the cluster in a non-shared CRS and DB home environment to apply the patch.

  1. Stop the CRS managed resources running from DB homes.

    If this is a GI Home environment, as the database home owner execute:

    $<ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file location> -n <node name>
    

    If this is an Oracle Restart Home environment, as the database home owner execute:

    $<ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file location>
    

    Note:

    You need to make sure that the Oracle ACFS file systems are unmounted (see Section 2.8) and all other Oracle processes are shut down before you proceed.

  2. Run the pre root script.

    If this is a GI Home, as the root user execute:

    #<GI_HOME>/crs/install/rootcrs.pl -unlock
    

    If this is an Oracle Restart Home, as the root user execute:

    #<GI_HOME>/crs/install/roothas.pl -unlock
    
  3. Apply the CRS patch.

    As the GI home owner execute:

    $<GI_HOME>/OPatch/opatch napply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/12419353
    

    As the GI home owner execute:

    $<GI_HOME>/OPatch/opatch napply -oh <GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/12419331
    
  4. Run the pre script for the DB component of the patch.

    As the database home owner execute:

    $<UNZIPPED_PATCH_LOCATION>/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome <ORACLE_HOME>
    
  5. Apply the DB patch.

    As the database home owner execute:

    $<ORACLE_HOME>/OPatch/opatch napply -oh <ORACLE_HOME> -local <UNZIPPED_PATCH_LOCATION>/12419353/custom/server/12419353
    $<ORACLE_HOME>/OPatch/opatch napply -oh <ORACLE_HOME> -local <UNZIPPED_PATCH_LOCATION>/12419331
    
  6. Run the post script for the DB component of the patch.

    As the database home owner execute:

    $<UNZIPPED_PATCH_LOCATION>/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome <ORACLE_HOME>
    
  7. Run the post script.

    As the root user execute:

    #<GI_HOME>/rdbms/install/rootadd_rdbms.sh
    

    If this is a GI Home, as the root user execute:

    #<GI_HOME>/crs/install/rootcrs.pl -patch
    

    If this is an Oracle Restart Home, as the root user execute:

    #<GI_HOME>/crs/install/roothas.pl -patch
    
  8. Start the CRS managed resources that were previously running from the DB homes.

    If this is a GI Home environment, as the database home owner execute:

    $<ORACLE_HOME>/bin/srvctl start home -o <ORACLE_HOME> -s <status file location> -n <node name>
    

    If this is an Oracle Restart Home environment, as the database home owner execute:

    $<ORACLE_HOME>/bin/srvctl start home -o <ORACLE_HOME> -s <status file location> 
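The full per-node apply sequence above can be sketched as a single dry-run script. This is a minimal sketch, not part of the patch README: the home paths, patch staging location, and state-file path are placeholder assumptions you must adjust, and the `run` helper only prints each command so the ordering (and the required user for each step) can be reviewed before anything is executed.

```shell
#!/bin/sh
# Dry-run sketch of the apply steps. All paths below are assumptions; adjust them.
GI_HOME=/u01/app/11.2.0/grid                              # assumed GI home
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1       # assumed DB home
PATCH_TOP=/stage/psu                                      # assumed unzip location of 12419353
NODE=$(hostname)
STATEFILE=/tmp/stop_home_${NODE}.state

# Print each command instead of running it, and keep a log for review.
run() { echo "$@"; CMDLOG="$CMDLOG $*"; }

# Step 1: stop CRS managed resources from the DB home (as the DB home owner).
run "$ORACLE_HOME/bin/srvctl" stop home -o "$ORACLE_HOME" -s "$STATEFILE" -n "$NODE"
# Step 2: unlock the GI home (as root; roothas.pl for Oracle Restart).
run "$GI_HOME/crs/install/rootcrs.pl" -unlock
# Step 3: apply both CRS patches to the GI home (as the GI home owner).
run "$GI_HOME/OPatch/opatch" napply -oh "$GI_HOME" -local "$PATCH_TOP/12419353"
run "$GI_HOME/OPatch/opatch" napply -oh "$GI_HOME" -local "$PATCH_TOP/12419331"
# Step 4: pre script for the DB component (as the DB home owner).
run "$PATCH_TOP/12419353/custom/server/12419353/custom/scripts/prepatch.sh" -dbhome "$ORACLE_HOME"
# Step 5: apply the DB patches (as the DB home owner).
run "$ORACLE_HOME/OPatch/opatch" napply -oh "$ORACLE_HOME" -local "$PATCH_TOP/12419353/custom/server/12419353"
run "$ORACLE_HOME/OPatch/opatch" napply -oh "$ORACLE_HOME" -local "$PATCH_TOP/12419331"
# Step 6: post script for the DB component (as the DB home owner).
run "$PATCH_TOP/12419353/custom/server/12419353/custom/scripts/postpatch.sh" -dbhome "$ORACLE_HOME"
# Step 7: post scripts (as root; roothas.pl -patch for Oracle Restart).
run "$GI_HOME/rdbms/install/rootadd_rdbms.sh"
run "$GI_HOME/crs/install/rootcrs.pl" -patch
# Step 8: restart the resources stopped in step 1 (as the DB home owner).
run "$ORACLE_HOME/bin/srvctl" start home -o "$ORACLE_HOME" -s "$STATEFILE" -n "$NODE"
```

Note that in a real run the steps alternate between three users (root, the GI home owner, and the DB home owner), so the sequence cannot be executed as one script by a single user; the sketch is only a checklist of the commands and their order.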
    

Steps for Rolling Back the Patch

Execute the following steps on each node of the cluster in a non-shared CRS and DB home environment to roll back the patch.

  1. Stop the CRS managed resources running from DB homes.

    If this is a GI Home environment, as the database home owner execute:

    $<ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file location> -n <node name>
    

    If this is an Oracle Restart Home environment, as the database home owner execute:

    $<ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file location> 
    

    Note:

    You need to make sure that the Oracle ACFS file systems are unmounted (see Section 2.8) and all other Oracle processes are shut down before you proceed.

  2. Run the pre root script.

    If this is a GI Home, as the root user execute:

    #<GI_HOME>/crs/install/rootcrs.pl -unlock
    

    If this is an Oracle Restart Home, as the root user execute:

    #<GI_HOME>/crs/install/roothas.pl -unlock
    
  3. Roll back the CRS patch.

    As the GI home owner execute:

    $<GI_HOME>/OPatch/opatch rollback -local -id 12419353 -oh <GI_HOME> 
    $<GI_HOME>/OPatch/opatch rollback -local -id 12419331 -oh <GI_HOME> 
    
  4. Run the pre script for the DB component of the patch.

    As the database home owner execute:

    $<UNZIPPED_PATCH_LOCATION>/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome <ORACLE_HOME>
    
  5. Roll back the DB patch from the database home.

    As the database home owner execute:

    $<ORACLE_HOME>/OPatch/opatch rollback -local -id 12419353 -oh <ORACLE_HOME> 
    $<ORACLE_HOME>/OPatch/opatch rollback -local -id 12419331 -oh <ORACLE_HOME>
    
  6. Run the post script for the DB component of the patch.

    As the database home owner execute:

    $<UNZIPPED_PATCH_LOCATION>/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome <ORACLE_HOME>
    
  7. Run the post script.

    As the root user execute:

    #<GI_HOME>/rdbms/install/rootadd_rdbms.sh
    

    If this is a GI Home, as the root user execute:

    #<GI_HOME>/crs/install/rootcrs.pl -patch
    

    If this is an Oracle Restart Home, as the root user execute:

    #<GI_HOME>/crs/install/roothas.pl -patch
    
  8. Start the CRS managed resources that were previously running from the DB homes.

    If this is a GI Home environment, as the database home owner execute:

    $<ORACLE_HOME>/bin/srvctl start home -o <ORACLE_HOME> -s <status file location> -n <node name>
    

    If this is an Oracle Restart Home environment, as the database home owner execute:

    $<ORACLE_HOME>/bin/srvctl start home -o <ORACLE_HOME> -s <status file location> 
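The rollback procedure differs from the apply procedure only in steps 3 and 5, where `opatch rollback -id` replaces `opatch napply`. The rollback-specific commands can be sketched in the same dry-run style; the home paths are placeholder assumptions, and the `run` helper only prints each command.

```shell
#!/bin/sh
# Dry-run sketch of the rollback-specific steps 3 and 5. Paths are assumptions.
GI_HOME=/u01/app/11.2.0/grid                              # assumed GI home
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1       # assumed DB home

# Print each command instead of running it, and keep a log for review.
run() { echo "$@"; CMDLOG="$CMDLOG $*"; }

# Step 3: roll back both patches from the GI home (as the GI home owner).
run "$GI_HOME/OPatch/opatch" rollback -local -id 12419353 -oh "$GI_HOME"
run "$GI_HOME/OPatch/opatch" rollback -local -id 12419331 -oh "$GI_HOME"
# Step 5: roll back both patches from the DB home (as the DB home owner).
run "$ORACLE_HOME/OPatch/opatch" rollback -local -id 12419353 -oh "$ORACLE_HOME"
run "$ORACLE_HOME/OPatch/opatch" rollback -local -id 12419331 -oh "$ORACLE_HOME"
```

The surrounding steps (stopping and restarting resources, unlock, pre/post scripts) are identical to the apply sequence and still run as the users indicated in each step.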
    

7 Modification History

Table 2 lists the modification history for this document.

Table 2 Modification History

DateModification

19-July-2011

Released

09-Aug-2011

Corrected: Section 2.3, "One-off Patch Conflict Detection and Resolution" to indicate the correct patch number "12419331" in Steps 1 and 2.

Corrected: Section 3, "Known Issues", Issue #3 text to read: "After applying GI PSU 11.2.0.2.3, the Grid Infrastructure Configuration Wizard fails with error..."


8 Documentation Accessibility

For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support

Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.



Copyright © 2006, 2011, Oracle and/or its affiliates. All rights reserved.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.
