Reading notes, part 14:

Reading notes on "Oracle 10g/11g Data and Database Management Utilities"

I'll tidy these notes up properly when I have time; this book by a foreign author really is well written.



Data Pump 11g new features
Compression
The compression feature in 10g applied only to the metadata, not to the actual data in the dump files. In 11g, this feature was improved to allow the metadata, the row data, or the complete dump file set to be compressed. This shrinks the dump file set by 10 to 15 percent.
Encrypted dump file sets
In 11g it is possible to use the encrypted dump file sets feature to have the dump set encrypted. Data Pump in 11g includes additional keywords to manage encryption, such as ENCRYPTION_ALGORITHM and ENCRYPTION_MODE, which requires the Transparent Data Encryption (TDE) feature to perform the encryption process. This feature will be addressed in more depth in the security chapter.
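A hedged sketch of an encrypted export (schema, directory object, and file name are again assumptions, and a configured TDE wallet is presumed for the TRANSPARENT mode):
expdp hr/hr DIRECTORY=dpdir DUMPFILE=hr_enc.dmp ENCRYPTION=ALL ENCRYPTION_ALGORITHM=AES256 ENCRYPTION_MODE=TRANSPARENT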
Enhancements for Data Pump External Tables
In 10g, when a row in an external table was corrupted, it led to the entire process
being aborted. Data Pump 11g is more tolerant under these circumstances, allowing
the process to continue with the rest of the data.
Single partition transportable for Oracle Data Pump
With this feature, it is possible to move just one partition or subpartition between databases without needing to move the whole table. A partition can be added as part of an existing table or as an independent table.
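A minimal sketch of moving a single partition (the sh.sales table and its sales_q1_2000 partition are assumptions used for illustration):
expdp system/password DIRECTORY=dpdir DUMPFILE=q1.dmp TABLES=sh.sales:sales_q1_2000 TRANSPORTABLE=ALWAYS
On import, the PARTITION_OPTIONS=DEPARTITION parameter can be used to create the partition as an independent table.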
Overwrite dump files
In 10g, dump files had to be removed by the DBA prior to any attempt to overwrite them. In 11g a new keyword was added, REUSE_DUMPFILES, which defaults to NO; when set to YES it simply overwrites the existing dump files (if they already exist).
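For example (directory object and file name are illustrative assumptions):
expdp hr/hr DIRECTORY=dpdir DUMPFILE=hr.dmp REUSE_DUMPFILES=YES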
XML Datatypes
In previous Data Pump releases the XML data type was not supported; starting with 11g, all Oracle XML data types are supported by Data Pump. All other datatypes can be used as well; however, you should be aware that the Data Pump driver for external tables restricts the use of certain data types.

SQL*Loader
Load on the fly
SQL*Loader has a powerful interface that allows loads from different sources; it can read data from disk, tape, or a named pipe. This feature allows loads on the fly without creating an intermediate data file, which is a very convenient strategy for one-time loads whose data source is dynamic. Let's create a named pipe, so there is no data file in between and the data is consumed directly by the database through SQL*Loader.
On the OS prompt, let's create a named pipe. A process will then send data to the
named pipe and SQL*Loader will read it from there.

At a Unix prompt, let's create a regular named pipe:
mkfifo abcpipe.dat
Send data to the named pipe and leave the process in the background:
cat abc.dat > abcpipe.dat &
Load data from the named pipe:
sqlldr sqlldrdemo/oracle direct=true control=abc data=abcpipe.dat
It can be seen that SQL*Loader performs the load seamlessly. The data source was
obtained from the named pipe and then read and loaded by SQL*Loader.
You can remove the named pipe, just like any other regular file:
rm abcpipe.dat
The advantage of this approach is that there is no intermediate file created, saving space and performing a clean one-time load from a dynamic data source.
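The load above assumes a control file named abc.ctl. A minimal sketch of such a control file (the target table abc_table and its columns are assumptions):
LOAD DATA
INFILE 'abcpipe.dat'
APPEND
INTO TABLE abc_table
FIELDS TERMINATED BY ','
(col1, col2)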

Querying the DBA_RESUMABLE view shows that a problem with an INSERT statement arose; it produced an ORA-01653 error, the same as the one reported in the previous log file.

As the resumable feature is a database feature, it is useful not only for SQL*Loader, but also for any other database batch process that requires self-healing routines to keep it up and running. There is an API named DBMS_RESUMABLE that can be used inside PL/SQL code, as sketched below.
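A minimal sketch of the API in use is an AFTER SUSPEND trigger that extends the timeout whenever a resumable operation gets suspended (the one-hour value is an arbitrary assumption):
CREATE OR REPLACE TRIGGER resumable_timeout
AFTER SUSPEND ON DATABASE
BEGIN
  -- give the DBA one hour to fix the space problem before the
  -- suspended statement finally fails with an error
  DBMS_RESUMABLE.SET_TIMEOUT(3600);
END;
/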

When performing a parallel load some issues should be considered:
• Indexes are not maintained and will be marked as UNUSABLE, so the user must schedule an index maintenance task afterwards.
• The user should check the constraint status after the load. Both constraints and triggers must be manually re-enabled after the load.

In order to take advantage of maintenance windows to perform the data load, here is some advice for improving load performance and making better use of the time frame:
• When performing a load, do not use logical records; map physical records one-to-one to logical records.
• Use locally managed tablespaces (LMT) with Automatic Segment Space Management (ASSM); this combination is available from Oracle 9i Release 2 onwards.
• Prefer a fixed-size field data file format over variable-sized fields with delimiter characters.
• Try to avoid character set conversions; use the same character set on the client side and on the server side.
• If possible use direct path load; this is the fastest way to load data.
• When loading data, try to have the data in the data file preordered by the most important index; this way the NOSORT clause can be used when the index is created, and the index will be created faster.
• If possible, use parallel loads and parallel index maintenance.
• When loading LOBs, use Secondary Data Files (SDF) instead of embedding them in the same data file.
• When performing direct path loads, it is advisable to mark indexes as unusable. This avoids the overhead in the temporary tablespace caused by the space consumed by the index maintenance task that takes place while data is loaded. Once the data load is over, a regular index rebuild operation can be scheduled (see the sketch after this list).
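A sketch of that last recommendation (the index name and degree of parallelism are assumptions):
ALTER INDEX abc_idx UNUSABLE;
-- run the direct path load here, then rebuild the index
ALTER INDEX abc_idx REBUILD PARALLEL 4 NOLOGGING;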

Database cloning procedure
The clone procedure is shown below:
1. Start by creating a password file for the cloned (CLONEDB) instance:
orapwd file=/u01/app/oracle/product/10.2.0/db_1/dbs/orapwCLONEDB password=password entries=10
2. Configure the connectivity (tnsnames.ora and listener.ora). Properly identify the database in tnsnames.ora and manually register the instance in listener.ora; both files are located in the $ORACLE_HOME/network/admin directory.
3. Manually register the database against the listener (listener.ora):
(SID_DESC =
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
(SID_NAME = CLONEDB)
)
4. Add the target CLONEDB to the tnsnames.ora:
CLONEDB =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = myhost.mydomain.com)(PORT = 1521))
)
(CONNECT_DATA =
(SID = CLONEDB)
)
)
5. Reload the listener:
$ lsnrctl reload
6. Next, create an init.ora file for the cloned database. In case the same database file paths cannot be used on the auxiliary host, either because it is the same host as the source or because those paths are not reproducible on the target, proper values for DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT are required.
DB_NAME=CLONEDB
CONTROL_FILES=(/u02/oradata/CLONEDB/control01.ctl,
/u02/oradata/CLONEDB/control02.ctl,
/u02/oradata/CLONEDB/control03.ctl)
# Convert file names to allow for a different directory structure.
DB_FILE_NAME_CONVERT=(/u02/oradata/SRCDB/,/u02/oradata/CLONEDB/)
LOG_FILE_NAME_CONVERT=(/u01/oradata/SRCDB/,/u01/oradata/CLONEDB/)
# The block size and compatible parameters must match those of the source database.
DB_BLOCK_SIZE=8192
COMPATIBLE=10.2.0.1.0

7. Connect to the cloned instance:
ORACLE_SID=CLONEDB; export ORACLE_SID
sqlplus /nolog
conn / as sysdba
8. Create an SPFILE based on the init.ora:
CREATE SPFILE FROM PFILE='/u01/app/oracle/admin/CLONEDB/pfile/init.ora';
9. Start the database in NOMOUNT mode:
STARTUP FORCE NOMOUNT;
10. Connect to the TARGET, CATALOG, and AUXILIARY databases. In this code snippet the clone is performed using a catalog database, but this is not required; it can be done with or without a recovery catalog. At the auxiliary site, using RMAN, three connections are opened: one to the source database (SRCDB), another to the catalog database (RCAT), and one more to the cloned database (CLONEDB):
ORACLE_SID=CLONEDB; export ORACLE_SID
rman TARGET sys/password@SRCDB CATALOG rman/rman@RCAT AUXILIARY /

11. Recovery Manager provides the DUPLICATE command to perform the clone operation. The cloned database can either be completely recovered up to the last redo entry available, or it can be cloned to a point in time, which is equivalent to an incomplete recovery operation at the auxiliary site. The two cases are shown below:
°° First case, clone the database in complete recovery mode:
DUPLICATE TARGET DATABASE TO CLONEDB;
°° Second case, clone the database up to a defined point in time in the past using an incomplete recovery:
DUPLICATE TARGET DATABASE TO CLONEDB UNTIL TIME 'SYSDATE-2';
At this point the process is finished; the newly created CLONEDB database is ready to be used as an independent new database.
The DUPLICATE command will take care of the final details. It will create the
controlfile for the cloned environment, restore all datafiles and manage the auxiliary
instance. As an additional task, it will open the database with the resetlogs option
and it will create a new DBID for the cloned database, except for the DUPLICATE ...
FOR STANDBY case.

Database cloning on the fly (11g only)
As you have seen, Oracle 10g required the database to have a backup, and some manual preparation had to be done prior to the clone process execution. Starting with 11g, the DBA is relieved of some of these tasks. With 11g, it is no longer required to have the target spfile created (it can be created on the fly), and there is no need for a pre-existing backup of the source database. RMAN reads the original database files the same way it reads the datafiles for a backup operation and transfers the on-the-fly backup to the auxiliary database using an inter-instance network connection. RMAN utilizes an in-memory RMAN script to perform the cloning tasks at the auxiliary location. Some preparation at the destination site must still be performed prior to the clone process; the cloned environment must already be identified with a password file, which holds the same password as that defined at the source site.

The clone on the fly procedure relies on the DUPLICATE ... FROM ACTIVE DATABASE command; intermediate files are not required to clone the database.

When the clone operation takes place, the destination site may not have the same paths used at the source database, so it is necessary to specify DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT, which direct RMAN as to where the database files are to be created at the destination. These parameters, as well as many others, can be specified as arguments at the time the DUPLICATE command is issued. This saves the DBA from issuing ALTER SYSTEM commands at the destination instance.
DUPLICATE TARGET DATABASE
TO CLONEDB
FROM ACTIVE DATABASE
SPFILE
PARAMETER_VALUE_CONVERT '/u01','/u02'
SET LOG_FILE_NAME_CONVERT '/u01','/u02'
SET DB_FILE_NAME_CONVERT '/u01','/u02'
SET SGA_MAX_SIZE 512M
SET SGA_TARGET 400M;

PARAMETER_VALUE_CONVERT is used to avoid specifying each path-related parameter individually; note that it does not set the LOG_FILE_NAME_CONVERT or DB_FILE_NAME_CONVERT parameters.

Migrate to and from an ASM environment

This code snippet migrates not just a single datafile, but the whole database, from a regular file system to ASM storage. The commands are launched from an RMAN prompt:
# Start nomount to perform controlfile level operations.
STARTUP NOMOUNT;
# The controlfile is copied from its current position
# to the DiskGroup inside the +ASM instance
RESTORE CONTROLFILE FROM '/u01/oracle/oradata/orcl/control01.ctl';
# The database can be mounted now
ALTER DATABASE MOUNT;
# This operation will perform the database copy
# to the +DiskGroup destination
BACKUP AS COPY DATABASE FORMAT '+DiskGroup';
# This command updates the controlfile and declares
# that the official database files are those recently copied
SWITCH DATABASE TO COPY;
# Rename each redo log file into the ASM disk group;
# this command must be issued for each existing redo log file
SQL "ALTER DATABASE RENAME FILE ''/u01/oracle/oradata/rdo01.log'' TO ''+DiskGroup''";
# This opens the database and creates a new redo log file set
# inside the ASM
ALTER DATABASE OPEN RESETLOGS;

# Don't forget about the temporary datafiles, rman doesn't
# manage the temporary datafiles, so the DBA must be aware
# of this prior to releasing the database to production.
# Finally get rid of the old temporary datafiles
SQL "ALTER TABLESPACE TEMP ADD TEMPFILE";
SQL "ALTER DATABASE TEMPFILE '/u01/oracle/oradata/temp01.dbf' DROP;
The old database files at the regular file system can now be removed. At this point
the database has been migrated to the ASM environment. If the DBA wants to
manage a hybrid environment, then it is possible to use the COPY command to
manage individual database files.
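A quick check from SQL*Plus that the migration worked: query V$DATAFILE and confirm every returned path now starts with the disk group name:
SELECT name FROM v$datafile;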

Kill SNIPED sessions:
TEMPFILE=/tmp/$$.tmp
sqlplus -s system/system_password <<EOF
set heading off feedback off
spool $TEMPFILE
select p.spid from v\$process p, v\$session s
where s.paddr = p.addr
and s.status = 'SNIPED';
spool off
EOF
# the grep keeps only the spooled lines that start with a digit,
# that is, the SPID values of the SNIPED sessions
for i in $(grep "^[0-9]" $TEMPFILE)
do
kill -9 $i
done
rm $TEMPFILE

This example shows how DBCA executes silently, with no graphical interface displayed to the user:
dbca -silent -responseFile <response file>
The response file referred to in the previous example must be created by the user since, unlike the Oracle Universal Installer, DBCA does not have a 'record' mode.
The next example starts DBCA in batch mode, showing the progress bar. If you want to run in character mode only, use the previous example; presenting the progress bar requires a graphical environment, so you must have the DISPLAY environment variable properly set.
dbca -progress_only -responseFile <response file>

This case shows how DBCA creates a database in silent mode using a clone template:
dbca -silent -createDatabase -cloneTemplate -responseFile <response file>
A database can also be removed using the batch mode:
dbca -silent -deleteDatabase -responseFile <response file>
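DBCA silent operations depend entirely on the response file contents. A minimal createDatabase sketch, modeled on the dbca.rsp template shipped with the software (the database name, SID, template, and passwords are placeholder assumptions):
[GENERAL]
RESPONSEFILE_VERSION = "10.0.0"
OPERATION_TYPE = "createDatabase"
[CREATEDATABASE]
GDBNAME = "clonedb.mydomain.com"
SID = "CLONEDB"
TEMPLATENAME = "General_Purpose.dbc"
SYSPASSWORD = "password"
SYSTEMPASSWORD = "password"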

Recovering a lost Inventory
What should you do in case the central inventory gets lost? There are a couple of scenarios here. You could have a valid Oracle Home, either intact or restored from a backup, while the central inventory is lost; in this case Oracle will work, but you will notice the missing inventory when you try to perform an upgrade or apply a patch. In Oracle 10gR1 and earlier releases there is no option other than restoring the inventory from a backup, so your backup policies should include a periodic ORACLE_BASE backup covering the different Oracle Homes and the central inventory.
In Oracle 10gR2 and 11gR1, you can register a lost inventory using the following procedure:
Change to the oui/bin directory inside the target Oracle Home:
cd $ORACLE_HOME/oui/bin

From this point, run runInstaller (or the setup.exe command on Windows) with the attachHome and invPtrLoc modifiers:
./runInstaller -silent -attachHome -invPtrLoc ./oraInst.loc ORACLE_HOME="<Oracle_Home_Location>" ORACLE_HOME_NAME="<Oracle_Home_Name>"
Your Oracle Home will be back again in the OUI inventory.


Cloning Oracle Home using OUI
You can clone an existing Oracle Home in the same host using this procedure:
1. Install the Oracle Home in its source directory, including all necessary patchsets and patches.
2. Perform a recursive copy of the source Oracle Home to the target Oracle Home. This step must be run as root to preserve the file permissions (use the cp -Rp command options).
3. Verify Oracle has the proper file and directory permissions in the target directory.
4. Run the following command to clone the installation with the OUI:
cd $ORACLE_HOME/clone/bin
perl clone.pl ORACLE_HOME="<target_home>" ORACLE_HOME_NAME="<unique_home_name>"
°° An alternative method of cloning uses the following commands:
cd $ORACLE_HOME/oui/bin
./runInstaller -clone -silent -ignorePreReq ORACLE_HOME="<target_home>" ORACLE_HOME_NAME="<unique_home_name>"
If required, add -invPtrLoc <path>/oraInst.loc or -ignoreSysPrereqs to the command line.
5. As root, run the root.sh file, which is located in the target Oracle Home directory.

How to find out if the console components are currently installed
You can easily find out if your current Oracle installation includes the Enterprise Manager DB Control console components by querying the installation with the OPatch tool. The command used to find out if the console components are installed is:
<Oracle Home>/OPatch/opatch lsinventory -detail

Configuring EMCA
The basic command line used to create a console configuration on a database where it has not previously been configured is as follows:
emca -config dbcontrol db -repos create
If the database comes from a seed database, it will already have the SYSMAN user created; if you proceed with this database, EMCA will fail and the log files will show an error describing the problem. If this happens, you must drop the repository and configure a new one, using this command:
emca -deconfig dbcontrol db -repos drop

Once you are ready, just issue the command below with the DBCONTROL_HTTP_PORT parameter set:
emca -reconfig ports -DBCONTROL_HTTP_PORT 1158



EMCA log files
When EMCA is used, it records its activity in log files located at $ORACLE_HOME/cfgtoollogs/emca/<ORACLE_SID>; you may optionally want to back up and purge these files when they are no longer required.
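A hedged housekeeping sketch from an OS prompt (the 30-day retention and the *.log pattern are arbitrary assumptions):
find $ORACLE_HOME/cfgtoollogs/emca/$ORACLE_SID -name "*.log" -mtime +30 -exec rm {} \;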
The SYSMAN configuration files
There are two main configuration files located in the <Oracle Home>/<hostname>_<Oracle SID>/sysman/config directory: one named emd.properties and a second named emoms.properties. The first file defines the URLs used by the Upload Manager, the agent, the agent version, and the agent TZ region. The second file, emoms.properties, defines how the SYSMAN user connects to the target database; it also contains the SYSMAN password (encrypted). These two files should not be edited manually except when explicitly directed by Oracle Support. There is a procedure to change the password of the SYSMAN user if required.
The SYSMAN log files
Log and trace files are the first source of information when you want to troubleshoot the console. If the console doesn't start or happens to be inaccessible, you should first inspect the files located in the <Oracle Home>/<hostname>_<Oracle SID>/sysman/log directory, where you will find several log and trace files. The log files grow on demand as required by the involved processes; the DBA must be aware of this and regularly maintain the files. The DBA can manage the files' growth by configuring the maximum file size, the maximum number of files, and the log file location. The procedure to manage log files is outlined next.
As you are working with sensitive configuration files, you must back up these files prior to modifying their contents.


Changing the IP address or host name
Changing the IP address means the Enterprise Manager console won't be available and it won't be possible to start it; all the Oracle connectivity configuration files are affected too.
As this is an administrative change, it is strongly suggested to back up the current environment by taking a full Oracle Home and database backup. This backs up the current configuration files as well as the current EM repository.
The procedure used to get the environment working is as follows:
1. Shut down Enterprise Manager, the database, and all Oracle related services.
2. Perform the IP and/or host name changes.
3. Edit the <Oracle Home>/network/admin/listener.ora and <Oracle Home>/network/admin/tnsnames.ora files and replicate the change in these files. This list is by no means exhaustive; depending on the network configurations you have set up, you may have to modify other files or network connectivity configuration systems such as LDAP centralized entries, wallets, and so on.
4. Start up the listener and the databases. Do not start Enterprise Manager at this time, as its configuration won't work with the new environment setup and will need to be modified.
5. Deconfigure the EM Console and drop the repository (see the commands after this list).
6. Reconfigure the EM Console and have the repository created.
7. At the end of this procedure your Enterprise Manager console will be brought back to business.
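Using the EMCA commands shown earlier in this chapter, steps 5 and 6 map to:
emca -deconfig dbcontrol db -repos drop
emca -config dbcontrol db -repos create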
Oracle RDBMS should run on servers with a fixed IP address. If your server has a dynamically assigned IP, you will have to configure a loopback adapter so you can fix the IP references to this adapter, and have the OS hosts file configured so the official hostname (and optionally the domain) is associated with the loopback adapter's IP address instead of the host's actual IP address. This information is read at the time EMCA performs the configuration.

Changing administrative passwords
Changing the SYSMAN password requires more than just changing the password at the database level; if you change it only at that level, Enterprise Manager won't be accessible any more. The Enterprise Manager DB Console requires two users: the agent monitoring user (DBSNMP) and the owner of the EM repository (SYSMAN). There is a procedure to change these administrative passwords.

Changing SYSMAN password
The procedure to change the password starts with properly setting the environment
variables: ORACLE_HOME, ORACLE_SID, and PATH.
1. Shut down Enterprise Manager Console and make sure it is completely off.
emctl stop dbconsole
emctl status dbconsole
2. From a SQL*Plus prompt connected with a privileged account (SYS, SYSTEM, or SYSMAN), modify SYSMAN's password with a regular ALTER USER command:
SQL> alter user SYSMAN identified by <SysmanNewPassword>;
3. Verify you can open a SQL*Plus session using the SYSMAN user identified
with the recently set password.
sqlplus SYSMAN/<SysmanNewPassword>
4. The next phase consists of replicating the change in the EM configuration files.
5. From an OS prompt, change the current directory to the <Oracle Home>/<Hostname>_<Oracle SID>/sysman/config directory.
6. Back up the emoms.properties configuration file.
7. Edit the emoms.properties file with a text editor. Look for the oracle.sysman.eml.mntr.emdRepPwd entry and replace the text string with the new SYSMAN password written in clear text; then look for the oracle.sysman.eml.mntr.emdRepPwdEncrypted entry and change its value to FALSE. Enterprise Manager will automatically change the value back to TRUE and rewrite the password with the encrypted version.
8. Once you are ready, start the console using the regular emctl start dbconsole command; when this step is finished, verify that the password written in the previously modified configuration file has changed to the encrypted version.
Your Enterprise Manager console should now be up and running with the new password. You can verify the SYSMAN user sessions in the database with a simple query against the V$SESSION dynamic view.
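A minimal sketch of such a check (the column list is chosen for illustration):
SELECT username, status, program FROM v$session WHERE username = 'SYSMAN';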

Changing DBSNMP password
This is the second administrative user related to Enterprise Manager; when you need to change its password, there is a procedure you must follow.
It is assumed that the ORACLE_HOME, ORACLE_SID, and PATH environment variables
are properly set.
1. Stop the standalone console:
emctl stop dbconsole
2. Verify both the console and the agents are down:
emctl status dbconsole
emctl status agent
3. Connect with a privileged user in SQL*Plus and change the DBSNMP password:
SQL> alter user DBSNMP identified by <DBSNMPNewPassword>;
4. Verify you can open a SQL*Plus session using DBSNMP and the recently
assigned password:
sqlplus DBSNMP/<DBSNMPNewPassword>
5. At the OS level, change to the <Oracle Home>/oc4j/j2ee/OC4J_DBConsole_<host>_<Oracle SID>/emd directory and use a text editor to modify the targets.xml file (you must back up this file prior to proceeding with the modification).
6. Look for the line:
<Property NAME="password" VALUE="<encrypted_string>" ENCRYPTED="TRUE"/>
7. Replace the encrypted value with the new password value, and set the ENCRYPTED entry value to FALSE; this allows you to write the password in clear text format. Later, EM will replace this value with the encrypted password version.
8. Finally, start the console and verify the entry you modified has been changed to the encrypted version and the encryption flag is set back to TRUE.

The configuration changes won't be picked up until Enterprise Manager restarts, so you should stop and start the console at your earliest convenience:
emctl stop dbconsole
emctl start dbconsole

Opatch
Even though OPatch is an executable, it is not located in the ORACLE_HOME/bin directory; it has its own directory, ORACLE_HOME/OPatch. The opatch executable is a shell script that launches the OPatch Java class, which is the actual implementation.

OPatch requirements
In order for you to use OPatch, some requirements must be met:
1. Set the ORACLE_HOME environment variable to point to a valid Oracle Home directory. This Oracle Home must match the one used during the installation.
2. Java SDK 1.4 or higher must be installed.
3. The environment variable that points to the shared library must be properly
set (LD_LIBRARY_PATH or SHLIB_PATH, which depends on the OS platform).
4. OPatch creates a rollback script in case the patch installation has to be
undone. It also performs a backup of the Inventory, so you must make sure
you have enough free space for these operations.
5. Use a compatible version; 10gR2 requires OUI to be 10.2.0.1.0 or higher.
6. When working in RAC environments, make sure the user equivalence
between hosts is correctly set.
7. Verify the Oracle inventory is valid; you can check this by issuing the command opatch lsinventory -detail. This command displays the software installed in the target Oracle Home. If this command returns no information or shows an error message, the inventory within the Oracle Home is either missing or corrupt.

OPatch options
OPatch 10g Release 2 has five main options. The main options are used to list the
contents of the inventory, apply the patch, and retrieve information about the patch
and the system to be patched. These options are further detailed in the next section.
lsinventory: This option lists the inventory for a particular Oracle Home. This is
used to list the installations that can be found. When launched with no options, this
command shows the top level components found for the current Oracle Home.
apply: This option applies an interim patch to an Oracle Home from the current
directory. The patch location can be specified using the parameter patch_location.
query: This option provides information about the patch and the system
being patched.
rollback: This option is used to remove a specific interim patch from the current
Oracle Home.
version: This option is used to display the version number of the OPatch utility
being used.

Oracle maintenance using OPatch
There are several situations when an interim patch is required. The most common
one is to apply patches that correct specific bugs which you have encountered.
Another situation where software maintenance is required is when Oracle releases
the quarterly Critical Patch Update bundle. The patch task can consist of applying
a single patch or several patches at once. Specific detailed instructions are always
available in the companion README file.

Applying a single patch using OPatch
In order for you to be able to apply a patch, a generic procedure can be defined:
Thoroughly read the companion README file. Instructions stated
there supersede any procedure. This outline is merely a suggestion
that can be overridden by any specific patch instructions.
1. Make sure the Oracle environment variables are properly set.
2. Include the <Oracle Home>/OPatch directory in the PATH environment variable.
3. Each patch requires the execution of OS commands; the PATH variable must make them visible.
4. Unzip the patch file to a stage area; if you don't already have a predefined location, the <Oracle Home>/OPatch directory is suggested as a stage area.
5. Once you are ready, at the OS prompt, change the current working directory to the patch directory <Oracle Home>/OPatch/<PatchNumber>. Now issue the opatch apply command, or whatever options were defined in the README file.
6. Once the OPatch task is finished, read the contents of the log files to verify whether the patch apply task ended successfully. If a problem shows up, the log files are the starting point for a troubleshooting session.

Querying the Oracle inventory
The simplest way to verify if a given patch has been applied is by listing the
Oracle Inventory contents. This is achieved by issuing the opatch lsinventory
[-detail] command.

Rolling back a failed OPatch session
During the patch apply session, Oracle creates a directory under the Oracle Home named .patch_storage, where it stores the procedures and backup files needed to undo a patch apply session.
A basic rollback session can be started using opatch rollback -id <PatchNumber>. You may require a rollback session:
• if a patch apply session fails
• if you find a conflicting patch
• if the patch doesn't meet the user expectations
• if the patch does not fix the problem
For example:
opatch rollback -id 8262446

Considerations after applying a patch
After a patch has been applied, some things will have changed. It is a good idea to perform a backup after the patch has been applied and keep it in a safe place, in case the Oracle Home or the Oracle inventory gets compromised due to a media failure or accidental deletion in the future.
Oracle databases can still be created using DBCA, but if the patch modified the structure or contents of the database dictionary, be aware that these changes are not replicated to the seed databases. So if you use DBCA to create a new database from the current seeds, the scripts or other post-apply procedures must be manually applied against the new database. If you create a new database using the CREATE DATABASE command, there is no need to apply the scripts that modify the database dictionary; these are already taken care of when the catproc.sql and catalog.sql scripts are run.
You should have a test plan so that after applying a patch you can ensure the system works as expected. A good set of regression tests is important to verify that the patch has not accidentally broken application functionality.


OPatch in Oracle 11g
You should be aware that OPatch is version sensitive; you cannot use the OPatch tool from one release to patch another release. Oracle 11g introduced several new options to the tool.

Oracle releases a quarterly patch bundle known as a Critical Patch Update (CPU). It is strongly advised to install these patches, as they provide security fixes on a regular basis. The CPUs are released every January, April, July, and October. For further information about CPU releases, refer to the Critical Patch Updates and Security Alerts page located at http://www.oracle.com/technology/deploy/security/alerts.htm.

The procedure to install a CPU is detailed in the companion README.txt or README.html file, which describes the required steps in detail. You must read this file prior to starting the apply session. The key tool to perform the CPU install is OPatch, and the specific options required to perform this task may vary from CPU to CPU, so you must read the instructions included with the CPU.
CPUs are cumulative, so you don't have to apply all the CPUs for a given release; if you apply the latest CPU available, you automatically apply all CPU fixes released so far for the given RDBMS version. At the above URL you will find CPU availability. When a CPU is made public, it is not necessarily available for all platforms or all patchset levels; you must first verify whether your platform qualifies for the released CPU. The patch number related to the CPU is not the same for all platforms or all patchset levels, so you must first find out which specific patch number corresponds to your platform and patchset level.

Find out the installed patches
As OPatch uses the same Oracle inventory used by the OUI, you can use it to get information about the patches already applied. The information is displayed in the form of interim patches. The following command line lists all the patches applied so far:
opatch lsinventory -all

This query can be used to get the CPU information (the use of column formatters is suggested):
SELECT ACTION_TIME,
ACTION,
VERSION,
COMMENTS,
BUNDLE_SERIES
FROM REGISTRY$HISTORY;

Hot patching (11g only)
Normally, when a patch is applied, the Oracle services must be shut down. This means downtime, and a maintenance window must be open while the operation takes place. The DBA must ensure that the process will be successful on the first attempt; otherwise there must be a fallback procedure in place.
Among the high availability features provided by 11g, Oracle introduced the hot patching concept. Hot patching allows the DBA to install, enable, and disable a patch online without disruption to Oracle services. Hot patches don't require an instance shutdown, and they are installed with the traditional OPatch tool. This tool can detect conflicts between hot patches.

Not all patches in 11g can be installed in hot patch mode; first you must find out if the patch supports the hot patch apply feature. You can use the following commands to determine if this mode is allowed:
opatch query -is_online_patch <PatchLocation>
or
opatch query <PatchLocation> -all
The patches reported as hot patch enabled are shipped as dynamic or shared libraries, which are mapped into memory by each Oracle process. When installing a patch in hot patch mode, the oracle binary is not actually changed, yet the patch persists across instance restart operations.
Not all OS platforms currently support hot patching; refer to Oracle Support to find out if your platform supports this mode.

Troubleshooting OPatch
There may be several circumstances that cause OPatch to fail in the patch apply task. The DBA must always read the instructions and make sure they are fully understood and the prerequisites are fully met; this reduces the number of possible failures during the patch apply process.
Let's assume a scenario: the DBA tries to apply a patch but it is only partially applied. OPatch works in an idempotent way, that is, the steps required by OPatch to apply a patch are executed only once, no matter how many times the user manually restarts the patch apply task. So if the user wants to roll back a partially applied patch, the only way to start the rollback procedure is by first finishing the started patch apply task.

PATH environment variable
The PATH environment variable is critical. You must always make sure it is properly
set and the OS commands required by OPatch are visible; otherwise, the patch
process will fail.
OPatch log files
The OPatch log files are located under the ORACLE_HOME/.patch_storage/<PatchNumber> directory. The log file is named <PatchNumber>_Apply_<date>.log. This file contains the sequence of steps applied by OPatch. If a patch fails, the DBA should refer to this file to start diagnosing what could have gone wrong during the apply phase.

Using Enterprise Manager for software maintenance
Enterprise Manager can be used to perform software maintenance tasks. EM can be configured to access Metalink, query the patches required by the database, download them, and store them in a reserved area known as the Patch Cache; the DBA can take them from this area and schedule them to be applied at a later time. Remember that there are licensing concerns: this functionality requires the Configuration Management Pack.

