Oracle Database uses a unified log directory structure to consolidate the Oracle Clusterware component log files. This consolidated structure simplifies diagnostic information collection and assists during data retrieval and problem analysis.
Oracle Clusterware uses a file rotation approach for log files.
With Oracle Grid Infrastructure 11g Release 2 (11.2), Oracle Automatic Storage Management (Oracle ASM) and Oracle Clusterware are installed into a single home directory, which is referred to as the Grid Infrastructure home. Configuration assistants start after the installer interview process and configure Oracle ASM and Oracle Clusterware.
The installation of the combined products is called Oracle Grid Infrastructure. However, Oracle Clusterware and Oracle Automatic Storage Management remain separate products.
Notations
$GRID_HOME is used in 11.2 Oracle Clusterware to specify the Grid Infrastructure location (Clusterware + Oracle ASM). In previous releases, $CRS_HOME or $ORA_CRS_HOME was used as the environment variable for the Clusterware software location (the Oracle Clusterware home). For this reason, we can set all three environment variables to the same value (for example, in .profile), but this is not mandatory. In this case we can consider GRID_BASE to be /oracle/grid if we want.
$GRID_HOME=/oracle/grid/11.2
$ORA_CRS_HOME=/oracle/grid/11.2
$CRS_HOME=/oracle/grid/11.2
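The three variables above can be set together. A minimal sketch of the corresponding .profile entries, assuming the /oracle/grid/11.2 path from the example:

```shell
# Example .profile entries: all three variables point at the same
# Grid Infrastructure home (path taken from the example above).
export GRID_HOME=/oracle/grid/11.2
export ORA_CRS_HOME=$GRID_HOME
export CRS_HOME=$GRID_HOME

# Verify that the legacy names resolve to the same location:
echo "$ORA_CRS_HOME"
echo "$CRS_HOME"
```

Defining the legacy names as aliases of $GRID_HOME keeps older scripts that reference $CRS_HOME or $ORA_CRS_HOME working unchanged.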
In 11.2 Grid Infrastructure, the Oracle Clusterware component log files are all located in the $GRID_HOME/log/<hostname> directory.
For instance, if my host name is tzdev1rac, all my Oracle Clusterware component log files (for 11.2) are located in $GRID_HOME/log/tzdev1rac:
pwd
/oracle/grid/11.2/log/tzdev1rac ($GRID_HOME =/oracle/grid/11.2)
ls -altr
total 64
drwxrwxr-t 5 oracle dba 256 Jul 28 20:18 racg
drwxr-x--- 2 root dba 256 Jul 28 20:18 gnsd
drwxrwxr-t 4 root dba 256 Jul 28 20:18 agent
drwxr-x--- 2 oracle dba 256 Jul 28 20:18 admin
drwxrwxr-x 5 oracle dba 256 Jul 28 20:18 ..
drwxr-xr-t 17 root dba 4096 Jul 28 20:18 .
drwxr-x--- 2 oracle dba 256 Jul 28 20:24 gipcd
drwxr-x--- 2 oracle dba 256 Jul 28 20:25 mdnsd
drwxr-x--- 2 root dba 256 Jul 28 20:25 ohasd
drwxr-x--- 2 oracle dba 256 Jul 28 20:27 evmd
drwxr-x--- 2 root dba 256 Jul 31 01:08 ctssd
drwxr-x--- 2 root dba 256 Aug 1 12:44 crsd
drwxr-x--- 2 oracle dba 256 Aug 1 21:15 cssd
drwxr-x--- 2 oracle dba 256 Aug 2 14:06 diskmon
drwxr-x--- 2 oracle dba 256 Aug 2 14:46 gpnpd
-rw-rw-r-- 1 root system 16714 Aug 2 14:46 alerttzdev1rac.log
drwxr-x--- 2 oracle dba 4096 Aug 2 14:51 srvm
drwxr-x--- 2 oracle dba 4096 Aug 3 02:59 client
| Oracle Clusterware Component/Daemon/Process | Log File Subdirectory |
|---|---|
| Cluster Ready Services Daemon (CRSD) | crsd |
| Oracle High Availability Services Daemon (OHASD) | ohasd |
| Cluster Synchronization Services (CSS) | cssd |
| Cluster Time Synchronization Service (CTSS) | ctssd |
| Grid Plug and Play (GPNPD) | gpnpd |
| Multicast Domain Name Service Daemon (MDNSD) | mdnsd |
| Oracle Cluster Registry records | client (the Oracle Cluster Registry tools OCRDUMP, OCRCHECK, and OCRCONFIG record log information here) |
| Oracle Grid Naming Service (GNS) | gnsd |
| Event Manager (EVM) information generated by evmd | evmd |
| Oracle RAC RACG | racg (core files are in subdirectories of the log directory; each RACG executable has a subdirectory assigned exclusively for that executable, named the same as the executable) |
| Server Manager (SRVM) | srvm |
| Disk Monitor Daemon (diskmon) | diskmon |
| Grid Interprocess Communication Daemon (GIPCD) | gipcd |
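Putting the table to use, the path to a given daemon's log file can be assembled from $GRID_HOME, the host name, and the subdirectory from the table. A sketch for the CRSD log, assuming the usual crsd.log default file name and the example host from above:

```shell
# Build the path to the CRSD log from the pieces above.
GRID_HOME=/oracle/grid/11.2
HOST=tzdev1rac                                 # example host name from above
CRSD_LOG=$GRID_HOME/log/$HOST/crsd/crsd.log    # crsd.log is the usual default name
echo "$CRSD_LOG"

# On a live cluster you would then inspect it, e.g.:
# tail -100 "$CRSD_LOG"
```

The same pattern applies to the other daemons: replace crsd with the subdirectory from the table.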
Where can we find the log files related to the listeners?
A) For listener.log ($ORACLE_BASE was set during the installation and the DIAGNOSTIC_DEST parameter wasn't set)
=>$ORACLE_BASE/diag/tnslsnr/tzdev1rac/listener/trace/listener.log
As of Oracle Database 11g Release 1, the diagnostics for each database instance are located in a dedicated directory, which can be specified through the DIAGNOSTIC_DEST initialization parameter. The structure of the directory specified by DIAGNOSTIC_DEST is as follows:
<diagnostic_dest>/diag/rdbms/<dbname>/<instname>
This location is known as the Automatic Diagnostic Repository (ADR) Home.
For example, if the database name is proddb and the instance name is proddb1, the ADR home directory would be <diagnostic_dest>/diag/rdbms/proddb/proddb1.
The following files are located under the ADR home directory:
Trace files - located in subdirectory <adr-home>/trace
Alert logs - located in subdirectory <adr-home>/alert. The alert log there is stored in XML format (log.xml), which conforms to the Oracle ARB logging standard; a traditional text-format alert_<instname>.log is still written to the trace subdirectory.
Core files - located in the subdirectory <adr-home>/cdump
Incident files - the occurrence of each serious error (for example, ORA-600, ORA-1578, ORA-7445) causes an incident to be created. Each incident is assigned an ID, and the dump for each incident (error stack, call stack, block dumps, and so on) is stored in its own file, separate from process trace files. Incident dump files are located in <adr-home>/incident/<incdir#>. You can find the incident dump file location inside the process trace file.
This parameter can be set on each instance. Oracle recommends that each instance in a cluster specify a DIAGNOSTIC_DEST directory located on shared storage, and that the same value be specified for each instance.
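As a sketch of the directory layout, the ADR home and its subdirectories for the proddb/proddb1 example can be assembled like this (the /u01/app/oracle value for DIAGNOSTIC_DEST is a hypothetical example):

```shell
DIAGNOSTIC_DEST=/u01/app/oracle    # hypothetical value for illustration
DBNAME=proddb
INSTNAME=proddb1

# ADR home: <diagnostic_dest>/diag/rdbms/<dbname>/<instname>
ADR_HOME=$DIAGNOSTIC_DEST/diag/rdbms/$DBNAME/$INSTNAME
echo "$ADR_HOME/trace"      # trace files (and text alert_${INSTNAME}.log)
echo "$ADR_HOME/alert"      # XML alert log
echo "$ADR_HOME/cdump"      # core files
echo "$ADR_HOME/incident"   # incident dump files
```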
If you want to see how the ADR homes are configured in the database, you can run:
column INST_ID format 999
column NAME format a20
column VALUE format a45
select INST_ID, NAME, VALUE from V$DIAG_INFO;
B) for SCAN listeners
=> $GRID_HOME/log/diag/tnslsnr/<NodeName>/listener_scan1/trace/listener_scan1.log
$GRID_HOME/log/diag/tnslsnr/<NodeName>/listener_scan2/trace/listener_scan2.log
$GRID_HOME/log/diag/tnslsnr/<NodeName>/listener_scan3/trace/listener_scan3.log
If you want to see the SCAN listeners status you can run :
srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node tzdev2rac
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node tzdev1rac
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node tzdev1rac
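Since srvctl shows which node each SCAN listener runs on, the matching log path can be assembled per listener. A sketch using the node name reported for LISTENER_SCAN1 in the output above:

```shell
GRID_HOME=/oracle/grid/11.2
NODE=tzdev2rac        # node running LISTENER_SCAN1, per the srvctl output above
SCAN_LOG=$GRID_HOME/log/diag/tnslsnr/$NODE/listener_scan1/trace/listener_scan1.log
echo "$SCAN_LOG"

# Inspect it on the node that currently hosts the listener:
# tail -50 "$SCAN_LOG"
```

Note that because SCAN listeners can fail over between nodes, the <NodeName> part of the path must always be taken from the current srvctl output.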