Oracle OCP 1Z0-050 (Questions 131-140): Answers and Explanations

QUESTION 131:

Which two kinds of failures make the Data Recovery Advisor (DRA) generate a manual checklist? (Choose two.)

A. failures because a data file is renamed by error

B. failures due to loss of connectivity-for example, an unplugged disk cable

C. failures that require no archive logs to be applied for recovery

D. failures when no standby database is configured

Answer: A,B

Explanation: see the official Oracle documentation:

https://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmrepai.htm#BRADV89714
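For failures that DRA cannot repair automatically, such as a data file renamed by mistake or lost connectivity to a disk, the ADVISE FAILURE output lists mandatory or optional manual actions instead of (or in addition to) an automated repair option. A minimal RMAN sketch of the workflow:

RMAN> LIST FAILURE;
RMAN> ADVISE FAILURE;

REPAIR FAILURE applies only to the automated repair options; the manual actions listed by ADVISE FAILURE have to be carried out by the DBA.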

 

QUESTION 132:

The following databases are registered in the base recovery catalog: PROD1, PROD2, and PROD3. The database user CATOWNER owns the base recovery catalog. You want a new user VPC1 to have access to only the PROD1 database and create a virtual private catalog.

Given below are some of the commands required to achieve this:

1. SQL>  GRANT recovery_catalog_owner TO vpc1;
2. RMAN> CONNECT CATALOG vpc1/password@catdb;
3. RMAN> GRANT CATALOG FOR DATABASE prod1 TO vpc1;
4. RMAN> CONNECT CATALOG catowner/password@catdb;
5. RMAN> CREATE VIRTUAL CATALOG;

What is the correct sequence in which the commands have to be executed?

A. 1, 4, 3, 2, 5

B. 2, 3, 4, 5, 1

C. 4, 5, 2, 3, 1

D. 1, 4, 2, 3, 5

E. 1, 4, 5, 2, 3

Answer: A
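Spelled out, the sequence in answer A (1, 4, 3, 2, 5) reads as follows: the base catalog owner grants the RECOVERY_CATALOG_OWNER role to VPC1, connects to the base catalog, grants VPC1 access to the PROD1 metadata, and then VPC1 connects and creates the virtual private catalog:

SQL>  GRANT recovery_catalog_owner TO vpc1;
RMAN> CONNECT CATALOG catowner/password@catdb;
RMAN> GRANT CATALOG FOR DATABASE prod1 TO vpc1;
RMAN> CONNECT CATALOG vpc1/password@catdb;
RMAN> CREATE VIRTUAL CATALOG;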

 

QUESTION 133:

Exhibit #1 and Exhibit #2: (images not reproduced here)

View the Exhibits exhibit1 and exhibit2.

Both processes use PROG_1, which is already available, as the job template. The time taken by the jobs is recorded in the TEST_LOG table. When comparing the time taken to create the jobs, the process in exhibit1 takes less time than the process in exhibit2.

What is the reason for this?

A. It updates several tables in the SYSTEM tablespace instead of creating new tables.

B. It creates jobs temporarily in memory only.

C. It creates less metadata for the jobs.

D. It writes the job metadata to disk in compressed format.

Answer: C
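The exhibits are not reproduced here, but this scenario typically contrasts lightweight Scheduler jobs with regular jobs: a lightweight job persists much less metadata, so creating a large number of jobs from a template is noticeably faster. A minimal sketch of creating one lightweight job from the PROG_1 template, with a hypothetical job name:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name     => 'LW_JOB_1',        -- hypothetical job name
    program_name => 'PROG_1',          -- the job template from the question
    job_style    => 'LIGHTWEIGHT',     -- stores minimal job metadata
    enabled      => TRUE);
END;
/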

 

QUESTION 134:

Which two prerequisites are needed for performing workload capture and replay?

(Choose two.)

A. Close all sessions performing queries using database links.

B. running the database in shared server mode

C. The database on which the workload is replayed has to be a restore of the original database to a specific SCN.

D. setting up the directory to capture the workload

Answer: C,D

Explanation: see the official Oracle documentation:

https://docs.oracle.com/cd/E11882_01/server.112/e41481/dbr_intro.htm#RATUG104

You can use Database Replay to capture a workload on the production system and replay it on a test system with the exact timing, concurrency, and transaction characteristics of the original workload. This enables you to test the effects of a system change without affecting the production system.

Database Replay supports workload capture on a system running Oracle Database 10g Release 2 and newer releases. To capture a workload on a system running Oracle Database 10g Release 2, the database version must be 10.2.0.4 or higher. Workload replay is only supported on systems running Oracle Database 11g Release 1 and newer releases.

The first step in using Database Replay is to capture the production workload. Capturing a workload involves recording all requests made by external clients to Oracle Database.

When workload capture is enabled, all external client requests directed to Oracle Database are tracked and stored in binary files—called capture files—on the file system. You can specify the location where the capture files will be stored. Once workload capture begins, all external database calls are written to the capture files. The capture files contain all relevant information about the client request, such as SQL text, bind values, and transaction information. Background activities and database scheduler jobs are not captured. These capture files are platform independent and can be transported to another system.
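A minimal sketch of setting up the capture directory (prerequisite D) and starting a capture; the directory path and capture name are hypothetical:

SQL> CREATE OR REPLACE DIRECTORY capture_dir AS '/u01/app/oracle/capture';

BEGIN
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
    name     => 'capture_1',       -- hypothetical capture name
    dir      => 'CAPTURE_DIR',     -- directory object created above
    duration => 600);              -- optional: stop automatically after 600 seconds
END;
/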

https://docs.oracle.com/cd/E11882_01/server.112/e41481/dbr_consolidated.htm#RATUG4114

 

13.4.4 Running Consolidated Database Replay Using APIs

This section describes how to run Consolidated Database Replay using the DBMS_WORKLOAD_REPLAY package. For information about consolidated replay, see "Replaying Database Workloads for Consolidated Database Replay".

Before running Consolidated Database Replay, ensure that the following prerequisites are met:

  • All workload captures are preprocessed using the PROCESS_CAPTURE procedure on a system running the same database version as the replay system, as described in Chapter 10, "Preprocessing a Database Workload".
  • All capture directories are copied to the replay directory on the replay system.
  • The replay directory is set using the SET_REPLAY_DIRECTORY procedure, as described in "Setting the Replay Directory Using APIs".
  • The database is logically restored to the same application state as that of all the capture systems at the start time of all workload captures.

To run Consolidated Database Replay: 

  1. Initialize the replay data, as described in "Initializing Consolidated Database Replay Using APIs".
  2. Remap connection strings, as described in "Remapping Connection Using APIs".
  3. Remap users, as described in "Remapping Users Using APIs".

Remapping users is optional.

  4. Prepare the consolidated replay, as described in "Preparing for Consolidated Database Replay Using APIs".
  5. Set up and start the replay clients, as described in "Setting Up Replay Clients".
  6. Start the consolidated replay, as described in "Starting Consolidated Database Replay Using APIs".
  7. Generate reports and perform analysis, as described in "Reporting and Analysis for Consolidated Database Replay".
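For the simpler, non-consolidated case the same pattern maps onto the following DBMS_WORKLOAD_REPLAY calls. This is a sketch only, assuming the preprocessed capture files are in a directory object named REPLAY_DIR, that connection remapping is not needed, and that the wrc replay clients are started separately from the operating system:

SQL> EXEC DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(replay_name => 'replay_1', replay_dir => 'REPLAY_DIR');
SQL> EXEC DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY;
-- start the wrc replay clients from the operating system, then:
SQL> EXEC DBMS_WORKLOAD_REPLAY.START_REPLAY;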

 

QUESTION 135:

Which two statements are true regarding Health Monitor checks in Oracle Database 11g?

(Choose two.)

A. Health Monitor checks can be used to scan the contents of the redo log and archive logs for accessibility and corruption.

B. Health Monitor checks can be used to verify the integrity of database files and report failures if these files are inaccessible, corrupt or inconsistent.

C. Health Monitor checks can be used to verify the contents of dictionary entries for each dictionary object and fix it automatically.

D. Health Monitor checks are always initiated manually when there is some critical error.

Answer: A,B

https://docs.oracle.com/cd/E11882_01/server.112/e25494/diag.htm#ADMIN11269

 

About Health Monitor

Beginning with Release 11g, Oracle Database includes a framework called Health Monitor for running diagnostic checks on the database.

About Health Monitor Checks

Health Monitor checks (also known as checkers, health checks, or checks) examine various layers and components of the database. Health checks detect file corruptions, physical and logical block corruptions, undo and redo corruptions, data dictionary corruptions, and more. The health checks generate reports of their findings and, in many cases, recommendations for resolving problems. Health checks can be run in two ways:

  • Reactive—The fault diagnosability infrastructure can run health checks automatically in response to a critical error.
  • Manual—As a DBA, you can manually run health checks using either the DBMS_HM PL/SQL package or the Enterprise Manager interface. You can run checkers on a regular basis if desired, or Oracle Support may ask you to run a checker while working with you on a service request.

Health Monitor checks store findings, recommendations, and other information in the Automatic Diagnostic Repository (ADR).

Health checks can run in two modes:

  • DB-online mode means the check can be run while the database is open (that is, in OPEN mode or MOUNT mode).
  • DB-offline mode means the check can be run when the instance is available but the database itself is closed (that is, in NOMOUNT mode).

All the health checks can be run in DB-online mode. Only the Redo Integrity Check and the DB Structure Integrity Check can be used in DB-offline mode.
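A minimal sketch of running one of these checks manually through DBMS_HM, as described above (the run name is arbitrary):

BEGIN
  DBMS_HM.RUN_CHECK(
    check_name => 'DB Structure Integrity Check',
    run_name   => 'manual_struct_check_1');   -- arbitrary run name
END;
/

SET LONG 100000
SELECT DBMS_HM.GET_RUN_REPORT('manual_struct_check_1') FROM dual;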

Types of Health Checks

Health monitor runs the following checks:

  • DB Structure Integrity Check—This check verifies the integrity of database files and reports failures if these files are inaccessible, corrupt or inconsistent. If the database is in mount or open mode, this check examines the log files and data files listed in the control file. If the database is in NOMOUNT mode, only the control file is checked.
  • Data Block Integrity Check—This check detects disk image block corruptions such as checksum failures, head/tail mismatch, and logical inconsistencies within the block. Most corruptions can be repaired using Block Media Recovery. Corrupted block information is also captured in the V$DATABASE_BLOCK_CORRUPTION view. This check does not detect inter-block or inter-segment corruption.
  • Redo Integrity Check—This check scans the contents of the redo log for accessibility and corruption, as well as the archive logs, if available. The Redo Integrity Check reports failures such as archive log or redo corruption.
  • Undo Segment Integrity Check—This check finds logical undo corruptions. After locating an undo corruption, this check uses PMON and SMON to try to recover the corrupted transaction. If this recovery fails, then Health Monitor stores information about the corruption in V$CORRUPT_XID_LIST. Most undo corruptions can be resolved by forcing a commit.
  • Transaction Integrity Check—This check is identical to the Undo Segment Integrity Check except that it checks only one specific transaction.
  • Dictionary Integrity Check—This check examines the integrity of core dictionary objects, such as tab$ and col$. It performs the following operations:
    • Verifies the contents of dictionary entries for each dictionary object.
    • Performs a cross-row level check, which verifies that logical constraints on rows in the dictionary are enforced.
    • Performs an object relationship check, which verifies that parent-child relationships between dictionary objects are enforced.

The Dictionary Integrity Check operates on the following dictionary objects:

tab$, clu$, fet$, uet$, seg$, undo$, ts$, file$, obj$, ind$, icol$, col$, user$, con$, cdef$, ccol$, bootstrap$, objauth$, ugroup$, tsq$, syn$, view$, typed_view$, superobj$, seq$, lob$, coltype$, subcoltype$, ntab$, refcon$, opqtype$, dependency$, access$, viewcon$, icoldep$, dual$, sysauth$, objpriv$, defrole$, and ecol$.

 

QUESTION 136:

Examine the section of the Health Check report given below:

DBMS_HM.GET_RUN_REPORT('HM_RUN_1061')
------------------------
Run Name                    : HM_RUN_1061
Run Id                      : 1061
Check Name                  : Data Block Integrity Check
Mode                        : REACTIVE
Status                      : COMPLETED
Start Time                  : 2007-05-12 22:11:02.032292 -07:00
End Time                    : 2007-05-12 22:11:20.835135 -07:00
Error Encountered           : 0
Source Incident Id          : 7418
Number of Incidents Created : 0

Which two statements are true regarding the Health Check report? (Choose two.)

A. Health Check was performed to verify the integrity of database files and report failures.

B. Health Check was performed to check the disk image block corruptions.

C. Health Check was performed to check interblock and intersegment corruption.

D. Health Check was performed manually.

E. Health Check was performed by the Health Monitor automatically in response to a critical error.

Answer: B,E

The report shows Check Name: Data Block Integrity Check (which looks for disk image block corruptions) and Mode: REACTIVE with a non-zero Source Incident Id, so the check was run automatically by Health Monitor in response to a critical error rather than manually.

https://docs.oracle.com/cd/E11882_01/server.112/e25494/diag.htm#ADMIN11276
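The run mode can also be confirmed directly from V$HM_RUN, which records REACTIVE for runs started by the fault diagnosability infrastructure and MANUAL for runs started by the DBA (a sketch):

SQL> SELECT run_id, name, check_name, run_mode, status
     FROM   v$hm_run
     WHERE  name = 'HM_RUN_1061';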

 

QUESTION 137:

Which two statements are true regarding hot patching? (Choose two.)

A. It can detect conflicts between two online patches.

B. It requires relinking of the Oracle binary.

C. It is available for installing all patches on all platforms.

D. It works only in a single database instance environment.

E. It does not require database instance shutdown.

Answer: A,E

https://docs.oracle.com/cd/E11882_01/em.112/e12255/oui7_opatch.htm#OUICG354

https://docs.oracle.com/cd/E11882_01/em.112/e12255/oui7_opatch.htm#OUICG353

Online Patching

Regular patches typically contain .o (object) files and/or .a (archive) libraries, and therefore require a relink of the RDBMS binary. Online patches, however, contain .so files, which are dynamic/shared libraries, and do not require a relink of the RDBMS binary. Consequently, since a relink is not needed, you can apply or roll back online patches while the RDBMS instance is running. This simplifies administration, because no downtime is needed, and also results in a much quicker turnaround time for installing or de-installing Online Patches.

A regular RDBMS patch can require many minutes to install, since it requires instance shutdown, a relink, and instance startup. On the other hand, you can install an online patch in just a few seconds.

Online patches are only applicable for Oracle RDBMS and not any other products. Online patches are currently supported on the following Windows and UNIX platforms for version 11.2.0.1.0 and later:

  • Linux x86
  • Linux x86_64
  • HP-UX Itanium
  • Solaris SPARC 64-bit
  • Solaris AMD 64-bit
  • AIX (AIX 6.1 and later)

Patch Conflict Detection and Resolution

OPatch detects and reports any conflicts encountered when applying an Interim patch with a previously applied patch. The patch application fails in case of conflicts. You can use the -force option of OPatch to override this failure. If you use this option, the installer first rolls back any conflicting patches and then proceeds with the installation of the desired interim patch.

You may encounter a bug conflict and might want to remove the conflicting patch. This process is known as patch rollback. During patch installation, OPatch saves copies of all the files the new patch replaced before the new versions of these files are loaded and stores them in $ORACLE_HOME/.patch_storage. These saved files are called Rollback files and are the key to making patch rollback possible. When you roll back a patch, these Rollback files are restored to the system. You should only override the default behavior by using the -force flag if you completely understand the patch Rollback process. To roll back a patch, execute the following command:

$ OPatch/opatch rollback -id <Patch_ID>

 

QUESTION 138:

Exhibit: (image not reproduced here)

View the Exhibit for some of the current parameter settings.

A user logs in to the HR schema and issues the following commands:

SQL> CREATE TABLE emp
     (empno NUMBER(3),
      ename VARCHAR2(20),
      sal   NUMBER(8,2));

SQL> INSERT INTO emp(empno,ename) VALUES(1,'JAMES');

At this moment, a second user also logs in to the HR schema and issues the following command:

SQL> ALTER TABLE emp MODIFY sal NUMBER(10,2);

What happens in the above scenario?

A. The second user's session immediately produces the resource busy error.

B. The second user's command executes successfully.

C. The second user's session waits for a time period before producing the resource busy error.

D. A deadlock is created.

Answer: C

https://docs.oracle.com/cd/E11882_01/server.112/e40402/initparams082.htm#REFRN10051

A DML lock is a lock obtained on a table that is undergoing a DML operation (insert, update, delete). DML_LOCKS specifies the maximum number of DML locks—one for each table modified in a transaction. The value should equal the grand total of locks on tables currently referenced by all users. For example, if three users are modifying data in one table, then three entries would be required. If three users are modifying data in two tables, then six entries would be required.

The default value assumes an average of four tables referenced for each transaction. For some systems, this value may not be enough.

Enqueues are shared memory structures that serialize access to database resources. If you set the value of DML_LOCKS to 0, enqueues are disabled and performance is slightly increased. However, you should be aware of the following restrictions when you set DML_LOCKS to 0:

  • You cannot use DROP TABLE, CREATE INDEX statements
  • You cannot use explicit lock statements such as LOCK TABLE IN EXCLUSIVE MODE
  • Enterprise Manager cannot run on any instances for which DML_LOCKS is set to 0

Oracle holds more locks during parallel DML than during serial execution. Therefore, if your database supports a lot of parallel DML, you may need to increase the value of this parameter.
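The DML_LOCKS text above explains table-level DML enqueues, but the waiting behavior in answer C is governed by the 11g DDL_LOCK_TIMEOUT parameter, which the exhibit (not reproduced here) presumably shows set to a non-zero value: a DDL statement waits up to that many seconds for the required exclusive lock before raising ORA-00054. A minimal sketch of the behavior, with an illustrative timeout value:

SQL> ALTER SESSION SET ddl_lock_timeout = 30;
SQL> ALTER TABLE emp MODIFY sal NUMBER(10,2);

The ALTER TABLE in the second user's session waits up to 30 seconds for the lock held by the first user's uncommitted INSERT; if the lock is still held when the timeout expires, the session receives "ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired".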

 

QUESTION 139:

You want the Automatic SQL Tuning process to stop accepting and implementing the recommended SQL profiles automatically. Which action would you perform to achieve this?

A. Edit the automatic maintenance window group configuration.

B. Use the DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETERS procedure to set ACCEPT_SQL_PROFILES to FALSE.

C. Set the CURSOR_SHARING parameter to EXACT for the database instance.

D. Set the SQLTUNE_CATEGORY parameter to DEFAULT for the database instance.

Answer: B

https://docs.oracle.com/cd/E11882_01/server.112/e41573/sql_tune.htm#PFGRF94851

Explanation: for the Automatic SQL Tuning task to accept and implement recommended SQL profiles automatically, the ACCEPT_SQL_PROFILES task parameter must be set to TRUE; setting it to FALSE stops automatic acceptance.

If a SQL profile is recommended, the database tests the new profile by executing the SQL statement both with and without the profile. If the performance improves by at least a factor of three, then the database accepts the SQL profile, but only if the ACCEPT_SQL_PROFILES task parameter is set to TRUE. Otherwise, the automatic SQL tuning reports merely report the recommendation to create a SQL profile.

To use the DBMS_AUTO_SQLTUNE package, you must have the DBA role, or have EXECUTE privileges granted by an administrator. The only exception is the EXECUTE_AUTO_TUNING_TASK procedure, which can only be run by SYS.

To configure automatic SQL tuning: 

  1. Start SQL*Plus, and connect to the database with DBA privileges (or connect as SYS if you plan to run EXECUTE_AUTO_TUNING_TASK).
  2. Run the DBMS_AUTO_SQLTUNE.SET_AUTO_TUNING_TASK_PARAMETER procedure.

The following example configures the automatic SQL tuning task to automatically accept SQL profiles recommended by SQL Tuning Advisor:

BEGIN
  DBMS_AUTO_SQLTUNE.SET_AUTO_TUNING_TASK_PARAMETER(
    parameter => 'ACCEPT_SQL_PROFILES', value => 'TRUE');
END;
/
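Conversely, to stop the task from accepting SQL profiles automatically (what the question asks for), set the same parameter to FALSE; in Oracle Database 11g Release 1 the corresponding call, as referenced in answer B, is DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER against the SYS_AUTO_SQL_TUNING_TASK task. A minimal sketch:

BEGIN
  DBMS_AUTO_SQLTUNE.SET_AUTO_TUNING_TASK_PARAMETER(
    parameter => 'ACCEPT_SQL_PROFILES', value => 'FALSE');
END;
/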

QUESTION 140:

In which situations will the ASM metadata backup help you recover the ASM disk in a disk group? (Choose all that apply.)

A. when one or more file directory paths are accidentally deleted from an ASM disk group

B. when one of the disks in a disk group is accidentally unplugged

C. when the data file on an ASM disk group gets corrupted

D. when one or more disks in an ASM disk group are lost

Answer: A,D

https://docs.oracle.com/cd/B28359_01/server.111/b31107/asmcon.htm#OSTMG94050

An ASM instance is built on the same technology as an Oracle Database instance. An ASM instance has a System Global Area (SGA) and background processes that are similar to those of Oracle Database. However, because ASM performs fewer tasks than a database, an ASM SGA is much smaller than a database SGA. In addition, ASM has a minimal performance effect on a server. ASM instances mount disk groups to make ASM files available to database instances; ASM instances do not mount databases.

ASM metadata is the information that ASM uses to control a disk group and the metadata resides within the disk group. ASM metadata includes the following information:

  • The disks that belong to a disk group
  • The amount of space that is available in a disk group
  • The filenames of the files in a disk group
  • The location of disk group datafile data extents
  • A redo log that records information about atomically changing data blocks

ASM and database instances require shared access to the disks in a disk group. ASM instances manage the metadata of the disk group and provide file layout information to the database instances.
