SGA

Overview of the System Global Area

 

The SGA is a read/write memory area that, along with the Oracle background processes, makes up a database instance. All server processes that execute on behalf of users can read information in the instance SGA. Several processes write to the SGA during database operation.

 

Note:

The server and background processes do not reside within the SGA, but exist in a separate memory space.

 

 

1. Database Buffer Cache

 

The database buffer cache, also called the buffer cache, is the memory area that stores copies of data blocks read from data files. A buffer is a main memory address in which the buffer manager temporarily caches a currently or recently used data block. All users concurrently connected to a database instance share access to the buffer cache.

 

Oracle Database uses the buffer cache to achieve the following goals:

 

    Optimize physical I/O

 

    The database updates data blocks in the cache and stores metadata about the changes in the redo log buffer. After a COMMIT, the database writes the redo buffers to disk but does not immediately write data blocks to disk. Instead, database writer (DBWn) performs lazy writes in the background.

 

    Keep frequently accessed blocks in the buffer cache and write infrequently accessed blocks to disk

 

    When Database Smart Flash Cache (flash cache) is enabled, part of the buffer cache can reside in the flash cache. This buffer cache extension is stored on a flash disk device, which is a solid state storage device that uses flash memory. The database can improve performance by caching buffers in flash memory instead of reading from magnetic disk.

 

    Note:

    Database Smart Flash Cache is available only on Solaris and Oracle Enterprise Linux.
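
    Where the platform supports it, the flash cache is enabled with the DB_FLASH_CACHE_FILE and DB_FLASH_CACHE_SIZE initialization parameters. The following is a minimal sketch; the device path and size are illustrative, not recommendations:

        -- Point the flash cache at a flash device and size it (values are examples).
        ALTER SYSTEM SET db_flash_cache_file = '/dev/sdf1' SCOPE=SPFILE;
        ALTER SYSTEM SET db_flash_cache_size = 32G SCOPE=SPFILE;
        -- Restart the instance for the flash cache settings to take effect.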

 

1.1 Buffer States

 

The database uses internal algorithms to manage buffers in the cache. A buffer can be in any of the following mutually exclusive states:

 

    Unused

    The buffer is available for use because it has never been used or is currently unused. This type of buffer is the easiest for the database to use.

 

    Clean

    This buffer was used earlier and now contains a read-consistent version of a block as of a point in time. The block contains data but is "clean" so it does not need to be checkpointed. The database can pin the block and reuse it.

 

    Dirty

    The buffer contains modified data that has not yet been written to disk. The database must checkpoint the block before reusing it.

 

Every buffer has an access mode: pinned or free (unpinned). A buffer is "pinned" in the cache so that it does not age out of memory while a user session accesses it. Multiple sessions cannot modify a pinned buffer at the same time.

 

The database uses a sophisticated algorithm to make buffer access efficient. Pointers to dirty and nondirty buffers exist on the same least recently used (LRU) list, which has a hot end and cold end. A cold buffer is one that has not been recently used. A hot buffer is frequently accessed and has been recently used.

 

Note:

Conceptually, there is only one LRU, but for concurrency the database actually uses several LRUs.

 

 

1.2 Buffer Modes

 

When a client requests data, Oracle Database retrieves buffers from the database buffer cache in either of the following modes:

 

    Current mode

    A current mode get, also called a db block get, is a retrieval of a block as it currently appears in the buffer cache. For example, if an uncommitted transaction has updated two rows in a block, then a current mode get retrieves the block with these uncommitted rows. The database uses db block gets most frequently during modification statements, which must update only the current version of the block.

 

    Consistent mode

    A consistent read get is a retrieval of a read-consistent version of a block. This retrieval may use undo data. For example, if an uncommitted transaction has updated two rows in a block, and if a query in a separate session requests the block, then the database uses undo data to create a read-consistent version of this block (called a consistent read clone) that does not include the uncommitted updates. Typically, a query retrieves blocks in consistent mode.
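
The two modes show up as separate statistics in the dynamic performance views. As a quick illustration, the following query against the standard V$MYSTAT and V$STATNAME views reports how many current and consistent gets the current session has performed:

    SELECT sn.name, ms.value
    FROM   v$mystat ms
           JOIN v$statname sn ON sn.statistic# = ms.statistic#
    WHERE  sn.name IN ('db block gets', 'consistent gets');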

 

 

1.3 Buffer I/O

 

A logical I/O, also known as a buffer I/O, refers to reads and writes of buffers in the buffer cache. When a requested buffer is not found in memory, the database performs a physical I/O to copy the buffer from either the flash cache or disk into memory, and then a logical I/O to read the cached buffer.

Buffer Writes

The database writer (DBWn) process periodically writes cold, dirty buffers to disk. DBWn writes buffers in the following circumstances:

 

    A server process cannot find clean buffers for reading new blocks into the database buffer cache.

 

    As buffers are dirtied, the number of free buffers decreases. If the number drops below an internal threshold, and if clean buffers are required, then server processes signal DBWn to write.

 

    The database uses the LRU to determine which dirty buffers to write. When dirty buffers reach the cold end of the LRU, the database moves them off the LRU to a write queue. DBWn writes buffers in the queue to disk, using multiblock writes if possible. This mechanism prevents the end of the LRU from becoming clogged with dirty buffers and allows clean buffers to be found for reuse.

 

    The database must advance the checkpoint, which is the position in the redo thread from which instance recovery must begin.

 

 

Buffer Reads

 

When the number of clean or unused buffers is low, the database must remove buffers from the buffer cache. The algorithm depends on whether the flash cache is enabled:

 

    Flash cache disabled

    The database reuses each clean buffer as needed, overwriting it. If the overwritten buffer is needed later, then the database must read it from magnetic disk.

 

    Flash cache enabled

    DBWn can write the body of a clean buffer to the flash cache, enabling reuse of its in-memory buffer. The database keeps the buffer header in an LRU list in main memory to track the state and location of the buffer body in the flash cache. If this buffer is needed later, then the database can read it from the flash cache instead of from magnetic disk.

 

When a client process requests a buffer, the server process searches the buffer cache for the buffer. A cache hit occurs if the database finds the buffer in memory. The search order is as follows:

 

    The server process searches for the whole buffer in the buffer cache.

 

    If the process finds the whole buffer, then the database performs a logical read of this buffer.

 

    The server process searches for the buffer header in the flash cache LRU list.

 

    If the process finds the buffer header, then the database performs an optimized physical read of the buffer body from the flash cache into the in-memory cache.

 

    If the process does not find the buffer in memory (a cache miss), then the server process performs the following steps:

 

        Copies the block from a data file into memory (a physical read)

 

        Performs a logical read of the buffer that was read into memory

 

Figure 14-6 illustrates the buffer search order. The extended buffer cache includes both the in-memory buffer cache, which contains whole buffers, and the flash cache, which contains buffer bodies. In the figure, the database searches for a buffer in the buffer cache and, not finding the buffer, reads it into memory from magnetic disk.

 

Figure 14-6 Buffer Search

 

In general, accessing data through a cache hit is faster than through a cache miss. The buffer cache hit ratio measures how often the database found a requested block in the buffer cache without needing to read it from disk.
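
One common way to approximate the hit ratio is from the system-wide statistics in V$SYSSTAT, as in the sketch below; treat the result as a rough indicator rather than a tuning target:

    -- Hit ratio = 1 - physical reads / logical reads (cache statistics).
    SELECT ROUND(1 - phys.value / (db.value + cons.value), 4) AS buffer_cache_hit_ratio
    FROM   (SELECT value FROM v$sysstat WHERE name = 'physical reads cache') phys,
           (SELECT value FROM v$sysstat WHERE name = 'db block gets from cache') db,
           (SELECT value FROM v$sysstat WHERE name = 'consistent gets from cache') cons;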

 

The database can perform physical reads from either a data file or a temp file. Reads from a data file are followed by logical I/Os. Reads from a temp file occur when insufficient memory forces the database to write data to a temporary table and read it back later. These physical reads bypass the buffer cache and do not incur a logical I/O.

 

 

The database measures the frequency of access of buffers on the LRU list using a touch count. This mechanism enables the database to increment a counter when a buffer is pinned instead of constantly shuffling buffers on the LRU list.

 

Note:

The database does not physically move blocks in memory. The movement is the change in location of a pointer on a list.

 

When a buffer is pinned, the database determines when its touch count was last incremented. If the count was incremented more than three seconds ago, then the count is incremented; otherwise, the count stays the same. The three-second rule prevents a burst of pins on a buffer from counting as many touches. For example, a session may insert several rows in a data block, but the database considers these inserts as one touch.

 

If a buffer is on the cold end of the LRU, but its touch count is high, then the buffer moves to the hot end. If the touch count is low, then the buffer ages out of the cache.

 

Buffers and Full Table Scans

 

When buffers must be read from disk, the database inserts the buffers into the middle of the LRU list. In this way, hot blocks can remain in the cache so that they do not need to be read from disk again.

 

A problem is posed by a full table scan, which sequentially reads all rows under the table high water mark (see "Segment Space and the High Water Mark"). Suppose that the total size of the blocks in a table segment is greater than the size of the buffer cache. A full scan of this table could clean out the buffer cache, preventing the database from maintaining a cache of frequently accessed blocks.

 

Blocks read into the database cache as the result of a full scan of a large table are treated differently from other types of reads. The blocks are immediately available for reuse to prevent the scan from effectively cleaning out the buffer cache.

 

In the rare case where the default behavior is not desired, you can change the CACHE attribute of the table. In this case, the database does not force or pin the blocks in the buffer cache, but ages them out of the cache in the same way as any other block. Use care when exercising this option because a full scan of a large table may clean most of the other blocks out of the cache.
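
The attribute is changed with a simple DDL statement, sketched below with a hypothetical table name:

    -- Non-default: let blocks read by full scans of this table age out of the
    -- cache like any other block instead of being made immediately reusable.
    ALTER TABLE sales CACHE;

    -- Restore the default treatment of full-scan blocks.
    ALTER TABLE sales NOCACHE;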

 

1.4 Buffer Pools

 

A buffer pool is a collection of buffers. The database buffer cache is divided into one or more buffer pools.

 

You can manually configure separate buffer pools that either keep data in the buffer cache or make the buffers available for new data immediately after using the data blocks. You can then assign specific schema objects to the appropriate buffer pool to control how blocks age out of the cache.

 

The possible buffer pools are as follows:

 

    Default pool

    This pool is the location where blocks are normally cached. Unless you manually configure separate pools, the default pool is the only buffer pool.

 

    Keep pool

    This pool is intended for blocks that were accessed frequently, but which aged out of the default pool because of lack of space. The goal of the keep buffer pool is to retain objects in memory, thus avoiding I/O operations.

 

    Recycle pool

    This pool is intended for blocks that are used infrequently. A recycle pool prevents objects from consuming unnecessary space in the cache.
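
The following sketch shows one way the nondefault pools might be sized and used; the sizes, schema, and table names are illustrative only:

    -- Carve out the optional pools (dynamic when an SPFILE is in use).
    ALTER SYSTEM SET db_keep_cache_size    = 256M SCOPE=BOTH;
    ALTER SYSTEM SET db_recycle_cache_size = 64M  SCOPE=BOTH;

    -- Direct specific segments to a pool through the storage clause.
    ALTER TABLE hr.lookup_codes STORAGE (BUFFER_POOL KEEP);
    ALTER TABLE hr.audit_trail  STORAGE (BUFFER_POOL RECYCLE);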

 

A database has a standard block size (see "Database Block Size"). You can create a tablespace with a block size that differs from the standard size. Each nondefault block size has its own pool. Oracle Database manages the blocks in these pools in the same way as in the default pool.
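
A cache for a nondefault block size must be configured before a tablespace with that block size can be created. A minimal sketch, with illustrative sizes and file path:

    -- Reserve a cache for 16 KB blocks, then create a tablespace that uses them.
    ALTER SYSTEM SET db_16k_cache_size = 128M SCOPE=BOTH;

    CREATE TABLESPACE ts_16k
      DATAFILE '/u01/app/oracle/oradata/orcl/ts_16k01.dbf' SIZE 1G
      BLOCKSIZE 16K;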

 

Figure 14-7 shows the structure of the buffer cache when multiple pools are used. The cache contains default, keep, and recycle pools. The default block size is 8 KB. The cache contains separate pools for tablespaces that use the nonstandard block sizes of 2 KB, 4 KB, and 16 KB.

 

Figure 14-7 Database Buffer Cache


2. Redo Log Buffer

 

The redo log buffer is a circular buffer in the SGA that stores redo entries describing changes made to the database. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by DML or DDL operations. Database recovery applies redo entries to data files to reconstruct lost changes.

 

Oracle Database processes copy redo entries from the user memory space to the redo log buffer in the SGA. The redo entries take up continuous, sequential space in the buffer. The background process log writer (LGWR) writes the redo log buffer to the active online redo log group on disk. Figure 14-8 shows this redo buffer activity.

 

LGWR writes redo sequentially to disk while DBWn performs scattered writes of data blocks to disk. Scattered writes tend to be much slower than sequential writes. Because LGWR enables users to avoid waiting for DBWn to complete its slow writes, the database delivers better performance.

 

The LOG_BUFFER initialization parameter specifies the amount of memory that Oracle Database uses when buffering redo entries. Unlike other SGA components, the redo log buffer and fixed SGA buffer do not divide memory into granules.
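
A sketch of inspecting and changing the parameter follows; because LOG_BUFFER is static, a new value takes effect only after an instance restart, and the size shown is arbitrary:

    SELECT value FROM v$parameter WHERE name = 'log_buffer';

    ALTER SYSTEM SET log_buffer = 64M SCOPE=SPFILE;
    -- Restart the instance for the new redo log buffer size to take effect.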

 

 

3. Shared Pool

 

The shared pool caches various types of program data. For example, the shared pool stores parsed SQL, PL/SQL code, system parameters, and data dictionary information. The shared pool is involved in almost every operation that occurs in the database. For example, if a user executes a SQL statement, then Oracle Database accesses the shared pool.

 

The shared pool is divided into several subcomponents, the most important of which are shown in Figure 14-9.

 

This section includes the following topics:

 

    Library Cache

 

    Data Dictionary Cache

 

    Server Result Cache

 

    Reserved Pool

 

3.1 Library Cache

The library cache is a shared pool memory structure that stores executable SQL and PL/SQL code. This cache contains the shared SQL and PL/SQL areas and control structures such as locks and library cache handles. In a shared server architecture, the library cache also contains private SQL areas.

 

When a SQL statement is executed, the database attempts to reuse previously executed code. If a parsed representation of a SQL statement exists in the library cache and can be shared, then the database reuses the code, known as a soft parse or a library cache hit. Otherwise, the database must build a new executable version of the application code, known as a hard parse or a library cache miss.

The database represents each SQL statement that it runs in the following SQL areas:

 

    Shared SQL area

 

    The database uses the shared SQL area to process the first occurrence of a SQL statement. This area is accessible to all users and contains the statement parse tree and execution plan. Only one shared SQL area exists for a unique statement.

 

    Private SQL area

 

    Each session issuing a SQL statement has a private SQL area in its PGA (see "Private SQL Area"). Each user that submits the same statement has a private SQL area pointing to the same shared SQL area. Thus, many private SQL areas in separate PGAs can be associated with the same shared SQL area.

 

The database automatically determines when applications submit similar SQL statements. The database considers both SQL statements issued directly by users and applications and recursive SQL statements issued internally by other statements.

 

The database performs the following steps:

 

    Checks the shared pool to see if a shared SQL area exists for a syntactically and semantically identical statement:

 

        If an identical statement exists, then the database uses the existing shared SQL area for subsequent executions of the statement, thereby reducing memory consumption.

 

        If an identical statement does not exist, then the database allocates a new shared SQL area in the shared pool. A statement with the same syntax but different semantics uses a child cursor.

 

    In either case, the private SQL area for the user points to the shared SQL area that contains the statement and execution plan.

 

The database also removes a shared SQL area from the shared pool in the following circumstances:

 

    If statistics are gathered for a table, table cluster, or index, then by default the database gradually removes all shared SQL areas that contain statements referencing the analyzed object after a period of time. The next time a removed statement is run, the database parses it in a new shared SQL area to reflect the new statistics for the schema object.

 

    If a schema object is referenced in a SQL statement, and if this object is later modified by a DDL statement, then the database invalidates the shared SQL area. The optimizer must reparse the statement the next time it is run.

 

    If you change the global database name, then the database removes all information from the shared pool.

 

You can use the ALTER SYSTEM FLUSH SHARED_POOL statement to manually remove all information in the shared pool to assess the performance that can be expected after an instance restart.
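
For example, the following sketch flushes the shared pool and then samples V$SQLAREA, which shows one row per shared SQL area; flushing is disruptive, so reserve it for test systems:

    ALTER SYSTEM FLUSH SHARED_POOL;

    -- Cursors repopulate as statements are parsed again.
    SELECT sql_id, executions, parse_calls
    FROM   v$sqlarea
    WHERE  ROWNUM <= 10;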

 

See Also:

    Oracle Database Reference for information about the V$SQL and V$SQLAREA dynamic views

 

3.2 Data Dictionary Cache

The data dictionary is a collection of database tables and views containing reference information about the database, its structures, and its users. Oracle Database accesses the data dictionary frequently during SQL statement parsing.

 

The data dictionary is accessed so often by Oracle Database that the following special memory locations are designated to hold dictionary data:

 

    Data dictionary cache

 

    This cache holds information about database objects. The cache is also known as the row cache because it holds data as rows instead of buffers.

 

    Library cache

 

All server processes share these caches for access to data dictionary information.

 

 

3.3 Server Result Cache

 

Unlike the buffer pools, the server result cache holds result sets and not data blocks. The server result cache contains the SQL query result cache and PL/SQL function result cache, which share the same infrastructure.

 

A client result cache differs from the server result cache. A client cache is configured at the application level and is located in client memory, not in database memory.

 

3.3.1 SQL Query Result Cache

The database can store the results of queries and query fragments in the SQL query result cache, using the cached results for future queries and query fragments. Most applications benefit from this performance improvement.

 

For example, suppose an application runs the same SELECT statement repeatedly. If the results are cached, then the database returns them immediately. In this way, the database avoids the expensive operation of rereading blocks and recomputing results. The database automatically invalidates a cached result whenever a transaction modifies the data or metadata of database objects used to construct that cached result.

 

Users can annotate a query or query fragment with a RESULT_CACHE hint to indicate that the database should store results in the SQL query result cache. The RESULT_CACHE_MODE initialization parameter determines whether the SQL query result cache is used for all queries (when possible) or only for annotated queries.
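
A short sketch of both mechanisms follows; the table and the MANUAL setting are illustrative (MANUAL caches only hinted queries, FORCE caches all eligible queries):

    ALTER SYSTEM SET result_cache_mode = MANUAL SCOPE=BOTH;

    SELECT /*+ RESULT_CACHE */ department_id, COUNT(*) AS emp_count
    FROM   employees
    GROUP  BY department_id;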

See Also:

 

    Oracle Database Reference to learn more about the RESULT_CACHE_MODE initialization parameter

 

    Oracle Database SQL Language Reference to learn about the RESULT_CACHE hint

 

3.3.2 PL/SQL Function Result Cache

The PL/SQL function result cache stores function result sets. Without caching, 1000 calls of a function at 1 second per call would take 1000 seconds. With caching, 1000 function calls with the same inputs could take 1 second total. Good candidates for result caching are frequently invoked functions that depend on relatively static data.

 

PL/SQL function code can include a request to cache its results. Upon invocation of this function, the system checks the cache. If the cache contains the result from a previous function call with the same parameter values, then the system returns the cached result to the invoker and does not reexecute the function body. If the cache does not contain the result, then the system executes the function body and adds the result (for these parameter values) to the cache before returning control to the invoker.
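
A minimal sketch of a result-cached function follows; the function, table, and column names are hypothetical:

    CREATE OR REPLACE FUNCTION get_country_name (p_country_id VARCHAR2)
      RETURN VARCHAR2
      RESULT_CACHE   -- cache one result per distinct p_country_id
    IS
      v_name VARCHAR2(60);
    BEGIN
      SELECT country_name INTO v_name
      FROM   countries
      WHERE  country_id = p_country_id;
      RETURN v_name;
    END get_country_name;
    /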

 

Note:

You can specify the database objects that are used to compute a cached result so that if any of them are updated, the cached result becomes invalid and must be recomputed.

 

The cache can accumulate many results—one result for every unique combination of parameter values with which each result-cached function was invoked. If the database needs more memory, then it ages out one or more cached results.

 

3.4 Reserved Pool

The reserved pool is a memory area in the shared pool that Oracle Database can use to allocate large contiguous chunks of memory.

 

Allocation of memory from the shared pool is performed in chunks. Chunking allows large objects (over 5 KB) to be loaded into the cache without requiring a single contiguous area. In this way, the database reduces the possibility of running out of contiguous memory because of fragmentation.

 

Infrequently, Java, PL/SQL, or SQL cursors may make allocations out of the shared pool that are larger than 5 KB. To allow these allocations to occur most efficiently, the database segregates a small amount of the shared pool for the reserved pool.
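
The size of the reserved pool is controlled by the SHARED_POOL_RESERVED_SIZE initialization parameter, which defaults to 5% of the shared pool. A sketch with an illustrative value:

    SELECT value FROM v$parameter WHERE name = 'shared_pool_reserved_size';

    ALTER SYSTEM SET shared_pool_reserved_size = 64M SCOPE=SPFILE;
    -- Static parameter: the change takes effect at the next instance restart.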

 

 

4. Large Pool

 

The large pool is an optional memory area intended for memory allocations that are larger than is appropriate for the shared pool. The large pool can provide large memory allocations for the following:

 

    UGA for the shared server and the Oracle XA interface (used where transactions interact with multiple databases)

 

    Message buffers used in the parallel execution of statements

 

    Buffers for Recovery Manager (RMAN) I/O slaves

 

By allocating session memory for shared server, Oracle XA, or parallel query buffers from the large pool, the database can devote the shared pool primarily to caching shared SQL and avoid the performance overhead of shrinking the shared SQL cache. By allocating memory in large buffers for RMAN operations, I/O server processes, and parallel buffers, the large pool can satisfy large memory requests better than the shared pool.
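
The large pool is sized with the LARGE_POOL_SIZE initialization parameter; with automatic SGA management the database sizes it for you, so the explicit value below is purely illustrative:

    ALTER SYSTEM SET large_pool_size = 256M SCOPE=BOTH;

    SELECT * FROM v$sgainfo WHERE name = 'Large Pool Size';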

 

Figure 14-11 is a graphical depiction of the large pool.

Figure 14-11 Large Pool

 

The large pool is different from reserved space in the shared pool, which uses the same LRU list as other memory allocated from the shared pool. The large pool does not have an LRU list. Pieces of memory are allocated and cannot be freed until they are no longer in use; as soon as a piece of memory is freed, other processes can use it.

 

 

5. Java Pool

 

The Java pool is an area of memory that stores all session-specific Java code and data within the Java Virtual Machine (JVM). This memory includes Java objects that are migrated to the Java session space at end-of-call.

 

For dedicated server connections, the Java pool includes the shared part of each Java class, including methods and read-only memory such as code vectors, but not the per-session Java state of each session. For shared server, the pool includes the shared part of each class and some UGA used for the state of each session. Each UGA grows and shrinks as necessary, but the total UGA size must fit in the Java pool space.

 

The Java Pool Advisor statistics provide information about library cache memory used for Java and predict how changes in the size of the Java pool can affect the parse rate. The Java Pool Advisor is internally turned on when STATISTICS_LEVEL is set to TYPICAL or higher. These statistics reset when the advisor is turned off.
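
A sketch of checking the relevant settings and advisor output; the views and parameters are standard, and the output depends on the workload:

    SELECT name, value
    FROM   v$parameter
    WHERE  name IN ('statistics_level', 'java_pool_size');

    SELECT * FROM v$java_pool_advice;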

 

 

6. Streams Pool

 

The Streams pool stores buffered queue messages and provides memory for Oracle Streams capture processes and apply processes. The Streams pool is used exclusively by Oracle Streams.

 

Unless you specifically configure it, the size of the Streams pool starts at zero. The pool size grows dynamically as required by Oracle Streams.
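
A sketch of giving the pool an explicit minimum size; the value is illustrative, and with automatic shared memory management the database adjusts the pool as needed:

    ALTER SYSTEM SET streams_pool_size = 128M SCOPE=BOTH;

    SELECT * FROM v$sgainfo WHERE name = 'Streams Pool Size';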

 

7. Fixed SGA

 

The fixed SGA is an internal housekeeping area. For example, the fixed SGA contains:

 

    General information about the state of the database and the instance, which the background processes need to access

 

    Information communicated between processes, such as information about locks (see "Overview of Automatic Locks")

 

The size of the fixed SGA is set by Oracle Database and cannot be altered manually. The fixed SGA size can change from release to release.
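
Although you cannot size the fixed SGA, you can see it reported alongside the other SGA components, for example:

    -- Typical rows: Fixed Size, Variable Size, Database Buffers, Redo Buffers.
    SELECT * FROM v$sga;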

 

Overview of Software Code Areas

 

Software code areas are portions of memory that store code that is being run or can be run. Oracle Database code is stored in a software area that is typically more exclusive and protected than the location of user programs.

 

Software areas are usually static in size, changing only when software is updated or reinstalled. The required size of these areas varies by operating system.

 

Software areas are read-only and can be installed shared or nonshared. Some database tools and utilities, such as Oracle Forms and SQL*Plus, can be installed shared, but some cannot. When possible, database code is shared so that all users can access it without keeping multiple copies in memory, which reduces main memory usage and improves overall performance. Multiple database instances can use the same database code area with different databases if they run on the same computer.

 

Note:

The option of installing software shared is not available for all operating systems, for example, on PCs running Windows. See your operating system-specific documentation for more information.


Source: reposted from the ITPUB blog, http://blog.itpub.net/26844646/viewspace-1067687/
